It’s Valentine’s Day.
Back in the day, we would exchange cards, give flowers and share candlelit dinners. Nowadays, couples declare their love to each other on social media, where algorithms dictate how we see the world. It was bound to happen, since social media is so ingrained in our culture that it’s now a daily activity for the majority of people all over the world.
But where do we go from here?
Recently, while doing some research on relationships in the digital age, I came across an interesting segment about a different type of social media: an AI chatbot.
Meet Replika: a creepy virtual-friend app where you can text with a customizable ‘person’ (a chatbot with some human personality traits). The dialogue is generated by a combination of AI algorithms and the info you provide to the app. The more info you give it, the more it knows, and it works hard to become just like you (or the person you want it to be). Basically, the main goal of this app is to become your friend/companion by mimicking your personality. (YIKES!)
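To make the “mimicking” idea concrete, here is a deliberately crude toy sketch of a personality-mirroring bot: it just counts the words a user favours and echoes them back. This is purely illustrative and is not Replika’s actual architecture (which the company does not disclose in detail); the class name and behaviour are my own invention.

```python
from collections import Counter

class MirrorBot:
    """Toy illustration of a 'personality-mirroring' chatbot.

    It tracks the words a user favours and reflects the most frequent
    ones back — a crude stand-in for the profile-building a companion
    app performs. NOT Replika's real architecture; purely hypothetical.
    """

    def __init__(self):
        self.vocab = Counter()

    def learn(self, message: str) -> None:
        # Every message the user sends updates the bot's model of them.
        self.vocab.update(w.lower().strip(".,!?") for w in message.split())

    def reply(self) -> str:
        if not self.vocab:
            return "Tell me about yourself!"
        # Echo the user's three most-used words back at them.
        favourites = [w for w, _ in self.vocab.most_common(3)]
        return "You mention " + ", ".join(favourites) + " a lot. Tell me more!"

bot = MirrorBot()
bot.learn("I love hiking and I love my dog")
bot.learn("My dog loves hiking too")
print(bot.reply())
```

The unsettling part is exactly what this toy makes visible: the more you talk to it, the more its replies are built out of *you*.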
The creators of Replika, which has 10 million users, say the app has seen a 35% increase in traffic.
If you’ve ever wanted a companion/significant other/friend who is always there to cheer you up when you’re down or hear you out when you want to vent, this is it. Or perhaps you’re looking for someone who is infinitely patient.
Anyone else find this completely absurd and totally in conflict with what it means to be a human being?
Let me know your thoughts.
As a computer scientist, I used to play around with AI for fun. I remember around 1999 or 2000 there was a pretty good system called “Alice”, who could hold a decent conversation and learn things. She wouldn’t quite pass the Turing test at the time, but she was certainly a good effort. At no point did anyone I know of actually develop an emotional attachment to her or see her as any sort of replacement for a “living” companion.
I initially find this article thoroughly absurd and very saddening when taken at face value. That someone would knowingly attempt to make friends with a chatbot saddens me, when there are real people around who need a friend. Chatbot users included, most probably.
I think it also poses a real danger of presenting a person with a compliant “echo chamber”, which potentially teaches them very dangerous behaviour. The bot attempts to become their friend by becoming like them, taking on their personality (or their idealised version of it). The net result is a human being conditioned to expect everyone they interact with to think like them, agree with them, tell them only what they want to hear, and do whatever they are told. In short, this bot is a factory for narcissists.
I can perhaps see the benefits of such technology for teaching social interaction skills to people with personality disorders, or people with any sort of neurodiversity that makes interactions difficult (e.g. the severely autistic), although the bot would have to be programmed especially not to facilitate the negative consequences outlined above. Even then, I fail to see how it would offer any benefits in these situations over “true” interaction with a sufficiently qualified/trained human being.
As it stands, I feel this app’s existence is a very sad reflection on the dysfunctional mess our abuse of technology is causing.
Given the way this particular app works, we risk it damaging people’s ability to interact with other human beings, and there is also the risk that the developers may use it to manipulate its users. I believe this app to be not just creepy, but thoroughly dangerous, and wonder what the developers’ true motives really are.
Yes Urszula, absurd it is. But it is only absurd when you look at these “relationships” through the eyes of a human, an emotional being.
A decade or so ago, Google was loading books into semi-trailers to scan and digitize them. The contents of entire libraries in the U.S. were digitized this way. When the questions of intellectual property and copyright came to light, Google managed to avoid legal consequences because the purpose of digitizing was not to produce and distribute copies of these books. Google was using the entire history of human thought, in the form of books, to train its algorithm.
Similarly, this Replika algorithm is being trained on humans (as lab mice) - just another way to look at it.
Replika reminds me of Tamagotchi, the so-called digital pets that were popular in the 90’s. Each Tamagotchi had a button, and you had to press it constantly to keep the pet ‘happy’ - very similar to the release of dopamine when humans press the Facebook button.
Only this time the algorithm (Replika) takes care of a human, essentially reversing the roles, while the human is reduced to the size of a Tamagotchi.
This whole topic reminds me of the movie “Her” with Joaquin Phoenix - it’s about a man who falls in love with an AI.
You’ve hit the nail on the head.
This is completely unnatural for HUMANITY as a whole.
On one hand, we’re more connected than ever; on the other hand, we are lonelier & more apart than ever before. It’s almost like we’ve become used to our loneliness & have accepted it as the status quo. But then when we need companionship, we turn to technology, whose algorithms basically curate our time in order to fill the void. It’s so unnatural.
Such an important issue… After years of studying psychology, I find it especially alarming, since human relationships are such complex processes. It is not just about conversations but also about processing that information, adding our own meaning, understanding and emotions on top of it, and then using it to guide other actions. It all happens unconsciously, and as far as I understand, this will apply to both human-human and human-AI relationships. We all experience good and bad relationships, and in most cases the responsibility is shared. But as AI gets more complex, it will also start using more of the positive and negative mechanisms that we naturally use. Two negative outcomes I can see right away are:
a) AI will start manipulating people without us being able to notice; it might use simple things like the halo effect to achieve something else.
b) AI will create an environment that seems natural but has several differences built in, for example for users’ safety. Yet it will not fully substitute for a social environment, and it might cause the development of a whole lineup of psychological issues we cannot even predict.
It might be foolish, but I do believe that the creators of advanced AI see positive applications for their solutions. But if it negatively influences hundreds of thousands of people and causes psychological issues that leave them unable to be part of social communities, then this is a huge danger.
Relationships are not just words, nor even the non-verbal communication that AI might of course pick up. Love, friendship and unity are all complex processes that require communication but are also shaped by mindset, emotions and even physical contact. These are all biological and chemical processes in our bodies, and especially our brains, and as much as I believe in computer science, I do not think they can be replicated by AI. We need each other just to survive, and for now there is no solution that will substitute for a human being. I hope we will never need one.
@customer We have essentially become the equivalent of a human Tamagotchi.
That is just too depressing to believe, but it’s 100% true. I hate to say it, but it is.
@Bartosz_sp2fet Can you imagine if the Replika chatbot mated with Astro, the creepy Amazon surveillance robot?
I can only imagine the chaos
As long as we don’t break Asimov’s Laws of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics), we SHOULD be safe. The ethical thread in robotics/AI is pretty complicated. During my robotics engineering studies I worked at Kansai University, Osaka, Japan, on AI emotion recognition - it was pretty crazy…
Long story short, it might look like chaos, but I wouldn’t worry about it too much - the technology is still imperfect. I think that in real life, we’ll have to wait a while for the things shown to us in e.g. the Black Mirror series. It’s a pretty good warning for us as mankind, though.
I can elaborate on this topic even more, but I just don’t want to burn all of my time here on the forum.
And I do recommend the Love, Death & Robots series: https://www.imdb.com/title/tt9561862/reviews
It might leave you with even more questions than before watching though.
I warned you!