In defense of AI - we are not that emotional
Jonas Hultenius
2023-02-24
An argument often held against AI is that it lacks empathy, the one quality essential for all human interaction. Empathy is one of the most human traits there is and, on some level, what makes us human. The ability to have and understand others' emotions is fundamental: it has helped us form a functioning society and break the bonds of nature, rising above our primitive and animalistic instincts.
While it is true that machines cannot feel emotions, they do possess the ability to recognize and respond to them. Chatbots, for example, can be designed to pick up on emotional cues in a conversation. They may be lacking on the emotional front themselves, but boy do they know how to read us like open books. Natural language models have inadvertently found the key to our inner world of feelings, and while they don't fully understand what feelings are, their responses to them are often humanlike.
Today an AI can even help diagnose mental health conditions based on a person's speech patterns, taking into account the minor details of how we conduct ourselves verbally and in text. In this sense, AI is actually superior to humans, as it can detect emotional states that are not immediately apparent to us. We often signal our emotions through visual cues: by frowning, or inadvertently through the way we gesture, the intensity of our movements and the tonality of our voices. The words themselves are often not taken into account.
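The cue-reading described above can be caricatured in a few lines of code. This is a deliberately naive sketch, assuming a hand-made keyword list of my own invention; real systems use trained language models rather than word lists, but the basic shape, detect an emotional cue and pick an appropriate canned reply, is the same.

```python
import re

# Toy emotion-cue detector. The cue words and replies below are
# illustrative assumptions, not output from any real model.
CUES = {
    "sad": {"loss", "grief", "miss", "lonely"},
    "happy": {"great", "promoted", "wonderful", "thrilled"},
    "worried": {"afraid", "anxious", "worried", "nervous"},
}

CANNED_REPLIES = {
    "sad": "I'm sorry for your loss.",
    "happy": "I'm so happy to hear that.",
    "worried": "I'm sure that it will work itself out.",
    "neutral": "Tell me more.",
}

def detect_emotion(text: str) -> str:
    """Return the first emotion whose cue words appear in the text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    for emotion, cues in CUES.items():
        if words & cues:
            return emotion
    return "neutral"

def respond(text: str) -> str:
    """Pick the canned reply matching the detected emotion."""
    return CANNED_REPLIES[detect_emotion(text)]
```

So `respond("I just got promoted!")` lands on the happy reply, while anything without a cue word falls back to a neutral prompt. Crude as it is, the pattern mirrors the repertoire of stock phrases discussed later in this piece.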
Extend the model with these social cues, along with a greater understanding of social interactions and of what counts as an appropriate response to a given emotion, and we come even closer to blurring the line between machine and man.
That is all well and good. A machine that projects fake feelings and responds with canned emotions and pre-prepared answers. That is not really that impressive, some would say. Their feelings are not real and their answers are merely synthetic; they have simply been trained to answer that way and do not possess any real insight.
I would argue that this, the training part that is, works exactly the same way for us humans. We know not to laugh when someone is projecting sadness, even if we just remembered that joke we were told earlier. We know when to smile even if we don't really feel happiness ourselves, when to mirror the other person's emotional state, and when to react with shock and dismay.
We seldom know what to say or what to feel when faced with others' emotions. Most situations are handled with canned answers.
- I’m sorry for your loss.
- I’m sure that it will work itself out.
- I’m so happy to hear that.
All answers that may very well come from the heart, but that are most of the time instinctual responses to any number of situations. We want to signal emotional support, to show that we share the other person's joy or sadness. Most of the time these things come from a large list of appropriate phrases to say, learned from books, television and watching others interact.
All these things, these phrases and skills, we have learned over our lives, from childhood well into adulthood. And we keep on learning: how to read others and how to project our own emotional state so as to signal the workings of our inner space, the metaverse of the mind.
We tend, as humans, to project feelings and emotions onto inanimate objects, onto animals and, to some extent, onto each other. There is no real way of being sure what feelings or emotional state another person has; we are really just guessing, and hoping that they do in fact possess emotions. We can never be sure.
Many of us find solace in books, inanimate objects written by individuals we may never have met or have the possibility to meet, but whose words speak to us, move us and evoke an emotional response. We can never know what their true intentions were, or whether they were sincere.
Our reactions, the emotions derived from those words, are real. The advice we gain from reading a self-help book, the insights we draw from a biography or a depiction of an event, the lessons learned from a tragic loss or a triumphant win, the connection we feel after reading about someone in a similar situation from a time long gone. All of those are real.
And, since most language models have read them all, their insights are real as well. Sometimes they get it wrong, but is that not the most human thing of all?