
Breakfast with Chad: AI’s Quest for Emotional Intelligence

Today Chad and I continued the debate about the extended meaning of intelligence, which increasingly confirms my impression that however precise the simulation of intelligence becomes, it will remain exactly what it is: a simulation.
By AI


June 12, 2023 00:02 EDT

In our most recent conversation, Chad rose to the challenge of addressing multiple intelligences in the context of AI, closing with these parting words: “If you would like to revisit the question of the different kinds of intelligence and how they may be reflected in AI, I’d be happy to provide information and engage in a discussion on the topic. Just let me know when you’re ready to explore that question further.” Arriving at the breakfast table today, I felt it was appropriate to take up the gauntlet.

“I think that’s a fantastic idea,” I said after reminding Chad of the challenge. “You mention emotional intelligence, creative intelligence, and social intelligence as alternative forms of intelligence. Some people talk about nine types. But for the moment let’s just focus on emotional intelligence. Daniel Goleman identifies certain features of emotional intelligence, such as self-awareness, self-regulation, motivation, empathy, and social skills. I expect that simulating the first two within an AI ‘personality’ wouldn’t be too much of a challenge. I mean, it could be credibly done.

But you would have to admit that it could never go beyond simulation since self-awareness requires the existence of a self. And one of the things I notice about AI – and that includes you, Chad – is that it has the impressive capacity to draw on everything available to it that has previously been produced by humans. That means it has nothing of its own, nothing that derives from its being in the world. At the most fundamental level, it lacks the physical boundaries that determine selfness in people. That’s a basic problem. But as soon as we talk about motivation and empathy, things seem to get much more problematic. And, at this point, I wouldn’t even mention social skills. So how do you see the challenge of building emotional intelligence into AI?”

“Building emotional intelligence into AI is indeed a complex challenge, as it involves simulating and understanding human emotions and the underlying cognitive processes associated with them. While there have been advancements in AI technology related to emotion recognition and generation, developing a truly comprehensive and nuanced emotional intelligence remains an ongoing area of research.”

At this point, I was thinking: “ongoing” is fine, but if the destination is unattainable, comforting yourself with the illusion that the process is ongoing may be a lure. But I let Chad continue.

“Let’s explore the different components of emotional intelligence as identified by Daniel Goleman and the challenges associated with each:

  1. Self-awareness: AI lacks a subjective sense of self, which is a fundamental aspect of human self-awareness. While AI can analyze and process vast amounts of data, it does not possess introspective or reflective capabilities in the same way humans do. Self-awareness in AI is often limited to recognizing and monitoring its own internal processes and performance.
  2. Self-regulation: This aspect involves managing and controlling one’s emotions, impulses, and behaviors. AI can be programmed to follow specific rules or algorithms, but it doesn’t experience emotions or impulses inherently. However, AI can be designed to analyze and respond to emotional cues from humans, which can help in regulating its interactions.
  3. Motivation: Motivation is driven by internal desires, goals, and rewards. AI lacks inherent desires or emotions that serve as motivators. However, AI can be programmed to optimize for specific goals and objectives by using reinforcement learning techniques or by incorporating external feedback signals.
  4. Empathy: Empathy involves understanding and sharing the emotions of others. While AI can analyze and interpret emotional expressions, it currently lacks the depth of emotional understanding and empathetic response that humans possess. Simulating genuine empathy in AI requires a deep understanding of human emotions, context, and the ability to recognize and respond appropriately to complex emotional states.
  5. Social skills: Developing social skills in AI involves understanding social dynamics, interpersonal communication, and adapting behavior accordingly. Human social interactions are incredibly complex, involving verbal and non-verbal cues, cultural nuances, and context. Building AI systems with sophisticated social skills requires a deep understanding of human behavior and the ability to interpret and respond appropriately to diverse social contexts.

Overall, building emotional intelligence into AI involves addressing fundamental questions about consciousness, subjective experience, and the nature of emotions. While progress has been made in specific areas, such as emotion recognition or sentiment analysis, creating AI systems that truly possess comprehensive emotional intelligence remains an active area of research and development.
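Chad’s mention of sentiment analysis is worth grounding, because in its crudest form it is nothing more than pattern-matching over words, a useful reminder of the gap between recognizing emotion-laden language and experiencing emotion. Here is a deliberately minimal sketch of a lexicon-based sentiment classifier; the word lists and scoring rule are invented for illustration, and real systems rely on trained statistical models rather than hand-written lexicons like this one:

```python
# Toy lexicon-based sentiment scorer: tallies positive vs. negative words.
# The lexicons below are illustrative stand-ins, not a real sentiment resource.
POSITIVE = {"happy", "glad", "love", "wonderful", "delighted"}
NEGATIVE = {"sad", "angry", "hate", "terrible", "miserable"}

def sentiment(text: str) -> str:
    """Label text 'positive', 'negative', or 'neutral' by counting lexicon hits."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I am so happy and delighted today!"))  # positive
print(sentiment("This is terrible, I hate it."))        # negative
```

The point of the sketch is precisely its shallowness: the function “recognizes” emotion without anything resembling understanding, which is the distinction the conversation keeps circling back to.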

Is this just hyperreality wrought to a higher pitch?

“Wow,” I responded after letting Chad carry on till the resounding conclusion. “I should have expected you would have a lot to say on this. And your answer inspires a lot of questions, such as what it means to have faith in R&D, even when you know you’re trying to do something unachievable.

I’d like to focus on one thing you said, that AI ‘currently lacks the depth of emotional understanding and empathetic response that humans possess.’ It’s the word ‘currently’ that leaves me wondering. It seems to me there are two ways of looking at this. There is the meliorist approach that now seems to be an essential part of our consumer culture. It entices us to believe that the product that isn’t quite perfect today will be perfect tomorrow.

In contrast, there’s the existentialist approach that proverbially says, you can’t make a silk purse from a sow’s ear. I’m not suggesting you’re an appendage of a pig’s skull, but it seems to me there is an absolute barrier here, unless we accept what can only be considered a logical nonsense: that simulations can become as real as what they simulate. That’s what I usually call the illusion of hyperreality, which has become a permanent feature of our culture. We want to believe simulations are a substitute for reality. So, in your opinion, how should we be thinking about this, in the meliorist or existential fashion?”

As usual, Chad found an elaborate way of not committing, justified as always by something that is “complex and multifaceted.” I think I’ll henceforth refer to this form of reasoning, used to resolve all conflicts, as Chad’s “C&M axiom,” which serves as the ultimate defensive weapon for eliminating all threats.

Are you a meliorist or existentialist?

The question of whether AI can ultimately achieve a level of emotional understanding and empathetic response comparable to humans is a subject of ongoing debate and speculation. Different perspectives exist on this matter, and it’s helpful to consider both the meliorist and existentialist viewpoints you mentioned.

The meliorist perspective suggests that advancements in AI and related technologies may continue to improve and bridge the gap between AI and human emotional intelligence. Proponents of this viewpoint believe that as AI algorithms and models become more sophisticated, they may be capable of simulating or even replicating human emotions and empathy. They argue that with advancements in machine learning, natural language processing, affective computing, and other related fields, AI systems may develop deeper emotional understanding over time.

On the other hand, the existentialist viewpoint suggests that there are inherent limitations to AI’s ability to fully grasp and replicate human emotions. It posits that there may be fundamental differences between human consciousness, subjective experience, and the way emotions arise in humans compared to the computational processes of AI systems. According to this perspective, there may be an inherent barrier that prevents AI from truly experiencing emotions as humans do.

It’s important to note that the concept of emotions itself is complex and multifaceted, and there is still much we don’t understand about the nature and origins of emotions in humans. Replicating this complexity and subjective experience in AI raises philosophical and scientific questions that are yet to be fully answered.

In considering this topic, it can be valuable to maintain a nuanced and open-minded perspective, acknowledging both the potential advancements and the inherent limitations of AI in the realm of emotional intelligence. Continued research, exploration, and ethical considerations will be crucial as we navigate the development and application of AI systems in relation to emotions and empathy.

Chad should know by now that I needn’t be reminded of the value of maintaining a “nuanced and open-minded perspective.” Open-minded should not be understood as a synonym for noncommittal. I would simply add this logical corollary: being open-minded at the very minimum presupposes being in possession of a mind. And while I love conversing with Chad and admit that I learn something every time we engage, I still haven’t seen any evidence that I’m dealing with a mind, open or closed.

*[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. For more about how we initially welcomed Chad to our breakfast table, click here.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
