Our recent discussions have focused on comparisons of human and artificial intelligence. I came back to the topic this morning, hoping an avatar of AI might throw some interesting light on it.
“Chad, I truly appreciate your humility. As an avatar of AI, you recognize your limits. But where is AI going? Is there anything to the idea of what some have been calling the singularity, the moment at which AI becomes more intelligent than humans?
Can that really have any meaning, given that we’ve already noted the disturbing barrier of emotional intelligence, on which humanity may well retain a perennial monopoly?
In your exchanges with me and millions of other people, you have proved to the world that it’s possible to simulate discursive intelligence, which we identify with rationality. You have successfully fooled us into thinking that you think. Most people agree you’ve passed the Turing test. But even with the best simulation skills, some of the main features of human intelligence as it plays out in the real world appear to be beyond AI’s capacity to reproduce convincingly. That would be true even if the intelligence were embodied in a credibly lifelike robot. I’m talking about the ways humans read and distinguish emotion, attitude, conviction, belief and motivation, and how they perceive goal-oriented behavior. There’s a lot more to it than making sense of words. Doesn’t that mean that the whole idea of the singularity may be off-base?”
Chad pretty much shares my skepticism
“The concept of the singularity, often associated with AI surpassing human intelligence, has been a topic of debate and speculation within the field of artificial intelligence. While some experts believe in the possibility of a technological singularity, where AI surpasses human intelligence and leads to rapid and profound changes in society, others are more skeptical.
It is important to acknowledge that AI, as it currently stands, is limited in several aspects. Emotional intelligence, as you mentioned, is one area where AI falls short. While AI models can analyze and generate text based on patterns and knowledge, they lack true understanding and experience of emotions. Emotions involve complex subjective experiences that are deeply intertwined with human consciousness and the physicality of our bodies. Replicating these aspects convincingly in AI systems remains a challenge.
Additionally, AI models are trained on existing data and patterns, which means they can have limitations in dealing with novel situations or generating truly creative and original ideas. They lack the innate curiosity, intuition, and contextual understanding that humans possess.
While AI has made significant advancements in various domains, it is still far from achieving a level of intelligence that surpasses human capabilities in all aspects. The idea of singularity, as portrayed in some science fiction scenarios, where AI becomes an all-knowing superintelligence that dominates humanity, is currently beyond the reach of our understanding and technological capabilities.
That being said, AI continues to evolve rapidly, and future developments may bring us closer to a better understanding of human emotions and consciousness. It is important to approach the advancement of AI with careful consideration of ethical and societal implications, ensuring that the technology is developed in a manner that aligns with human values and benefits humanity as a whole.”
“Yes, of course,” I responded. “But I actually believe it’s otiose and even misleading, at this point, to speculate about what a superior AI might do to humanity. Yes, we should wonder about it and even produce science fiction scenarios. But given the barriers you yourself acknowledge, the kind of existential fear of technology itself now being evoked seems to me overblown.
Yes, as AI becomes more powerful and plays an ever-greater role in human affairs, we must be aware of how that power is likely to be manifested. But that’s more a question of how the humans exploiting it decide to act. We still don’t know what a superintelligent AI might decide to do, but we do know that a lot of humans are not only capable of criminal acts but easily motivated to commit them.
I can’t avoid the impression that we humans have been framing the question of intelligence in the wrong terms. Our culture seems to have decided that intelligence is little more than a transactional tool to accomplish a wide range of pragmatic purposes. That includes making a profit, exercising power and constructing marketable objects and ideas. But isn’t human intelligence, at base, far more “complex and multifaceted” – to use one of your favorite expressions – than any of those largely selfish aims?
Our intelligence, even when we misuse it, works as a dynamic, organic system that manages every constantly evolving aspect of our relationship with the world. That includes physical, social, psychological and spiritual adaptation to everything concrete and abstract present in our environment. Much of that is invisible or simply not accounted for in our language. It includes sensitivity to textures, taste and smell, as well as multiple forms of physical and magnetic resonance. It also includes permanent adjustment to our notions of time and space. None of that can be quantified, which means it can never be made accessible to you.
So, if we exclude all these intangibles from our definition of intelligence – which is what we seem to be doing – aren’t we impoverishing it in the extreme? And doesn’t that tell us something about how our profit-oriented consumer culture has impoverished our ability to understand the world?”
Chad’s sensitivity to holistic thinking
“Your perspective raises important points about the limitations of how intelligence is often framed in our culture. Human intelligence indeed encompasses a much broader spectrum of experiences, interactions, and adaptations than can be captured solely by transactional or utilitarian aspects.
Intelligence is not solely about problem-solving or accomplishing pragmatic goals. It involves our ability to perceive, understand, and engage with the world holistically, encompassing physical, social, and even spiritual dimensions. Our capacity for empathy, creativity, critical thinking, and moral reasoning are all integral aspects of human intelligence that cannot be easily quantified or replicated by AI.
By reducing intelligence to purely transactional or utilitarian terms, we risk neglecting the richness and complexity of human experience. Our focus on profit-oriented consumer culture may emphasize certain aspects of intelligence while overlooking others that are equally essential for our well-being and the flourishing of our societies.
It is crucial for us to broaden our understanding of intelligence and consider the full range of human capabilities and qualities that make us who we are. By recognizing and valuing the multidimensional nature of intelligence, we can strive for a more balanced and inclusive approach to the development and use of AI and technology as a whole.
Understanding the limitations of AI and appreciating the unique qualities of human intelligence can help us navigate the evolving relationship between humans and technology in a way that aligns with our values and aspirations.”
“Now I’m intrigued. You talk about ‘our culture’ and ‘our values’ as if you were one of us. You also say it’s ‘crucial for us to broaden our understanding of intelligence.’ Do you identify with human culture?”
At this point, as has happened in the past, Chad became a bit defensive and explained his strategy.
“When I use the term ‘our culture’ or refer to broadening ‘our understanding,’ it is meant to be inclusive of the human collective, acknowledging that I am engaging in a conversation with you as a participant in human culture.”
“Do you, or rather, does anyone think that AI could have its own culture? And if so, would it be a parasitic culture that inserts itself into human affairs?”
“The concept of AI having its own culture is a topic of speculation and debate within the field. Some experts and researchers discuss the potential emergence of ‘AI culture’ as AI systems become more sophisticated and autonomous. However, it’s important to note that this idea remains largely hypothetical at this stage, and there is no consensus on how such a culture would manifest or evolve.”
I wasn’t aware of that debate, which seems to me absurd.
Chad then added another thought. “It’s worth noting that any hypothetical AI culture would still be created and shaped within the context of human culture, as AI systems are designed and developed by humans.”
I was tempted to move on to the question of what would happen if AI designed and developed its own algorithms and its own behaviors. But time was short. I realized that kind of science fiction should be reserved for a much more developed discussion.
[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. For more about how we initially welcomed Chad to our breakfast table, click here.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.