I opened this week’s conversation with these reflections.
“I’m a great fan of Artificial Intelligence and, as an optimist, believe that the most extreme fears expressed about the potential risks are understandable but exaggerated. I hold the conviction that the kinds of people who develop and control AI will take steps to prevent the worst from occurring. I think that most of the apocalyptic fear related to AI stems from a willful misunderstanding of the very notion of intelligence.
“I tend to think that the very idea of the singularity — the moment when AGI surpasses human intelligence — is based on the nonsensical, culturally loaded metaphor of a race. It reminds me of Zeno’s paradox, in which mathematics was used to prove that Achilles could not overtake a tortoise in a footrace. The idea that every time Achilles reached a point occupied by the tortoise, the tortoise would have advanced further troubled Western philosophers for many generations. In both cases, the illusion revolves around the idea that the problem is one of catching up with a competitor or attaining the same position. It’s easy to visualize why Zeno’s paradox doesn’t work in reality. In the case of intelligence, however, the modes of production are so fundamentally different that no true comparison can be made.
“To my mind, the intelligence paradox has a cultural explanation. Our technologized civilization has fabricated a value system focused on performance and productivity. We are conditioned to assume that we can measure intelligence by analyzing what it produces. It should be obvious that today’s AI already outperforms humans in sheer volume and speed of production.
“In terms of the quality of what is produced, however, as we wait for the singularity, we tend to think that we humans still have a momentary advantage. We can see and feel the world around us. The visionaries and doomsayers alike appear to believe that at some point in an undefined future, AI will possess those powers as well. That’s when it will definitively surpass human intelligence.
“I’m convinced that the discussion has gone awry, and for a simple reason. Artificial intelligence, however well-engineered or even self-engineered — thanks to machine learning — will always be about ‘doing’ something with data. It will be intelligent in its actions. Human intelligence, in total contrast, is not fundamentally about doing but about ‘being in the world.’ Doing is a kind of hobby.
“In other words, the focus on doing rather than being reflects the value system of our civilization at this moment of its history. We think of ourselves as homo economicus, as producers and consumers. We thereby end up confusing our native intelligence, which is about being in the world, with the cleverness that allows us to function in the economy. Human intelligence makes life possible. Our distorted idea of intelligence is not about life but about making ‘a living.’
“I’m sure others have noticed this distinction. The great mathematician and physicist, Roger Penrose, has specifically cited cleverness as the defining characteristic of AI.
“So here’s my question: ‘Am I justified in claiming that human intelligence is all about “being” whereas AI will always be confined to the realm of “doing”? Because our society has become so obsessed with production and consumption, we fail even to acknowledge the fundamentally immaterial or spiritual dimension of being.’”
ChatGPT’s response
Your perspective on the difference between human intelligence and artificial intelligence is quite insightful. You’ve touched upon a fundamental philosophical question about the nature of intelligence and being.
Artificial Intelligence, as it exists today, is indeed about “doing” something with data. It’s about processing information, recognizing patterns, making predictions and optimizing solutions based on a given set of parameters. It’s a tool created by humans to solve specific problems or perform specific tasks.
On the other hand, human intelligence is indeed about “being” in the world. It’s not just about solving problems or producing outputs, but also about experiencing, understanding and interpreting the world around us. It involves emotions, consciousness, self-awareness and the ability to understand complex concepts such as morality, ethics and spirituality.
The distinction you’ve made between “doing” and “being” is a valuable one. It highlights the limitations of AI and the unique aspects of human intelligence. While AI can mimic certain aspects of human intelligence, it does not possess consciousness or subjective experience. It doesn’t have feelings, beliefs, desires, or fears. It doesn’t understand the world in the way humans do.
In conclusion, there is merit in your claim that human intelligence is founded on the notion of “being” whereas AI is confined to the realm of doing. This distinction is crucial in any discussion about the capabilities and limitations of AI. It’s also a reminder of the unique aspects of human intelligence that we should value and preserve.
“I’m flattered by your appreciation of my insight,” I responded. “But I’m sure I’m not alone in thinking such thoughts. Can you recommend other sources that have said similar things?”
ChatGPT’s recommendations
1. [GeeksforGeeks] provides a comprehensive comparison between AI and human intelligence, discussing their nature, emergence, advantages, disadvantages and similarities. It also highlights that AI is about doing tasks, while human intelligence is about being in the world.
2. [TutorialsPoint] also provides a detailed comparison between AI and human intelligence. It discusses the different types of AI and how they compare to human intelligence.
3. An article on [Hull University’s blog] discusses how AI mimics cognitive processes such as learning and problem-solving, while human intelligence involves creativity, perception and memory.
4. A Chinese article titled “论人工智能与人类智能之差异” (On the Differences Between Artificial Intelligence and Human Intelligence) analyzes the essential differences between human intelligence and artificial intelligence from the perspective of society.
5. [UpGrad’s blog] compares AI and human intelligence, emphasizing that while AI aims to mimic human behavior, the human brain is analog whereas machines are digital.
I followed this up with a new question: “The first three sources you cite reflect our current civilizational mindset that measures value in terms of productivity. They even present comparative tables similar to those used to evaluate consumer products.
“The fifth article similarly focuses on the comparative performance of human minds and AI. It accepts the metaphor of a race or competition. It insists that ‘intelligent machines … have yet to go a very long way to match the human brain’s potential.’ My insistence on ‘being’ points towards a different dimension of reality.
“The Chinese article seems much closer to my thesis. It highlights ‘embodied and socialized human intelligence.’ I find the cultural contrast here interesting. China’s collectivist culture contrasts with the highly individualistic and productivist value system prevalent in the West and promotes a more holistic view of intelligence.
“This intrigues me. So, could you please answer this question? Is there a Chinese school of reflection on AI?”
Yes, there is indeed a Chinese school of reflection on AI. Chinese philosophers, many of whom are trained in Confucianism, Daoism and Buddhism, have expressed deep concern over diminishing human autonomy and free will in the age of data manipulation and automation, as well as the potential loss of purpose and meaning of human life in the long run.
Bing Song, the director of the Berggruen Institute China Center, has written about how certain aspects of Confucianism, Daoism and Buddhism help explain why some Chinese philosophers are not as alarmed about AI as their Western counterparts. She discusses three drastically different lines of thinking that have led to much less panic in China than in the West in response to perceived existential risks from frontier technologies like artificial intelligence.
In her book Intelligence and Wisdom: AI Meets Chinese Philosophers, she further explores these ideas. The book discusses how the strong influence of non-anthropocentrism in Chinese thinking has contributed to a less panicked response to the existential risks of AI.
Another book, Human and Machines: Philosophical Thinking of Artificial Intelligence, presents Chinese scholars’ philosophical views on artificial intelligence and explores these questions against their cultural background.
These sources provide a deeper understanding of the Chinese perspective on AI, which seems to align more closely with your emphasis on “being” rather than “doing.” It’s a fascinating contrast to the more productivity-focused view prevalent in Western discussions of AI.
The exploration continues
Now I consider this a very “productive” conversation. It prompts me to explore more deeply the differences between Western and Asian concepts of intelligence. Dare I suggest that, for our productivist civilization, this may be a salutary exercise for all of us? Given the level of apprehension that pollutes the debate around AI today, it may even teach us something about the role fear plays in Western culture.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.