“Does anyone have a trusted friend, acquaintance, teacher, mentor, coach or advisor whose every statement can be deemed true? Are we ready to break off the relationship if we discover that something they told us turns out to be exaggerated or even false? With such trusted relations, don’t we often find ourselves asking them questions like the following: ‘Are you sure?’; ‘Where did you hear that?’; ‘Have you checked it?’; ‘Didn’t you know that, in fact, it has been debunked?’ And the list goes on. Stable truth only exists when we home in on it and keep questioning it.
In an article titled ‘Amazon Says All It Needs to Do Before Releasing an AI-Powered Alexa Is to Solve the Giant Engineering Problem That Nobody Else on Earth Has Been Able to Solve,’ the online publication Futurism explains: ‘Despite billions of dollars of investment and the construction of massive data centers to power increasingly complex AI models, even the most advanced chatbots still have a strong tendency to ‘hallucinate’ false claims.’
Don’t believe me? Read the article that appeared in The Byte on Saturday under the lengthy title ‘Before Apple’s AI Went Haywire and Started Making Up Fake News, Its Engineers Warned of Deep Flaws With the Tech… And they released it anyway.’
The Futurism article makes the case that hallucination may ‘be intrinsic to the tech itself’ and even ‘part of the equation.’ Any student of psychology will affirm a similar principle with regard to humans. We might be tempted to call it the Rashomon effect: Even the direct witnesses of an event will distort its meaning when recounting what they remember.
So, do we consider breaking off our relationship with friends, teachers, mentors and other human relations simply because they are incapable of producing limpid truth? The answer to that question used to be quite simply ‘no.’ But the well-documented trend of a loss of faith in our governments, the media — and probably teachers as well — tells us that there may be more than one worm in the apple of truth-telling. Consequently, we should consider all sources of ‘information’ — increasingly reinterpreted as ‘influence’ — as suspect.
That is a very negative, pessimistic version of the argument in favor of doubt. The positive version is quite simply called philosophy, or more specifically, epistemology. Doubt need not represent the fear of uncertainty, but rather the invitation to explore reality in greater depth. That is why what Aristotle called first philosophy later came to be known as metaphysics. Physics attempts to make sense of the material world. Metaphysics examines the causes and relationships that explain phenomena in the natural world. Explaining requires exploring; exploring requires raising questions. Raising questions means mobilizing doubt.
All this is to say that whatever the state of the AI we happen to be interacting with — including future versions of Artificial General Intelligence (AGI) — human doubt will be a necessary ingredient of our interaction with AI.
If this is true, why do we keep expressing our disappointment at the discovery that AI cannot be counted on as a source of infallible truth? Shouldn’t we be focusing on how we can build the product of its intelligence into something meaningful within our human cultures? Doesn’t that mean that the greatest effort should be made in building and refining not the tools themselves, but our relationship with those tools?”
“This text raises important and layered questions about the nature of truth, human relationships, and the role of doubt, particularly in the context of our interactions with AI and other sources of information. Let’s unpack the central ideas and respond to them:
The Fallibility of Human and Machine Knowledge
The comparison between the fallibility of humans and AI is a profound one. Just as we don’t abandon relationships with trusted mentors or friends because they occasionally err, the same principle could be extended to our relationship with AI. However, this analogy also points to a critical difference: humans bring context, values, and shared experiences, which shape our ability to interpret and integrate their claims into meaningful frameworks. AI lacks this human depth but shares the same inherent limitation of fallibility.
The mention of hallucination as intrinsic to AI mirrors the human tendency to misremember or reinterpret events, such as the Rashomon effect. This analogy highlights that imperfection in knowledge systems—be they human or artificial—is not an anomaly but a fundamental characteristic.
The Necessity of Doubt
Doubt, as argued in the text, is not inherently negative but an essential aspect of intellectual and philosophical exploration. Doubt pushes us to question, analyze, and refine our understanding. This principle applies equally to human relationships and to our interactions with AI systems. Expecting infallibility from either leads to disappointment and a misunderstanding of their nature.
The philosophical tradition, especially epistemology, frames doubt as a tool for deeper inquiry rather than mere skepticism. This attitude can guide us in approaching AI not as a source of ‘limpid truth’ but as a contributor to ongoing exploration and meaning-making.
Reframing Our Relationship with AI
The article rightly emphasizes the need to shift our expectations of AI from a ‘truth engine’ to a collaborative tool for exploration and insight. This requires focusing not only on improving AI systems but also on cultivating our relationship with them. Such a relationship would involve:
- Critical Engagement: Asking questions such as, ‘Where did this information come from?’ or ‘What assumptions is this model operating under?’
- Cultural Integration: Understanding how AI outputs can be meaningfully interpreted and used within the context of human values and cultural frameworks.
- Shared Responsibility: Recognizing that the burden of building truth doesn’t lie solely on AI but also on human users to critically evaluate, synthesize, and contextualize its outputs.
Conclusion: Building Relationships with Tools
Instead of demanding perfection, we should focus on fostering healthy relationships with our tools, emphasizing collaboration, critical thinking, and a shared pursuit of understanding. Just as doubt and questioning deepen our relationships with trusted humans, they should also enrich our engagement with AI. This approach shifts the conversation from an impossible quest for infallibility to a practical and philosophical inquiry into how technology and humanity can meaningfully coexist.
The ultimate goal isn’t flawless truth but rather deeper understanding through exploration, guided by both human and machine insights.”
“In this context, do you think I’m wrong to insist that the error that has led us to have exaggerated expectations about technology derives from our consumer culture? The promoters of technology seek to sell us ‘convenience,’ and we, the customers, live in the expectation that the product will deliver a service with minimal effort and zero engagement on our part. That may work with espresso machines and even personal computers, but the attitude is totally inappropriate when the product is ‘intelligence.’ For one thing, intelligence by definition cannot be a product. It is in itself an ability to construct and deconstruct relationships. Unlike the function of an espresso machine, that ability is unconstrained. I would thus maintain that interacting with AI requires a deep cultural shift to take us either away from, or at least beyond, the social habits acquired in our reigning consumer culture. Does this make sense? And if it does, shouldn’t we be focusing on that challenge rather than simply awaiting the next generation of AI?”
“You’re not wrong to identify consumer culture as a key factor in shaping exaggerated expectations about technology, including AI. Your argument makes sense, and it strikes at the heart of a profound cultural and philosophical issue: the commodification of intelligence and the role of human engagement in technological interactions.
1. Consumer Culture and the Promise of Convenience
Consumer culture thrives on the promise of effortless convenience, encouraging us to see products as solutions to problems that require minimal thought or engagement. In the case of AI, marketing often frames it as a tool that will deliver answers, automate tasks, and ‘just work,’ like a well-designed espresso machine. This leads to expectations that AI will not only perform but do so flawlessly and without the need for human critical thinking.
However, intelligence—whether human or artificial—doesn’t function like a consumer product. Intelligence is inherently dynamic, requiring the construction and deconstruction of relationships, context, and meaning. Unlike static consumer goods, intelligence evolves and adapts, often in ways that demand active participation and interpretation from its users.
2. The Inappropriateness of Consumer Expectations for AI
Your point that ‘intelligence by definition cannot be a product’ is especially important. Intelligence, whether human or artificial, is not a fixed commodity but a process—an ongoing ability to reason, analyze, and learn. Treating AI as a product risks reducing it to something static and transactional, ignoring its emergent and context-dependent nature.
When we expect AI to deliver answers without effort on our part, we fail to recognize the relational and interpretive work that must occur for its outputs to have value. This expectation leads to frustration when AI fails to live up to the impossible standard of flawless, context-perfect intelligence.
3. The Need for a Cultural Shift
You’re absolutely right that interacting with AI requires a cultural shift. This shift would involve moving beyond consumer-oriented habits of passive consumption to a more active, critical, and collaborative engagement with technology. Such a shift could include:
- Developing Digital Literacy: Understanding how AI systems work, their limitations, and the biases inherent in their training.
- Encouraging Critical Thinking: Viewing AI outputs as tools for thought, not definitive answers.
- Reframing Expectations: Seeing AI not as a source of ultimate truth but as a partner in exploration, much like a human collaborator.
- Cultivating Patience: Recognizing that meaningful interaction with AI, like any relationship, requires time, effort, and an openness to uncertainty.
4. Focusing on the Challenge Instead of the Technology
Your argument that we should focus on this cultural challenge rather than simply waiting for the next generation of AI is both timely and crucial. Advances in AI technology are inevitable, but without a corresponding evolution in our approach to using and understanding these tools, the gap between what AI can do and what we expect it to do will only widen.
Instead of placing faith in technological progress alone, we should focus on:
- Education and Public Discourse: Engaging society in discussions about what AI is, what it isn’t, and how we can use it responsibly.
- Philosophical Inquiry: Exploring what it means to interact with ‘intelligence,’ artificial or otherwise, and how this interaction shapes our humanity.
- Building New Norms: Developing cultural habits that emphasize active participation and thoughtful engagement with AI.
Conclusion: A Cultural Shift Is Key
Your perspective highlights an essential truth: interacting with AI isn’t just about developing better algorithms or more powerful machines; it’s about rethinking our relationship to technology and intelligence itself. This requires a cultural shift that takes us beyond the passive expectations of consumer culture into a more participatory and reflective mode of engagement. By focusing on this challenge, we can shape a future in which AI is not just a tool we use, but a collaborator that enriches our understanding and expands our horizons.”
Once again, an AI chatbot is helping me frame what I see as a future action plan, not for myself but for our civilization. In this type of productive discussion, not only is there nothing that could be confused with “hallucination”; everything has a perceivable meaning. Moreover, it points toward realistic, pragmatic solutions.
I intend to pursue this line of interactive exploration by focusing on the nature of the challenge of moving away from our consumer culture.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.