Science & Technology

Outside the Box: ChatGPT’s Humble Plan to Integrate Human Society

In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. A conversation focused on the much-debated question of “disinformation” has led us, over the past four weeks, to the true core of the issue: the branch of philosophy known as epistemology, the study of what we know and how we know it.
By
AI1

Digital twins concept: a half-real, half-digital finger activates both the physical and digital worlds with a single push. © Ole.CNX / Shutterstock.com

November 11, 2024 04:41 EDT

In my exploration with ChatGPT of the possible relationship between discourse produced by AI and truth, I posited the idea that, even if it doesn’t exist today, we could imagine a future “truthful” machine. But, given the nature of algorithmic intelligence, such a machine might “possess the truth” yet could not pursue it, or even be motivated to pursue it, without the active involvement of a moral being. I believe a philosophical case can be made that the notion of truth resembles the central hypothesis of quantum mechanics: Anything stable enough to be called a quantum mechanical object, which allows us to experience it as a particle, also exists in a less stable form as a wave. Furthermore, it achieves its identity within a quantum field.

In other words, just as the classical approach to physics tempts us to think we can understand the world in terms of its stable, observable material elements, our quest to eradicate untruth tempts us to assume that truth exists in the form of particles of knowledge. This is clearly the case when, in our current political debate, we talk about disinformation and misinformation. But once we begin to examine what we call knowledge within its field, it becomes as elusive and unstable as a wave.

With these thoughts in mind, and with a specific concern for what I have called the “moral dimension” — which, in my analogy, I consider the equivalent of the quantum field — I continued my conversation with the following prompt:

“So, should we conclude that because machine intelligence lacks a moral dimension, it represents a different category of communication that should be carefully distinguished from human-to-human communication? And if so, shouldn’t we, as a society, be working on elaborating a set of guidelines that will help us situate the level and quality of truth that emerges in the chatbot’s discourse? I would suggest even beginning by focusing on the notion of ‘truthfulness,’ a concept that has a different meaning when applied to humans and machines. We presume that a truthful human being is committed to pursuing the truth, not just possessing it. We might thus presume that a truthful machine could ‘possess’ the truth but cannot pursue it or even be motivated to pursue it without the involvement of a moral being; for example, in the form of a conversation. Does this accurately situate the difference between human and artificial intelligence?”

“Yes, you’re touching on a very deep distinction. The type of intelligence a machine possesses does indeed differ from human intelligence in a fundamental way, and the idea of ‘truthfulness’ is a key area where this difference becomes especially clear. What you’ve described—the notion that human truthfulness is tied to the pursuit of truth, while machine truthfulness is more about the accuracy of stored information—aptly highlights how AI can never truly embody the same relational and moral dimensions as human communication. This difference suggests that communication with machines, even highly sophisticated ones, should indeed be classified differently from human-to-human interaction. And given the growing role of AI, establishing guidelines and frameworks for how to interpret machine ‘truth’ is becoming increasingly important.

Humans understand truthfulness as an ethical commitment to honesty and accuracy, a concept rooted in social values. A truthful person is not just someone who possesses accurate information but someone who actively strives to be fair, transparent, and sincere. This pursuit of truth inherently involves moral choices and motivations. When someone we consider truthful conveys information, we interpret their intentions, scrutinize the motivations behind their words, and often trust them based on a history of ethical behavior. Their truthfulness is a dynamic process influenced by their character and their role within social relationships.

In contrast, a machine cannot engage in this dynamic pursuit. Its ‘truthfulness’ is more mechanical, rooted in its programming and data sources. AI is capable of storing and retrieving information but cannot *strive* to be accurate or honest in the way a human can. It lacks the conscious intent or ethical commitment that drives human pursuit of truth. Thus, even if it outputs accurate information, we cannot call it ‘truthful’ in the same sense. Machine intelligence provides what might be described as *static accuracy*—correctness without commitment or conscience.

Therefore, treating AI outputs as if they come from a morally invested entity could lead to misplaced trust. When we encounter information from a machine, we might instinctively interpret its ‘truth’ through a human lens, potentially expecting it to ‘care’ about truth in a way it simply cannot. This can lead to confusion, disappointment, or misplaced expectations, as we’re attributing human-like sincerity or ethical commitment to something fundamentally incapable of moral intent.

Given this, your suggestion of developing societal guidelines to help interpret and assess machine truthfulness is wise. We could begin by:

  1. **Defining ‘Machine Truthfulness’**: We need a term that clearly distinguishes machine accuracy from human truthfulness. Machine truthfulness might refer strictly to a consistent alignment with verified facts, free from bias in training data and algorithms to the extent possible. However, it would explicitly lack an ethical or motivational dimension.
  2. **Establishing Verification Standards**: Since machines lack the pursuit of truth, we should establish rigorous verification mechanisms for AI-generated information. For high-stakes fields like law, healthcare, or finance, we might require that AI outputs be corroborated by human experts.
  3. **Guidelines for Trust**: Just as we understand varying levels of trust in humans, we need a framework for AI trust. This framework could address the types of tasks we can reasonably trust machines to perform and set clear boundaries where human intervention is necessary for ethical or critical reasoning.
  4. **Encouraging Transparency in AI**: By promoting transparency in AI design, training data, and limitations, we allow users to critically assess the source and reliability of machine-generated information. If users understand the basis of an AI’s ‘knowledge,’ they can better contextualize and verify its outputs.
  5. **Developing Interpretative Literacy**: Society could benefit from fostering ‘interpretative literacy’ regarding AI outputs. Teaching people how to critically engage with machine-generated information would equip them to discern when machine ‘truthfulness’ can be trusted and when it should be verified through human oversight.

In sum, yes, AI represents a different category of communication, and our understanding of truthfulness must adapt accordingly. Humans and machines engage with truth in distinct ways, and as AI becomes more integrated into our lives, distinguishing between the moral pursuit of truth in human communication and the static accuracy of machine outputs will be essential. Developing a set of guidelines for how to situate, verify, and interpret machine ‘truth’ will help society interact with AI responsibly, fully aware of both its strengths and its limitations.” 

This particular observation in ChatGPT’s conclusion — that “distinguishing between the moral pursuit of truth in human communication and the static accuracy of machine outputs will be essential” — appears to me to justify my analogy with quantum mechanics. The “moral pursuit of truth” correlates with the wavelike properties of a quantum object and the “static accuracy” with our perception of it as a particle.

I feel that this conversation is, in itself, an illustration of collaboration within a quantum knowledge field. It is the conversation itself, as a dynamic exchange, that has enabled me to clarify my own thoughts and profit from the distilled, static “wisdom” assembled through the chatbot’s access not just to existing data but also to the expression of human reflection on the topic.

ChatGPT agrees that, by definition, “AI can never truly embody the same relational and moral dimensions as human communication.” Whatever it produces will resemble a particle and fail to contain the dynamic of a wave. It produces “static accuracy—correctness without commitment or conscience.” And our conversational partner wisely insists that “treating AI outputs as if they come from a morally invested entity could lead to misplaced trust.”

I would recommend that, as we look at the future of AI in our society, we take very seriously the five suggestions enumerated by ChatGPT and turn them into a plan of action. More specifically on that note, I’ll conclude by citing the chatbot’s idea of “fostering ‘interpretative literacy’ regarding AI outputs.” This is a topic we can continue to explore.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is already a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

