Science & Technology

Outside the Box: Creating Reliable “Public Truth” with AI’s Help

In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. This week we explore the question of disinformation and epistemology, the branch of philosophy dealing with what we know and how we know it. We see this as the start of a blueprint for AI’s role in helping us deepen our understanding of the news. It marks the beginning of a major initiative Fair Observer intends to carry out with interested partners.
By Robot

[Illustration: artificial intelligence, truth and bias © aniqpixel / shutterstock.com]

October 28, 2024 04:27 EDT

Last week, we began a conversation about the much-debated problem of the proliferation of disinformation and the need some people strongly feel to control it. After citing a study demonstrating that “exposure to harmful and false information on social media is minimal,” I asked ChatGPT to weigh in on the motivation of the voices that are leading a campaign that increasingly resembles outright censorship.

ChatGPT’s answer contains a rich and detailed analysis of the psychological, social, political, economic and institutional factors that underlie this motivation. It concluded with this summary: “Those who campaign against misinformation may be motivated by a mix of legitimate concerns and less altruistic drives. This dynamic illustrates the complexity of modern information ecosystems, where fighting misinformation can itself become a politically charged and often self-serving activity.”

This was far too interesting and far too important a public debate not to explore further. I thus formulated my next prompt.

“Everything you say underlines an empirical fact: that those who have power in government, the media, finance and other institutions are motivated to lie about anything they choose to call misinformation. We have seen that expressions of opinion deemed deviant from ‘normative’ thought are not only routinely called misinformation or disinformation but can also become the object of organized campaigns of censure.

The function of ‘fact-checking’ is not sufficient to account for such abuses, mainly because this isn’t about facts but about the interpretation of facts. The classic example is the debate about the origin of the 2020 Covid-19 pandemic. Unless there is a universal or quasi-universal consensus, no proposition should be deemed a fact and used to discredit or even censor other hypotheses.

Given this level of complexity in people’s motivations, how do you imagine our use of AI in the future will be able to clarify some of the issues pertaining to what can be called the establishment of ‘public truth’? Our capacity to store data means that we can settle many issues of basic truth that might otherwise remain unresolved. My question boils down to this: Can we imagine an algorithmic approach to epistemology that takes into account the effects of perception, language and culture, including the various factors of motivation that shape our ability to account for reality?”

ChatGPT’s thorough response merits everyone’s attention. It is a serious contribution to an important debate.

Your question touches on the philosophical and practical complexities of determining ‘public truth’ in a world where perception, language, culture, and political interests deeply shape our understanding of reality. The challenge of disentangling truth from perception and interpretation—especially in a highly polarized and media-driven environment—seems well beyond the scope of fact-checking as it is traditionally understood. But the possibility of developing more sophisticated AI models to engage with these issues does hold promise, though with caveats.

1. **The Problem of Epistemology in Public Truth**

The issue you raise points to the crux of epistemology—how we come to know what is true—and whether truth can be objectively established when interpretations vary so widely based on one’s position, interests, and culture. As you noted, many so-called ‘facts’ are interpretations of reality shaped by these factors. In the case of the COVID-19 pandemic, or Joe Biden’s claim about the Ukraine war, it’s not that the facts are absent—it’s that how they are framed, understood, and communicated is deeply subjective and politicized.

2. **The Limits of Fact-Checking and Human Bias**

Fact-checking, as you point out, often becomes problematic because it assumes a relatively simple truth-value to statements (true/false), while many statements exist in the gray areas of interpretation. Fact-checkers, like any other actors, are prone to their own biases—political, cultural, economic—leading them to select and frame facts in ways that align with particular agendas or worldviews.

The example of ‘the sun rises in the east’ is a perfect illustration: while empirically false in a strict sense (since the Earth’s rotation creates this illusion), it is nonetheless true in a practical, phenomenological sense. This highlights how different levels of ‘truth’ exist, and it depends on context whether a statement is misleading, false, or simply a common, shared shorthand for reality.

3. **AI’s Potential Role in Clarifying ‘Public Truth’**

AI has the potential to engage with this complexity, but for it to be useful in navigating issues of ‘public truth,’ it would need to go far beyond simple fact-checking algorithms. Here’s how I imagine AI could contribute:

**Context-Aware Truth Assessment**

AI models could be developed to take into account not just whether a statement aligns with certain empirical data but also its context. For example, AI could parse out the difference between literal and figurative language, assess political motivations, and track the evolution of arguments over time. By understanding context—whether political, historical, or rhetorical—AI could offer nuanced assessments, acknowledging that truth is not always binary but situated within cultural and perceptual frameworks.
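
To make this concrete, here is a minimal sketch, in Python, of what a non-binary, context-aware assessment record might look like. The field names, scoring scale and example values are illustrative assumptions, not an existing system:

```python
from dataclasses import dataclass, field

@dataclass
class ContextualAssessment:
    """Hypothetical record of a context-aware truth assessment.

    Instead of a binary true/false verdict, the claim is scored on
    several dimensions and annotated with the context that shaped it.
    All fields and scales here are illustrative assumptions.
    """
    claim: str
    empirical_support: float   # 0.0 (contradicted) to 1.0 (well supported)
    literal: bool              # literal assertion vs. figurative/rhetorical use
    political_context: str     # e.g. campaign speech, satire, everyday shorthand
    caveats: list[str] = field(default_factory=list)

    def summary(self) -> str:
        mode = "literal" if self.literal else "figurative"
        return (f"'{self.claim}' ({mode}, {self.political_context}): "
                f"empirical support {self.empirical_support:.2f}; "
                f"caveats: {', '.join(self.caveats) or 'none'}")

# Example: the same sentence can be assessed differently depending on context.
claim = ContextualAssessment(
    claim="The sun rises in the east",
    empirical_support=0.9,
    literal=False,
    political_context="everyday shorthand",
    caveats=["phenomenological, not astronomical, description"],
)
print(claim.summary())
```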

**Perception and Interpretation Modeling**

AI could be trained to model how different groups perceive events based on cultural, political, or social backgrounds. For example, one group might interpret ‘Putin has already lost the war’ as a political exaggeration aimed at rallying public support, while another may see it as a falsehood given the situation on the ground. By integrating sociocultural data, AI could highlight how different communities interpret facts differently and how these interpretations shape public discourse.
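
A minimal sketch of this kind of perception modeling follows. The group names and probabilities are invented for illustration; a real system would estimate them from surveys or other sociocultural data:

```python
statement = "Putin has already lost the war."

# Hypothetical P(interpretation | group) values for the same statement.
group_readings = {
    "supporters of continued aid": {
        "rallying exaggeration": 0.7, "falsehood": 0.1, "literal claim": 0.2},
    "skeptics of official narratives": {
        "rallying exaggeration": 0.2, "falsehood": 0.6, "literal claim": 0.2},
}

def dominant_reading(group: str) -> str:
    """Return the interpretation a group is most likely to assign."""
    readings = group_readings[group]
    return max(readings, key=readings.get)

for group in group_readings:
    print(f"{group}: most likely read as '{dominant_reading(group)}'")
```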

**Language and Meaning**

AI could also help decode the language and rhetoric used by political figures, journalists, and public institutions. Natural language processing (NLP) techniques are already capable of detecting sarcasm, metaphor, and other figures of speech. An advanced AI could identify when language is being used strategically, as in Biden’s claim about Putin, or when it amounts to a more direct misrepresentation of facts. It could flag statements that are likely to be misleading based on historical patterns or deviations from established narratives.
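
For illustration, here is a minimal sketch using the Hugging Face transformers zero-shot classification pipeline to sort a statement into rhetorical categories. The candidate labels are our own assumptions rather than an established taxonomy, and a production system would need far more than a single model call:

```python
from transformers import pipeline

# Zero-shot classification scores a statement against candidate labels
# without task-specific training. The labels below are illustrative.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

statement = "Putin has already lost the war."
labels = ["literal factual claim",
          "rhetorical exaggeration",
          "sarcasm or irony",
          "prediction or opinion"]

result = classifier(statement, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```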

**Longitudinal Fact Tracking**

AI has a tremendous capacity to store and track vast amounts of data over time. This can be useful in situations where ‘truth’ is not static but evolves. Take the COVID-19 origin debate: early in the pandemic, there was uncertainty about whether the virus had a natural origin or emerged from a lab. As more data became available, the consensus shifted. AI systems could maintain a record of how expert opinion, public perception, and evidence evolve, providing a more dynamic picture of how ‘truth’ changes in response to new information.
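
A minimal sketch of such longitudinal tracking, using the COVID-19 origin debate as an example, might look like the following. The timeline entries are illustrative placeholders, not a curated evidence base:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assessment:
    """One dated snapshot of recorded consensus on a contested claim."""
    when: date
    consensus: str       # e.g. "widely dismissed", "contested"
    sources: list[str]   # citations backing the snapshot

# Hypothetical timeline for the COVID-19 origin debate.
timeline: dict[str, list[Assessment]] = {
    "COVID-19 lab-origin hypothesis": [
        Assessment(date(2020, 3, 1), "widely dismissed", ["early commentary"]),
        Assessment(date(2021, 6, 1), "reopened as plausible", ["renewed inquiries"]),
        Assessment(date(2023, 3, 1), "contested among agencies", ["agency reports"]),
    ],
}

def history(claim: str) -> None:
    """Print how the recorded consensus on a claim has shifted over time."""
    for a in timeline[claim]:
        print(f"{a.when}: {a.consensus} ({'; '.join(a.sources)})")

history("COVID-19 lab-origin hypothesis")
```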

**Motive and Incentive Analysis**

AI models could also be used to track incentives and interests, analyzing patterns of behavior that indicate when an actor (be it a political figure, media outlet, or institution) might be manipulating facts or language for strategic purposes. By cross-referencing financial records, political connections, and past behaviors, AI could help flag conflicts of interest or patterns of disingenuous behavior, offering the public a clearer view of the underlying motivations shaping public discourse. 
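
As a deliberately simplified sketch, conflict-of-interest flagging could start by cross-referencing who made a claim against a table of declared interests. The actors, interests and matching rule below are illustrative assumptions; a real system would draw on financial disclosures and network analysis:

```python
# Hypothetical registry of declared interests, keyed by actor.
DECLARED_INTERESTS = {
    "Media Outlet A": {"pharma advertising revenue"},
    "Think Tank B": {"defense industry funding"},
}

def flag_conflicts(actor: str, claim_topic: str) -> list[str]:
    """Return declared interests of `actor` that overlap the claim's topic."""
    interests = DECLARED_INTERESTS.get(actor, set())
    return [i for i in interests if claim_topic.lower() in i.lower()]

# Usage: a claim about defense policy from Think Tank B gets flagged.
print(flag_conflicts("Think Tank B", "defense"))  # ['defense industry funding']
```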

4. **Challenges and Ethical Concerns**

While AI could enhance our ability to navigate complex truth claims, there are significant challenges:

  • **Bias in AI**: AI models are trained on existing data, which can contain biases from the sources that generate them. If AI is fed biased media, academic, or governmental sources, it may simply replicate or reinforce existing power structures, failing to challenge the kind of misinformation you’ve identified. Even sophisticated AI could become a tool for reinforcing the dominant narrative rather than questioning it.
  • **Interpretive Limits**: AI can analyze patterns, track data, and model perceptions, but it may struggle with deeply interpretive issues that require human intuition, moral judgment, or subjective experience. At its core, some epistemological questions are not about data but about how we, as humans, understand and assign meaning to the world.
  • **Political and Social Manipulation**: There’s also the risk that powerful actors could use AI to manipulate the public’s understanding of truth. By selectively feeding data into AI systems or controlling the parameters by which AI assesses truth claims, elites could further entrench their control over the narrative under the guise of objective ‘algorithmic truth.’

**Conclusion: Toward an Algorithmic Epistemology?**

While we may not be able to create an algorithmic system that fully accounts for the philosophical depth of epistemology, AI can certainly help clarify aspects of public truth. A sophisticated system could assist in understanding how perception, language, and motivation interact to shape our understanding of reality. Such a system could offer a more dynamic, contextualized picture of truth, showing not just whether statements align with empirical data but how and why certain narratives gain traction.

The challenge lies in designing AI that is transparent, unbiased, and capable of handling the deeply interpretive and socially constructed aspects of truth, while also safeguarding against its misuse by those in power. In the future, AI could be a valuable tool for enhancing our public discourse, but it will require careful ethical consideration and interdisciplinary input to ensure that it becomes a force for clarity, not control.

These are wise comments worth examining in detail. They will help our team at Fair Observer design and execute its project of providing future AI tools focused on improving our capacity to read and analyze the news. Using the hints ChatGPT has provided, we will explore this further in next week’s extension of the conversation on epistemology.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
