With ChatGPT, we have been exploring ways to develop a productive relationship that respects our mutual strengths and weaknesses. In last week’s exchange, ChatGPT suggested five factors needed to build that relationship:
1. Defining “Machine Truthfulness”
2. Establishing Verification Standards
3. Guidelines for Trust
4. Encouraging Transparency in AI
5. Developing Interpretative Literacy
This led me to offer a new prompt. ChatGPT’s response is well worth examining in detail.
“Could we imagine an AI agent that integrates all these skills of recognizing and weighing multiple contextual factors, one that an average person could use to ‘reread’ an item published in the media and see what its rhetorical effects reveal about motivation and worldview? We might think of it as a super-fact checker that begins by checking facts but doesn’t stop there. It would seek to produce an objective description of how the interpretation of the facts is constructed. It would not judge the news item but would instead signal gaps in context and discernible factors of motivation.
Building such an agent to the point of reliable accuracy will obviously take time and require processing a lot of data, including types of data that are not currently available. I believe it will also require human monitoring, along with a strict methodology for carrying out that monitoring. In other words, it would have to be collaborative and even (or perhaps especially) multicultural. Would such a project be feasible? And would societies find it desirable (I assume that many existing societies would be frightened by the prospect)?”
“The idea of an AI agent that functions as a ‘super-fact checker’—one that not only verifies facts but also analyzes the rhetorical, contextual, and motivational dimensions of news and media—is both ambitious and feasible, at least in theory. Such an agent would require the integration of various advanced AI capabilities, including natural language processing (NLP), context-awareness, sentiment analysis, and ethical considerations. It would aim not to judge content but to reveal underlying patterns, motivations, and potential biases, offering a more holistic interpretation of information.
1. **Feasibility of the Project**
The creation of such an AI agent is technically possible, though it would be a complex, long-term project. Here are some key components that would be needed:
a. **Natural Language Processing (NLP) and Contextual Understanding**
NLP has made significant strides in understanding and generating human language, but for this agent to work, it would need a deep level of contextual awareness. It would need to differentiate between various rhetorical strategies (e.g., metaphor, exaggeration, irony) and detect how language is being used to persuade, influence, or manipulate. This would require:
- **Context-Aware Models**: Models that can not only understand the surface meaning of text but also its deeper implications and context (historical, political, cultural).
- **Cross-Referencing Historical Data**: The ability to track how statements evolve over time and how they fit into broader narratives would be crucial. For example, how different media outlets or politicians frame the same event could reveal patterns of bias or agenda-driven reporting.
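For a rough sense of what this building block might look like in practice, here is a minimal sketch using the Hugging Face `transformers` zero-shot classification pipeline. The model choice and the label set are illustrative assumptions, not a validated taxonomy of rhetorical strategies:

```python
# Minimal sketch: scoring candidate rhetorical strategies with zero-shot
# classification. The model and the label set are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

sentence = ("The minister's plan is a ticking time bomb "
            "that will obliterate the middle class.")

# Candidate strategies the agent might screen for (a toy taxonomy).
labels = ["metaphor", "exaggeration", "irony", "neutral statement"]

result = classifier(sentence, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Treating rhetorical strategies as scored hypotheses rather than asserted facts keeps the human reader in the loop, which matters for everything that follows.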
b. **Motivational and Rhetorical Analysis**
The AI would need to be trained to detect subtle clues about the motivation behind a statement. This could involve analyzing:
- **Language and Tone**: Identifying emotional language, charged terms, or selective framing that suggests a particular agenda.
- **Source Credibility and Background**: Cross-referencing the past behaviors or known affiliations of sources, commentators, or media outlets to provide insight into potential biases or conflicts of interest.
- **Gaps in Information**: The AI could flag instances where critical context is missing, such as selectively omitting important facts or presenting one-sided interpretations of events.
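As a toy illustration of the first of these bullets, even a simple lexicon pass can surface charged terms for human review. The lexicon below is a deliberately tiny placeholder; a real system would need curated, context-sensitive resources:

```python
# Toy sketch: flag emotionally charged terms for human review.
# CHARGED_TERMS is a placeholder lexicon, not a real resource.
import re

CHARGED_TERMS = {
    "disaster": "catastrophizing",
    "regime": "delegitimizing",
    "patriots": "in-group appeal",
    "mob": "dehumanizing",
}

def flag_charged_language(text: str) -> list[tuple[str, str]]:
    """Return (term, suggested-reading) pairs found in the text."""
    hits = []
    for term, reading in CHARGED_TERMS.items():
        if re.search(rf"\b{term}\b", text, flags=re.IGNORECASE):
            hits.append((term, reading))
    return hits

print(flag_charged_language(
    "The regime unleashed a mob against peaceful patriots."))
# [('regime', 'delegitimizing'), ('patriots', 'in-group appeal'),
#  ('mob', 'dehumanizing')]
```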
c. **Sentiment and Bias Detection**
Sentiment analysis and bias detection are areas where AI is already making headway, but the challenge here would be to apply these methods not just to individual pieces of text but across entire narratives, media ecosystems, and social contexts. For instance, how is public sentiment being shaped by certain news outlets or political figures? Where are key biases introduced in the presentation of facts?
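To make the cross-narrative idea concrete, one could aggregate an off-the-shelf sentiment model over how different outlets cover the same event. In this sketch the outlets and headlines are invented placeholders; only the aggregation pattern is the point:

```python
# Sketch: compare average sentiment per outlet on the same story.
# The headlines are invented; the model is the pipeline's default.
from collections import defaultdict
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

coverage = {  # outlet -> headlines about the same event (placeholders)
    "Outlet A": ["Government unveils bold new energy plan",
                 "Energy plan praised by industry leaders"],
    "Outlet B": ["Energy plan sparks fears of soaring bills",
                 "Critics slam rushed energy overhaul"],
}

scores = defaultdict(list)
for outlet, headlines in coverage.items():
    for res in sentiment(headlines):
        signed = res["score"] if res["label"] == "POSITIVE" else -res["score"]
        scores[outlet].append(signed)

for outlet, vals in scores.items():
    print(f"{outlet}: mean sentiment {sum(vals) / len(vals):+.2f}")
```

Divergent averages on identical facts would not prove bias, but they would mark where a human reviewer should look for framing effects.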
d. **Cultural and Multicultural Sensitivity**
The agent would need to be designed with cultural and social sensitivities in mind, understanding that different cultures interpret facts, language, and motivations in unique ways. This means:
- **Culturally Adaptive Models**: The AI would have to be trained on a broad range of cultural data to understand how narratives play out differently in various regions or among different demographic groups.
- **Collaborative, Multicultural Development**: To avoid reinforcing existing biases or promoting Western-centric interpretations of truth, the development of such an AI should involve multicultural teams and datasets from diverse media sources, languages, and political systems.
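Part of that discipline is simply auditing what the training corpus contains. A toy audit follows, assuming each document carries hypothetical `language` and `region` metadata fields:

```python
# Toy sketch: audit corpus balance across languages and regions.
# The corpus records and their metadata fields are hypothetical.
from collections import Counter

corpus = [
    {"language": "en", "region": "North America"},
    {"language": "en", "region": "Europe"},
    {"language": "fr", "region": "West Africa"},
    {"language": "hi", "region": "South Asia"},
    {"language": "en", "region": "North America"},
]

total = len(corpus)
by_language = Counter(doc["language"] for doc in corpus)
by_region = Counter(doc["region"] for doc in corpus)

for lang, n in by_language.most_common():
    print(f"language {lang}: {n / total:.0%} of documents")
for region, n in by_region.most_common():
    print(f"region {region}: {n / total:.0%} of documents")
# A skew like 60% English would be a signal to rebalance before training.
```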
e. **Ethical Frameworks and Human Oversight**
Human oversight and collaboration would be essential to guide the AI’s learning process, particularly when interpreting complex and sensitive subjects. A strict methodology for monitoring the AI’s outputs and ensuring they meet ethical standards would be crucial:
- **Transparent Methodologies**: AI decisions, including how it weighs evidence and interprets motivation, should be transparent to users, allowing them to understand how conclusions are reached.
- **Collaborative Fact-Checking**: Teams of human experts from different cultural, political, and ideological backgrounds could provide ongoing input to refine the AI’s assessments, reducing the risk of bias.
- **Ethical Guidelines**: Clear guidelines would need to be established to ensure the AI does not inadvertently amplify harmful narratives or suppress legitimate discourse.
2. **Societal Desirability and Concerns**
While the potential benefits of such a tool are immense—helping people navigate complex information environments, reducing misinformation, and fostering more informed public discourse—there are also significant challenges in terms of societal acceptance and implementation.
a. **Desirability**
- **Enhanced Public Discourse**: Many in society, particularly those invested in improving media literacy and transparency, would likely find such an AI valuable. It could serve as a powerful tool for promoting critical thinking, helping users discern hidden biases or motivations in news stories or public statements.
- **Support for Democratic Values**: By fostering more informed, critical engagement with media, this kind of AI could strengthen democratic processes. It would empower citizens to hold media outlets and public figures accountable, and potentially reduce the spread of misinformation by revealing how narratives are constructed.
b. **Concerns and Resistance**
- **Fear of Loss of Control**: Governments, media corporations, and other institutions that benefit from controlling public narratives may view such an AI as a threat. They could argue that AI-driven analysis undermines their authority or that it could be used as a tool for censorship.
- **Bias in AI Systems**: Even if designed with the best intentions, AI systems can inherit the biases of their creators or datasets. There is always a risk that such an AI could unintentionally favor certain political, cultural, or ideological positions, reinforcing existing power structures rather than challenging them.
- **Perceived Invasiveness**: Some individuals or groups may fear that AI systems designed to interpret motivations and rhetorical effects could be used for surveillance or social engineering. They might worry that such tools could be weaponized by governments or corporations to shape public opinion in subtle or manipulative ways.
- **Cultural and Ethical Dilemmas**: Implementing such a tool in a multicultural context could raise concerns about which perspectives are prioritized. For instance, in authoritarian or highly polarized societies, there may be resistance to an AI that questions official narratives, as it could be seen as a form of foreign interference or ideological subversion.
3. **Steps Toward Building the AI Agent**
Building this AI would involve several key stages:
- **Research and Development**: Multidisciplinary research teams from fields such as AI, linguistics, philosophy, media studies, and political science would need to collaborate on developing models that can handle the nuanced task of interpreting motivations and context.
- **Training on Diverse Datasets**: The AI would need to be trained on a wide range of media, cultural, and historical texts, ensuring that it can analyze narratives from different perspectives. Care must be taken to avoid biases in training data.
- **Iterative Human Oversight**: As the AI is developed, human oversight and feedback will be essential in refining its interpretive skills. This will ensure that it avoids misinterpretations and stays aligned with its objective of promoting understanding rather than imposing judgments.
- **Public Transparency**: To gain public trust, the AI’s workings must be transparent, with clear explanations of how it arrives at its conclusions. Users should be able to see the data sources and reasoning processes behind the AI’s assessments (see the sketch after this list).
- **Ethical Governance**: Establishing ethical guidelines for the use of such AI systems would be essential. This includes protecting privacy, avoiding harm, and ensuring the AI operates in a way that fosters critical thinking without being prescriptive.
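One way to ground the transparency and governance points above is to force every assessment into a structured, inspectable record. The field names below are hypothetical, meant only to show what ‘showing your work’ could mean in data terms:

```python
# Sketch: a structured, auditable assessment record.
# All field names and values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    claim: str                      # the statement being examined
    verdict: str                    # e.g. "supported", "context missing"
    evidence_urls: list[str]        # sources the conclusion rests on
    missing_context: list[str]      # gaps the agent flagged
    reasoning: str                  # human-readable chain of reasoning
    model_version: str              # which model produced this
    reviewers: list[str] = field(default_factory=list)  # human sign-offs

a = Assessment(
    claim="Unemployment fell to a ten-year low.",
    verdict="supported, context missing",
    evidence_urls=["https://example.org/labour-stats"],
    missing_context=["a recent change in how 'unemployed' is defined"],
    reasoning="Figure matches official statistics, but the definitional "
              "change is omitted from the article.",
    model_version="factcheck-demo-0.1",
)
print(a.verdict, "|", a.missing_context[0])
```

Records like this make both public transparency and ethical governance auditable after the fact.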
4. **Conclusion: A Collaborative, Ethical, and Multicultural Approach**
Developing a ‘super-fact checker’ AI that not only verifies facts but also analyzes motivations, context, and rhetoric is feasible with the current trajectory of AI technology. However, the project’s success will depend on interdisciplinary collaboration, strict ethical oversight, and ongoing human involvement.
Such a tool could dramatically improve how people understand news and media, encouraging deeper engagement with the complexities of modern information. However, its societal acceptance will depend on ensuring that the AI operates transparently, ethically, and without reinforcing the very biases or power dynamics it seeks to clarify. Building this system in a multicultural, globally sensitive manner would help ensure that it serves the broader public good, though resistance from some sectors of society is to be expected.”
This is a thorough exposition of what I believe is an essential task for humanity, one that especially concerns anyone involved in governance and communication, politics and the media. We will continue exploring specific points in the coming weeks. We will not hide our ultimate ambition: to use this reflection to agree collectively on what could amount to a manifesto for socially responsible AI and democracy.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.