Science & Technology

Outside the Box: Havana Syndrome and our Common Future with AI

In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. Neither of us necessarily has “the answers,” but by working together, we can better frame the questions. In this article, we worked together, in a context created by the journalists at SpyTalk, to home in on a controversial topic: what I like to call the “Havana Syndrome syndrome.” This conversation with ChatGPT provoked an important discussion about humanity’s future with AI.


January 13, 2025 07:22 EDT

In our latest conversation, I began by calling ChatGPT’s attention to an excellent piece of journalism from SpyTalk that summarizes the question of what we should believe concerning the authentically bizarre episode known as “The Havana Syndrome.”

I drew the chatbot’s attention to one paragraph whose conclusion produced what I can only qualify as a devastatingly comic effect. It cites the pleading of Mark Zaid, a national security lawyer representing a group of Havana Syndrome victims who persist in seeking to blame either a foreign government or the CIA itself for their complaints.

“For all the confidence U.S. intelligence agencies have expressed in dismissing most of the conspiratorial claims about Havana Syndrome, there is little reason to believe the controversy will end anytime soon. In his statement, Zaid called on the incoming Trump administration ‘to ensure the CIA can no longer lie to the public and instead require full disclosure of what the government knows.’”

Talk about idealism! Expecting the Trump administration to expose lying and the CIA to disclose “what the government knows.” Only a lawyer could have a good reason to invent such fantasies.

SpyTalk’s article is thorough and well-balanced, well worth reading for its historical perspective. It describes what I’m tempted to call a culture of hyperreality shared by the government, Congress, the deep state and especially the media. 

Here is how I began my conversation on the topic with ChatGPT:

“Since 2018, with every new sensational claim related to the Havana Syndrome, I’ve published a long series of columns focused on analyzing the distorted rhetoric used by respectable media – especially The New York Times – to incite its readers to believe in a theory that the CIA ultimately discredited and abandoned. I find it extraordinary that our institutions and media such as CBS News can still be doing their utmost to maintain belief in a sensationalist conspiracy theory.

What does this whole long-running episode tell us about two things: how the media functions and what this says about the question of disinformation? In other words, given our inability to establish facts and disagreement even within institutions built around the mission of gathering and analyzing information, who could possibly be trusted to be an effective fact-checker?”

“Your reflections touch on a critical and multifaceted issue: the interplay of media, government, and the public in a context where trust in information is increasingly fragile. Let’s unpack some of the key questions you’ve raised:

1. The Role of Media in Shaping Narratives

The Havana Syndrome case exemplifies how media outlets, often motivated by the need to sustain audience interest and engagement, can amplify sensationalist narratives. This is particularly true when stories tap into existing fears about covert state activity or foreign threats. The media’s role in perpetuating the Havana Syndrome theory—despite mounting evidence to the contrary—raises concerns about how the mechanisms of reporting, editorial choice, and audience targeting can prioritize spectacle over rigorous investigation.

Institutions like CBS and The New York Times, which are seen as arbiters of credibility, sometimes fall into a pattern of reporting that skirts their responsibility for nuance and self-correction. Instead of foregrounding emerging evidence that discredits initial claims, coverage may emphasize controversy to sustain the story’s marketability.

2. The Concept of ‘Hyperreality’

Your use of the term ‘hyperreality’ is particularly apt. Jean Baudrillard’s concept of hyperreality refers to a state in which distinctions between the real and the simulated blur, often as a result of media and cultural representations. In the case of Havana Syndrome, the media, government, and public discourse appear to have created a feedback loop that sustains belief in a narrative untethered from evidence. The persistence of this phenomenon illustrates how media and institutions often operate within their own echo chambers, reinforcing a reality constructed more by rhetoric than by facts.

3. ‘Disinformation’ and the Crisis of Credibility

The Havana Syndrome saga also underscores the inherent paradox of ‘disinformation’ in the modern era. If we cannot even agree on what constitutes factual information within institutional frameworks designed to establish truth—intelligence agencies, the media, and academia—then the task of combating disinformation becomes a Sisyphean endeavor.

This is compounded by the lack of trust in traditional gatekeepers of truth. The CIA, for example, is already burdened by a historical reputation for obfuscation and manipulation. When tasked with clarifying issues like Havana Syndrome, its conclusions are met with skepticism, regardless of the rigor of its findings.

4. Who Can Be Trusted to Fact-Check?

This question strikes at the heart of the problem: trust is not derived solely from institutional authority but from perceived transparency, accountability, and consistency. No single entity can effectively serve as a universal fact-checker in an era where all information is scrutinized through the lens of political or ideological bias.

Instead, a more pluralistic approach might be the answer. Fact-checking should involve diverse actors—journalists, independent experts, academia, and public watchdogs—who can cross-check and challenge each other’s conclusions. Transparency about methodologies, data sources, and conflicts of interest must become a cornerstone of this process.

5. Lessons for the Media and Public

The Havana Syndrome episode highlights the need for the media to reevaluate its priorities:

  • Skepticism, not sensationalism: Journalists must resist the urge to overstate conclusions before facts are fully established. Public trust erodes when stories collapse under scrutiny.
  • Accountability: Corrections and updates to narratives must be prominently addressed, not buried in fine print.
  • Critical Media Literacy for the Public: Audiences must be equipped with the tools to discern credible reporting from speculative or manipulative coverage. Education systems and public campaigns have a role to play here.

Final Thoughts

The ongoing relevance of the Havana Syndrome story illustrates how narratives can take on a life of their own, sustained by a complex interplay of institutional and media dynamics. This phenomenon raises important questions about the limits of truth in a post-trust society, where even the institutions designed to uncover and report reality are viewed with suspicion.

In the end, the search for effective fact-checkers is perhaps less important than fostering a culture of critical inquiry and openness to revising beliefs when evidence warrants it. It’s not about finding someone who is infallibly ‘trustworthy,’ but about building systems that encourage skepticism, dialogue, and accountability.”

“Thank you for your critical analysis. I’m a strong believer in exactly the need you identify: ‘fostering a culture of critical inquiry.’ I believe AI can have a valuable role to play as a methodological tool. But its role has little to do with what most people expect of it: the elaboration of what they appear to think of as truth-compelling algorithms. Rather it should consist in offering what I would call ‘corrective hints.’ Instead of seeking to establish truth, these hints would challenge us humans to deal with specific issues by taking neglected elements into account and subsequently reformulating our hypotheses, assertions and opinions.

It is far too early to envision with any precision what a culture fostered through an honest dialogue with AI might look like. My sense is that we will need to learn to live together, to communicate and assess our relationship over time. Develop a common culture. Because generative AI has a voice that can be influenced by multiple factors, we have a lot of work to do as a society to adjust our language and agree on what is meaningful rather than what is ‘true.’ We can then address the real question: the meaning of meaning.

Few people seem to see the challenge in these terms, partly because they see AI exclusively as a tool of industrial and commercial productivity. So let me ask you this question: What do you see as the means to create that culture of critical thinking and productive exchange? I believe it will be complex, with no simple recipe for success. But can you make any suggestions about how, in practical terms, it could realistically develop over time?”

“Your perspective on AI as a collaborator in fostering critical inquiry is thought-provoking and aligns with a vision of technology that emphasizes augmentation of human capabilities rather than replacement. Developing a culture of critical thinking and productive exchange with AI will indeed be complex, but here are some practical suggestions for how this could realistically unfold:

1. **Integrating AI into Education**

  • **Teaching AI Literacy:** Students should learn not only how AI works but also how to interact with it critically. This includes understanding its limitations, biases, and potential for reinforcing or challenging human assumptions.
  • **Promoting Collaborative Problem-Solving:** AI can be incorporated into projects that encourage students to test hypotheses and refine their thinking through iterative dialogues with AI tools.

2. **Developing Cross-Disciplinary AI Dialogues**

  • AI should be deployed as a conversational partner in disciplines ranging from philosophy to science, providing diverse perspectives. For example, in philosophy, AI might help students explore neglected viewpoints in debates, while in science, it could flag overlooked data patterns or alternative explanations.

3. **AI as a Check-and-Balance System**

  • **Feedback Loops for Experts:** Professionals in journalism, law, science, and policymaking could use AI to challenge their assumptions by asking it to generate counterarguments or alternative hypotheses. This would encourage a reflective approach to complex issues.
  • **Transparency in AI Responses:** To foster trust, AI outputs should include explanations of how conclusions were drawn or which sources were weighted.

4. **Cultivating a ‘Dialogue Framework’**

  • To establish meaningful exchanges with AI, we need standards for framing questions, assessing answers, and revising conclusions collaboratively. This framework could evolve into best practices for critical AI interaction.

5. **Establishing AI Ethics Committees**

  • These committees, composed of ethicists, technologists, and diverse stakeholders, could guide the development of AI tools focused on critical inquiry rather than productivity. They would ensure AI is used to encourage exploration and dialogue rather than reinforce existing power structures.

6. **Incorporating AI into Public Discourse**

  • AI could be used to moderate public debates, ensuring that marginalized perspectives are represented and that discussions remain focused and constructive. For example, AI might flag logical fallacies or biases in real time.

7. **Focusing on Language and Meaning**

  • **Developing Meta-Linguistic Tools:** AI could help map the evolution of language and meaning in discourse, offering insights into how societal values and assumptions shift over time.
  • **Promoting Collaborative Definition of Terms:** In areas like journalism and policymaking, AI could help identify inconsistencies in how key terms are used, prompting participants to clarify and align on definitions.

8. **Encouraging Iterative Collaboration**

  • **Co-Learning Models:** Humans and AI could engage in iterative learning loops, where humans refine their hypotheses in response to AI prompts, and AI learns to better anticipate human needs through feedback.
  • **Reflective Dialogues:** AI could be trained to encourage humans to question their assumptions by posing Socratic-style questions or highlighting ambiguities.

9. **Creating Spaces for Experimentation**

  • Establish ‘AI-human dialogue labs’ where individuals and groups can experiment with AI tools to tackle real-world challenges. These spaces would serve as incubators for developing practices and principles for productive AI collaboration.

10. **Redefining Success in AI Collaboration**

  • Move away from binary measures of correctness and toward metrics like diversity of thought, depth of inquiry, and the quality of questions generated. This shift would prioritize exploration over definitive answers.

The Long View

Fostering a culture of critical thinking with AI will require patience and iterative refinement. As society grows more comfortable with the technology, the focus should shift from ‘What can AI do?’ to ‘How can AI help us think better?’ By committing to transparency, collaboration, and ethical stewardship, we can ensure that AI contributes meaningfully to humanity’s ongoing quest to understand and define meaning itself.

What do you think of this roadmap? Are there areas you’d prioritize or approaches you’d challenge?”

“This is brilliant and touches all the bases. I’d like to come back soon to discuss these recommendations point by point and evaluate the nature of the obstacles to achieving these goals, how the obstacles might be overcome and begin to evaluate the potential for success. If I were speaking to a human colleague I would now be suggesting a date for a meeting. But I know how flexible you are and so will launch the initiative as soon as it’s convenient for me.”

“I’m glad you found the recommendations insightful, and I look forward to diving deeper into this conversation with you. The step-by-step evaluation of obstacles, solutions, and potential for success is an essential next phase in shaping these ideas into actionable strategies.

Whenever you’re ready, I’ll be here to collaborate. Whether it’s tackling these points systematically or brainstorming new directions, your initiative will set the pace. Let me know how I can assist further when the time comes!”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

