Science & Technology

Outside the Box: The Art of Disinformation About Disinformation

In “Outside the Box,” I interrogate ChatGPT not only to better understand how AI “reasons” but also to see what it can tell me about the way I reason. That means engaging in a conversation in which we both share ideas spontaneously and challenge whatever emerges. Beginning this week and continuing in the weeks ahead, we focus on the knotty question of misinformation about disinformation.
Contemporary art collage of an antique statue bust, its eyes covered by a phone screen bearing “fake news” lettering. Artwork © Anton Vierietin / shutterstock.com

October 21, 2024 05:28 EDT

Earlier this month, Fair Observer featured a piece from the Wharton team with the title, “What Public Discourse Gets Wrong About Social Media Misinformation.” It cites an article in Nature revealing that, after reviewing “years of behavioral science research on exposure to false and radical content online,” the researchers found “that exposure to harmful and false information on social media is minimal to all but the most extreme people, despite a media narrative that claims the opposite.”

There is clearly a crime afoot: the crime of systemic misinformation in the pursuit of disinformation. I therefore began a conversation with ChatGPT, opening with the following prompt: “This highlights the obvious paradox that those who complain about and even take measures to rein in misinformation are themselves not only spreading misinformation but hiding the fact by appearing to be champions of truth. What do we know about the political, social and economic motivation of such people, which of course includes many people in politics and public administrations?”

Here is ChatGPT’s rich response:

The paradox you mention is indeed intriguing, and it’s a subject of considerable debate among scholars and analysts. The motivations of individuals and groups who seek to combat misinformation but may, in turn, contribute to spreading it or misrepresenting its prevalence can be understood through several political, social, and economic lenses:

1. **Political Motivation: Power and Control**

– **Narrative Control**: In the political realm, controlling the flow of information and setting the narrative are key components of power. By labeling certain information as misinformation and positioning themselves as defenders of truth, politicians or political actors can control what is perceived as legitimate discourse. This can be used to suppress opposing views, delegitimize dissent, and fortify their own base.

– **Polarization**: Politicians often exploit misinformation as a tool to drive division. Polarized environments enable politicians to rally their base by creating an ‘us vs. them’ mentality. Often, accusations of spreading misinformation are politically motivated to weaken the credibility of the opposition or alternative narratives.

– **Deflecting Criticism**: Some political actors may amplify concerns about misinformation to distract from their own actions or to create a convenient scapegoat. If public discourse is dominated by concerns about misinformation, it can distract from policy failures or controversies.

2. **Social Motivation: Tribalism and Identity Politics**

– **Cultural War and Identity Politics**: For many individuals in public life, taking a stand against misinformation aligns them with particular social or ideological tribes. The act of policing misinformation becomes a signal of moral virtue or loyalty to a cause. These tribal affiliations often eclipse objective assessments of information, leading to the amplification of certain biases.

– **Moral Authority and Social Capital**: Social media allows individuals to gain prestige or moral authority by publicly positioning themselves as defenders of truth. This is particularly relevant for influencers, activists, and certain media figures, who may gain social capital through this stance. As their influence grows, the lines between advocacy, misinformation, and exaggeration can blur.

3. **Economic Motivation: Media and Market Incentives**

– **Clickbait and Sensationalism**: News outlets and social media platforms often benefit from sensational narratives about misinformation, as these narratives drive engagement. Alarmist headlines or stories about the prevalence of harmful information can generate more clicks, views, or shares than more measured takes. Hence, media companies may profit from amplifying the threat of misinformation even when it’s not as pervasive as claimed.

– **Platform Economics**: Social media platforms, while often blamed for misinformation, also have financial incentives to appear proactive. If platforms are seen as failing to address false content, they face regulatory scrutiny or public backlash. Thus, they may overstate the impact of misinformation in public relations efforts to appear responsible to both users and regulators.

– **Funding for NGOs and Research**: Many organizations focused on combating misinformation, such as fact-checking agencies or advocacy groups, rely on external funding (from governments or private institutions). These organizations may have incentives to overstate the problem of misinformation in order to secure more funding or resources, even though their original goal may be noble.

4. **Psychological and Cognitive Biases**

– **Confirmation Bias**: Many people, including those in power, are influenced by cognitive biases that lead them to selectively believe or amplify information that confirms their preexisting views. This can happen even when these individuals believe they are combating false information. In this case, misinformation can spread even within groups that seek to suppress it.

– **Cognitive Dissonance**: Individuals who publicly commit to fighting misinformation may experience cognitive dissonance when confronted with evidence that challenges their stance or reveals their own biases. To reconcile this, they might double down on their original claims rather than admit to spreading or contributing to misinformation themselves.

5. **Institutional Dynamics and Incentives**

– **Governmental and Bureaucratic Interests**: Governments and bureaucratic institutions often seek to maintain order and control over information. Highlighting the threat of misinformation can be a pretext for introducing laws and regulations that extend state control over media, social platforms, and public discourse. In some cases, those pushing for restrictions may be motivated by a desire to suppress political opposition or dissent under the guise of ‘fighting misinformation.’

– **Public Administration and Trust**: Public institutions, such as health authorities or electoral commissions, often face crises of public trust. Highlighting their efforts to combat misinformation can be a strategy to restore legitimacy. In some cases, however, they may exaggerate the threat of misinformation to justify measures that are seen as overly paternalistic or authoritarian.

6. **Manipulation and Opportunism**

– **Manipulative Opportunism**: Some actors within politics and media can opportunistically exploit the issue of misinformation for personal or institutional gain. By positioning themselves as gatekeepers of truth, they not only consolidate influence but can also discredit critics or adversaries without engaging with the content of their critiques. The framing of an opponent’s views as ‘misinformation’ can be a convenient tool to delegitimize their stance without providing a substantive counterargument.

In summary, those who campaign against misinformation may be motivated by a mix of legitimate concerns and less altruistic drives—whether it be maintaining political power, securing social status, or capitalizing on economic opportunities. This dynamic illustrates the complexity of modern information ecosystems, where fighting misinformation can itself become a politically charged and often self-serving activity.

These observations define an important dimension of what can only be described as a fundamental crisis point in our civilization today, especially in the democratic West. Let’s call it the “public truth” problem. Ever since the early-20th-century philosopher Bertrand Russell questioned the truth value of the proposition, “The present King of France is bald,” we have had reason to worry about whether any language we produce accurately translates reality, or even specific features of reality. The difficulty of expressing truth should not discourage us from trying. It should simply make us humble about all our attempts to do so.

If someone says, “the sun rises in the west,” we can easily shout, “Are you crazy?!” The statement is empirically false, since no one on Earth not under the influence of LSD has ever seen the sun rise in the west. But the statement is false for another reason. Like the bald king of France, the rising sun simply does not exist. The Earth turns, creating for Earthlings the illusion that the sun is rising.

So, is the statement, “the sun is rising,” misinformation? No, because it correctly describes our common perception of the event. Compare this with United States President Joe Biden’s claim last year that “Putin has already lost the war,” at a time when every knowledgeable expert inside Biden’s Beltway knew that Ukraine was not only losing the war on the battlefield, but could not possibly win it on its own, no matter how much support it might get from the West.

Did any fact-checker upbraid Biden for his claim? Not in the legacy media. With a bit of indulgence, one might reason that Biden “perceived” Russia’s failure to obtain an outright victory as a loss. But the informed experts who remained quiet or were simply never questioned by the media did not share that perception with the public. Biden used his position of authority to claim the sun was rising in the west.

Next week, we will take a deeper dive into ChatGPT’s analysis and try to understand whether AI itself might be able to clarify the problem of “public truth.”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
