Artificial Intelligence

Combating Fake Information in the Era of Generative AI

AI tools now allow users to quickly generate images and written content, revolutionizing the creative process. This rapid pace of innovation in generative AI has also brought new risks, such as fake news and deep fakes. Organizations can use AI to mitigate these risks, but human review is still essential.

A humanoid robot works in an office on a laptop, showcasing the utility of automation in repetitive and tedious tasks. © Stock-Asso / shutterstock.com

July 07, 2023 02:51 EDT

It was the year of generative AI. 2022 gave us DALL-E, Midjourney and ChatGPT, powerful tools that put the combined capabilities of a search engine, Wikipedia and a top-notch content generator at our fingertips.

Tools like Bard, Adobe Firefly and Bing AI quickly followed, rapidly expanding the abilities of your average internet user beyond anything we could’ve imagined just a few years ago. With a couple of simple keystrokes, we can now generate captivating images or pages of written content that, this time last year, would’ve taken hours, days, or weeks to produce—even for illustrators or writers with years of training.

Indeed, generative AI is changing the landscape beneath our feet—while we’re standing on it. But this pace of innovation comes with risks; namely, of losing our footing and letting algorithms override human discernment. As a recent article in the Harvard Business Review highlighted, the creation of fake news and so-called deep fakes poses a major challenge for businesses—and even entire countries—in 2023 and beyond.

Fortunately, innovation in AI is not just producing results for content generation. It’s also a tool that, when coupled with good, old-fashioned human instinct, can be used to resolve problems in the systems themselves. But before examining these strategies in more detail, it’s important we understand the real-world threats posed by AI-generated misinformation.

Recognizing the threats

The potential threats of AI-generated content are many, from reputational damage to political manipulation.

I recently read in The Guardian that the newspaper's editors had received inquiries from readers about articles that were not showing up in its online archives. These were articles the reporters themselves couldn't even recall writing. It turns out they were never written at all: when prompted by users for information on particular topics, ChatGPT had cited Guardian articles that were completely made up.

If errors or oversights baked into AI models themselves weren't concerning enough, there's also the possibility of intentional misuse to contend with. A recent Associated Press report identified several risks of generative AI misuse ahead of the 2024 US presidential election. The report raised the specter of convincing yet illegitimate campaign emails, texts, or videos, all generated by AI, which could in turn mislead voters or sow political conflict.

But the threats posed by generative AI aren't only big-picture. Potential problems could spring up right on your doorstep. Organizations that rely too heavily and uncritically on generative AI to meet their content production needs could unwittingly spread misinformation and damage their reputations.

Generative AI models are trained on vast amounts of data, and data can be outdated. Data can be incomplete. Data can even be flat-out wrong: generative AI models have shown a marked tendency to “hallucinate” in these scenarios—that is, confidently assert a falsehood as true.

Since the data and information that AI models train on are typically created by humans, who have their own limitations and biases, AI output can be correspondingly limited and biased. In this sense, AI trained on outdated attitudes and perceptions could perpetuate certain harmful stereotypes, especially when presented as objective fact—as AI-generated content so often is.

AI vs. AI

Fortunately, organizations that use generative AI are not prisoners to these risks. There are a number of tools at their disposal to identify and mitigate issues of bad information in AI-generated content. And one of the best tools for this is AI itself.

These processes can even be fun. One method in particular, known as “adversarial training,” essentially gamifies fact-checking by pitting two AI models against each other in a contest of wits. During this process, one model is trained to generate content, while the second model is trained to analyze that content for accuracy, flagging anything erroneous. The second model’s fact-checking reports are then fed back into the first, which corrects its output based on those findings.

We can even juice the power of these fact-checker models by integrating them with third-party sources of knowledge—the Oxford English Dictionary, Encyclopedia Britannica, newspapers of record or university libraries. These adversarial training systems have developed palates sophisticated enough to differentiate between fact, fiction and hyperbole.

Here’s where it gets interesting: The first model, or the “generative” model, learns to outsmart the fact-checker, or “discriminative” model, by producing content that is increasingly difficult for the discriminative model to flag as wrong. The result? Steadily more accurate and reliable generative AI outputs over time.
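To make that loop concrete, here is a minimal sketch of GAN-style adversarial training in Python with PyTorch. It is illustrative only: the "content" is a toy numeric vector rather than real text, the "verified" data stands in for material drawn from trusted sources, and every name in the code (Generator, FactChecker, TRUSTED_MEAN) is hypothetical rather than taken from any production fact-checking system.

```python
# Illustrative sketch of adversarial (GAN-style) training.
# "Content" here is a toy numeric vector; all names are hypothetical.
import torch
import torch.nn as nn

TRUSTED_MEAN = 2.0  # stand-in for "verified" content from trusted sources

class Generator(nn.Module):
    """Generative model: produces candidate 'content' from random noise."""
    def __init__(self, noise_dim=8, out_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))
    def forward(self, z):
        return self.net(z)

class FactChecker(nn.Module):
    """Discriminative model: scores how likely content is to be 'verified'."""
    def __init__(self, in_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)  # raw logit; higher means "looks verified"

gen, checker = Generator(), FactChecker()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(checker.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the fact-checker: distinguish verified samples from generated ones.
    verified = TRUSTED_MEAN + torch.randn(64, 16)        # "trusted source" data
    fake = gen(torch.randn(64, 8)).detach()              # generated content, frozen
    d_loss = bce(checker(verified), torch.ones(64, 1)) + \
             bce(checker(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: its feedback signal is the checker's verdict.
    fake = gen(torch.randn(64, 8))
    g_loss = bce(checker(fake), torch.ones(64, 1))       # try to be judged "verified"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("checker score on generated content:",
      torch.sigmoid(checker(gen(torch.randn(1, 8)))).item())
```

In a real system, the fact-checker would score claims against evidence retrieved from the kinds of reference sources mentioned above, but the training dynamic, in which each model improves because the other does, is the same.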

Adding a human element

Although AI can be used to fact-check itself, this doesn’t make the process hands-off for all humans involved. Far from it. A layer of human review not only ensures delivery of accurate, complete and up-to-date information, it can actually make generative AI systems better at what they do. Just as it tries to outsmart its discriminative nemesis, a generative model can learn from human corrections to improve future results.
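As an illustration of how that human feedback loop might be wired up in practice, here is a small, hypothetical sketch: reviewer corrections are logged alongside the model's original output and then exported as supervised fine-tuning examples. The data structures, file name and example content are all invented for the sake of the sketch.

```python
# Hypothetical sketch: turning human review corrections into new training data.
import json
from dataclasses import dataclass

@dataclass
class Correction:
    prompt: str          # what the model was asked
    model_output: str    # what it produced
    human_fix: str       # the reviewer's corrected version
    note: str            # why it was wrong (outdated, fabricated source, etc.)

review_queue = [
    Correction(
        prompt="Write a paragraph about our product's launch year.",
        model_output="The product launched in 2019.",   # invented example of a model error
        human_fix="The product launched in 2021.",
        note="outdated fact corrected by reviewer",
    ),
]

# Each reviewed correction becomes a supervised fine-tuning example:
# the prompt stays the same, and the human-approved text is the target.
with open("corrections_finetune.jsonl", "w") as f:
    for c in review_queue:
        f.write(json.dumps({"prompt": c.prompt, "completion": c.human_fix,
                            "metadata": {"issue": c.note}}) + "\n")

print(f"Wrote {len(review_queue)} reviewed examples for the next fine-tuning run.")
```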

What’s more, internal strategies like these can then be shared between organizations to establish industry-wide standards and even a code of ethics for generative AI use. Organizations should also collaborate with other stakeholders, including researchers, industry experts and policymakers, to share insights, research findings and best practices.

One such best practice is data collection that prioritizes quality and diversity. That means careful selection and verification of data sources by human experts before they're fed into models, taking into account not just current accuracy but also representativeness, historical context and relevance.

All of us with stakes in making better generative AI products should likewise commit to promoting transparency industry-wide. AI systems are increasingly used in critical fields, like health care, finance and even the justice system. When AI models are involved in decisions that impact people's real lives, it's essential that all stakeholders understand how such a decision was made and how to spot inconsistencies or inaccuracies that could have major consequences.

Misuse or ethical breaches can have consequences for the AI user, too. A New York lawyer landed himself in hot water earlier this year after filing a ChatGPT-generated brief in court that reportedly cited no fewer than six totally made-up cases. He now faces possible sanctions and could lose his law license altogether.

Developers of generative AI models therefore shouldn't be afraid to share documentation on system architecture, data sources and training methodologies, where appropriate. The competition to create the best generative AI models is fierce, to be sure, but we can all benefit from standards that promote better, more reliable, and safer products. The stakes are simply too high to be playing our cards so close to our chests.

The strides taken by generative AI in the last year are only a taste of what’s to come. We’ve already seen remarkable transformation not just in terms of what models are capable of, but in how humans are using them. And as these changes continue, it’s critical that our human instinct evolves right along with them. Because AI can only achieve its potential in combination with human oversight, creativity and collaboration.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
