Artificial intelligence may seem like a novel concept, lodged in public memory by recent exorbitant investments and its impact on the job market. However, the idea of AI can be traced back thousands of years to myths and legends. In Greek mythology, Talos was a giant bronze automaton who guarded the island of Crete, making daily circuits around its shores and defending it against invaders by hurling boulders. His defeat came when Jason and the Argonauts discovered a critical vulnerability, a plug in his foot, which, when removed, allowed his vital ichor to drain out, rendering him powerless.
The myth of Talos underscores the influence of artificial beings on human actions by illustrating how such entities can wield significant power and control, as well as how their vulnerabilities can be exploited. The automaton’s role as a powerful guardian and his eventual defeat through a single weakness reflect both the potential and the risks of creating autonomous systems that can shape and impact human behavior.
AI has existed for a long time; predictive text and spam filters are two familiar examples. Earlier systems were predominantly static, relying on predefined rules and manual updates. Modern systems, by contrast, can be regenerative: they continuously learn from new data and adapt their models over time, allowing for dynamic improvements and more nuanced performance. This opens up a world of opportunity for various stakeholders.
While AI can enhance engagement and provide diverse perspectives, it also poses dangers, including digital propaganda, misinformation and algorithmic bias. Its influence on public opinion is undeniable. I will delve deeper into its capacity to both affect and effect change.
AI’s effect on digital propaganda
Digital propaganda uses digital platforms to manipulate public opinion through targeted messaging and misleading information. It involves methods such as spreading false content, creating fake personas and using algorithms to target specific groups.
AI enhances digital propaganda by analyzing data to deliver personalized content, manipulate search results and automate the spread of misinformation. During the 2016 United States presidential election, AI-driven bots and fake accounts amplified false narratives and divisive content, impacting public opinion and election outcomes.
AI-generated deepfakes also contribute to digital propaganda. In 2022, for instance, a deepfake video of Ukrainian President Volodymyr Zelenskyy appeared on social media showing him surrendering, an attempt to demoralize the Ukrainian resistance. The episode demonstrated AI’s potential to create deceptive content. A more recent and notable example is the 2024 US presidential election, during which AI-generated deepfake images of candidates such as Donald Trump and Kamala Harris circulated widely. Supporters saw the images as satire and free expression, while experts cautioned that they could still sow division and spread false information.
Many worry about the influence AI-generated images could have on political discourse and about the delicate balance between ethical communication and free speech. AI has made it far easier for campaigns and individuals to produce and spread such false content.
These examples illustrate how AI can increase propaganda’s scale and precision. While the technologies can enhance communication, their misuse threatens information integrity and democratic processes. Effective oversight and improved detection are essential to address these issues.
On a positive note, AI can boost political engagement and access to information. AI-driven tools allow for personalized communication between politicians and constituents, enhancing voter outreach and participation. For example, algorithms help political campaigns tailor messages based on voter data, potentially increasing engagement. AI also improves access to information by aggregating and curating content from various sources, keeping users informed about political developments.
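To make the tailoring mechanism concrete, here is a minimal Python sketch of segment-based messaging. Every voter field, segment and line of ad copy below is hypothetical, invented purely to illustrate how a campaign tool might match messages to voter data; real systems rely on far richer models and datasets.

```python
# Minimal sketch of segment-based message targeting.
# All fields, segments and messages are hypothetical, invented
# to illustrate the mechanism, not taken from any real campaign.

from dataclasses import dataclass

@dataclass
class Voter:
    age: int
    top_issue: str  # e.g. "healthcare", "economy", "climate"

MESSAGES = {
    "healthcare": "Our plan lowers prescription costs.",
    "economy": "We will cut taxes for small businesses.",
    "climate": "We back clean-energy jobs in your region.",
}

def tailor_message(voter: Voter) -> str:
    """Pick the ad copy matching the issue a voter cares about most."""
    return MESSAGES.get(voter.top_issue, "Learn more about our platform.")

print(tailor_message(Voter(age=34, top_issue="climate")))
# -> "We back clean-energy jobs in your region."
```

Even this toy version shows why the technique scales so well: once voter data is segmented, producing a "personalized" message is a cheap lookup.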
Conversely, AI has significant negative impacts. It contributes to polarization by creating echo chambers that reinforce existing beliefs, limiting exposure to diverse perspectives and deepening societal divisions. This polarization can hinder constructive political discourse and increase partisanship.
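The echo-chamber mechanism can be shown with a toy recommender: if articles are ranked purely by similarity to what a user already reads, like-minded content keeps rising to the top while opposing views sink. The embeddings and article labels below are fabricated for illustration; production recommenders are vastly more complex, but the reinforcing dynamic is the same.

```python
# Toy illustration of the echo-chamber effect: ranking articles by
# cosine similarity to a user's reading history keeps surfacing
# ideologically similar content. All vectors and labels are synthetic.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend each article is embedded on a "political leaning" axis
# and a topical axis (completely made-up numbers).
articles = {
    "partisan_piece_A": np.array([0.90, 0.10]),
    "partisan_piece_B": np.array([0.85, 0.20]),
    "opposing_view":    np.array([-0.80, 0.30]),
    "neutral_explainer": np.array([0.00, 1.00]),
}

user_history = np.array([0.9, 0.15])  # user mostly reads one side

ranked = sorted(articles, key=lambda k: cosine(user_history, articles[k]),
                reverse=True)
print(ranked)
# Similar partisan pieces rank first and the opposing view ranks last,
# so the user's existing beliefs are reinforced with every session.
```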
Another issue is the erosion of trust in democratic institutions. AI-driven bots and deepfakes spread misinformation and fake news, which distort public perception and undermine confidence in media and political processes.
The Cambridge Analytica scandal and social media manipulation
Social media platforms have become arenas for influence, with algorithms profoundly shaping user experiences and public opinion. Businesses and political movements alike have rushed online. Meanwhile, technocrats play a growing role in society as lawmakers, paralyzed by misinformation or lacking the capacity to act, struggle to respond.
The Cambridge Analytica scandal exemplifies how AI manipulation can lead to doubts about election integrity and government transparency. Cambridge Analytica, a political consulting firm, exploited Facebook’s data to create psychographic profiles of millions of users without their consent. By leveraging AI to analyze these profiles, the firm tailored political ads and messages to manipulate voter behavior during the 2016 US presidential election and the Brexit referendum.
This case exemplifies how AI-driven data analysis can be used to exploit personal information for political gain, exacerbating the effects of echo chambers by delivering highly personalized and persuasive messaging that reinforces existing biases. Algorithms facilitate this targeted manipulation, making it easier to influence public opinion on a large scale.
The scandal is probably the most infamous example of targeted-ad manipulation by a private company, and it has already dented Western democracy. Yet despite public knowledge of what happened, the outcry was muted, perhaps because many people do not view such practices as threats. The quiet acceptance of such egregious violations reveals a disturbing desensitization to digital manipulation.
AI-driven bots and fake accounts can easily spread misinformation. They can generate and disseminate false narratives rapidly, amplifying misleading content across social media platforms. Misinformation spreads faster and more widely than factual information, partly due to these automated systems.
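A back-of-the-envelope simulation illustrates why automated amplification matters: seeding the same item through many coordinated accounts multiplies its reach at every step of the cascade. All numbers below are invented to show scale, not drawn from any real platform or study.

```python
# Very rough cascade model: each exposed account reposts with some
# probability, and each repost exposes that account's followers.
# All parameters are fabricated for illustration only.

import random

random.seed(0)

def simulated_reach(initial_posts, repost_prob, followers_per_account, rounds=3):
    """Count total exposures over a few rounds of organic resharing."""
    exposed = initial_posts * followers_per_account
    total = exposed
    for _ in range(rounds):
        reposters = sum(random.random() < repost_prob for _ in range(exposed))
        exposed = reposters * followers_per_account
        total += exposed
    return total

organic = simulated_reach(initial_posts=1, repost_prob=0.02, followers_per_account=200)
botnet = simulated_reach(initial_posts=50, repost_prob=0.02, followers_per_account=200)
print(organic, botnet)  # the bot-seeded cascade reaches far more accounts
```

The point is arithmetic, not sophistication: fifty automated seed accounts do not add fifty times the reach at the end of the cascade, they multiply every subsequent round of sharing.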
Looking forward, regenerative AI and modern computing power present both exciting opportunities and considerable risks. Regenerative AI’s ability to continuously learn and adapt could lead to more sophisticated systems that enhance user experiences. Advanced algorithms might improve content curation, reducing echo chamber effects by introducing a wider range of perspectives and fostering more balanced discussions. For instance, AI could help filter out extreme content while promoting constructive dialogue, thereby improving the quality of information and interaction on social media.
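As a sketch of that diversification idea, a feed could trade a little relevance for viewpoint diversity when re-ranking candidate items. The scores, leanings and the crude "extreme content" filter below are all invented; a real system would use trained classifiers for relevance, leaning and toxicity, but the re-ranking logic is the core of the idea.

```python
# Sketch of diversity-aware re-ranking: penalize candidates whose
# political leaning is close to what the feed has already shown.
# All scores, leanings and labels are hypothetical.

def rerank(candidates, diversity_weight=0.5):
    """candidates: list of (item, relevance, leaning), leaning in [-1, 1]."""
    feed, leanings = [], []
    # Crude stand-in for a toxicity filter: drop flagged items outright.
    pool = [c for c in candidates if not c[0].startswith("extreme_")]
    while pool:
        def score(c):
            item, rel, lean = c
            avg = sum(leanings) / len(leanings) if leanings else 0.0
            # Items near the feed's average leaning lose points;
            # items far from it gain, pulling in other viewpoints.
            return rel - diversity_weight * (1 - abs(lean - avg))
        best = max(pool, key=score)
        pool.remove(best)
        feed.append(best[0])
        leanings.append(best[2])
    return feed

candidates = [
    ("partisan_piece", 0.95, 0.9),
    ("opposing_view", 0.70, -0.8),
    ("neutral_explainer", 0.80, 0.0),
    ("extreme_content", 0.99, 1.0),
]
print(rerank(candidates))
# -> ['partisan_piece', 'opposing_view', 'neutral_explainer']
```

Note how the opposing view jumps ahead of the more "relevant" neutral piece once a partisan item has been shown: the same ranking machinery that builds echo chambers can, with one added term, work against them.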
However, these same capabilities could also be used to manipulate public opinion more effectively. Regenerative AI could be harnessed to create more persuasive misinformation and more effective echo chambers, further polarizing public discourse and exploiting individuals’ vulnerabilities. The capacity for rapid and adaptive misinformation could lead to even more severe consequences for democratic processes and public trust.
Regulate AI to reap its benefits and diminish its problems
While modern AI’s potential benefits are substantial, we must address its risks through robust ethical guidelines, transparency and regulation. Governments should enforce transparency and accountability in AI systems, require clear disclosure of AI-driven content and combat misinformation. Promoting digital literacy and critical thinking is just as important. Ensuring that AI technologies are used responsibly, in ways that support rather than undermine democratic values, will be crucial as we navigate the future of digital influence.
Drawing a parallel to the myth of Talos, AI systems hold immense power to guard and influence human actions. Just as the mighty bronze automaton was brought down by his vulnerability, modern AI too has its weaknesses: algorithmic biases, potential for misuse and lack of transparency. As we advance, it is crucial to address these vulnerabilities. A thoughtful balance between innovation and ethical safeguards could allow us to harness AI’s benefits while protecting the core values of democracy.
As Indian philosopher and statesman S. Radhakrishnan once said, “The end-product of education should be a free creative man, who can battle against historical circumstances and adversities of nature.” The insight underscores that AI should empower individuals and society rather than become a tool for manipulation and control. Building on that vision, we must harness it to help people think critically, foster resilience and navigate the challenges of our era with integrity, ensuring technology uplifts rather than undermines human agency.
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.