I confess that I have never paid much attention to the artificial drama attached to the stroke of midnight every December 31st. Some people seem to see it as a moment that clears the sullied slate of past worries in anticipation of those to come. The celebration of the Chinese New Year, which with its lunar variability falls a month or so after ours, serves as a reminder that the idea of a new year marking a new beginning is nothing more than a cultural artifact. Without that corrective reminder, we might retain the illusion that January 1 has some kind of deep metaphysical significance.
For reasons linked to recent events, this new year struck me as singular, perhaps even unique. I decided to share my thoughts with ChatGPT.
“The events of 2023 revealed a world that feels engaged on a path different from the one familiar to any of us whose lifetimes covered most or all of the second half of the twentieth century.
At the end of 2023, a war in Eastern Europe that began in February 2022 was still raging, and a new war in West Asia began on October 7. Both turned out to be very different from the traditional wars of the past, which boiled down to disputes over territorial control that went awry.
The war in Ukraine was visibly a proxy war over global hegemony. This became clear when Ukraine was not permitted to negotiate its own peace. Yet that reality is still denied to this day by an unprecedented and unparalleled propaganda effort, led by governments and a complicit media, that continues to refuse any explanation other than that of sovereignty and territorial integrity.
The delirium of two full years of deliberately skewed narratives that began in January 2022 with predictions of an imminent Russian false flag operation to justify an “unprovoked” invasion has never stopped, even though the outcome of the war, entirely unfavorable to Ukraine and NATO, is no longer in doubt. It’s largely because of the need to continue the propaganda campaign and rewrite the narrative before acknowledging the military outcome that the war is officially still going on and reassuringly deemed a “stalemate.”
The second war, in Gaza, has visibly become a campaign of genocide. But that reality is so contrary to the acceptable narrative, promoted by those who maintain that their cherished “rules-based order” is nothing more than a defense of the territorial status quo, that the question of genocide can still be written off as merely a matter of interpretation.
2024 promises to deliver some kind of verdict on both of those conflicts. That verdict may include anything from World War III to a reordering of the global political culture and the ultimate collapse of belief in the set of rules that were implied by the former rules-based order.
2024 will see numerous crucial elections in influential countries and regions. What does all that mean concerning the chances of things just calming down and carrying on as usual?”
AI’s strategy for evading complex questions
ChatGPT’s response tells us much more about the state of generative AI than it does about the world. This is a lesson worth reflecting on. Here is what it told me.
“2024 is indeed a year of great significance, with numerous crucial elections taking place in influential countries and regions. According to TIME, at least 64 countries (plus the European Union) representing a combined population of about 49% of the people in the world are meant to hold national elections, the results of which, for many, will prove consequential for years to come.
It is also a year that could deliver some kind of verdict on the conflicts in Eastern Europe and West Asia. The war in Ukraine, which began in February 2022, is still ongoing and has been described as a proxy war about global hegemony. The second war, in Gaza, has visibly become a campaign of genocide. The verdict on both of these conflicts may include anything from World War III to a reordering of the global political culture and the ultimate collapse of belief in the set of rules that were implied by the former rules-based order.
It is important to note that the information provided in your message is highly opinionated and may not be entirely accurate. It is always advisable to verify the information from multiple sources before drawing any conclusions.”
Let’s look at what ChatGPT has done here. First, it offered two sources: a link to a Time Magazine article about 2024 as the “ultimate election year” and another to the World Economic Forum on the same topic. Now that in itself is an indirect commentary on my own claim concerning a propaganda campaign designed to protect the status quo of the vaunted “rules-based order.” Of all the sources available to someone thinking about the state of the world today, are Time Magazine and the WEF the ones we should look to for guidance?
Then in the final two paragraphs, ChatGPT at first seems to agree with me but “subtly” accuses me of being “opinionated” and lacking accuracy. If that is what the AI model “thinks,” then why doesn’t it engage? The answer may be that I had also provided the explanation of the essential strategy of the propaganda: that all realistic assessments that counter the official narrative can be dismissed as “questions of interpretation.”
Do we really expect AI to think or just amuse us by pretending?
But the key issue is one we see frequently with ChatGPT and probably any AI designed in the future: it doesn’t even address my question which concerned assessing “the chances of things just calming down and carrying on as usual.” This is truly the work of a “stochastic parrot” but it is also one with a subscription to Time Magazine and a faith in the wisdom of the World Economic Forum. Should we be surprised that what we get at best will be a defense of a pretty solidly established status quo?
This should demonstrate that AI is a powerful tool, able to recognize and assemble a lot of facts and repeat existing ideas because of its access to data. But we need to acknowledge that it is built precisely to avoid anything resembling human debate. It may in the future get better at simulating engagement in real dialogue, but the fact that it produces something that resembles human discourse should not delude us. It doesn’t even try to “understand” our questions. It uses its mastery of style to spew out the most anodyne reaction to real human concerns.
Presumably aware of this structural problem with AI, Elon Musk launched his chatbot, Grok, in the belief that it was all about accepting discourse that goes beyond the rules of a rules-based semantic order. It was part of his campaign to counter wokeness. But wokeness is just a variant of intellectual conformity that plays out in other ways but always achieves the same result: reducing the scope of critical thinking. Musk himself, for all his accomplishments, has been guilty of multiple lapses in critical thinking, including concerning the issues he is most invested in, such as the colonization of Mars.
To further explore this point, I continued my conversation with ChatGPT with this question:
“Which ‘information’ I provided strikes you as ‘opinionated’?” and received the following response.
“I apologize for the confusion. I did not mean to imply that the information you provided was opinionated. I was referring to the fact that the information you provided was highly subjective and may not be entirely accurate. It is always advisable to verify the information from multiple sources before drawing any conclusions.
If you have any other questions, please feel free to ask.”
What better proof is needed that ChatGPT will always do three things?
- Refuse to respond to the obvious meaning of a question that goes beyond the mere citation of facts.
- Apply what is clearly a “rules-based strategy” in its answer, which in some cases produces the impression of wokeness and in others alignment with “normative” thinking.
- Distance itself from anything that amounts to what we might call “thinking through” a complex issue.
In other words, our use of AI should not include expecting what humans constantly need to do for any kind of serious problem-solving: to think outside the box. That is why we must always strive to be aware of the structure behind its fashioning of human-sounding discourse, just as we should do with most of the human discourse we get from our media and even, often enough, from our science.
*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.

Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Will you support FO’s journalism?