On what many consider the most serious geopolitical issue in the news over the weekend – the ruling by the International Court of Justice (ICJ) on the question of genocide in Gaza – the world is now awaiting the measures taken by all actors involved. Israel is obviously the first nation concerned, but even more interesting is the position that its allies and especially the US and the UK are likely to take.
Before the ruling was issued, those two nations publicly qualified South Africa’s complaint as “unfounded” and “without merit.” But all signatories of the agreement that originally established the court and defined its role are legally bound to accept its judgments. That means that whatever private opinion the US State Department or the UK Foreign Office previously had of the merits of South Africa’s case must be suspended in the name of compliance.
Because the court has no means to enforce any concrete actions implicitly required by its ruling, some ambiguity still reigns about the ultimate effect of its pronouncement. This seemed to me to be a topic that deserved soliciting the opinion of our favorite Large Language Model, ChatGPT, potentially to resolve the ambiguity. I therefore formulated the following question.
“The ICJ has rendered its ruling on South Africa’s request to determine whether there is a case for considering that Israel’s actions in the context of its war on Gaza may be deemed genocide. Human Rights Watch described the ruling in these terms: ‘The Court found a plausible risk of genocide & the UK has an obligation to prevent genocide & not be complicit.’ The Atlantic Council summarized the court’s judgment in similar terms: ‘Today, the world’s top court ruled that South Africa’s claim that Israel is committing genocide in Gaza is indeed plausible.’
There appears to be widespread agreement about the question of plausibility. But what does this mean in practical terms and more specifically in legal terms?”
As so often is the case, ChatGPT “kind of” answered the question.
“The International Court of Justice (ICJ) has found a **plausible risk of genocide** in the context of Israel’s war on Gaza and has ordered provisional measures that are directly binding. The court has not yet ruled on whether Israel has actually violated its obligations under the Genocide Convention ².
In practical terms, the ICJ’s ruling means that the UK has an obligation to prevent genocide and not be complicit ¹. However, the ruling does not order Israel to halt its military campaign in Gaza ³.
In legal terms, the ICJ’s ruling is a significant development. The court has found that South Africa’s claim that Israel is committing genocide in Gaza is indeed plausible. This means that the ICJ has found that there is a reasonable basis for South Africa’s claim and that it is not frivolous or vexatious. However, it is important to note that the ICJ has not yet made a final determination on the matter ².
“So,” I responded, “you seem to be saying that the legal term ‘plausible’ means a reasonable hypothesis that is ‘not frivolous or vexatious.’ Is that the basis for a definition?”
“I apologize for any confusion. The term ‘plausible’ has different meanings depending on the context in which it is used. In the context of the ICJ’s ruling, the court found that South Africa’s claim that Israel is committing genocide in Gaza is indeed plausible. In this context, ‘plausible’ means that there is a reasonable basis for the claim and that it is not frivolous or vexatious ¹.
However, in other contexts, ‘plausible’ may have different meanings. For example, in the legal context of pleading standards, ‘plausible’ means that a complaint must include facts giving rise to a plausible entitlement to relief.
I hope this clarifies things for you. Let me know if you have any other questions!”
Is dialogue possible with AI?
Apart from what I perceived as a slight political bias in its answer, the final two sentences of this response amused me. Clearly, ChatGPT doesn’t like the idea of an exploratory dialogue seeking to understand the semantic complexity of a very real issue. At any given point, it wants me to consider the debate closed thanks to the pertinence of its answer. It wants me to treat it as an authority.
That, of course, is not the way the best human intelligence works. Nor does it reflect the way most authentic, natural human dialogue works, except perhaps in political press conferences. Socrates, for example, never introduced an initial question with any other purpose than to generate other questions. He never supposed that any initial question might produce a definitive response. The whole point of questioning is that it leads to a host of other mostly unforeseen considerations, all of which taken together create the effect of what we can call new perspective.
That at least is how I read Plato’s version of the Socratic dialogues. I’m the first to admit that Plato, who purportedly “transcribed” these conversations, was partial to certain conclusions as he was drafting the text. But it should be evident that the value of the Socratic method lies elsewhere than in its ability to induce ironclad conclusions.
What does this short exchange tell us about ChatGPT’s algorithms? I tend to think it demonstrates that its dialectical model is closer to the political press conference, or alternatively the modern academic lecture, than it is to Socratic dialogue. Its aim is consistently to close a debate rather than seek to understand what is being debated. In contrast to curious humans, it shows no interest in exploring other deeper, more interesting issues that are likely to emerge and stimulate new reflection.
This seems to raise an important question. Does our idea of artificial intelligence and everything we have done to design it reflect only a limited view of how actual human intelligence works? If the model itself is skewed towards closing debate rather than exploring hypotheses, it is fair to say that its bias puts it in opposition to all forms of creativity, including the scientific method, let alone the free interplay of human intelligences. I insist on the plural here because it seems to me that we tend to assume that only individuals are intelligent. I believe the case can be made that groups of humans possess intelligence.
The Socratic model
When we talk about multiple intelligences, as people like Howard Gardner and Daniel Goleman have done, we remain focused on the reality of an individual personality. Some thinkers and researchers have sought to go further in exploring the idea of social intelligence, but their reference remains the individual, the consideration of what goes on in an individual’s mind, conditioning that individual’s behavior.
Perhaps the greatest lesson we can derive from artificial intelligence, which literally seeks to ape human intelligence, is to begin to understand that our idea of intelligence, just like our idea of plausibility, is poorly defined and constricted by a social culture that overemphasizes the individual and neglects collective reality.
That is why Socratic dialogue offers us a radical counter-example to the algorithmic logic of ChatGPT derived from our current social values. Socrates keeps redefining ideas in relation to their often deliberately neglected contexts. The Socratic dialogues create a constant effect of surprise, producing the impression that “I never thought of things in that way.” Plato does tend to cheat a bit when he gives the impression that Socrates, from the start, has thought things out thoroughly and knows exactly where he is going. In reality, Socrates questioned in order to learn, even for himself.
Instead of a know-it-all, Socrates was clearly the kind of thinker and talker who, like a good chess player, was always three or four steps ahead of his adversary. That sums up Socrates’s brand of individual intelligence. But in the actual dialogues that inspired Plato’s crafted literature, it is more likely that Socrates himself used dialogue to discover and articulate the insights Plato has bequeathed us.
I suggest that as we use AI we need to do two things. The first is to engage with it in the form of dialogue to understand what kinds of assumptions it appears to hold about ideas and words. The second is to understand what we don’t already understand about human intelligence. Despite the vast production of professional psychologists and cognitivists, I believe that we still have a long way to go. I also believe that if we find a way of adopting the Socratic method with AI, we might make some real progress in that effort.
*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]*
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.

Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Will you support FO’s journalism?