AI hasn’t yet been programmed to realize that emotion is a key ingredient of understanding.
Artificial intelligence, like some colonial administration backed by a powerful army, keeps moving into areas once proudly owned and managed by the human race. As in all colonial ventures, some of the natives see an opportunity: by supporting the colonizers, they can count on being handsomely rewarded and protected in return.
In this spirit, IBM unveiled the first non-human debating machine in a live contest. The Guardian describes how it worked: “The AI system could, without emotion, listen to the conversation, take all of the evidence and arguments into account and challenge the reasoning of humans where necessary.”
Here is today’s 3D definition:
Emotion:
A human characteristic responsible for making a mess of everything. From the point of view of technocratic cultures dominated by the supreme value of binary logic, emotion is the declared enemy of reasoning, and should ideally be eliminated from every form of serious decision-making.
Contextual note
When Arvind Krishna, director of IBM Research, says, “We believe there’s massive potential for good in artificial intelligence that can understand us humans,” we can sympathize and agree. If we follow his own binary logic, we should also admit the hypothesis that there’s “massive potential for evil.” Mentioning one without the other skews the resulting business (but also moral) decision in a convenient direction: trusting AI to solve human problems. If both hypotheses are true, it would be wise, from a moral point of view, to begin weighing the impact of that trust. And to avoid the temptation to trust AI to do the weighing.
The implied advantage of AI “that can understand” is that “being without emotion,” it will depend entirely on reason. That already supposes that AI can, as claimed, “understand,” which usually means not just processing facts, but also assessing and empathizing with the motivation of moral agents.
In the case of humans, this always involves emotion. Perception is what we see, feel and remember through association, like Proust and his madeleine. The perception that precedes understanding also involves emotion. But machines store knowledge, not perception. And while machine sensors can achieve higher levels of precision than human senses, no one in the field of AI appears to be concerned with the most central perceptive function, proprioception: our sense and understanding of where we are in relation to our environment. If we extend the notion of proprioception to social psychology, as the creation of a sense of self and social identity, and admit its importance in all human activity, we can begin to understand (in the rational rather than empathic sense) how alien AI is now and is likely to remain.
The famous Turing test of machine intelligence asked whether a human could distinguish the linguistic utterances of a machine from those of a human. Given the progress made since Turing’s death and the increasing capacity of machines to fool us, perhaps we should propose the “kidding test.” It would consist of testing machines to see whether they can seriously and appropriately say, “Are you kidding?” in response to any input, and then determining, when they do say it, whether the utterance carries any meaning.
Historical note
Professor Joachim Krueger reminds us that “Plato described emotion and reason as two horses pulling us in opposite directions.” He also tells us that the dualistic Platonic model has become a fixture of our culture. With the Industrial Revolution, followed two centuries later by the digital revolution, the reigning ideology has increasingly favored replacing emotion (unreliable, inaccurate) with reason, and more particularly with reasoning based on logic that builds its own rules from statistical patterns.
Krueger highlights the importance of context: “Understanding human choices in their natural context is harder than understanding the rules of a laboratory game.” The whole point of AI is to draw from all contexts. In a sense, that makes it too intelligent to limit itself to the parameters of any real context.
This doesn’t mean AI won’t be a game-changer in the way humans and human institutions make decisions in the future. Acknowledging its absolute limits will enable us to use it not to make decisions, but to prepare them. In the war between AI crusaders and AI Luddites, the crusaders will tell us that the limits (proprioception, empathic understanding of context) will eventually be overcome, but in so doing they appear to deny that absolute limits actually do exist. The fact that a Google search for AI and proprioception produces zero results (until Google picks up this edition of 3D) gives the measure of the hubris the promoters of AI are capable of.
Ever since the dawn of the machine age, we have been living with the myth of a glitzy hyperrational (and increasingly hyperreal) world in which all individual needs will be understood, responded to and satisfied. This myth is a cultural construct, an artificial (and unofficial) dogma of a religion convinced that it is about to arrive in a technological promised land. We don’t need to resist it, because it will happen and actually does have the “potential” to be used for the good. We simply need to put it in perspective and find a way of grappling seriously — in a new context — with the notion of “the good.”
*[In the age of Oscar Wilde and Mark Twain, the American wit and journalist Ambrose Bierce produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.