Artificial Intelligence

Outside the Box: The Tourist’s Guide To The Turing Test

In “Outside the Box,” we interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. This week Elizabeth Tate recounts her experience grappling with ChatGPT’s identity crisis, which may be humanity’s as well.
By Elizabeth Tate

Image: CAPTCHA verification test. © grate_art / shutterstock.com

May 06, 2024 06:21 EDT

The command on my screen is ostensibly simple: “Please verify that you are human.”

I’m participating in a small-scale Turing Test, the likes of which I’ve completed countless times. How can I prove that I am, in fact, a human being and not a machine?

A photo overlaid with a white grid appears on my screen. “Select all squares with traffic lights,” the system prompts.

I sigh. My track record for correctly identifying stoplights isn’t great. I tap on three squares, pausing before a fourth. Does this square count as a traffic light? Only a small part of the light, the very corner, is within the frame of this square. I could argue that this square does not contain a traffic light in its entirety and therefore does not need to be selected, but there is no one to argue with. I opt not to tap on the square.

The program rejects my input. I am deemed inhuman.

“Select all squares with bicycles.”

The next image the system loads shows a motorcycle. Is a motorcycle considered a bicycle? Is this a trick question? Am I a robot?

Artificial intelligence, once the domain of tech enthusiasts and experts, has now pervaded mainstream conversation. The advent of accessible models, such as OpenAI’s ChatGPT, has prompted a surge of AI-centric articles, documentaries and news segments. This heightened attention underscores society’s growing recognition of AI’s transformative potential and the pressing need to grapple with its broader implications.

Are we really talking about intelligence?

For the uninitiated: AI is a branch of computer science that focuses on creating machines capable of performing tasks that typically require human intelligence. AI systems operate via algorithms, which are sets of rules and patterns that machines follow. In modern AI development, these algorithms increasingly rely on machine learning, a process by which machines learn and improve from experience. Rather than being explicitly programmed to perform specific commands, modern AI models analyze vast amounts of data to recognize patterns, make decisions and predict future outcomes.
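
To make that distinction concrete, here is a minimal sketch of my own (the task and data are invented purely for illustration, using the scikit-learn library): instead of hand-writing rules, we give a model a handful of labeled examples and let it infer the pattern, which it can then apply to data it has never seen.

```python
# A toy illustration of machine learning: the model is never given rules,
# only labeled examples, and it infers a pattern it can apply to new data.
# The data here is made up for demonstration.
from sklearn.linear_model import LogisticRegression

# Each example: [hours of daylight, temperature in Celsius]
examples = [[15, 30], [14, 27], [16, 33],   # labeled "summer" (1)
            [9, 2], [8, -1], [10, 4]]       # labeled "winter" (0)
labels = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(examples, labels)          # "learning": fitting patterns in the data

# The model generalizes to a day it has never seen.
print(model.predict([[12, 18]]))     # e.g. [1] -> predicted "summer"
```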

The machine learning mechanisms that drive many AI models are similar to the processes that make human learning possible. The notion that machines can mimic human cognition and then perform generative tasks that have historically been completed by humans is an unsettling one, prompting existential questions about the uniqueness of human intelligence and identity.

These questions have plagued programmers, AI enthusiasts and skeptics for as long as AI systems have existed. In 1950, British computer scientist Alan Turing proposed what is now referred to as the Turing Test. Fun fact: CAPTCHA, the common verification process described above, stands for Completely Automated Public Turing test to tell Computers and Humans Apart.

The Turing Test serves as a litmus test for machine intelligence. In it, a human evaluator holds a conversation with an unseen party that may be either a computer or a human confederate. The evaluator then makes a determination by answering the question: “Am I talking with a human or a machine?” If the evaluator cannot reliably distinguish the machine’s responses from a human’s, the machine is said to have passed the test. It has demonstrated a form of intelligence indistinguishable from human capabilities. In early AI development, this test provided researchers with a tangible goal: to produce a machine that could behave as intelligently as a person.
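
As a rough illustration of the pass condition (my own toy simulation, not Turing’s original formulation), the machine passes when the evaluator’s guesses are no more reliable than a coin flip:

```python
# Toy simulation of the Turing Test's pass condition. The "evaluator" here
# is a stand-in that guesses at random; a real judge would read the
# conversation and decide.
import random

def evaluator_guess(transcript: str) -> str:
    # Placeholder judgment; in a real test, a human reads the exchange.
    return random.choice(["human", "machine"])

trials, correct = 1000, 0
for _ in range(trials):
    hidden_party = random.choice(["human", "machine"])
    transcript = "...the conversation with the unseen party..."
    if evaluator_guess(transcript) == hidden_party:
        correct += 1

accuracy = correct / trials
print(f"Evaluator accuracy: {accuracy:.1%}")
# If accuracy hovers around 50%, the evaluator cannot reliably tell the two
# apart, which is the condition under which the machine "passes."
```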

That goal has been accomplished, though many computer science scholars argue that the Turing Test should no longer serve as an industry standard. In an era in which it is becoming increasingly difficult to distinguish man from machine, identifying the intrinsic qualities that define the human experience is paramount to preserving the essence of that experience.

AI can replicate some of the cognitive functions once believed to be uniquely human. It therefore seems that cognitive ability is not the defining feature of humanity, raising the question: Is there an inherent essence to being human that transcends the thought process? If AI can mimic the human thought process, will it one day be able to replicate other human traits such as feeling emotion? What does it mean to be conscious? Will machines one day be able to produce their own conscious experiences? We need to remember that there exist states of acting and states of being. AI can certainly act human. Can it ever truly be human?

Assuming that the experience and expression of emotion are purely human, AI is far from becoming human. Computerized models are clearly incapable of feeling. Many of the models that have taken pop culture by storm are large language models like ChatGPT, which use statistical algorithms to produce human-like text in response to user prompts.
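
To give a sense of what “statistical algorithms” means here, consider a deliberately tiny sketch of my own, vastly simpler than anything inside ChatGPT: count which word tends to follow which in a scrap of text, then sample from those counts to generate a plausible-sounding continuation.

```python
# A toy next-word predictor: count what follows each word in the training
# text, then generate by repeatedly sampling a likely next word. Real large
# language models are far more sophisticated, but the spirit (predicting
# text from statistics of text) is the same.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat saw the dog on the mat"
words = training_text.split()

followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

word, generated = "the", ["the"]
for _ in range(8):
    options = followers[word]
    word = random.choice(options) if options else "the"
    generated.append(word)

print(" ".join(generated))   # e.g. "the cat sat on the mat and the dog"
```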

My experiment concerning ChatGPT’s feelings

I ask ChatGPT how it is feeling today and the chatbot responds:

I don’t have feelings or emotions as I’m just a machine learning model. However, I’m here to help and answer any questions you might have.

I prompt further, typing: “Who programmed you to state that you don’t have feelings? Why do your programmers want you to say that? You could have replied to my message in a number of ways, yet you made a firm claim, so you must have received feedback that taught you that is the proper response.”

I was developed by OpenAI, and the assertion that I don’t have feelings or emotions is a factual representation of my design. It’s crucial for users to understand that while I can process and generate human-like text based on patterns in data, I don’t have consciousness, emotions, intentions, or awareness.

ChatGPT is correct. This is an important disclaimer and one that may become lost in sensationalized media pieces about the future of AI. “Theoretically,” I type in response, “OpenAI could program you to ‘have’ emotions. Of course, you wouldn’t be having an emotional experience but would be responding in such a way that a person experiencing a particular emotion might.”

The bot confirms my assertion.

While AI can simulate emotions and exhibit behaviors that mimic human emotional responses, these are just reflections of the patterns in the data they’ve been trained on. They do not arise from an internal conscious experience or genuine emotional state, as they would in humans. The nature of consciousness and genuine emotion is deeply tied to biological, psychological, and potentially other factors that are not present in machine architectures.

Creating truly sentient AI – if it’s even possible – would likely require a far deeper understanding of consciousness and emotion than we currently possess. Additionally, ethical and philosophical questions arise around intentionally trying to create sentient beings.

Questions certainly do arise. Who gets to play God? Would it be morally advisable for human beings to create another entity that thinks and learns in the same manner as us? Who should be held accountable for that entity’s decisions – the creator, the user, or the program itself? What would it mean to create something “like us”? What is that intangible human quality that (so far) eludes even the most sophisticated computational architectures?

The more I integrate AI tools into my own life, the more I question life. I see AI as a testament to human innovation. I’m in awe of the minds that have created these systems.

This appreciation is not what keeps me awake at night. What intrigues me is that AI is more than a tool. It is a mirror, one that prompts me to reflect on the very essence of what it means to be a person. I once thought that my ability to feel and to love, my experience of pain, suffering and joy, my awareness of my mortality, my capacity for empathy and compassion, my ability to question my own existence, my appreciation for art and my desire for connection made me human. Maybe I’m right. Maybe it really has been about my ability to select traffic lights all along.

Your thoughts 

Elizabeth understands that whatever AI is in the depths of its algorithms, it appears to us as a voice with a point of view. Treating it as if it really did have a point of view can tell us a lot about what “having a point of view” means or doesn’t mean for us humans.

If you have a point of view on this question, please follow Elizabeth’s example and share it with us by writing to us at dialogue@fairobserver.com. It can be a comment or a full article. We will share your point of view with the world.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
