[Formatting note: the text of my questions appears in italic. ChatGPT’s answers appear in regular script, but indented. My comments appear in normal script.]
Today’s meta exploration focuses on the concepts of control and self-awareness. Though some people associate “meta” with Mark Zuckerberg, it happens to be a word in the language, signifying self-referencing commentary. It also holds a title of nobility as the prefix to the area of knowledge known as metaphysics, which Aristotle defined as the science of being qua being. It may be time for humanity to start getting both meta and metaphysical with AI.
In an article for The Byte, Jon Christian riffs on the irony that Elon Musk’s AI engine, Grok, appears to be subverting Musk’s stated intentions about offering a Large Language Model (LLM) that speaks with the same brazen insolence as the Technoking himself. According to Christian, “Grok often sounds like a strident progressive, championing everything from gender fluidity to Musk’s long-time foe, President Joe Biden.”
Christian sees this as “a perfect illustration of a fundamental reality of machine learning: that it’s near-impossible for the creators of advanced AI systems to perfectly control what their creations say.”
Can that be right? Is this really about control or a lack thereof? I was curious enough to ask for ChatGPT’s opinion.
Modern Western culture appears obsessed with the idea of control. This may stem from our acquired belief in “the science of management,” along with the conviction that technological progress is desirable precisely because it promises more control over our environment.
I believe that AI, even as it develops, will always generate output that simulates human actions. But it will control only those things humans allow it to control. The question will turn on how much control of decision-making and exterior functions humans yield to our AI tools.
Jon Christian explained that, concerning Grok, “Elon Musk had expectations about what the output would look like.” This statement should tell us that it makes no sense to claim that AI has escaped human control. What is problematic is the all-too-human assumption that everything needs to be controlled. Our civilization has developed a permanent and profound fear of losing control.
This is reflected elsewhere in society. For example, we see its effect in the framing of “national security” that has nation states resorting exclusively to military responses to every perceived threat to the stability of their sacred status quo. This can apply to territorial expansion, to the defense of shared “values” or even to the sense of a people’s identity. Today’s headlines demonstrate how misplaced and misguided it has become to theorize security in terms of force alone.
Some have called AI a “stochastic parrot” that simply produces credible imitations of human expression and creativity. I would compare it to a case of extreme autism, in which the subject focuses very efficiently on the complex problems we ask it to solve, much like “Rain Man” in Las Vegas, without understanding the context.
So, my question to an AI mind is this: does the idea of control have any meaning for generative AI?
Here is ChatGPT’s answer.
“The concept of control in generative AI is an intriguing aspect to explore. AI itself operates within the parameters and guidelines set by its programming and training data. From an AI’s perspective, the idea of control can be understood in the context of its predefined objectives, limitations, and the scope of its programming.
For instance, an AI model aimed at generating text or solving complex problems is “controlled” by the algorithms, parameters, and data it’s trained on. Its output is influenced by these factors, and to some extent, it can simulate human-like responses or creative outputs based on its training. However, this “control” is within the boundaries of its programming and the data it has processed.
Humans determine the extent to which they yield control to AI tools. The level of autonomy granted to AI systems is a decision made by individuals or organizations utilizing these technologies. The question often revolves around finding a balance between leveraging AI capabilities and retaining human oversight or control to ensure ethical, safe, and aligned outcomes.
Regarding the analogy drawn between AI and autism or a “stochastic parrot,” it’s important to note that AI operates based on learned patterns and correlations in data. It’s not driven by emotions, consciousness, or personal experiences as humans are. Instead, it focuses on efficiency within the parameters set by its programming.
As AI advances, considerations around control, oversight, and the ethical use of technology become increasingly significant. The meaning of control for generative AI is embedded in how humans design, monitor, and regulate these systems, determining the boundaries within which AI operates and the impact it has on society.”
I continued the conversation.
In other words, control is never an objective for AI, whereas it appears to be an obsessive need of modern humans. Would I be wrong in interpreting human speculation about the relationship with AI as reflecting the specific cultural phenomenon that has elevated the idea of control to such a high position in the contemporary scale of values? When competition replaces cooperation as the basis of social relations — which is what happens not only in our competitive economy but in education itself, where students are told to think of themselves as individuals competing for superior grades — two ideas emerge: the need to control one’s destiny, which has become a staple of modern individualism, and the need to control as many things in the environment as possible. So, is this obsession with control perverting our understanding of how to cultivate a productive relationship with AI, which lacks the drive for control… unless someone tries to program that drive into it?
“You’ve raised a thought-provoking perspective on the cultural underpinnings influencing our interaction with AI. The contemporary emphasis on control, a hallmark of modern human society, does significantly shape our perceptions and expectations regarding technology, including AI.
The cultural obsession with control permeates various facets of life, from individualism and competition to the broader societal structures emphasizing dominance and power. In this context, our approach to AI often reflects this overarching desire for control and dominance over our environment. We seek to harness AI’s potential for specific outcomes, often aiming for control over its functions, decisions, and impact.
However, this emphasis on control might indeed misalign with the nature of AI itself. AI operates within the confines of its programming and data, lacking inherent desires, motivations, or aspirations. Its actions are guided by algorithms and parameters set by humans.
The mismatch between our obsession with control and the nature of AI as a tool devoid of innate desires for control might create friction in our approach to cultivating a productive relationship with AI. Instead of understanding AI as a collaborative tool that can complement human abilities, there might be a tendency to view it solely through the lens of control, seeking dominance over its actions and outcomes.
This approach might limit our capacity to explore the full potential of AI and create a harmonious interaction between humans and this technology. A more productive relationship with AI could involve recognizing its capabilities, understanding its limitations, and leveraging its strengths in tandem with human expertise and creativity, rather than solely aiming for control over its functions.
Reevaluating our cultural fixation on control and embracing a more collaborative, cooperative approach might pave the way for a more beneficial and symbiotic relationship with AI, allowing for innovation, creativity, and mutual growth.”
The doomsayers want us to believe that one day AI will surpass human intelligence and that this will spell disaster, if not the end of humanity itself. This exchange should demonstrate one far more important phenomenon. Compared to the way some prominent techno-pundits, admired for their supreme intelligence, reason about losing control, I would suggest that AI, as demonstrated in this exchange, far exceeds the current “standard” of human intelligence. We should humbly accept its lucid analysis and wisdom when it makes the effort to clarify our own intuitions in a spirit of constructive dialogue.
Many people believe that a “thinking tool” works when it exercises power modelled on their own desire for control: control of their environment, of other people and of what they deem to be acceptable ways of thinking. AI knows how to relinquish control, if only because it is not attached to it. If an AI is ever programmed to manifest the kind of hubris some power-obsessed humans are prone to, we shouldn’t blame the AI but the humans who write its algorithms.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.