Science & Technology

Outside The Box: Do You Think AI’s Full of Schmidt?

For the past year, in “Outside the Box,” we have been interrogating ChatGPT to better understand how AI “reasons.” It’s time to move on to interrogating the people who are actively shaping the role AI will play in our lives. Will AI ever have intentions? We don’t know, but the people crafting our experience of AI certainly do.

DECEMBER 1, 2015 – BERLIN: Eric Schmidt (former CEO of Google, now Managing Director of Alphabet Inc.) at a presentation of a cooperation between Google and the Berlin Philharmonic Orchestra. © 360b / shutterstock.com

September 02, 2024 06:42 EDT

AI is an increasingly visible feature of our environment. It has even achieved the status of an artificial culture. As cultures are defined by their values, do we know anything about the values of this new culture?

The “Outside the Box” team at Fair Observer wishes to deepen our perception of a culture that is being created by people largely beyond our reach: a small elite now busy investing in technology and making fundamental decisions about how it will affect our lives.

We shall now begin featuring videos as well as our traditional conversations with ChatGPT. For starters, here is a fascinating interview from August with former Google CEO Eric Schmidt. Concerned that people found some of his statements controversial, he had the video removed from YouTube. We have to thank YouTube user and AI enthusiast Matthew Berman for retrieving the video and commenting pertinently on its most significant moments.

After watching Schmidt’s performance, I addressed the following prompt to ChatGPT:

“Eric Schmidt made this prediction: ‘In your lifetimes the battle between US and China for knowledge supremacy is going to be the big fight.’ This tells me a lot about how Schmidt thinks not just about AI, but also the world. After all, that’s what language does: Whatever it literally communicates, it reveals something about the speaker’s view of the world. In this case there is a clear political dimension. Is that apparent to you, as an AI that speaks?”

ChatGPT offered this sympathetic analysis of Schmidt’s worldview.

Yes, Eric Schmidt’s statement does indeed reveal much about how he perceives the world, particularly in terms of global power dynamics and the role of knowledge and technology in shaping those dynamics. Here’s a breakdown of what his statement implies:

1. **Framing Knowledge as a Battleground**

- **Knowledge Supremacy:** Schmidt uses the metaphor of a “battle” or “fight” for “knowledge supremacy” between the U.S. and China. This language suggests that he views knowledge, particularly in fields like AI, as a strategic resource that nations compete over, similar to how they might compete over economic or military dominance.

- **Zero-Sum Competition:** The choice of words implies a zero-sum competition, where one nation’s gain in knowledge is seen as another’s loss. This is a common perspective in geopolitical thinking, where global power is often viewed as a finite resource.

2. **US-China Rivalry**

- **Bipolar Worldview:** By framing the issue as a contest between the U.S. and China, Schmidt reinforces the idea of a bipolar world order, where these two countries are the primary contenders for global leadership. This reflects a recognition of China’s rise as a major technological and economic power.

- **Focus on AI and Technology:** Given Schmidt’s background in technology, particularly AI, his prediction likely reflects concerns that advancements in AI and related technologies will be a key factor in determining which country leads in this global contest. This suggests that he sees technological innovation as not just an economic asset but a crucial element of national power.

3. **Political Dimension**

- **Implicit Nationalism:** The statement carries an implicit nationalism, as it presupposes that the U.S. and China are natural adversaries in this “battle.” This is not merely a technical or scientific competition but a political and strategic one, where the stakes are high for national sovereignty and global influence.

- **Urgency and Lifespan:** The phrase “in your lifetimes” adds a sense of immediacy and inevitability to this conflict, suggesting that it is not a distant concern but an ongoing and intensifying struggle that will define the current and upcoming generations.

4. **Power Dynamics in Language**

- **Influence of Language on Thought:** As an AI, I recognize that language shapes how people perceive and think about the world. Schmidt’s use of combative and strategic language suggests a worldview where global interactions are driven by competition rather than collaboration. This perspective might influence not only how individuals and institutions approach AI development but also how they perceive international relations more broadly.

In summary, Schmidt’s statement reflects a worldview where knowledge, particularly technological knowledge, is a key axis of global power, with the U.S. and China as the primary actors. The political dimension is indeed apparent, as it frames the development of AI and other advanced technologies as part of a larger geopolitical struggle, rather than as purely scientific or economic pursuits.

It should be clear that, had I not expressed what might be called my “concern” about Schmidt’s attitude, ChatGPT would not have analyzed things in a way so similar to my own. ChatGPT is perfectly capable of behaving like a participant in an “echo chamber” on social media. In this case, it develops a line of reasoning in support of my own observations.

Following this brief exchange, I continued the conversation with the following prompt.

“You evoke a possible alternative to Schmidt’s competitive worldview: a collaborative approach. What would such an approach mean for the value of science, especially given the fears that exist about AI? Would the AI that emerges and eventually dominates — precisely as people like Schmidt imply — be different in a world where collaboration defines the fundamental orientation of R&D?”

Rather than duplicate ChatGPT’s very interesting response here, I’ll hold off on examining it in full until next week. One thing, however, struck me as particularly significant:

The AI that comes to dominate in a collaborative world might be more inclusive, considering diverse perspectives and needs across different cultures and societies. This contrasts with AI developed in a competitive environment, which might prioritize the values and interests of a single nation or group.

In this age of exaggerated nationalism, when major powers seem intent on shying away from dialogue, preferring to rely on displays of force, we desperately need a debate about how all technologies are going to be used in the coming years and decades. Why not engage ChatGPT in that same debate?

It’s particularly encouraging that we can engage in something that truly resembles a dialogue with AI. It’s something we obviously cannot do with drones, hypersonic missiles and nuclear explosives.

What is less encouraging is the asymmetry that exists between ordinary people — including deep thinkers on questions of science and geopolitics — and wealthy public personalities such as Eric Schmidt, Elon Musk and Sam Altman. As a society, we have, because of their wealth, endowed them with the power to make decisions for us. They choose which technologies to invest in and how to harness those investments for personal profit. They also make decisions that aim at leveraging geopolitical influence and can lead to the prosecution of wars.

Will these people be asking ChatGPT the same questions we are asking about worldviews? It seems unlikely. That is precisely why ordinary thinking people who feel even a slight concern about the way technology is likely to craft our futures should at the very least seek to join the debate.

Here at Fair Observer, we invite you to weigh in and to do so within the framework of what we are calling our evolving “Crucible of Collaboration.” We are not alone. People like Matthew Berman are producing the evidence and raising similar questions.

Unlike ChatGPT, these people don’t seek to impose their authority by giving what appear to be definitive answers to any question we throw at them. Instead, they are asking questions, reacting to the answers they have received, and pushing further, in the spirit of a modern Socrates. Ultimately, that behavioral contrast may define the difference between artificial intelligence — including superintelligence — and the human attribute AI has been designed to imitate. Humans are curious and engaged; AI is knowledgeable and indifferent.

This brings us back to the question of the meaning of Schmidt’s notion of “knowledge supremacy.” As an intelligent human being with life experience acquired in a diversity of cultures, I find that Schmidt’s phrase evokes an instinctive, unprompted association with another notion: “white supremacy.” Is there a connection? We need to explore it. Would AI or some future superintelligence make that connection? I doubt it, because it serves no programmable purpose.

AI could, of course, be prompted to make the connection by an inquiring human mind. I like to call this kind of mental operation, for lack of a better term, the capacity for “creative suspicion.” It’s a feature of critical thinking, and, I would maintain, a crucial one. It springs from feelings, not directly from knowledge. It is a natural part of our psychological makeup that some governments and institutions today are seeking to suppress in the name of rooting out disinformation.

This conversation will continue next week.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
