
Outside the Box: Will AI’s Masters Know How to Collaborate?

For the past year, in “Outside the Box,” we have been interrogating ChatGPT to better understand how AI “reasons.” It’s time to shift our focus to the people who are actively promoting the role AI will play in our lives.

Woman and AI robot working together in the office, automation and technology concept © Stock-Asso / shutterstock.com

September 09, 2024 06:39 EDT

Artificial intelligence is everywhere these days. Most often it remains invisible. Don’t believe me? An article we published last week, “The Impact of Automation on Corporate Decision-Making,” never once mentions AI! Have we become that inured? Just by looking at it, I can’t even be sure AI didn’t write that informative article.

When we read a text, examine a photo, react emotionally to a video or use an app, we don’t think about how they were produced, even if we suspect AI had a hand in it. Humans made these objects available to us, managing their creation, production and delivery. Humans bank the revenue they generate. Those same humans will also be monitoring our use of those objects.

AI is everywhere. But for the moment, it does little without serious human involvement. On the positive side, this means that the norm today is fundamentally collaborative: people with projects using AI to advance some part of those projects.

Like everything else in our commercial culture, AI does things because people want them done. Or rather, because people want to earn money from the things that get done. Those people make decisions about how they get done, what purpose they serve and, ultimately, how much we consumers will end up paying. These same people also make decisions about how that cost might be disguised or transferred. 

AI does amazing things, but rarely on its own. The dominant trend is collaborative. AI can certainly impress us with its “creations”: images, videos, industrial design, college essays, poems and more. But, for the moment, whatever AI produces, however useful or useless, is the result of a collaboration.

Futurologists like to predict the awe-inspiring “singularity,” the moment when super-intelligence leaves human intelligence definitively in the dust. They imagine a world in which collaboration between AI and humans will no longer be necessary. But, even if this were true, how would our societies react? Any scientifically minded psychologist, sociologist or economist would admit that, yes, it’s logically possible to imagine machines programming themselves and even assuming control over human environments. But something critical will still be missing: the indomitable need humans have to assert control over other humans through the acquisition of property and power. Often those who make such predictions are themselves obsessed with power.

Last week, “Outside the Box” began its shift toward the strategic, economic and cultural side of AI. We started by examining the thinking and intentions of the kind of people now defining the future of AI. Alongside the technology itself, a culture is growing, guided by those who are investing in AI and making fundamental decisions about how it will affect our lives.

That column featured a fascinating but controversial interview with Eric Schmidt, the former CEO of Google. Embarrassed by his own candor, Schmidt subsequently had the video removed from YouTube.

This week, we return to questions raised in that video. Again, we must thank YouTube user Matthew Berman for retrieving the video and offering enlightening comments to clarify Schmidt’s discourse. Matthew described himself to me as “a former tech founder and current content creator trying to help as many people as possible learn how to leverage AI in their lives and businesses.”

After getting ChatGPT to explore the meaning of Schmidt’s prediction of a “battle between US and China for knowledge supremacy,” I followed up with this prompt about the culture of political competition Schmidt assumes to be the norm.

“You evoke a possible alternative to Schmidt’s competitive worldview: a collaborative approach. What would such an approach mean for the value of science, especially given the fears that exist about AI? Would the AI that emerges and eventually dominates — precisely as people like Schmidt imply — be different in a world where collaboration defines the fundamental orientation of R&D?”

A collaborative approach to AI development, as opposed to a competitive one, would significantly alter the value and nature of science, as well as the kind of AI that emerges. Here’s how:

1. **The Value of Science in a Collaborative World**

  • **Knowledge as a Public Good:** In a collaborative framework, knowledge, including advancements in AI, would be treated more as a public good rather than a strategic asset. This perspective emphasizes the idea that scientific knowledge should be shared widely and used to benefit humanity as a whole, rather than to gain an advantage over rivals.
  • **Global Problem-Solving:** Science would be increasingly oriented toward addressing global challenges—such as climate change, pandemics, and poverty—rather than serving the strategic interests of individual nations. This would encourage multinational cooperation on research projects, pooling resources and expertise from around the world to tackle issues that no single country can solve alone.
  • **Open Science:** The value placed on open-access research and data-sharing would increase. The open science movement, which promotes transparency, reproducibility, and the democratization of knowledge, would thrive in a world where collaboration is the norm. This could lead to more rapid advancements, as researchers build on each other’s work more freely.

2. **AI in a Collaborative Context**

  • **Human-Centric AI:** An AI developed in a collaborative world would likely prioritize ethical considerations and the broader impact on humanity. This could lead to the creation of AI systems that are designed to enhance human well-being, promote social equity, and address global challenges. The focus would be less on achieving superiority and more on creating systems that are beneficial to all.
  • **Ethical Standards and Governance:** Collaborative efforts could lead to the establishment of global ethical standards and governance structures for AI. Rather than each nation developing its own set of rules (potentially in opposition to others), there would be a concerted effort to create a unified framework that ensures AI is developed and deployed in ways that are fair, safe, and aligned with human values.
  • **Transparency and Trust:** Collaborative R&D would foster greater transparency in how AI systems are developed and used. This could alleviate some of the fears surrounding AI, as people would have more insight into the processes behind these technologies and the goals they are intended to serve. Increased trust in AI systems could emerge from a global consensus on best practices and shared goals.

3. **The Nature of Dominant AI in a Collaborative World**

  • **Inclusive and Equitable AI:** The AI that comes to dominate in a collaborative world might be more inclusive, considering diverse perspectives and needs across different cultures and societies. This contrasts with AI developed in a competitive environment, which might prioritize the values and interests of a single nation or group.
  • **Slower, Safer Development:** A collaborative approach might lead to more deliberate and cautious AI development. Instead of racing to achieve breakthroughs (potentially at the expense of safety and ethical considerations), collaboration could ensure that AI systems are thoroughly vetted, with global input helping to identify and mitigate risks.
  • **Focus on Collective Goals:** The objectives of AI development might shift toward global goals like sustainability, health, and education. AI would be seen as a tool to address collective human challenges rather than as a means to gain economic or military advantage.

4. **Addressing AI Fears**

  • **Mitigating Risks Through Cooperation:** Many fears about AI stem from its potential to be used unethically or to perpetuate inequalities. A collaborative approach would likely involve shared responsibility for mitigating these risks, with international bodies overseeing AI development to ensure that it aligns with agreed-upon ethical standards.
  • **Avoiding an AI Arms Race:** One of the significant fears is that competition will lead to an AI arms race, where nations rush to develop increasingly powerful AI systems without adequate consideration of the consequences. Collaboration could prevent this by promoting transparency, shared goals, and a focus on safety.

Conclusion

A collaborative approach to AI R&D would fundamentally change the landscape of scientific progress and the nature of the AI that emerges. Science would be valued more for its potential to solve global challenges and improve the human condition, rather than as a tool for national superiority. The AI systems developed in this environment would likely be more ethical, transparent, and oriented toward collective human flourishing. This approach could also help alleviate some of the fears associated with AI by ensuring that its development is guided by shared values and common goals, rather than by competition and strategic rivalry.

Next week, we’ll look further into the political and geopolitical implications of these two worldviews: competitive and collaborative.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

