A few weeks ago, I attended the Jaipur Literature Festival (JLF) in India. Called the “greatest literary show on Earth,” this annual gathering of famous authors and thinkers was founded in 2006 by British author and historian William Dalrymple.
During the panel titled, “From the Ruins of Empire,” the pin dropped. The JLF website introduced the panel as such:
“The legacy of the British Empire reshaped the modern world, leaving a trail of upheaval, resistance, and transformation. Pankaj Mishra, Jane Ohlmeyer, Christopher de Bellaigue, and Stephen R. Platt join Anita Anand to explore how imperial domination fueled intellectual revolutions and political awakenings across Asia and beyond. Together they uncover the political and intellectual movements that challenged colonial power, drawing connections between the past and the influence of the empire on global politics, identity, and resistance movements today.”
What were the first questions raised to Pankaj Mishra, author of the book, From the Ruins of Empire: The Revolt Against the West and the Remaking of Asia? They were about the new generative AI model, DeepSeek:
- How did we get there?
- How do we craft the best path possible for the future of AI?
- Why is open source key in AI development?
In this piece, I’ll be addressing all three questions.
How did we get there: a short history to understand DeepSeek’s reception
How does DeepSeek invite itself to a literature festival? What historical events let to its prominence, when arguably some of the breakthrough open source AI contributions that enabled its creation originated elsewhere, including those in France (Mistral AI, kyutai1 and the Meta FAIR Paris team who started it all with the Llama language model), the United Kingdom (Stability AI) and Germany (Black Forest Labs)?
The answer is simple: a historically-rooted rivalry.
While European AI labs received accolades for their open source AI breakthroughs — especially as DeepMind went proprietary and OpenAI transformed into a for-profit entity — DeepSeek’s reception in Asia had a much deeper historical resonance.
For instance, an article in the Financial Times on June 11, 2024, highlighted the success of Mistral AI:
“Mensch said that Mistral had used a little more than 1,000 of the high-powered graphics processing units chips needed to train AI systems and spent just a couple of dozen millions of euros to build products that can rival those built using much bigger budgets by some of the richest companies in the world, including OpenAI, Google and Meta.”
Yet DeepSeek’s launch was met with an overdose of media coverage, and its reception at JLF showed something more profound than just a discussion on AI performance. Why did Indian writers and journalists at the event, many of whom are often at odds with or critical of China, suddenly feel a shared struggle against the dominance of American AI Corporations (AICs)?
The pride and enthusiasm for DeepSeek across Asia are deeply rooted in colonial history and more recent corporate remarks.
The historical context: AI as a modern struggle for self-reliance
For Stephen Platt, also on the JLF panel and author of the book, Imperial Twilight: The Opium War and The End of China’s Last Golden Age, China’s tech ambition cannot be dissociated from its historical scars.
For Chinese leadership over the years, the Opium Wars (1839–1860) exemplify how Britain’s superior military and technological leadership humiliated China, forced territorial concessions and cemented for them a legacy of foreign exploitations. This Century of Humiliation remains a driving force for China’s current drive for self-reliance strategy, its aggressive investment in AI, semiconductors and other critical technologies — in summary its determination to avoid dependence on Western technology going forward — a lesson stitched into national consciousness.
The reason Indian panelists relate is several fold. Like China, the East India Company is a dark part of Indian history. There is no better book than William Dalrymple’s The Anarchy: The Relentless Rise of the East India Company to understand how the rise from a small trading company to a powerful force led to the collapse of the Mughal Empire and the denunciation of Western corporate greed. As this review by The Guardian puts it:
“Dalrymple steers his conclusion toward a resonant denunciation of corporate rapacity and the governments that enable it. This story needs to be told, he writes, because imperialism persists, yet it is not obviously apparent how a nation state can adequately protect itself and its citizens from corporate excess.”
More recently, and during the JLF panel, British journalist Anita Anand brought up the infamous video of OpenAI CEO Sam Altman answering a question on the capacity of India and its talent to rival AICs:
“The way this works is we’re going to tell you, it’s totally hopeless to compete with us on training foundation models [and] you shouldn’t try. And it’s your job to try anyway. And I believe both of those things. I think it is pretty hopeless.”
Open source AI as a symbol of resistance
DeepSeek, and European labs before it, offered hope in the AI race. The way they chose to do so was by favoring open source.
Moreover, the DeepSeek R1 release needs to be understood within a deeply-entrenched institutionalized rivalry, with the United States in particular — one so deep that Europe is often not mentioned when it comes to discussing competition with US technology.
For instance, here is a chart from a Special Competitive Studies Project (SCSP) report where Europe is never mentioned:

The AICs’ dominance triggers colonialism comparison in the West, too. In an excellent August 2024 op-ed, “The Rise of Techno-Colonialism,” European Innovation Council member Hermann Hauser and Senior Researcher at University College London (UCL) Hazem Danny Nakib write:
“Unlike the colonialism of old, techno-colonialism is not about seizing territory but about controlling the technologies that underpin the world economy and our daily lives. To achieve this, the US and China are increasingly onshoring the most innovative and complex segments of global supply chains, thereby creating strategic chokepoints.”
The pioneering open source approach of European AI labs like Mistral, kyutai and Meta’s FAIR Paris team, and more recently DeepSeek, has presented a viable alternative to the proprietary AI model strategy of the AICs. These open source contributions are now resonating strongly globally and have further motivated the embrace of open source AI as a symbol of resistance against American AI dominance.
The case for open source: history repeats itself
There is tremendous energy and speed in technological collaboration. Software code is particularly suited for this model.
French Nobel Economics laureate Jean Tirole was once puzzled by the emergence of open source. In his 2000 paper with Josh Lerner, The Simple Economics of Open Source, they ask:
“Why should thousands of top-notch programmers contribute freely to the provisions of a public good? Any explanation based on altruism only goes so far.”
It is understandable one would ask the question then, but anyone following AI for the last few years should not wonder post-DeepSeek R1 release. The power of open sourcing Llama by FAIR Paris at Meta, the meteoric rise of Mistral and its founders by open sourcing a 7B language learning model (LLM) and DeepSeek R1 prove why these programmers and scientists do it.
One also understands why Sam Altman and his co-founders chose “OpenAI” as a name to start their company and attract talent. Would any of these frontier lab teams have added such resounding publicity and built such personal branding amongst the AI community so quickly had they chosen to go proprietary rather than open source? The answer is unequivocally no.
There are two powerful quotes also included at the beginning of the paper by two monuments of the open source software movement. These quotes from 1999 by programmer Richard Stallman and developer Eric Raymond, respectively, explain the reception of DeepSeek at JLF and highlight the deeper ideological forces at play:
“The idea that the proprietary software social system—the system that says you are not allowed to share or change software—is unsocial, that it is unethical, that it is simply wrong may come as a surprise to some people. But what else can we say about a system based on dividing the public and keeping users helpless?”
“The utility function Linux hackers is maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers. … Voluntary cultures that work this way are actually not uncommon; one other in which I have long participated is science fiction fandom, which unlike hackerdom explicitly recognizes egoboo (the enhancement of one’s reputation among other fans).”
The trajectory of Unix in the 1970s and 1980s serves as a powerful analogy for what is happening in AI today. What happened with Unix and AT&T serves as a foretelling of the AI open source moving its epicenter to Europe once OpenAI created its for-profit and accepted a $10 billion investment from Microsoft and others.
Originally, AT&T’s Bell Labs had promoted and freely distributed Unix within academia in the 1960–1970s. That free distribution fostered both innovation and adoption. Then in the late 1970s, AT&T decided to impose a proprietary license that restricted access. This inevitably led Berkeley University to launch BSD Unix — an open alternative — and ultimately Linus Torvalds to launch Linux. Linus Torvalds developed Linux in Europe, shifting the epicenter of open source software away from the US.
One can easily draw the parallels when even the geography of evolution matches with what we have witnessed in the AI field, except this time new geographies have also emerged: Abu Dhabi’s TII with its Falcon Models, China’s DeepSeek, Alibaba’s Qwen and more recently, India’s Krutrim AI Lab with its open source models for Indic languages.
The Meta FAIR Paris team, along with leading European AI labs and newer frontier labs (DeepSeek, Falcon, Qwen, Krutrim), have accelerated AI innovation. By openly sharing research papers and code, they have:
- Trained a new generation of AI engineers and researchers on state-of-the-art AI techniques.
- Created an ecosystem of open collaboration, allowing rapid advancements outside of proprietary AI labs.
- Provided alternative AI models, ensuring that AI is not monopolized by American AI Corporations.
These four ecosystems (Europe, India, Abu Dhabi and China) could bring distinct strength to an open source AI alliance to catch up with the dominant AICs still operating under a proprietary AI mindset.
In an Ask Me Anything (AMA) questionnaire on January 31, 2025, following the release of DeepSeek R1, Altman acknowledged this proprietary AI model approach had been on the wrong side of history.

In due course, AI labs around the world would decide to join this alliance to advance the field together. It is not the first time that a scientific field has had a non-profit initiative cross boundaries and political ideology. It has the merit of being a mode of competition that does not trigger the anti-colonial grievances that the Global South might express.
Historical precedents: the Human Genome Project as a model for AI
As a biologist, I am particularly aware and sensitive to what the Human Genome Project (HGP) had achieved and how ultimately it beat the for-profit initiative of Celera Genomics for the benefit of the field and humanity overall.
The Human Genome Project was a groundbreaking international research initiative that mapped and sequenced the entire human genome. It was completed in 2003 after 13 years of collaboration. According to a report published in 2011 and updated in 2013, from an investment of $3 billion it has generated nearly $800 billion in economic impact (a return on investment to the US economy of 141 to one — every $1 of federal HGP investment has contributed to the generation of $141 in the economy). It has revolutionized medicine, biotechnology and genetics by enabling advancements in personalized medicine, disease prevention and genomic research. The sequencing work and research was performed by 20 laboratories across six countries: the US, UK, France, Germany, Japan and China.
Whereas the competing for-profit project run by Celera Genomics attempted to sequence genomic sequences, the HGP focused on open data sharing enshrined in its Bermuda Principles. These principles were established during the International Strategy Meeting on Human Genome Sequencing held in Bermuda in February 1996. These principles were key in shaping data-sharing policies for the HGP and have had a lasting impact on genomic research practices globally. Its key tenets were:
- Immediate Data Release: All human genomic sequence data generated by the HGP were to be released into public databases preferably within 24 hours of generation. This rapid dissemination aimed to accelerate scientific discovery and maximize the benefits to society.
- Free and Unrestricted Access: The data were to be made freely available to the global scientific community and the public, ensuring no restrictions on their use for research or development purposes.
- Prevention of Intellectual Property Claims: Participants agreed that no intellectual property rights would be claimed on the primary genomic sequence data, promoting an open-science ethos and preventing potential hindrances to research due to patenting.
In terms of governance, the HGP was a collaborative and coordinated scientific initiative rather than a standalone organization or corporation. It was not a single entity with permanent employees but rather a decentralized effort funded through government grants and contracts to various research institutions. Part of its budget (3–5%) was set aside to study and address ethical, legal and social concerns of human genome sequencing.
Bridging AI safety and open source AI
One other key advantage of open source AI is its role in AI safety research.
The AI Seoul Summit in 2024 decided to focused exclusively on existential risks at a time when the AICs were so ahead of the rest of the world. As recently as May 2024, former Google CEO Eric Schmidt proclaimed the US to be 2–3 years ahead of China at AI, while Europe is too busy regulating to be relevant. Had it been successful, the Summit would have effectively ceded control of AI safety decisions to these corporations. Fortunately, it was not.
Now that open source AI continues to bridge the technological gap, safety discussions will no longer be dictated solely by a handful of dominant players. Instead, a broader and more diverse group of stakeholders — including researchers, policymakers and AI labs from Europe, India, China and Abu Dhabi — now have an opportunity to shape the discussion alongside the AICs.
Moreover, open source AI enhances global deterrence capabilities, ensuring that no single actor can monopolize or misuse advanced AI systems without accountability. This decentralized approach to AI safety will help mitigate potential existential threats by distributing both capabilities and oversight more equitably across the global AI ecosystem.
A Human AI Project with the Paris Principles
What role can the AI Action Summit in Paris next week play in shaping the future of AI?
This would be a crucial opportunity to establish a Human AI Project, modeled after the Human Genome Project, to advance and support open source AI development on a global scale. One can already see that the current open source contributions, from the European pioneering AI labs to DeepSeek, are already accelerating the field and helping close the gap with AICs.
AI’s ability is in great part enhanced by the maturity of the general open source ecosystem with thousands of mature projects, dedicated governance models (for example like the Linux Foundation or Apache Software Foundation) and deep integration into enterprise, academia and government.
The AI open source ecosystem also benefits from platforms like Github or Gitlab. More recently, dedicated platforms for open source AI such Hugging Face — a US corporation co-founded by three French entrepreneurs — have begun playing an important role as distribution platforms for the community.

Given the relative maturity of the open source AI ecosystem relative to human genome sequencing at the beginning of the 1990s, how could open source AI benefit from a Human AI Project?
For one, the European Union is often criticized by the AICs and its own frontier AI Labs on its regulation of open source. A Human AI Project could dedicate a joint effort to come up with regulatory alignment and standards across participating countries and regions. A coordinated approach on this with initial contribution from Europe, India, Abu Dhabi and China could facilitate the dissemination of open source models across this shared regulatory region (a kind of free trade area for open source).
While not definitively proven, there are parallels to the rivalry-driven dynamics that shaped the reaction to DeepSeek at JLF. Similarly, AI regulation could be crafted with a focus on fostering innovation and maximizing public benefit — both for enterprises and consumers — rather than serving as a potential mechanism to impede the progress of AICs or hinder homegrown AI champions striving to close the gap.
The project could also facilitate talent exchange, and also fund a shared compute infrastructure (linked to energy infrastructure) for open source AI. One can easily see from the chart below that talented STEM graduates in some parts of the world might currently find it difficult to access the world class AI infrastructure that their country is lacking.

Another area of collaboration would be to come up with best practices on open access standards for models and data sets around weights, code and documentation.
The project could also foster a global collaboration on AI Safety Research. Instead of racing in secret to fix alignment issues, researchers from Paris to Beijing to Bangalore could work together on evaluating models and mitigating risks. All safety findings (for example, methods to reduce harmful outputs or tools for interpretability) could be shared promptly in the open domain.
This principle would recognize that AI safety is a global public good — a breakthrough in one lab (say, a new algorithm to make AI reasoning transparent) should benefit all, not be kept proprietary. Joint safety benchmarks and challenge events could be organized to encourage a culture of collective responsibility. By pooling safety research, the project would aim to stay ahead of potential AI misuse or accidents, reassuring the public that powerful AI systems are being stewarded with care.
The focus on existential risk at 2023’s UK AI Safety Summit at Bletchley Park, by overfocusing on the Nuclear Proliferation analogy, missed an opportunity to look at other areas where safety is considered a public good: cybersecurity, antibiotics and immunology (with a number of interesting initiatives post Covid-19), and aviation safety.
The project could also partner with and further the work currently carried out by the private ARC Prize Foundation to foster the development of safe and advanced AI systems. The ARC Prize, co-founded by François Chollet, creator of the Keras open source library, and Mike Knoop, co-founder of the Zapier software company, is a nonprofit organization that hosts public competitions to advance artificial general intelligence (AGI) research. Their flagship event, the ARC Prize competition, offers over $1 million to participants who can develop and open-source solutions to the ARC-AGI benchmark — a test designed to evaluate an AI system’s ability to generalize and acquire new skills efficiently.
The ARC Prize Foundation’s emphasis on open source solutions and public competitions would align seamlessly with the Human AI Project’s goals of fostering international collaboration and transparency in AI development as stated on the ARC Prize Foundation website under “AGI:”
“LLMs are trained on unimaginably vast amounts of data, yet remain unable to adapt to simple problems they haven’t been trained on, or make novel inventions, no matter how basic.
Strong market incentives have pushed frontier AI research to go closed source. Research attention and resources are being pulled toward a dead end.
ARC Prize is designed to inspire researchers to discover new technical approaches that push open AGI progress forward.”
Like HGP, the Human AI Project would dedicate part of its funding to ethical governance and oversight. This would also include discussion about copyright. The Project could help society think about the ethics of accessing the best source of information in training for free while developing proprietary models on top of it. In the biology space, it is well known that the Protein Data Bank, which was critical for Google DeepMind’s AlphaFold model to predict protein structure, likely required the equivalent of $10 billion of funding over a period of 50 years. The Project could help in thinking about how we continue to fund AI development or how the proprietary AICs should share revenue with original work creators.
Together, these Paris Principles and the Human AI Project would help advance AI globally in a more open, collaborative and ethical manner. They would build on what leading open source contributors from Europe to the Middle East, India and now China have already been able to achieve within the existing open source software and AI specific frameworks and platforms.
History repeats itself with AI
The opportunity in front of us is immense. Mistral AI, kyutai, BFL, Stability and more recently DeepSeek have given the public hope that a future where cooperation beats or at least rivals the proprietary AICs is possible.
We are still in the early days of this technological breakthrough. We should be thankful for the contributions AICs made to the field. The AI Action Summit should be an opportunity to foster cooperative innovation on a scale never before seen and bring as many players as possible to the right side of history.
It is 1789 all over again. We see before us the fight for technological sovereignty, the decentralization of power and a call for AI as a public good. And just like 1789, this revolution will not be contained.
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs
on subjects ranging from digital media and journalism to writing and critical thinking. This
doesn’t come cheap. Servers, editors, trainers and web developers cost
money.
Please consider supporting us on a regular basis as a recurring donor or a
sustaining member.
Will you support FO’s journalism?
We rely on your support for our independence, diversity and quality.
Comment