AI Race - If you don’t push forward, I will
Why the AI race is a prisoner's dilemma
In part 2 of this series, I explain the second and third pillars of AI risk - AI Race and Malicious Use.
AI Race
During the Cold War, the US and the Soviet Union knowingly participated in an arms race that could have ended humanity. Perhaps the classic model for explaining a race, whether an arms race or an AI race, is the Prisoner’s Dilemma. Because each party assumes the other will defect, it must protect its own citizens by strengthening its military, or in our case, its AI capability. Lo and behold, this signals to every other country that it needs to do the same. Consequently, all countries are drawn into a vicious race to acquire ever more powerful AI systems.
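To make the dilemma concrete, here is a minimal sketch in Python. The payoff numbers are illustrative assumptions chosen only to satisfy the Prisoner’s Dilemma ordering (racing against a restrainer pays best, mutual restraint second, mutual racing third, restraining alone worst); they are not estimates of real-world stakes.

```python
# A minimal sketch of the arms race as a Prisoner's Dilemma.
# The payoff numbers are illustrative assumptions, not empirical estimates.

# Each country chooses "restrain" (cooperate) or "race" (defect).
# payoffs[(my_move, rival_move)] = my payoff; higher is better.
payoffs = {
    ("restrain", "restrain"): 3,  # mutual restraint: safe and stable
    ("restrain", "race"):     1,  # I restrain while my rival races: worst case
    ("race",     "restrain"): 4,  # I race while my rival restrains: strategic edge
    ("race",     "race"):     2,  # mutual racing: costly and risky for both
}

def best_response(rival_move: str) -> str:
    """Return the move that maximizes my payoff, given the rival's move."""
    return max(("restrain", "race"), key=lambda my_move: payoffs[(my_move, rival_move)])

for rival_move in ("restrain", "race"):
    print(f"If my rival chooses {rival_move!r}, my best response is {best_response(rival_move)!r}")
# Both lines print 'race': racing is a dominant strategy, so (race, race) is the
# unique equilibrium, even though mutual restraint pays each side more (3 > 2).
```

Whatever the rival does, racing pays more, so both sides race; yet both would have been better off restraining together. That gap between individual incentive and collective outcome is the trap.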
We see this play out most evidently between the US and China today, with export bans on semiconductors and GPUs. The AI race between the United States and China has unfolded like a high-stakes game of geopolitical ping pong, with each move by one side prompting a swift, strategic response from the other. It began with the United States leveraging its dominance in semiconductor technology to curb China’s access to the hardware necessary for cutting-edge AI development. In 2022, the U.S. imposed sweeping export controls, effectively barring companies like NVIDIA from selling their most advanced chips, including the A100 and H100 GPUs, to China. It also leaned on allies like the Netherlands and Japan to restrict the sale of critical chipmaking equipment, especially the EUV lithography machines manufactured solely by ASML.
In response, China doubled down on domestic innovation. State-backed firms like SMIC and Huawei intensified efforts to build an indigenous chipmaking supply chain. Chinese AI startups like Baichuan and DeepSeek surged forward, releasing open-source large language models that rivaled Western offerings. At the same time, China ramped up its purchases of downgraded NVIDIA chips (like the A800 and H20), stockpiling what it could before further restrictions came into effect. And when legal access dried up, smuggling networks filled the gap. Between late 2024 and early 2025, an estimated $1 billion worth of restricted chips flowed into China through black market channels.
At an AI summit in Shanghai in July 2025, China proposed a new international framework for AI governance, subtly positioning itself as a responsible power amid U.S. tightening. Meanwhile, SMEE, China’s would-be rival to ASML, announced prototype progress in developing its own lithography tools, though it remains years behind in capability.
The U.S. countered with policy shifts of its own. A new Executive Order in 2025 replaced previous safety-focused guidance with a directive prioritizing national AI leadership. The White House unveiled the “Winning the AI Race” Action Plan, which redirected federal research, investment, and infrastructure toward accelerating innovation. OpenAI and Anthropic were encouraged to release more open-weight models to keep pace with China’s open-source momentum, and eventually OpenAI did.
AI aside, this phenomenon of a technological race has impacted lives across industries. In the late 1960s, facing increasing competition from foreign brands, Ford rushed to develop the Pinto, a car whose gas tank sat dangerously close to the rear bumper. Racing to release the car ahead of its competitors on an ambitious timeline, Ford shipped an underprepared design and was later sued over the numerous deaths that resulted. Ford’s president at the time was fond of saying, “Safety doesn’t sell.”
In 2019, OpenAI restructured from a nonprofit into a “capped-profit” entity. Numerous controversial articles in the media have documented why this change happened, most pointing to Altman’s desire to chase commercialization and a loosening of rigorous safety testing. In the years that followed, those in the company who believed in safety-first design left and founded Anthropic, with its focus on Constitutional AI. Or did Dario simply see the potential to create his own fortune? That is one version of the story we will never know unless we were the players ourselves. Sometimes, I wonder if it is at all possible to decouple money from AI research. If AI researchers were allowed to raise money only for training models and had to extricate themselves from personal wealth, what kind of model would we see then? Could we have a much safer model? Would they then put the collective good first and foremost?
Malicious Use
In the fourth tier of the AI capability levels that OpenAI has published, AI is described as one day aiding us in invention. Scientists are already using AI to develop new drugs. But if we can develop cures for ourselves, bad actors can equally develop harmful substances. Many outlets have sounded the alarm that the next big danger is an AI-assisted pandemic. Countries may engage in biochemical warfare.
If you live in the US, you are no stranger to the rise in shootings in recent years, driven largely by the ease of obtaining a gun. As ubiquitous as AI is, it can be far more dangerous than a gun in malicious hands. The core issue here is what security experts call “dual-use technology”: the same AI capabilities that can benefit humanity can also be weaponized. Think of it this way: when we teach an AI system to understand molecular structures and chemical interactions to develop life-saving drugs, we are essentially giving it the same knowledge needed to design harmful substances. The AI doesn’t distinguish between helpful and harmful applications; it simply applies its learned patterns to whatever problem it’s asked to solve.
A single bad actor with access to AI could potentially design a biological weapon that spreads globally, launch thousands of sophisticated cyberattacks simultaneously, or generate convincing disinformation campaigns that reach millions of people. Traditionally, developing biological or chemical weapons required extensive specialized knowledge, expensive laboratory equipment, and significant resources typically available only to nation-states or well-funded terrorist organizations. AI is changing this equation dramatically. A person with basic computer skills could potentially use AI tools to design novel toxins, optimize delivery methods, or identify vulnerabilities in critical infrastructure, all from their laptop.
In biotechnology, AI systems like those used by companies such as Insilico Medicine can analyze vast databases of molecular interactions to predict how different compounds will behave in the human body. While this accelerates legitimate drug discovery, the same capability could help someone design more effective biological weapons or find ways to make existing pathogens more virulent or resistant to treatments.
The international security implications create what we might call a "malicious use arms race." If one nation develops AI-assisted biological weapons capabilities, others feel compelled to develop similar capabilities for deterrence. This dynamic could lead to a world where the threshold for catastrophic biological warfare is significantly lowered.

