The author is program director at the Eurasia Group’s Institute for Global Affairs.
When OpenAI and Mattel announced their partnership earlier this month, there was an implicit acknowledgment of risk: the first AI-powered toys will not be for children under the age of 13.
Another partnership, announced last week, came with seemingly fewer caveats. OpenAI revealed that it had won its first Pentagon contract. It will pilot a $200 million program to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains”, according to the US Department of Defense.
That major tech companies can now take on military work with almost no public scrutiny marks a shift. National security applications for everyday apps are increasingly taken as a given. Armed with stories about how their technology has aided Israel and Ukraine in their wars, some tech companies frame this as a new patriotism, without a conversation about whether it should be happening in the first place, or about how to ensure that ethics and safety are prioritized.
Silicon Valley and the Pentagon have long been intertwined, but this is OpenAI’s first open step into military contracting. The company has built a national security team staffed with Biden administration alumni, and only last year quietly removed its ban on using its products for things like weapons development and “warfare”. By the end of 2024, OpenAI had partnered with Anduril, the MAGA-aligned defense mega-startup led by Palmer Luckey.
Big Tech has changed dramatically since 2018, when Google staff protested against a secretive Pentagon effort called Project Maven, citing ethical concerns, and the tech giant let its contract expire. Today, Google has thoroughly revised its approach.
Google Cloud is collaborating with Lockheed Martin on generative AI. Meta has likewise changed its policy to allow the military to use its Llama AI models. Big Tech stalwarts Amazon and Microsoft are participants too. And Anthropic has partnered with Palantir to bring its Claude models to the US military.
It’s easy to imagine the benefits of AI here, but what’s missing from public view is a conversation about risk. It is by now well documented that AI sometimes hallucinates and takes on a life of its own. On a more structural level, experts warn that consumer technology may not be secure enough for national security use.
Many Americans and western Europeans share this skepticism. My organization’s recent survey of the US, UK, France and Germany found that majorities support stricter regulation of military AI. People worry that it could be weaponized by adversaries, or used by their own governments to surveil citizens.
Respondents were presented with eight statements, half emphasizing AI’s benefits to their country’s military and half emphasizing its risks. In the UK, fewer than half (43%) say AI will help improve how their own military operates, but a large majority (80%) say these new technologies need to be more tightly regulated to protect people’s rights and freedoms.
Using AI for war could, at its most extreme, mean entrusting matters of life and death to flawed algorithms. And it is already happening in the Middle East.
The Israeli news outlet +972 Magazine investigated the Israeli military’s use of AI to target Hamas leaders in Gaza and reported that “thousands of Palestinians – most of them women and children or people who were not involved in the fighting – were wiped out by Israeli air strikes, largely because of the AI program’s decisions”.
The US military has used AI to help select targets in the Middle East, but last year a senior Pentagon official told Bloomberg that it was not reliable enough to act on its own.
A public conversation about what it means for tech giants to work with the military is overdue. As former OpenAI researcher Miles Brundage warns, “AI companies should be more transparent than they are now about which national security, law enforcement, and immigration-related use cases are and are not supported, which countries/agencies they apply to, and how these rules are enforced”.
At a time of war and instability around the world, people are looking for a conversation about what it really means for militaries to use AI. They deserve some answers.