OpenAI announces deal with defense startup to create anti-drone tech

As Trump eyes new DoD leaders, OpenAI announces a partnership that will place its AI models in U.S. military tech.

A row of missiles in front of a background filled with lines of computer code.

OpenAI has entered into its first major defense partnership, a deal that could see the AI giant making its way into the Pentagon.

The joint venture was recently announced by billion-dollar Anduril Industries, a defense startup founded by Oculus VR co-founder Palmer Luckey that sells sentry towers, communications jammers, military drones, and autonomous submarines. The “strategic partnership” will incorporate OpenAI’s AI models into Anduril systems to “rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness.” Anduril already supplies anti-drone tech to the U.S. government; it was recently chosen to develop and test unmanned fighter jets and was awarded a $100 million contract by the Pentagon’s Chief Digital and AI Office.

OpenAI clarified to the Washington Post that the partnership will only cover systems that “defend against pilotless aerial threats” (read: detect and shoot down drones), notably avoiding any explicit association of its technology with human-casualty military applications. Both OpenAI and Anduril say the partnership will keep the U.S. on par with China’s AI advancements, a goal echoed repeatedly in the U.S. government’s “Manhattan Project”-style investments in AI and “government efficiency.”

“OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values,” wrote OpenAI CEO Sam Altman. “Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free.”

In January, OpenAI quietly removed policy language that banned applications of its technologies that pose high risk of physical harm, including “military and warfare.” An OpenAI spokesperson told Mashable at the time: “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies.”

Over the last year, the company has reportedly been pitching its services in various capacities to the U.S. military and national security offices, aided by a former security officer from software company and government contractor Palantir. And OpenAI isn’t the only AI innovator pivoting to military applications. Anthropic, the maker of Claude, and Palantir recently announced a partnership with Amazon Web Services to sell Anthropic’s AI models to defense and intelligence agencies, advertised as “decision advantage” tools for “classified environments.”

Recent rumors suggest President-elect Donald Trump is eyeing Palantir chief technology officer Shyam Sankar to take over the lead engineering and research spot in the Pentagon. Sankar has previously been critical of the Department of Defense’s technology acquisition process, arguing that the government should rely less on major defense contractors and purchase more “commercially available technology.”
