Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI quietly ended its prohibition against military work.
The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)
The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.
The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.
One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.
In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.
OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”
Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”
At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.
“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.
Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally-binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”
The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble relevant, real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.
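To make the mechanism concrete, here is a minimal, purely illustrative sketch of how images from a generative model can be folded into a conventional computer-vision training set. It uses OpenAI’s public image API and standard torchvision conventions; the prompts, class labels, and file paths are hypothetical and are not drawn from the Microsoft materials, which do not describe any actual pipeline.

```python
# Illustrative sketch only: generating synthetic images and saving them in the
# folder layout that ordinary image-classification tooling expects. The prompts,
# labels, and paths below are invented for illustration.
from pathlib import Path

import requests
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical classes loosely echoing the article's example of aerial imagery.
PROMPTS = {
    "landing_strip": "overhead satellite photo of a remote airstrip, realistic",
    "empty_terrain": "overhead satellite photo of empty desert terrain, realistic",
}

out_root = Path("synthetic_dataset")
for label, prompt in PROMPTS.items():
    label_dir = out_root / label
    label_dir.mkdir(parents=True, exist_ok=True)
    for i in range(4):  # a real training set would need far more images
        resp = client.images.generate(
            model="dall-e-3", prompt=prompt, size="1024x1024", n=1
        )
        image_bytes = requests.get(resp.data[0].url, timeout=60).content
        (label_dir / f"{label}_{i}.png").write_bytes(image_bytes)

# The folder can then feed a standard fine-tuning pipeline, for example:
#   dataset = torchvision.datasets.ImageFolder("synthetic_dataset", transform=...)
#   loader  = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
#   ...fine-tune a pretrained backbone such as resnet18 on these labels...
```

In practice, synthetic images are usually mixed with real, labeled imagery rather than used on their own, which is part of what the efficacy debate below turns on.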
Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “Dall-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of Battlefield management system.”
Generative image models frequently struggle to produce even the correct number of limbs or fingers, which raises doubts about how reliably they could represent a realistic field presence.
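The degradation Khlaaf describes, sometimes called “model collapse,” can be illustrated with a toy example that is not from the article: each time a new model is fit to a finite sample of a previous model’s output, rare detail in the tail of the distribution can disappear, and once lost it never comes back.

```python
# Toy illustration (not from the article) of degradation from training on
# model-generated data: rare categories vanish across "generations" because a
# finite sample can miss them, and a category assigned zero probability can
# never be sampled again.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 20 categories, several of them rare (the distribution's tail).
true_probs = np.array(
    [10, 8, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 1, 1, 0.5, 0.5, 0.25, 0.25, 0.25]
)
true_probs = true_probs / true_probs.sum()
num_categories = len(true_probs)

probs = true_probs
for generation in range(1, 11):
    # "Train" each new model on a finite sample from the previous model,
    # then sample from that estimate in the next round.
    sample = rng.choice(num_categories, size=300, p=probs)
    counts = np.bincount(sample, minlength=num_categories)
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"generation {generation:2d}: {surviving} of {num_categories} categories survive")
```

The same dynamic, applied to pixels rather than categories, is one reason repeatedly training vision models on generated imagery tends to erode fine detail.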
In a recent interview with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy discussed military uses of synthetic data of the kind that models like DALL-E can produce, suggesting that fake images could be used to train drones to better see and identify the world beneath them.

Lugo serves as mission commander of the Pentagon’s generative AI task force; the Microsoft presentation itself was delivered by Nehemiah Kuhns, a technology specialist working with the Space Force and Air Force.
The Air Force is currently building the Advanced Battle Management System, its contribution to the Pentagon’s broader Joint All-Domain Command and Control project, which aims to network the entire U.S. military so that its branches can share data seamlessly, analyze it with the help of AI, and ultimately fight more effectively.
While Microsoft has a long history of securing defense contracts, OpenAI announced its work with the Department of Defense only recently, and the prospect of the Pentagon running OpenAI software raises questions about what role these technologies could come to play in warfare.

Microsoft openly supports the use of its machine learning tools in the military’s “kill chain,” a stance that contrasts with OpenAI’s professed commitment to avoiding harm.

OpenAI’s collaboration with the military has so far been framed as defensive, focused on initiatives like cybersecurity and support for veterans. Contributing to a battle management system, however, would bring the company far closer to warfare itself, since such systems help direct the deployment of weapons and shape mission planning.
Ultimately, military applications of AI exist to achieve combat objectives, and those objectives often entail causing harm. The AI literacy seminar series emphasized understanding these technologies in the context of military operations, where the ultimate goal is to “kill bad guys.”