A global summit in South Korea on Tuesday announced a new “blueprint for action” to guide the responsible use of artificial intelligence (AI) in military applications. The updated document offers more practical guidelines than last year’s version but remains legally non-binding.
The Responsible AI in the Military Domain (REAIM) summit, held in Seoul, is the second of its kind, following an initial summit in Amsterdam in 2023. This year, 96 nations sent government representatives, including major global powers like the United States and China. However, it was not immediately clear how many of those nations would endorse the new blueprint.
Last year’s summit resulted in around 60 countries supporting a modest “call to action” without any legal commitments. This year, officials described the blueprint as more action-oriented, reflecting growing concerns over AI risks and its increasing use in military contexts, such as Ukraine’s deployment of AI-enabled drones.
“We are making further concrete steps,” said Netherlands Defence Minister Ruben Brekelmans during a roundtable discussion. “Last year’s summit focused on creating shared understanding. Now, we are moving toward action.”
The new guidelines outline the risk assessments needed for military AI, the importance of maintaining human oversight in military operations, and confidence-building measures for managing those risks. Notably, the document highlights preventing the proliferation of AI-enabled weapons of mass destruction (WMD), including their use by terrorist organizations, and emphasizes human involvement in decisions regarding nuclear weapons.
South Korean officials noted that much of the blueprint aligns with existing principles, such as the U.S. government’s declaration on responsible military AI use. However, the Seoul summit, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, aims to ensure ongoing global discussions that are not dominated by a single nation.
The venue and timing for the next REAIM summit are still under discussion.