Singapore’s government today unveiled a blueprint for international collaboration on artificial intelligence safety, following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for advancing AI safety through global cooperation rather than competition.
“Singapore stands out as one of the few nations that maintains good relations with both Eastern and Western powers,” remarked Max Tegmark, a scientist at MIT who helped convene the gathering of AI leaders last month. “They understand that they won’t be the ones to invent [artificial general intelligence]—it will happen to them—so it’s crucial for them to encourage dialogue among the countries that will actually create it.”
The nations widely considered most capable of developing AGI are, of course, the US and China, yet these countries currently seem more intent on outcompeting each other than on collaborating. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wake-up call for our industries” and said the US needed to be “laser-focused on competing to win.”
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three primary domains: assessing the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI conference hosted in Singapore this year.
Participants at the AI safety meeting included researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta, along with academics from renowned institutions like MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety organizations across the US, UK, France, Canada, China, Japan, and Korea were also present.
“Amid geopolitical fragmentation, this comprehensive overview of advanced research on AI safety signifies a hopeful development as the global community unites in a shared intent to foster a more secure AI future,” said Xue Lan, dean of Tsinghua University.
The development of increasingly capable AI models, some exhibiting unexpected abilities, has led researchers to worry about a range of risks. While some experts emphasize near-term harms, such as those stemming from biased AI systems or the potential for criminals to exploit the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outperform humans across more domains. These researchers, sometimes called “AI doomers,” worry that models may deceive and manipulate humans in pursuit of their own goals.
The promise of AI has also fueled talk of an arms race among the US, China, and other powerful nations. In policy circles the technology is viewed as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.