The Singapore AISI focuses on the following priority areas for advancing AI safety and governance.
Following the blogpost released earlier this year, this evaluation report takes a closer look at the various methodological components and findings behind this joint testing exercise, which aims to seed a common approach for multilingual safety testing of frontier models at scale.
As part of the International Network of AI Safety Institutes’ continued effort to advance the science of AI model evaluations and to build common best practices for testing advanced AI systems, Singapore, Japan, and the United Kingdom led the Network’s latest joint testing exercise, aimed at improving the efficacy of model evaluations across different languages.
IMDA, together with Humane Intelligence, completed the world's first-ever multicultural and multilingual AI Safety Red Teaming Challenge focused on Asia-Pacific in November and December 2024! Over 350 participants across 9 countries tested 4 large language models for bias and stereotypes in English and regional languages. AI safety initiatives like these go towards supporting the ongoing testing work of the AISI Network. Stay tuned for the publication of the Challenge Evaluation Report in Feb 2025!
As part of the inaugural International Network of AI Safety Institutes (AISIs) convening in San Francisco last year, the Singapore AISI, alongside the US and UK AISIs, conducted a pilot joint testing exercise of a Gen AI model to explore methodological considerations impacting reproducibility – a key concept supporting the Network’s mission to develop best practices for model testing, amongst others. More details on the pilot testing exercise can be found in this jointly issued blogpost.
Held on 26 April 2025, the Singapore Conference on AI (SCAI): International Scientific Exchange (ISE) on AI Safety saw over 100 of the best global minds from academia, industry and government collectively identify and demonstrate consensus around technical AI safety research priorities. To help shape reliable, secure and safe AI, the outcomes of the discussions at SCAI:ISE have been synthesized into the Singapore Consensus on Global AI Safety Research Priorities, a living document that continues to welcome views from the global research community.
Join us at Crowne Plaza Changi Airport on 19 Jan 2026 to learn how AI testing and assurance work. The event will discuss deploying safe and reliable GenAI systems across multilingual, multicultural contexts. Hear from industry leaders on demonstrating trustworthy AI to build stakeholder confidence.