
Our Work

We focus on defining AI safety policies, developing practical tools, running tests, and conducting research to advance the science of AI safety testing and evaluation.


Policy Making

Helping to set the next frontier for AI governance, we inform policies and develop methodologies and frameworks on topics such as red teaming and agentic testing. Under this priority area, we also define requirements for testing benchmarks.


Engineering Tools

We continually enhance our testing tools, AI Verify and Moonshot, to support the needs of AI safety institutes (AISIs). This includes developing new features for safety testing and benchmark development.


Testing

As part of our involvement in the International Network of AI Safety Institutes, we work with other global AISIs to lead the Network’s joint testing exercises, which aim to improve the efficacy of model evaluations across different languages.


Research

Bringing together Singapore’s research ecosystem, we conduct research to advance the science of AI safety testing and evaluation, focusing on areas such as explainability and interpretability, mitigation techniques, machine unlearning, mathematical proofs for AI safety, and privacy-preserving AI training.