Our Work

We focus on defining AI safety policies, developing practical tools, running tests, and conducting research to advance the science of AI safety testing and evaluation.

Policy Making

Setting the direction for AI governance, we inform policies and develop methodologies and frameworks on topics such as red teaming and agentic testing. Under this priority area, we also define requirements for testing benchmarks.

Engineering Tools

We continually enhance our testing tools, such as AI Verify and Moonshot, to support AISI needs. This includes developing new features for safety testing and benchmark development.

Testing

We lead the testing track in the International Network of AI Safety Institutes. This work includes conducting joint testing with other AISIs around the world to improve testing methodologies and evaluation science. Please refer to our blog posts in the Resources section for more information.

Research

Drawing on Singapore’s research ecosystem, we conduct research to advance the science of AI safety testing and evaluation, focusing on areas such as explainability and interpretability, mitigation techniques, machine unlearning, mathematical proofs for AI safety, and privacy-preserving AI training.