Policy Making
Setting the next bound of AI governance, we inform policies and develop methodologies and frameworks on topics such as red teaming and agentic testing. Under this priority area, we also define requirements for testing benchmarks.
We lead the testing track in the International Network of AI Safety Institutes. This work includes conducting joint testing exercises with other AISIs around the world to improve testing methodologies and advance the science of evaluation. Please refer to our blog posts in the Resources section for more information.
Pulling together Singapore’s research ecosystem, we conduct research to advance the science of AI safety testing and evaluation, focusing on areas such as explainability and interpretability, mitigation techniques, machine unlearning, mathematical proofs for AI safety, and privacy-preserving AI training.