Policy Making
Setting the next bound for AI governance, we inform policies and develop methodologies and frameworks on topics such as red teaming and agentic testing. Under this priority area, we also define requirements for testing benchmarks.
As part of our involvement in the International Network of AI Safety Institutes, we work with other global AISIs to lead the Network’s joint testing exercises, which aim to improve the efficacy of model evaluations across different languages.
Pulling together Singapore’s research ecosystem, we conduct research to advance the science of AI safety testing and evaluation, focusing on areas such as explainability and interpretability, mitigation techniques, machine unlearning, mathematical proofs for AI safety, and privacy-preserving AI training.