UK AI Safety Institute Unveils Open-Source ‘Inspect’ Toolset for Assessing AI Model Capabilities

The United Kingdom’s newly formed AI safety body, the AI Safety Institute, has launched an open-source toolset called Inspect aimed at enabling widespread evaluations of AI models’ skills and potential risks. Touted as the first state-backed platform of its kind for AI safety testing, Inspect provides a framework for assessing core capabilities such as knowledge and reasoning in AI systems.

Inspect comprises datasets, testing modules dubbed “solvers”, and scoring components, and it generates metrics based on how well AI models perform on various evaluation tasks. The toolset is extensible, allowing third parties to contribute new testing techniques and components written in Python.
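For illustration, a minimal evaluation built from those three pieces might look like the following sketch. It assumes the open-source `inspect_ai` Python package that underpins the toolset; the names used here (`Task`, `Sample`, `generate`, `match`) reflect its published interface but may differ across releases, and the sample questions are hypothetical.

```python
# Minimal Inspect evaluation sketch (assumes the inspect_ai package;
# interface details may vary between releases).
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.solver import generate
from inspect_ai.scorer import match


@task
def capital_cities():
    return Task(
        # Dataset: input prompts paired with expected targets.
        dataset=[
            Sample(input="What is the capital of France?", target="Paris"),
            Sample(input="What is the capital of Japan?", target="Tokyo"),
        ],
        # Solver: how the model is driven (here, a single generation step).
        solver=generate(),
        # Scorer: how model output is graded against each target.
        scorer=match(),
    )
```

Under these assumptions, the task could then be run from the command line with something like `inspect eval capital_cities.py --model <provider/model>`, with Inspect reporting how the model scored against each sample’s target.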

In announcing Inspect’s release, AI Safety Institute chair Ian Hogarth emphasized the importance of unified, accessible approaches to AI evaluation. “We hope Inspect can be a building block for successful global collaboration on AI safety testing,” Hogarth stated, encouraging the AI community to use, adapt, and expand the open platform.

Inspect’s debut comes as government scrutiny of generative AI intensifies, with the U.S. recently launching its own NIST GenAI program. The UK and U.S. have partnered to jointly advance AI model testing, including plans for a dedicated American AI safety institute.