Investing.com -- OpenAI has launched a new hub for safety evaluations of its artificial intelligence (AI) models. This hub is designed to measure each model’s safety and performance and will publicly share these results.
The safety evaluations encompass several aspects such as harmful content, jailbreaks, hallucinations, and instruction hierarchy. The harmful content evaluations ensure that the model does not comply with requests for content that violates OpenAI’s policies, including hateful content or illicit advice.
Jailbreak evaluations include adversarial prompts designed to circumvent model safety training and induce the model to produce harmful content. Hallucination evaluations measure when a model makes factual errors. Instruction hierarchy evaluations measure adherence to the framework a model uses to prioritize instructions between the three classifications of messages sent to the model.
This hub provides access to safety evaluation results for OpenAI’s models, which are included in their system cards. OpenAI uses these evaluations internally as part of their decision-making process regarding model safety and deployment.
The hub allows OpenAI to share safety metrics on an ongoing basis, with updates coinciding with major model updates. This is part of OpenAI’s broader effort to communicate more proactively about safety.
As AI evaluation science evolves, OpenAI aims to share its progress on developing more scalable ways to measure model capability and safety. As models become more capable and adaptable, older methods become outdated or ineffective at showing meaningful differences, leading to regular updates of evaluation methods to account for new modalities and emerging risks.
The safety evaluations results shared on the hub are intended to make it easier to understand the safety performance of OpenAI systems over time and support community efforts to increase transparency across the field. These results do not reflect the full safety efforts and metrics used at OpenAI, but provide a snapshot of a model’s safety and performance.
The hub describes a subset of safety evaluations and displays results on those evaluations. Users can select which evaluations they want to learn more about and compare results on various OpenAI models. The page currently describes text-based safety performance on four types of evaluations: harmful content, jailbreaks, hallucinations, and instruction hierarchy.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.