OpenAI and rival company Anthropic have signed agreements with the U.S. government to have new models tested before public release.
On Thursday, the National Institute of Standards and Technology (NIST) announced that its AI Safety Institute will oversee “AI safety research, testing and evaluation” with both companies. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the AI Safety Institute, in the announcement.
It’s no secret that generative AI poses safety risks. Its tendency to produce inaccuracies and misinformation, enable harmful or illegal behavior, and entrench discrimination and bias is well documented at this point. OpenAI conducts its own internal safety testing, but has been secretive about how its models work and what they’re trained on. This is the first instance of OpenAI opening its models up to third-party scrutiny and accountability. Altman and OpenAI have been vocal about the need for AI regulation and standardization. But critics say this willingness to work with the government is a strategy to ensure OpenAI is regulated favorably and to stamp out competition.
“For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!” posted OpenAI CEO Sam Altman on X.
The formal collaboration with NIST builds on the Biden Administration’s AI executive order, signed last October. Among other mandates tapping several federal agencies to ensure the safe and responsible deployment of AI, the order requires AI companies to grant NIST access for red-teaming before an AI model is released to the public.
NIST also said it would share its findings and feedback in partnership with the UK AI Safety Institute.