OpenAI has cut down the time and resources needed for identifying and mitigating risks while testing its artificial intelligence models, as pressure mounts to speed up new model launches amid ...
The company really wants you to know that it’s trying to make its models safer. OpenAI is once again lifting the lid (just a crack) on its safety-testing processes. Last month the company shared the ...
AI models in China will be tested by the leading internet regulator to ensure that their responses on sensitive topics "embody core socialist values," FT reported. AI models will be tested by local ...
Several frontier AI models show signs of scheming. Anti-scheming training reduced misbehavior in some models. Models know they're being tested, which complicates results. New joint safety testing from ...
Pillay is an editorial fellow at TIME. Despite their expertise, AI developers don't always know what their most advanced systems are capable of—at least, not at ...
Executives at artificial intelligence companies may like to tell us that AGI is almost here, but the latest models still need some additional tutoring to help them be as clever as they can. Scale AI, ...
OpenAI used the subreddit, r/ChangeMyView, to create a test for measuring the persuasive abilities of its AI reasoning models. The company revealed this in a system card — a document outlining how an ...