
OpenAI to launch tool capable of detecting fake images


OpenAI, the Microsoft-backed artificial intelligence (AI) company, says it will soon launch a tool capable of detecting AI-generated images.

The tech company announced the development in a statement on Tuesday.

“Today, OpenAI is joining the Steering Committee of C2PA, the Coalition for Content Provenance and Authenticity. C2PA is a widely used standard for digital content certification, developed and adopted by a wide range of actors, including software companies, camera manufacturers, and online platforms,” the statement reads.

“C2PA can be used to prove the content comes from a particular source. We look forward to contributing to the development of the standard, and we regard it as an important aspect of our approach.”
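
For readers curious what that certification looks like in practice, here is a minimal sketch of checking an image for C2PA provenance metadata. It assumes the Content Authenticity Initiative's open-source c2patool command-line utility is installed; the manifest field names follow the C2PA manifest format but may vary between versions, and none of this code comes from OpenAI's statement.

```python
# Minimal sketch: inspecting an image's C2PA provenance metadata by
# shelling out to c2patool, which prints a file's embedded C2PA
# manifest store as JSON. Field names below follow the C2PA manifest
# format and are illustrative; they may differ between tool versions.
import json
import subprocess

def read_provenance(image_path: str) -> None:
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # No embedded manifest, or the tool could not parse the file.
        print(f"No C2PA provenance found in {image_path}")
        return

    store = json.loads(result.stdout)
    # The active manifest is the most recent signed claim about the file.
    active = store["manifests"][store["active_manifest"]]
    print("Generated by:", active.get("claim_generator"))
    print("Signed by:", active.get("signature_info", {}).get("issuer"))

read_provenance("photo.jpg")
```

For an image produced by a C2PA-participating tool, fields like these would identify the generating application and the certificate holder that signed the claim, which is how the standard ties content back to a particular source.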


OpenAI is behind DALL-E, the popular tool that generates digital images from natural language descriptions.

The AI company said C2PA certification would help establish the authenticity of images, adding that the world needs common ways of sharing information about how digital content was created.

“Standards can help clarify how content was made and provide other information about its origins in a way that’s easy to recognize across many situations—whether that content is the raw output from a camera or an artistic creation from a tool like DALL·E 3,” the company said.


The organisation added that in internal testing of an earlier version, the tool correctly identified around 98 percent of DALL-E 3 images while incorrectly flagging less than 0.5 percent of non-AI images.
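
Those two figures are a detection rate and a false-positive rate, and what they mean for a real collection of images depends on how many of those images are actually AI-generated. The short calculation below illustrates this with an assumed 10 percent AI share; the share is a hypothetical chosen for illustration, and only the 98 percent and 0.5 percent rates come from OpenAI's statement.

```python
# Back-of-the-envelope: what OpenAI's reported rates imply in practice.
# The 10% share of AI images is an assumption for illustration; only
# the 98% and 0.5% rates come from OpenAI's statement.
true_positive_rate = 0.98    # DALL-E 3 images correctly flagged
false_positive_rate = 0.005  # non-AI images incorrectly flagged
ai_share = 0.10              # assumed fraction of AI images in a sample

flagged_ai = true_positive_rate * ai_share
flagged_real = false_positive_rate * (1 - ai_share)

# Precision: of all images the tool flags, how many are really AI-made?
precision = flagged_ai / (flagged_ai + flagged_real)
print(f"Precision at a {ai_share:.0%} AI share: {precision:.1%}")
# -> roughly 95.6%: even a small false-positive rate matters when
#    most images in a sample are genuine.
```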

OpenAI also said it is opening applications for testers, including research labs and research-oriented journalism nonprofits, to access its image detection classifier and provide feedback through its Researcher Access Program.
