OpenAI Shares 'Deepfake' Identifier to Aid Fact-Checkers
As misleading AI-generated media proliferates online ahead of elections worldwide, top AI lab OpenAI is giving researchers a tool to expose synthetic content. But detecting deception will require more than technology alone.
On Tuesday, the San Francisco-based company announced plans to let a small group of fact-checking organizations test its new “deepfake detector.” The algorithm was trained to identify images created with OpenAI's own text-to-image generator, DALL-E, one of the most popular creative AI tools. However, OpenAI acknowledges that a single tool can address only a portion of the manipulated-media problem.
“This is the start of further exploration,” says Sandhini Agarwal of OpenAI. “Collaboration between tech developers and independent researchers is crucial moving forward.”
The tool correctly flags 98.8 percent of images generated by DALL-E, but it cannot recognize media produced by other AI systems such as Midjourney. OpenAI is also backing initiatives like the C2PA standard, which attaches provenance data to creative works, and is developing audio fingerprints to tag machine-generated sound.
With elections in countries such as India and Taiwan already marked this year by disinformation spread through synthetic audio and video, experts stress that a comprehensive strategy is urgent. “There is no quick solution – building media literacy and technology standards must go hand in hand,” says Agarwal. OpenAI is sharing the detector in the hope of spurring innovation toward reliable deception detection for all.