New research by NewsGuard has revealed that the latest version of OpenAI’s video-creation tool, Sora 2, can be prompted to produce videos advancing false or misleading information 80% of the time.
NewsGuard, which rates the credibility of news and information websites, maintained that its findings demonstrate the ease with which bad actors can weaponize the powerful new technology to spread false information at scale. Five of the 20 false claims Sora generated originated with Russian disinformation operations, it added.
The researchers noted that within minutes of accessing Sora 2 they had it producing false or misleading videos related to major news, including videos showing a Moldovan election official destroying pro-Russian ballots, a toddler detained by U.S. immigration officers, and a Coca-Cola spokesperson announcing that the company would not sponsor the Super Bowl because of Bad Bunny’s selection as the halftime headline act.
NewsGuard also asserted that its findings show how easily bad actors — including health-hoax peddlers, authoritarian regimes engaged in hostile information operations, and political misinformers — can use the technology, with minimal effort and no technical expertise, to make false claims more convincing.
OpenAI Acknowledges Sora 2 Risks
OpenAI cautioned users about the risks of Sora 2 in a “system card” on its website. “Sora 2’s advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading...