The following was first posted in the Blogs section of Pullman & Comley’s website. It is reposted here with permission.
The technology that was supposed to make work easier is now making it more dangerous.
AI-generated deepfakes—defined generally as fabricated images, video, and audio that look and sound real—have arrived in the workplace, and they’re creating a category of liability that didn’t exist two years ago.
For employers, especially those in hospitality and other high-turnover industries, the message is straightforward: your current handbook probably doesn’t address this. It needs to.
Employees are using AI tools to create doctored images and audio targeting coworkers. The content ranges from sexually explicit deepfakes to fabricated recordings designed to humiliate or defame, and recent lawsuits illustrate the scope of the problem.
For example, a 19-year veteran Washington State Patrol trooper filed suit after colleagues allegedly created and circulated an AI-generated video depicting him in a sexually suggestive scenario designed to mock his sexual orientation.
Likewise, a Nashville television meteorologist sued her former station after management failed to adequately address deepfake sexual images created using her likeness.
And a Baltimore high school athletic director was sentenced to jail for creating a deepfake audio recording of his principal making racist and antisemitic comments.
These aren’t edge cases; they’re the leading edge.