Proposed changes to the law may mean AI tools could create third party harassment liabilities.
AI tools are becoming increasingly integrated into workplace systems and interact directly with employees. While AI is not a legal person, could employers who deploy these systems still face liability where outputs amount to harassment? And does it matter whether the AI content was human-approved or merely human-prompted?
Legal context
Under the Equality Act 2010 (EqA), harassment occurs where a person engages in unwanted conduct related to a protected characteristic which has the purpose or effect of violating dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for another person. Employers are vicariously liable for acts of harassment by employees in the course of employment, unless they can show they took all reasonable steps to prevent it.
A separate regime applies to third-party harassment (liability for harassment by non-employees where reasonable steps are not taken). The Employment Rights Bill proposes to extend the employer's duty to prevent sexual harassment to acts by third parties, though this is not expected to come into effect until sometime between 2026 and 2027.
Who has conduct of the harassment?
Where an AI tool drafts content and a person reviews and issues it (for example, a letter containing harassing language), liability appears...