Friday, June 6, 2025

AI use at work under legal scrutiny, lawyer urges employers to conduct comprehensive risk reviews

A recent study by cybersecurity firm Cyberhaven revealed a troubling trend: nearly 35% of the data that employees feed into AI tools like ChatGPT qualifies as sensitive.

This includes HR records, source code, and proprietary R&D material. The same report highlighted that mid-level employees, who are often trusted with operational autonomy, are the most frequent users of these AI tools.

With AI platforms becoming a fixture in daily workflows, this pattern creates legal exposure on multiple fronts.

Companies risk violating confidentiality agreements, breaching internal policies, and even falling afoul of the Personal Data Protection Act 2012 (PDPA) if sensitive personal information is improperly disclosed.

Despite these risks, many organisations still lack dedicated AI-use protocols. Nicholas Ngo, Director at TSMP Law Corporation, spoke with HRD Asia to explain the real risks of unchecked AI use at work and how HR can take charge before policies and trust break down.

The quiet risk of everyday inputs

According to Ngo, one of the most serious threats to a company's confidential information comes from employees inputting sensitive material into AI platforms. This can include everything from source code and product strategies to confidential client lists.

"When data is uploaded into a publicly available AI platform, it may be used to train the system’s learning model," he says. "This means another user could receive an output that indirectly draws from your...



Read Full Story: https://news.google.com/rss/articles/CBMi8wFBVV95cUxOUGdnWFJnR3lIUTVxQVM1S2Zo...