While artificial intelligence (AI) can be a powerful tool for managers seeking to make decisions efficiently, it is essential to use it ethically and fairly. Companies are no longer relying on AI solely to automate repetitive tasks or produce predictive analytics: recent studies have shown that over 60% of managers use AI for critical employment decisions, such as hiring, firing, layoffs, and promotions. More than one in five managers use AI to make these decisions without any human input. As managers increasingly, and often blindly, rely on AI, companies risk significant legal exposure.
Although it may be tempting to use AI to streamline employment decisions (e.g., hiring, promotion, workforce reductions), it is critical to remember that AI output merely reflects the data the system receives. These systems cannot account for context, lack human judgment and empathy, and risk producing outcomes with unintended disparate impacts.
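One common way employers audit such tools for disparate impact is the EEOC's "four-fifths rule," which compares selection rates across groups. The sketch below is a minimal illustration of that calculation using hypothetical selection counts, not figures from any case discussed here.

```python
# A minimal sketch of a disparate-impact check using the "four-fifths rule".
# The selection counts below are hypothetical and purely illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of one group's selection rate to the most-selected group's rate."""
    return protected_rate / reference_rate

# Hypothetical outcomes from an AI-assisted screening tool
men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(women_rate, men_rate)        # 0.50
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: potential disparate impact; review the tool.")
```

A ratio below 0.8 does not itself establish liability, but it is the kind of red flag that should prompt human review before the tool's output drives an employment decision.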
A Cautionary Tale (or Three)
In 2014, Amazon was one of the first companies to attempt to automate its hiring process. While testing its automated software, the company quickly noticed that the search engine was excluding female applicants. The algorithm had been trained on a decade of historical data, which reflected a male-dominated applicant pool, and as a result it learned to favor male resumes — even downgrading any that included the word “women” or references to women’s organizations. Had Amazon not...