Should companies hand their hiring responsibility over to a computer program? What about decisions based on a background screening report? In a tight labor market, and at a time when speed is essential in many sectors, more companies have turned to emerging tools such as artificial intelligence and automated hiring systems to keep their competitive edge. The result, however, has been a flurry of legislative scrutiny, including in New York City, where an AI bias audit law has been vigorously debated for almost a year.
That law, which aims to require an independent third party to audit AI tools for decisions based on illegal biases, remains under consideration. Two rounds of public comment have only deepened the disagreements among activists, employers, and AI vendors. What are the key takeaways for companies in such a confusing environment?
Today's "AI" tools are only as good as their training data, which means there are still many ways for a software creator's biases to seep into the decisions the AI later makes. Flawed training data can likewise lead the model to draw erroneous conclusions. We already see this in the "hallucinations" of chat-based AI programs, where the software makes unfounded connections or supplies outright falsehoods.
Notably, the law does not make allowances for using AI in employment background screening. The need for individualized assessments, especially in "ban the box" jurisdictions, makes using AI for risk analysis especially legally perilous. NYC aims to combat these problems by requiring an anti-bias audit. However, pushback from employers over who would conduct such audits and what qualifies as bias continues to prolong the debate.
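What would such an anti-bias audit actually measure? One common disparate-impact metric is the selection-rate impact ratio: each group's selection rate divided by the highest group's rate, with ratios below 0.8 often flagged under the EEOC's "four-fifths rule." A minimal sketch of that calculation, using made-up counts purely for illustration:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(groups):
    """Each group's selection rate divided by the highest group's rate.

    `groups` maps a category label to (selected, applicants) counts.
    A ratio below 0.8 is often treated as evidence of adverse impact
    under the four-fifths rule.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical counts, not real audit data.
ratios = impact_ratios({"group_a": (40, 100), "group_b": (24, 100)})
# group_a's ratio is 1.0; group_b's is 0.6, below the 0.8 threshold.
```

An auditor would compute such ratios across the demographic categories the tool's decisions touch; the unresolved debate is over who runs that calculation and what counts as bias once the numbers are in.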
Could recruitment tools prove useful to employers after an audit for bias? It's possible. Many businesses want to avoid wasting time on irrelevant resumes from candidates who don't meet specific criteria, and automation could speed up the interview and selection process. Making final employment decisions, however, may be better left in human hands for the foreseeable future. That way, AI tools don't have the final say; rather, they provide advice and insight that real people can use to make the ultimate call.
As AI technology continues to proliferate, develop, and deepen its connection to many industries, employers should be wary of embracing these tools before the relevant legal questions are settled. New York City is far from the only jurisdiction looking to implement an AI bias audit law or restrict AI use in hiring and recruitment, and other states are likely to follow. For now, monitoring the growth of this technology and its surrounding legal environment is a prudent choice as legislators and employers wrestle with this game-changer.