If your business uses artificial intelligence (AI) as part of its hiring protocols, you should be aware of a growing push to regulate AI in the hiring process. Concerns about AI bias have been growing over the past several years, as stories have emerged of AI algorithms disqualifying qualified applicants and filtering candidates in a way that unintentionally discriminated against certain groups. Now, there is a first-of-its-kind law in New York City that demands audits of the algorithms employers in the city use for hiring and promotion purposes. The new law could spell a more heavily regulated future for the use of AI algorithms in hiring.
Employers have turned to AI and other HR technologies to streamline the hiring process. Hiring is a famously tedious process that can drag on for months and cost thousands upon thousands of dollars in time, resources, and lost productivity. That tedium is partly due to the manual steps involved in the hiring process – namely, the need for a hiring manager to sort through and individually read each resume, cover letter, and job application they receive. Algorithms can simplify the process by allowing employers to input parameters for sorting resumes. For instance, a hiring manager can tell an AI algorithm to search for specific keywords that correspond with desired qualifications or filter out candidates with long employment history gaps.
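To make the idea concrete, here is a minimal, hypothetical sketch of the kind of rule-based resume filter described above. The function name, parameters, and rules are illustrative assumptions, not any vendor's actual screening tool.

```python
# Hypothetical sketch of automated resume screening, for illustration only.
# The rules below mirror the examples in the text: keyword matching and
# filtering out long employment gaps.

def screen_resume(resume_text, required_keywords, employment_gaps=(),
                  max_gap_years=2):
    """Return True if the resume passes this simple automated filter."""
    text = resume_text.lower()
    # Rule 1: every required keyword must appear in the resume.
    if not all(kw.lower() in text for kw in required_keywords):
        return False
    # Rule 2: reject any employment gap longer than max_gap_years --
    # exactly the kind of rigid rule critics say can encode bias.
    if any(gap > max_gap_years for gap in employment_gaps):
        return False
    return True

# Example usage:
# screen_resume("Experienced Python developer with SQL skills",
#               ["python", "sql"])                      -> passes
# screen_resume("Experienced Python developer with SQL skills",
#               ["python", "sql"], employment_gaps=(4,)) -> rejected
```

Note how the gap rule rejects an otherwise qualified candidate with a four-year employment gap automatically, with no opportunity to explain it, which is precisely the bias concern critics raise.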
But critics of AI in the hiring process have pointed to discrimination and bias as common issues with these technologies. For example, disqualifying candidates based on a gap in their resume, without giving the applicant a chance to explain that gap, can discriminate against women who took several years away from work to raise their children.
According to a recent Wired article about the New York City law and AI bias in hiring more broadly, there are other issues, too. Some algorithms, the Wired piece said, “favor applicants based on where they went to school,” while others grade candidates based on the font they choose for their resumes. It’s not just resume-sorting tools, either. AI in hiring can also extend to video interviews and other touchpoints between the candidate and the interviewer. Those types of AI technologies, Wired said, can favor applicants depending on things like “their accent, whether they wear glasses, or whether there’s a bookshelf in the background.”
The New York City Council adopted the new law in January. Now, all employers using AI technologies in hiring must have their processes audited for AI bias. Additionally, employers that use these technologies must inform candidates that AI plays a role in the hiring decision. Auditors will attempt to discern whether employers’ AI hiring tools or algorithms are biased based on sex, race, or ethnicity.
The law could start a new trend in the employment world. While AI in hiring has been largely unregulated up to this point, the New York City law could signal that this period is drawing to a close. The Wired piece pointed to a bill that members of Congress are currently drafting “that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and report the findings to the Federal Trade Commission.” The article also noted that other countries, including Canada, China, Germany, and the United Kingdom, already regulate AI in the hiring process.
While these changes don’t directly relate to criminal checks, backgroundchecks.com has been closely monitoring shifts in the HR technology space over the past several years. When used correctly and responsibly, AI tools can streamline the hiring process and help employers avoid situations where they hire someone who isn’t fully qualified for a job or overlook a red flag on a candidate’s application. Because AI bias exists, though, it is vital for employers who use these tools to exercise discretion and know the latest legal developments around the technologies.
About the author
Michael Klazema is the lead author and editor for Dallas-based backgroundchecks.com, with a focus on human resources and employment screening developments.