The rise and spread of artificial intelligence-powered solutions has already had a significant impact on industries around the world. Opportunities to streamline processes and engage with automation on a deeper level seem to be everywhere. This expanding availability has led many employers to investigate or adopt AI, including automated employment decision tools (AEDTs), in their processes.
For SMBs, such tools could represent a source of value and savings in areas such as the hiring process. Intelligent automation could make some time-consuming workflows faster, cheaper, and more reliable, all of which can help a business reach its next growth stage. However, AI and AEDTs do come with concerns related to compliance and fair hiring practices.
AI can support many tasks, but it also has the potential to introduce real or perceived bias. Employers must navigate both these risks and the shifting regulatory landscape surrounding AI. In this article, we'll look at the potential ways employers may use AEDTs, assess some of the bias-related risks AI creates, and examine the role of current and future regulations.
What Are AEDTs?
Recruitment, hiring, and personnel management can consume significant time and resources within a business. AI offers organizations a way to reduce the workload on HR teams. AEDTs are tools that use AI or machine learning to support decision-making processes related to employment, such as:
- Filtering applications and resumes to identify candidates best aligned with the role.
- Interview shortlisting.
- Promotion recommendations.
- Recommendations about who to hire.
- Recommendations about whose contracts to end.
While such solutions have the potential to improve critical processes, using them brings additional considerations surrounding compliance and fairness.
What About AI in the Background Screening Process?
AI can also be a part of the screening process in several ways. Screening service providers may use AI or machine learning (ML)-powered solutions to support the research that goes into a background check. They may also use such tools for compiling information into a final report. This work often takes place with a “human in the loop.” The AI involved does not completely take over the process. Instead, humans work together with AI tools to build reports.
Support for such innovations is consistent today. According to HireRight's Benchmark Report, a majority of employers feel comfortable with their screening providers using AI for certain purposes. Accelerating the screening process and reducing human error were among the most commonly cited reasons.
Employers may also explore other AI tools that could assist with analyzing the results of background screening or other information gathered about an individual. For example, there has been some research into using AI models for predictive analysis. Such a solution might evaluate the potential for an individual to re-offend based on their criminal history and other factors. However, the EEOC currently requires that such tools be assessed to determine whether they create a disparate impact and the potential for discrimination. Individuals must also have a means to dispute adverse information generated by an AI solution.
To understand more about the compliance environment in this area, it’s helpful to consider the potential risks related to AI.
Understanding AI Risks
Computer systems, including AI and ML algorithms, aren't inherently immune to bias. The decisions suggested and advice offered by AI can fall prey to the same biases and prejudices that negatively impact human decision-making. Algorithmic bias may emerge as a result of model training or because of implicit assumptions made by the model's human designers.
There's a simple phrase for understanding how this occurs: "bias in, bias out." Without careful calibration and a willingness to challenge assumptions, hidden biases can creep into a model's decisions. These prejudicial outcomes are a serious risk because of the influence they may have on a company's employment choices. Biased AI could create a disparate impact on one group over another, leading to discriminatory and unfair outcomes.
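To make the idea concrete, here is a minimal Python sketch of the kind of disparate impact check an employer might run on an AEDT's output. It uses the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures as a rough benchmark; the group labels and counts are hypothetical, and a low ratio signals a need for review rather than a legal conclusion.

```python
# Minimal disparate-impact check using the "four-fifths rule," a common
# benchmark from the EEOC's Uniform Guidelines. All group labels and
# counts below are hypothetical illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool selected."""
    return selected / applicants

# Hypothetical outcomes from an AEDT's screening pass.
outcomes = {
    "group_a": {"applicants": 200, "selected": 90},
    "group_b": {"applicants": 180, "selected": 45},
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    # A ratio below 0.8 is a common red flag for potential disparate
    # impact; it signals a need to investigate, not a finding of
    # discrimination.
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")
```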
Let's consider an example of how failing to account for potential bias can lead to larger issues in a model's outcomes. The company in our example was a large online retailer, but even SMBs can learn from its experience.
Highlighting How Bias Can Lead to Disparate Impacts
In 2018, a retailer was forced to abandon an internal project to enhance the company's hiring process: an AI-driven recruiting tool meant to streamline application filtering and candidate selection. However, over time, the tool began to demonstrate an implicit bias against women, disqualifying the vast majority of female applicants. Why?
Developers hadn’t anticipated the possibility of bias in the training data. The business used ten years of past applications to train the model to identify suitable individuals. However, most of that training data consisted of applications from men.
As a result, the algorithm eventually began to display a bias against women simply because far fewer of the historical applications had come from women.
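One practical safeguard this story suggests is auditing training data for imbalance before a model ever learns from it. The sketch below is a simplified illustration with hypothetical fields and records; it simply counts historical applications by a demographic attribute, since a heavily skewed distribution is exactly the kind of "bias in" that can later surface as "bias out."

```python
from collections import Counter

# Hypothetical historical application records of the kind a company
# might use as training data (fields and values are illustrative only).
historical_applications = [
    {"candidate_id": 1, "gender": "male"},
    {"candidate_id": 2, "gender": "male"},
    {"candidate_id": 3, "gender": "female"},
    {"candidate_id": 4, "gender": "male"},
    # ... a real dataset would hold years of records
]

counts = Counter(app["gender"] for app in historical_applications)
total = sum(counts.values())

for group, n in counts.items():
    print(f"{group}: {n} applications ({n / total:.1%} of training data)")

# A simple imbalance warning; the 70% threshold is an arbitrary example,
# not a regulatory standard.
if max(counts.values()) / total > 0.7:
    print("Warning: training data is heavily skewed. Audit before training.")
```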
This scenario offers a valuable lesson for businesses exploring new AI tools. The company followed some core principles that SMBs should adopt. First and foremost, the company actively audited the system's output; evaluating that data is how it discovered the bias existed in the first place. It also attempted to correct the algorithm and cure the bias before the tool reached a live hiring environment.
Though those first attempts weren't successful, they prevented an unfair tool from having any real-world impact. SMBs exploring how to embrace AI innovation should be prepared to conduct similar audits, potentially with impartial outside assistance.
What Does the EEOC Say About AI?
With a sense of the issues that bias in AI can create, we can explore the Equal Employment Opportunity Commission (EEOC) guidance for employers in this area. As the adoption of AI tools becomes more widespread, this guidance can provide a framework for compliance. Some EEOC guidance focuses on compliance with the Americans with Disabilities Act (ADA), while other guidelines explore maintaining fairness. Employers are encouraged to review the EEOC's website regularly to stay current on guidance in this space.
State and Local Regulations Continue to Develop
Employers in specific states and cities also face growing regulation of automated and AI-related employment decision tools. In 2024, there was a high degree of legislative activity on the subject, with a majority of state legislatures at least considering an AI-related bill. One of the most notable laws is the Colorado Artificial Intelligence Act (CAIA), passed in 2024 and due to take effect in 2026.
The CAIA establishes a duty of "reasonable care" for developers and deployers of "high-risk" AI systems to ensure those systems do not unfairly discriminate. The law also requires deployers of high-risk systems to provide specific notices about the use of AI in making consequential decisions. Alongside new additions such as Colorado's law, several states already have regulations on the books.
Illinois, for example, has regulated the use of AI to analyze video job interviews since 2020. Maryland regulates the use of facial recognition technologies in interviews. In New York City, employers must give candidates a notice containing specific pieces of information 10 days before using an AEDT. Companies must also complete mandatory bias audits on any AEDTs they use; the law requires an independent audit each year, with the results made public.
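For a sense of what such a bias audit involves, here is a sketch in the spirit of New York City's published rules for AEDTs that produce numeric scores, under which auditors compare the rate at which each category scores above the overall median. The categories, scores, and field names are hypothetical, and the city's law requires the real audit to be performed by an independent auditor.

```python
from statistics import median

# Hypothetical AEDT scores by candidate category (illustration only).
scores = {
    "category_a": [82, 75, 91, 68, 88],
    "category_b": [64, 70, 59, 73, 66],
}

all_scores = [s for group_scores in scores.values() for s in group_scores]
cutoff = median(all_scores)

# Scoring rate: the share of a category scoring above the overall median.
scoring_rates = {
    group: sum(s > cutoff for s in vals) / len(vals)
    for group, vals in scores.items()
}
top_rate = max(scoring_rates.values())

for group, rate in scoring_rates.items():
    print(f"{group}: scoring rate {rate:.0%}, impact ratio {rate / top_rate:.2f}")
```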
These regulatory trends look set to continue. In 2025, the Texas legislature will consider a sweeping law to regulate the use of high-risk AI systems. As more state legislatures gear up to address concerns about discrimination and bias in AI, SMBs will have to carefully assess their operational strategies for compliance.
Balance Innovation, Risk, and Compliance with AI Solutions
AI has the potential to positively impact the processes behind many employment decisions. For SMBs, those impacts can translate into enhanced efficiency and stronger hiring. However, watching for potential bias and complying with federal and state laws will play a central role in determining whether AI is a risk or a benefit to a business.
Organizations need a clear focus on compliance issues and a proactive stance toward bias prevention. Innovating for the future can take place in concert with a commitment to fair hiring. Stay up to date as these technologies continue to advance rapidly and compliance guidelines evolve. Through a well-considered approach, SMBs can unlock the advantages of AI-driven employment decisions without compromising on fairness.

About the Author
Michael Klazema is the lead author and editor for Dallas-based backgroundchecks.com, with a focus on human resource and employment screening developments.