Although AI and recruitment are increasingly being used in the same sentence, are there particular rules that dictate how the technology should be used?
From our smartphones to our kitchen appliances, AI is everywhere. Outside of our homes, the rise of artificial intelligence in business cannot be overstated, and this technology is helping us streamline how we work as a means to unlock greater efficiency.
Artificial intelligence (AI) is often credited for mitigating bias in hiring as the technology screens candidates using a large volume of data. Operating via an algorithm, AI combines various data points and predicts the best-fit candidate for a role. The human brain obviously can’t process information at such a massive scale. In theory, AI should objectively assess the data points and reduce assumptions, mental fatigue, and bias that humans often succumb to.
Despite this, artificial intelligence has remained relatively unchecked from a legal standpoint. Complex AI systems understandably have the potential to raise concerns, as the way they derive information from data can be misinterpreted if not properly managed.
It’s for this reason that AI in recruitment is one of the key areas that has attracted this kind of scrutiny. Just like humans, automated AI talent acquisition software isn’t immune to bias. To ensure that no applicant is unfairly sidelined during the hiring process, new legislation in the United States is further encouraging the conversation on how AI is changing the game for recruiting.
The relationship between a fair diversity recruiting strategy and AI
We know that artificial intelligence has incredible potential to positively influence diversity of hire as well as reduce bias in recruitment — but, like any tool, it’s not foolproof. Although over 86% of recruiters say they now use AI technology to speed up the process, there still seems to be a lack of clarity about what AI can and cannot do, and how the theory is applied in real-world settings.
Broad failure to understand how the technology works and what it is and isn’t programmed to do can lead to AI systems adopting bias during the hiring process. Bias tied to age, gender, race, religion, and more is pervasive in workplaces worldwide. In organizations, it’s all too easy to consciously or subconsciously include these biases in AI talent acquisition software, creating inequalities at every stage of the employment cycle.
The use of computer processing power in the screening and hiring process is not a new phenomenon. In fact, for several decades, employers and recruiting firms have been using simple text searches to cull through resumes submitted in response to job listings. However, the technology is getting smarter, and AI is now capable of using facial and voice recognition software to analyze body language, tone, and other factors to determine whether a candidate exhibits preferred traits.
To eliminate any opportunity for potential bias and formulate a truly authentic diversity recruiting strategy, companies need to do the work to ensure that the data sets being ‘fed’ to AI talent acquisition software with machine learning capability are clean.
This means fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within a dataset, and ensuring the AI system is assessing and ranking candidates on skills and attributes rather than their background or appearance. The AI should only be able to score or grade a candidate based on any actual work samples, such as tasks conducted via job simulations.
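As a rough illustration of what this cleaning step might look like in practice, the sketch below works through a hypothetical candidate dataset (the field names and records are invented for illustration, not taken from any real system): duplicate and incomplete records are dropped, and any attribute unrelated to skills or work samples is stripped before the data reaches a model.

```python
# Hypothetical candidate records: some duplicated, some incomplete, and some
# carrying demographic attributes (gender, age) that should never drive scoring.
raw_records = [
    {"id": 1, "skill_score": 82, "work_sample": "pass", "gender": "F", "age": 29},
    {"id": 1, "skill_score": 82, "work_sample": "pass", "gender": "F", "age": 29},  # duplicate
    {"id": 2, "skill_score": None, "work_sample": "pass", "gender": "M", "age": 41},  # incomplete
    {"id": 3, "skill_score": 91, "work_sample": "pass", "gender": "F", "age": 35},
]

# Keep only signals tied to skills and actual work samples.
ALLOWED_FIELDS = {"id", "skill_score", "work_sample"}

def clean_candidates(records):
    """Drop duplicate and incomplete records, then strip every field
    that is not a skill- or work-sample-based signal."""
    seen, cleaned = set(), []
    for rec in records:
        if rec["id"] in seen:            # duplicate record
            continue
        if rec["skill_score"] is None:   # incomplete record
            continue
        seen.add(rec["id"])
        cleaned.append({k: v for k, v in rec.items() if k in ALLOWED_FIELDS})
    return cleaned

print(clean_candidates(raw_records))
```

The point of the sketch is the order of operations: deduplicate and complete the data first, then restrict the model's view to job-relevant fields, so the system can only ever score candidates on demonstrated skills.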
While this may sound like common sense, too many hiring assessment companies are failing to address these vital functions to reduce bias. But that’s slowly changing, thanks to sweeping new legislation that is being rolled out across the United States at both state and federal levels.
New York City unveils new legislation for AI and recruitment
Although formal regulation of AI-powered recruitment sourcing strategies is still very much in its infancy, governments at all levels have begun to explore options for managing the use of artificial intelligence in any employment setting where there is room for bias.
While most of this activity has so far occurred at the local and state levels, the most recent legislation comes from New York City. Aimed at restricting an employer’s use of AI in making hiring and promotion decisions, Local Law Int. No. 1894-A is currently planned to come into effect in 2023.
By definition, the new law will cover “any computational process, derived from machine learning, statistical modeling, data analytics, or AI, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision-making for employment choices that impact natural persons”.
In simple terms, New York City has put AI talent acquisition software and hiring assessment companies on notice. The legislation prohibits the use of AI employee screening tools unless the AI has been subject to a “bias audit”, defined as “an impartial evaluation by an independent auditor”. Under the new law, employers must also notify existing employees and new candidates that AI will be used in their hiring process, in addition to providing an opt-out option.
Although Maryland and Illinois have previously introduced similar measures to limit the use of facial recognition software and other AI technologies that have the potential to discriminate based on ethnicity, age, or gender, New York City is the first to get this specific.
On a federal level, lawmakers have also begun calling for the Equal Employment Opportunity Commission to address concerns raised over the use of AI tools in employment settings. In response, the EEOC released new guidelines to monitor how AI is used in this manner, with the aim being to provide employers and AI vendors with more streamlined guidelines on ensuring the technology aligns with federal law.
As developments in artificial intelligence and human enhancement technologies have the potential to remake American society in the coming decades, it’s understandable that nearly half of the nation’s adults (45%) say they are equally concerned and excited about how AI is shaping the future. The reality is that AI is here to stay, and now is the time to take measures to ensure it’s used in the right way.
How to stop AI talent acquisition software from using human bias
When used as a force for good, the beauty of AI is that it can be incredibly effective not just as a diversity recruiting strategy, but as a means to work more efficiently. As most systems can be customized to meet specific needs, it’s becoming more accessible than ever to implement design principles that are both ethical and fair.
If you’re one of the thousands of businesses looking to adopt AI while staying compliant, beware of a common mistake: assuming that AI will solve all of your recruiting issues. In reality, effective AI systems depend not just on how the technology works, but on where it is deployed.
As such, organizations and hiring managers should approach psychometric tests with caution, especially when adding them to a pre-employment screening system. Under anti-discrimination laws, assessment tools — especially cognitive ability tests — need to be job-relevant and well-validated. In the United States, because of the Americans with Disabilities Act, tests generally need to respect privacy and not endeavor to “diagnose” candidates in any way.
A recent example of an organization that has changed its in-house assessment policy due in part to concerns about racial discrimination and poor prediction of job performance is the National Football League, as covered by the New York Times. Unless a position specifically involves law enforcement, weaponry, or other special safety considerations, companies should not ask candidates to take an assessment that was designed for the purpose of diagnosing susceptibility to depression, risk for other kinds of mental illness, or any kind of personality disorder.
Once legally compliant, the next step is integrating an AI system that is capable of removing human bias. Resume screening, chatbots, and even personality tests have all been cornerstones of recruitment sourcing strategies, but each is susceptible to bias from an AI perspective. Put simply: people lie on resumes, strong candidates may not perform well during interviews, and qualifications on paper may not translate into the ability to do the job.
The solution? Job simulations.
Skilled workers are more important and in demand than ever before, placing much-needed pressure on organizations to validate candidates’ skills and predict job performance accurately to provide hiring confidence. Beyond letting employers see someone do the job before they get the job, job simulations also make for incredibly effective AI talent acquisition software because they remove potential bias from the process.
By testing the applicant’s skills instead of the quality of their CV, the AI simply doesn’t have the chance to make false assumptions. In turn, this not only helps organizations to find the best people for their roles but to work within the legal framework that applies to AI and recruitment in the United States.
How Vervoe is reinventing recruitment sourcing strategies
Traditional recruitment processes have been failing hiring managers for years, adding undue stress, delay, and unreliability to an already tedious process. If this experience sounds all too familiar, it’s time to make the switch to a recruitment method that takes a pragmatic approach to hiring while staying compliant with the local laws that apply to artificial intelligence.
Vervoe is an end-to-end solution that is proudly revolutionizing the hiring process. By empowering businesses to create completely unique assessments that are tailored to suit the specific requirements of a role, Vervoe predicts performance using job simulations that showcase the talent of every candidate.
Machine learning can be susceptible to human hiring bias introduced through the dataset. If a model learns from biased signals, such as whether an applicant is male or female, then its outcomes will be biased. To prevent this from occurring at Vervoe, we started with a clean dataset built up over a two-year period from real candidates applying for real jobs and being graded by real employers. Today, that dataset continues to grow, and any human-induced bias from real-world data is flagged and rectified.
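One common way to keep such biased signals out of a model — shown here as a minimal, generic sketch, not a description of Vervoe’s actual pipeline — is to strip protected or identity-revealing attributes from candidate records before any features reach the training step. The field names below are hypothetical.

```python
# Attributes a model should never see as input signals (hypothetical list).
PROTECTED = {"name", "gender", "age", "photo_url"}

def to_features(candidate: dict) -> dict:
    """Build a model input from graded work samples only,
    excluding any protected or identity-revealing attribute."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED}

candidate = {
    "name": "A. Smith", "gender": "F", "age": 34, "photo_url": "https://example.com/p.jpg",
    "task_1_grade": 0.9, "task_2_grade": 0.8,  # grades from job-simulation tasks
}

features = to_features(candidate)
print(features)  # only the work-sample grades remain
```

Because the model is trained and scored only on work-sample grades, it has no way to correlate outcomes with gender or age, which is the property the clean-dataset approach described above is designed to preserve.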
By assessing an applicant’s ability to perform the role through a skills assessment, our AI-powered job simulations focus on the work — and not the person. To see people do the job before they get the job, book a demo today and let our experienced team run you through Vervoe’s full range of ready-made and tailored solutions.