The hiring process is a critical gateway to economic opportunity, determining who can access meaningful employment and achieve financial success. The ease of applying for jobs online has led to a huge increase in the number of applications: Google, for example, received over 4 million applications last year for just a few thousand posts. This deluge of applications – many of which lack the position’s desired qualifications – has prompted companies to seek intelligent software tools to help organise, evaluate and select the right applicant.
In response, software companies are increasingly building AI-based technologies for use throughout the HR process: some aim to help sift and sort candidates, others to monitor the performance of current employees. Some of these technologies rely on machine learning, in which algorithms are trained to detect patterns in existing data so that the tool can predict outcomes for new inputs. Once embedded in a process, such a system can influence which jobs are shown to which candidates on a job listing site, develop a shortlist from a pile of thousands of CVs, or drive pre-employment assessments that measure aptitude, skills and personality traits to differentiate potential top performers from other applicants.
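To make the shortlisting idea concrete, here is a deliberately simplified sketch of how a CV-sifting step might rank applicants. Real commercial tools use trained statistical models rather than hand-picked keywords; the keywords, weights and CV snippets below are invented purely for illustration.

```python
# Toy CV sifter: score each CV by the weighted keywords it mentions,
# then keep the highest-scoring candidates. Illustrative only -- the
# keyword list and weights are invented, not any vendor's actual model.

def score_cv(cv_text, keyword_weights):
    """Sum the weights of every keyword that appears in the CV text."""
    text = cv_text.lower()
    return sum(w for kw, w in keyword_weights.items() if kw in text)

def shortlist(cvs, keyword_weights, top_n):
    """Return the ids of the top_n CVs, ranked by keyword score."""
    ranked = sorted(cvs.items(),
                    key=lambda item: score_cv(item[1], keyword_weights),
                    reverse=True)
    return [cv_id for cv_id, _ in ranked[:top_n]]

# Hypothetical applicant pool
weights = {"python": 3, "machine learning": 5, "sql": 2}
cvs = {
    "A": "Five years of Python and SQL development.",
    "B": "Background in marketing and sales.",
    "C": "Machine learning researcher, fluent in Python and SQL.",
}
print(shortlist(cvs, weights, top_n=2))  # ['C', 'A']
```

Even this toy version shows where bias can creep in: whichever words the model (or keyword list) rewards effectively decide who is seen by a human at all.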
As with all AI systems, there is significant potential for bias in this selection process. This is worrying because it affects the futures of the individuals applying and restricts companies’ ability to identify the best talent. Gender bias, for example, can develop even when candidates’ names are removed: the system can pick up on characteristics such as attendance at a women’s college, membership of a women’s club, or the use of particular words or tone. Biases are most easily exacerbated when a company recruits across international boundaries, where differences are amplified: applicants may come from unfamiliar educational establishments with different academic qualifications, job roles may have different titles, and candidates may be writing in a second language.
All these issues and more mean that bias is something companies using AI in HR will need to confront. Recent legislation in America places responsibility for the accurate and fair performance of such technologies – facial recognition used in pre-employment assessments, for example – on the company using the technology, not on the technology developer. This liability is likely to spread to wherever AI is used to select, or influence the selection of, candidates, or to assess the performance of employees. If a biased system leads to legally unfair selection based on protected characteristics such as gender, race or religion, the company using the technology is open to legal challenge. All companies will therefore need to do more to validate and check that the AI systems they use for HR purposes produce fair and equitable results.
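One long-established validation check in the US employment context is the “four-fifths rule”: if any group’s selection rate falls below 80% of the most-selected group’s rate, that is treated as evidence of adverse impact. A minimal sketch of that check, with invented group names and counts:

```python
# Adverse-impact check based on the four-fifths rule: a group whose
# selection rate is below 80% of the best group's rate is flagged.
# The group labels and counts here are invented for illustration.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applied)} -> {group: selection rate}"""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return {group: True} for each group flagged under the rule."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

outcomes = {"group_a": (50, 100), "group_b": (20, 100)}
print(adverse_impact(outcomes))  # {'group_a': False, 'group_b': True}
```

A check like this is only a starting point – it examines outcomes, not causes – but it illustrates the kind of routine auditing that companies deploying AI in hiring will increasingly be expected to perform.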
Oxford Brookes’ Ethical AI Institute is working with companies to help develop a range of tools for these situations. These include:
– tools to understand how a decision from an AI system is made
– techniques to validate that the outputs are robust, legal and fair
– guidance on using AI systems so they stay within agreed morals, norms and standards
– help for companies in understanding the ethics and law around the use of AI and social media, including in areas such as cyberhate and equality
– risk classification and assessment techniques appropriate for AI systems.
We welcome enquiries from companies, whether they wish to discuss issues surrounding their use of AI or to develop AI systems themselves. Speak to Rebecca Raper of the Oxford Brookes Ethical AI Institute at [email protected]
About the author
Oxford Brookes Ethical AI Institute
Rebecca Raper works as a consultant and researcher for the Oxford Brookes Ethical AI Institute. She is also a PhD student at Oxford Brookes University and her thesis is about Autonomous Moral Artificial Intelligence.