Written By Michael Ferrara
Created on 2024-01-07 19:50
Published on 2024-01-10 13:51
Hilke Schellmann, in her insightful book "The Algorithm: How AI Decides Who Gets Hired," delves into the profound impact of artificial intelligence on the hiring process. Schellmann, an adept journalist, uncovers how companies increasingly rely on AI for evaluating candidates, from résumé screening to conducting interviews. Through meticulous research, she sheds light on the hidden biases and ethical challenges inherent in these automated systems. Her exploration extends beyond hiring, examining AI's role in employee surveillance, unfair layoffs, and promotions. By highlighting the intersection of technology and employment, Schellmann raises critical questions about privacy, fairness, and the future of work in an AI-driven world.
Schellmann outlines four main strategies that companies often use in conjunction with AI to reduce the number of job applicants. These strategies are:
Résumé Screeners: These tools are used to screen, evaluate, and rank résumés. When you submit a résumé through a job board or on a large company's website, it's highly likely that AI is used to evaluate your submission.
Assessments Including AI Games: These are various assessments and games powered by AI, designed to evaluate the suitability of candidates for specific roles.
One-Way Video Interviews: These interviews involve candidates recording their responses to predefined questions, with AI tools analyzing their responses.
AI Tools for Background Checks and Online Life Scans: These tools conduct thorough background checks and can scan candidates' activities and presence online.
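To make the first of these strategies concrete, here is a minimal sketch of how a keyword-based résumé screener might score and rank submissions. Schellmann does not describe any vendor's actual algorithm; the scoring scheme, keywords, and sample résumés below are invented for illustration.

```python
# Hypothetical keyword-matching résumé screener (illustrative only).

def score_resume(resume_text, required_keywords):
    """Count how many required keywords appear in the résumé text."""
    text = resume_text.lower()
    return sum(1 for kw in required_keywords if kw.lower() in text)

def rank_candidates(resumes, required_keywords):
    """Return candidate names ordered by descending keyword score."""
    scored = {name: score_resume(text, required_keywords)
              for name, text in resumes.items()}
    return sorted(scored, key=scored.get, reverse=True)

resumes = {
    "Ana": "Data analyst with Python, SQL, and Tableau experience.",
    "Ben": "Spreadsheet power user; strong Excel and reporting skills.",
}
ranking = rank_candidates(resumes, ["python", "sql", "tableau"])
```

Even this toy version shows the core trade-off Schellmann examines: candidates are reduced to whatever signal the keyword list happens to capture.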
Schellmann's exploration into AI's role in hiring reveals critical insights. Addressing fairness, transparency, accountability, and ethical considerations, her work underscores the importance of maintaining data privacy, inclusive design, and regular audits in AI systems. Schellmann advocates for robust human oversight, thorough training, and strict legal compliance to ensure AI's responsible use in recruitment.
Schellmann covers a broad range of topics related to AI in hiring. Here is an overview of some key points:
Fairness, Transparency, and Accountability in AI: Advocating for unbiased algorithms that do not discriminate based on gender, race, or other personal characteristics, while emphasizing the need for clarity in AI decision-making and holding organizations accountable for the outcomes of their AI systems.
Ethical AI Practices and Data Protection: Highlighting the importance of ethical considerations in AI development and implementation for hiring, along with advocating for stringent data privacy and security measures to protect candidate information.
Inclusive Design and Regular Audits: Encouraging the creation of AI systems that consider diverse candidate pools and do not exclude underrepresented groups, complemented by recommending regular audits and reviews to ensure AI systems function as intended and remain free from biases.
Human Oversight and Professional Development: Suggesting that human judgment should accompany AI decisions, particularly in critical hiring decisions, and advocating for comprehensive training and awareness programs for HR professionals and candidates about the role and limitations of AI in hiring.
Legal and Regulatory Compliance: Ensuring that AI hiring practices adhere to existing employment laws and regulations, to maintain ethical standards and legal conformity in the use of AI in recruitment.
Schellmann provides specific examples of how AI tools have been used in employee terminations:
Termination Due to Online Activity Monitoring: Employees were terminated for their online activities during work hours. Sommer Ketron, a consultant with Jumpstart:HR, recalled instances of AI-assisted terminations: one employee was dismissed for using the dating site Plenty of Fish, and another for watching inappropriate content. These activities were uncovered through computer logs recorded by algorithms that monitored every website visited on work computers.
Unfair Layoff Due to AI Error: In a case involving Estée Lauder Companies, Lizzie, a makeup artist, along with two other laid-off makeup artists, took legal action against their former employer after being laid off based on low scores from a HireVue interview. It was later discovered that the second interview, which was the basis for her layoff, was never actually scored due to a mistake in the AI program. Despite her good performance and success as a salesperson, the AI's error led to her unfair termination.
Automated Decision in Hiring Leading to Rejection: In the case of Martin Burch and Bloomberg, it was revealed that an AI algorithm determined who was rejected and who advanced to the next round for a data analyst job. This contradicted prior claims that AI hiring tools do not make automatic decisions without human oversight. Burch, who had applied for a position at Bloomberg, was perplexed by a digital assessment that focused on pattern recognition rather than job-specific skills; his application was rejected based on the results of this AI-assessed test, a decision he later challenged. His experience is indicative of a broader issue in the job market, where algorithms and artificial intelligence are increasingly used in hiring and can exclude potentially qualified candidates based on criteria unrelated to actual job performance, a clear example of the potential pitfalls of algorithmic decision-making in employment processes.
These examples highlight the significant impact and potential issues associated with the use of AI in employment decisions, particularly in terminations and layoffs.
Schellmann also touches on AI in the context of employee promotions. She discusses how AI tools, such as the Eightfold tool, analyze data to track employees' career trajectories and progression within an organization. The tool can identify employees who are advancing faster than their peers, which might be interpreted as a sign of hard work and potential suitability for promotion.
However, she also raises concerns about this approach. She points out that promotions are not always based on employees doing their best work. For instance, employees on a slower trajectory, such as those who take time off or work part-time for personal reasons like childcare or eldercare, or those dealing with disabilities or chronic illnesses, might be overlooked by AI tools. This suggests that while AI can assist in identifying potential candidates for promotion, it might also inadvertently disregard employees whose career paths don't align with traditional patterns of progression, leading to potential biases in promotion decisions.
Schellmann also raises concerns about algorithms that are given too much authority, specifically those used for social media background checks. These algorithms, criticized for being overly simplistic (often relying on little more than keyword searches), can carry significant weight in determining employment outcomes. The concern is that they might misinterpret content, such as failing to distinguish between a real threat of violence and shared violent song lyrics, or misidentify instances of online bullying.
The danger of these algorithms is that they might wrongly exclude people from employment based on misunderstood social media content, including those who might be sharing personal struggles or experiences. The book highlights the case of an individual, Kai Moore, who was fortunate that their employer overlooked the findings of a social media background check. However, the implication is that others might not be so fortunate and could be unfairly judged or even fired based on these algorithmic assessments.
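The keyword-search weakness described above is easy to demonstrate. The following is a deliberately naive sketch of that approach; the watch-list words and example posts are invented, and real screening products are presumably more elaborate, though Schellmann's reporting suggests the failure mode is the same.

```python
# Naive keyword flagger with no sense of context (illustrative only).

FLAGGED_WORDS = {"kill", "shoot", "fight"}

def flag_post(post):
    """Flag a post if it contains any watch-list word, ignoring context."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & FLAGGED_WORDS)

# Quoting song lyrics gets flagged exactly like a genuine threat would,
# because the algorithm sees only the keyword, not the intent.
lyric_quote = 'That new track is amazing -- this beat could kill!'
flagged = flag_post(lyric_quote)  # a false positive: it is just a lyric-style post
```

The sketch makes the danger explicit: anyone quoting lyrics, venting, or describing personal struggles can be marked the same way as someone posting a real threat.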
According to Schellmann, AI tools like Humantic's AI are used to predict various aspects of a candidate's personality and behavior. These AI tools analyze data, including text from work and social media feeds, to assess and predict:
Conscientiousness: Determining how careful, diligent, and organized a candidate is.
Team Player Quality: Assessing the candidate's ability to work well in a team environment.
Openness: Evaluating how open the candidate is to new experiences and ideas.
Emotional Stability: Analyzing the candidate's emotional resilience and stability.
Supervision Needs: Estimating how much supervision or guidance the candidate might require.
These predictive tools aim to understand more than just the information provided by candidates; they seek to delve into the nuances of personality and behavior as indicated by their online activities and interactions.
The CEO of Humantic AI explained how their algorithm relies on natural language processing (NLP) to analyze text, which relates to psycholinguistics, the study of language processing. Similarly, creators of Crystal use AI to assess personality traits based on text data, aligning with computational psychometrics.
For example, imagine an AI that analyzes all text associated with a user, including phrases used on LinkedIn profiles, job titles, and other related information. These examples illustrate how AI tools in hiring delve into psycholinguistics and computational psychometrics to evaluate candidates.
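A crude way to picture the word-counting end of psycholinguistics is a lexicon-based trait score. The lexicon, the scoring rule, and the sample bio below are all assumptions for illustration; tools like Humantic AI and Crystal are certainly more sophisticated, but the underlying idea of deriving traits from text is the same.

```python
# Hypothetical lexicon-based "conscientiousness" signal (illustrative only).

CONSCIENTIOUS_LEXICON = {"plan", "organize", "deadline", "detail", "schedule"}

def trait_signal(text, lexicon):
    """Fraction of words in the text that belong to the trait lexicon."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in lexicon)
    return hits / len(words)

bio = "I plan every sprint, track each deadline, and organize my notes."
signal = trait_signal(bio, CONSCIENTIOUS_LEXICON)
```

Note that feeding in a different text (a LinkedIn summary versus a Twitter feed, say) yields a different score, which previews the inconsistency problem discussed next.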
Schellmann's experience with varying AI personality predictions emphasizes a core problem with AI assessments: they can yield highly divergent results based on the data they analyze, such as different social media content. This inconsistency underscores the need for human judgment in interpreting these outcomes. Dr. Tomas Chamorro-Premuzic, a psychology professor, emphasizes that AI is a prediction tool, while human intervention is essential in determining how to use these predictions. When AI predictions conflict or differ significantly, it's crucial to cross-reference with other data, investigate the reasons behind variations, and consider broader context, like a candidate's work history. This process demands both competence and expertise to distinguish meaningful insights from AI-generated data, separating signal from noise.
Schellmann reflects on her personal experience with AI-based hiring processes, particularly one-way video interviews. She says, "I feel like I am speaking into a void and have no idea how I am coming off." This statement captures the sense of uncertainty and disconnection that can accompany interactions with AI in hiring.
In a traditional interview, candidates receive immediate feedback through verbal and non-verbal cues from the interviewer, which helps them adjust their approach and understand how they are being perceived. However, in AI-driven interviews, particularly those without a live interviewer, candidates like Schellmann express feeling isolated and unsure about how their responses are being received and evaluated. This uncertainty can be unnerving and make the interview experience more challenging.
The quote highlights a broader concern about the increasing impersonal nature of job applications and interviews in the age of AI and technology. It brings attention to the emotional and psychological aspects of job applicants' experiences, which are often overlooked in automated processes. This sentiment underscores the need for a more human-centric approach in AI-driven hiring practices.
Excessive reliance on algorithms in the hiring process has revealed several significant problems:
Inferences from Résumé Content: AI tools may make inferences about a candidate's gender or ethnic background based on pronouns and other details in the résumé. For example, membership in specific organizations might lead AI to infer a candidate's racial or ethnic background.
Biased Algorithms Affecting Large Groups: If an AI system has biases, it can potentially discriminate against large numbers of candidates. This is a significant concern because a biased AI can affect hundreds of thousands of people in a large corporation.
Optimization for Efficiency Over Quality: Many résumé screening tools reject candidates because they are optimized for efficiency rather than accurately identifying qualified candidates. Research shows that these tools often filter out highly skilled candidates who don't match the exact criteria of the job description.
Gender Bias in Job Platforms: AI tools might inadvertently perpetuate gender biases, leading to fewer opportunities for certain groups. For instance, women might receive fewer opportunities than men on job platforms due to these biases.
Neglecting Validation and Selection Processes: Some companies use AI tools without properly validating their selection processes, leading to potential biases and unfair rejections. This lack of validation and oversight can result in careless use of AI in hiring.
Misleading Claims about Human Oversight: Despite claims that there is human oversight in AI-driven hiring decisions, some cases reveal that AI tools do make automatic decisions that can lead to outright rejection of candidates without human review. This contradicts the assurance that human judgment plays a role in the final decision-making process.
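The "optimization for efficiency over quality" problem above can be shown in a few lines. This sketch assumes a screener that requires every phrase from the job description to appear verbatim; the job spec and résumé texts are invented for illustration, not drawn from any real vendor.

```python
# Exact-phrase screening: efficient, but blind to synonyms (illustrative only).

def passes_screen(resume_text, required_phrases):
    """Reject unless every required phrase appears verbatim."""
    text = resume_text.lower()
    return all(phrase.lower() in text for phrase in required_phrases)

required = ["machine learning engineer"]
exact = "Machine Learning Engineer with five years of production experience."
synonym = "ML engineer: built and deployed production models for five years."

# The exact-title résumé passes; the equally qualified one is rejected
# because it abbreviates the title, matching the failure mode Schellmann
# describes for efficiency-optimized screeners.
result_exact = passes_screen(exact, required)
result_synonym = passes_screen(synonym, required)
```

A screener like this is cheap to run at scale, which is precisely why, as the list above notes, a single biased or brittle rule can affect hundreds of thousands of applicants.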
Schellmann unveils the intricate impact of AI on hiring. This incisive work reveals how AI shapes candidate evaluation, from résumé screening to interviews, highlighting both the efficiencies and biases inherent in these systems. Schellmann's investigation extends to AI's role in employee surveillance, layoffs, and promotions, illuminating ethical concerns and potential injustices, such as unjust terminations exemplified by the case at Estée Lauder Companies. The book critically examines overreliance on AI in employment decisions, particularly the misinterpretation of online behaviors. Advocating for greater transparency, accountability, and inclusivity, Schellmann's work calls for a balanced approach to AI, emphasizing the need for human oversight in an increasingly automated hiring landscape.
#AIHiring #EthicalAI #FutureOfWork #AlgorithmicBias #TechInRecruitment
As I delve into the fascinating realms of technology and science for our newsletter, I can't help but acknowledge the crucial role of seamless IT networks, efficient desktop environments, and effective cloud systems. This brings to light an important aspect of my work that I am proud to share with you all. Besides curating engaging content, I personally offer a range of IT services tailored to your unique needs. Be it solid desktop support, robust network solutions, or skilled cloud administration, I'm here to ensure you conquer your technological challenges with ease and confidence. My expertise is yours to command. Contact me at michael@conceptualtech.com.
Tech Topics is a newsletter with a focus on contemporary challenges and innovations in the workplace and the broader world of technology. Produced by Boston-based Conceptual Technology (http://www.conceptualtech.com), the articles explore various aspects of professional life, including workplace dynamics, evolving technological trends, job satisfaction, diversity and discrimination issues, and cybersecurity challenges. These themes reflect a keen interest in understanding and navigating the complexities of modern work environments and the ever-changing landscape of technology.
Tech Topics offers a multi-faceted view of the challenges and opportunities at the intersection of technology, work, and life. It prompts readers to think critically about how they interact with technology, both as professionals and as individuals. The publication encourages a holistic approach to understanding these challenges, emphasizing the need for balance, inclusivity, and sustainability in our rapidly changing world. As we navigate this landscape, the insights provided by these articles can serve as valuable guides in our quest to harmonize technology with the human experience.