WSJ Updates


Indeed Shares 4 Ways Companies Can Ensure Their Use Of AI For Hiring Is Fair, Ethical, and Effective

AI already plays a significant role in HR departments across many companies. A recent survey of over 250 HR leaders in the US found that 73% are using AI in their recruitment and hiring processes. Strikingly, a forthcoming Indeed survey found that only 8% of Canadian HR and talent acquisition leaders are not currently using AI tools.

Today, AI tools are versatile, aiding in tasks such as resume review, candidate scoring, talent sourcing, job description writing, employee promotion identification, and even automated applicant messaging.

Trey Causey, head of Responsible AI and senior director of data science at Indeed, remarked, “There’s virtually no task without an AI tool being developed for it.”

While AI holds the potential to reduce human bias, especially in hiring, and streamline repetitive tasks, it also carries the risk of perpetuating and amplifying existing biases, potentially leading to wasted resources. It’s crucial for organizations to understand the spectrum of risks associated with AI use and develop responsible strategies for its implementation.

1. Evaluate the risks and rewards for your organization: AI systems can streamline HR processes like candidate assessment, but they also introduce the potential for errors. Jey Kumarasamy, an associate at Luminos.Law, notes that even with 90% accuracy, processing thousands of applications can lead to significant mistakes. Organizations should acknowledge these imperfections and develop strategies to address biases or accept the associated risks. While some companies embrace AI for its productivity gains, others may find the margin of error incompatible with their values or regulatory requirements. When adopting AI, choose tools carefully; for example, AI that transcribes interview conversations is less risky than AI that assesses candidates based on video interviews, which can be error-prone, especially with non-native speakers.
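The scale argument above is simple arithmetic: even a small error rate compounds across a large applicant pool. A minimal sketch, using the 90% accuracy figure from Kumarasamy's example and an assumed (hypothetical) application volume:

```python
# Rough expected-error arithmetic for an AI screening tool.
# The 90% accuracy figure comes from the article; the application
# volume below is an assumed, illustrative number.
def expected_errors(applications: int, accuracy: float) -> int:
    """Number of applications the tool is expected to misclassify."""
    return round(applications * (1 - accuracy))

# A tool that is "90% accurate" on 10,000 applications still
# misclassifies roughly 1,000 of them.
print(expected_errors(10_000, 0.90))  # 1000
```

Whether that error volume is acceptable depends on the organization's risk tolerance and regulatory exposure, which is exactly the trade-off the first recommendation asks teams to weigh.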

2. Screen third-party vendors that provide AI-powered tools: When selecting AI tools, ensure vendors comply with current and emerging regulations. Failure to do so can impact product effectiveness and regulatory compliance. Asking vendors about their audit processes and willingness to cooperate with your audits is crucial. Additionally, inquire about metrics used for system testing, bias mitigation strategies, and post-deployment support for maintaining the system’s performance.

3. Identify and monitor bias: AI algorithms reflect the biases present in the data used to train them, potentially leading to discriminatory outcomes. While employers can’t alter algorithm development, they can mitigate risks by conducting third-party bias audits before deploying AI for hiring. Continuous monitoring of AI systems is essential to identify and rectify discriminatory patterns. Employers should also stay informed about developments in data science and AI to ensure their systems remain unbiased.
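Continuous monitoring for discriminatory patterns can start with something as simple as comparing selection rates across applicant groups. A minimal sketch, using the EEOC's "four-fifths" rule of thumb as the flagging threshold (the group names and counts below are hypothetical, and a real audit would involve far more than this one statistic):

```python
# Sketch of a post-deployment bias check: compare per-group selection
# rates and flag any group whose rate falls below 80% of the highest
# group's rate (the "four-fifths" rule of thumb). Illustrative only.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group name -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes: dict, threshold: float = 0.8) -> dict:
    rates = selection_rates(outcomes)
    top = max(rates.values())
    # True means the group's selection rate warrants a closer look.
    return {g: r / top < threshold for g, r in rates.items()}

# Hypothetical screening outcomes: (selected, total) per group.
applicants = {"group_a": (120, 400), "group_b": (60, 300)}
print(adverse_impact_flags(applicants))
# group_a selects at 30%, group_b at 20%; 20/30 is below 0.8,
# so group_b is flagged: {'group_a': False, 'group_b': True}
```

A check like this is a monitoring signal, not a verdict; flagged disparities are where a third-party bias audit would dig in.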

4. Stay ahead of evolving legislation: Automated HR tools carry legal risks, with both reputational and financial implications. Emerging legislation, such as the EU's proposed AI Act and Canada's Artificial Intelligence and Data Act (AIDA), seeks to regulate AI applications based on their safety and potential for discrimination. Employers must also comply with existing laws, such as PIPEDA, that apply to AI-driven employment decisions. To prepare for evolving regulations, organizations should establish robust AI governance programs that outline principles and processes for assessing, detecting, and rectifying issues with AI tools. These programs should include cross-functional teams focused on AI ethics and compliance.

Implementing these strategies can help organizations leverage AI effectively while mitigating risks and ensuring fair and ethical practices in HR processes.
