United States Department of Labor Releases AI & Inclusive Hiring Framework

The United States Department of Labor has unveiled a new AI and Inclusive Hiring Framework, aiming to guide employers in the responsible integration of AI tools while addressing the risks of algorithmic discrimination.
On 24 September 2024, the United States Department of Labor (DOL) announced a new Artificial Intelligence (AI) & Inclusive Hiring Framework. The guidance is designed to help employers manage AI recruiting and hiring tools effectively while minimising the risks associated with algorithmic discrimination. The Framework outlines ten key focus areas for employers to consider when integrating AI technologies into their hiring processes.
The Framework was developed by the Partnership on Employment & Accessible Technology (PEAT), a group funded by the DOL’s Office of Disability Employment Policy. In constructing it, PEAT drew on the National Institute of Standards and Technology’s AI Risk Management Framework, a foundational document that helps organisations identify, measure, and manage the risks associated with AI applications.
The ten Focus Areas set out by the Framework are:
- Legal Compliance: Employers should identify and adhere to legal requirements relevant to using AI in recruitment and hiring.
- Assignment of Responsibilities: Establish specific roles and provide training for employees responsible for managing AI tools.
- Tool Inventory and Risk Classification: Develop a comprehensive inventory of these tools, detailing their intended use, potential benefits, risks, and scope, while classifying any risks in line with legal standards.
- Procurement Policies: Implement policies for engaging with vendors, ensuring measures are in place to identify and avoid algorithmic bias.
- Impact Evaluation: Assess both the positive and negative implications of deploying AI tools in recruitment.
- Applicant Accommodations: Provide necessary accommodations for job applicants to mitigate any disadvantage they might face.
- Transparency: Inform job applicants of AI tools used in the hiring process and publish clear AI statements explaining how these tools operate.
- Human Oversight: Ensure effective human oversight over AI tools to maintain fairness and accountability.
- Failure Management: Create protocols for addressing tool failures and provide applicants with avenues to challenge decisions affecting their job candidacy.
- Performance Monitoring: Conduct regular evaluations of the AI tools’ performance and impact.
While the Framework itself does not have the binding force of law, it serves as a comprehensive guide for employers looking to integrate AI tools responsibly and inclusively. PEAT suggests that organisations tailor their approach by selecting Focus Areas that best align with their operational practices.
In light of ongoing developments at various governmental levels concerning AI regulation, some aspects of the Framework, particularly those focusing on transparency and regular evaluations, align with existing or proposed legislation in several states and municipalities. As legislative bodies across the United States navigate the dual objectives of harnessing AI’s efficiency and mitigating algorithmic bias, this Framework provides informative perspectives that may shape future policies.
As AI continues to transform the workplace, close attention to legal and regulatory developments in this realm remains paramount. PEAT’s guidelines offer essential insights as companies and legislators work to balance innovation with ethical and legal considerations.
Source: Noah Wire Services