The New York City Council has taken proactive measures to address concerns that automated employment decision tools (AEDTs) may produce discriminatory outcomes, passing Local Law 144 of 2021 ("NYC 144"), which mandates bias audits of these tools. Originally scheduled to take effect on January 1, 2023, enforcement was postponed to April 15, 2023, in order to address the extensive public comments received during the first public hearing on the Department of Consumer and Worker Protection’s (DCWP’s) proposed rules. Following a revised version of the proposed rules and a second public hearing, the DCWP has now released the final version of its adopted rules, and the final enforcement date has been set for July 5, 2023.
Although NYC 144 is currently expected to apply to natural persons who are job candidates or are being considered for promotion in New York City, the legislation does not explicitly specify such a limitation. Moreover, given the growing use of AI-supported autonomous decision-making systems in many areas of life, NYC 144 could serve as a precedent for future legal regulation in other countries. Even if NYC 144 does not directly apply to a particular company, it is therefore worth understanding the provisions and scope of the legislation and taking the necessary precautions to comply with rules governing the use of autonomous decision-making systems. To that end, this article examines the content and scope of NYC 144 and the obligations it imposes on companies using autonomous decision-making technologies.
At this point, companies will need to determine whether they possess a technology that qualifies as an AEDT under NYC 144, and how and for what purposes they use that tool.
The legislation defines AEDTs as technologies developed using machine learning, statistical modeling, data analytics, or AI-supported systems that either play a significant role in employment decisions or make those decisions directly. Such systems produce outputs, such as scores, classifications, or recommendations, that are simplified representations of complex processes. In practice, the term covers systems whose decision criteria are not set directly by humans: systems that learn through machine learning rather than through human-specified statistical models and that are capable of self-improvement. For example, if a real person determines the necessary inputs for a test and their relative importance, the technology is unlikely to be considered an AEDT and would fall outside the scope of the legislation.
On the other hand, if the tool is programmed to adjust its decision-making criteria based on past data sets and outputs without human intervention, the legislation will apply. In this sense, the law is limited to predictive technologies that rely on machine learning-based algorithmic techniques.
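To make the distinction concrete, consider the following hypothetical sketch in Python. The data, feature names, and weights are all invented for illustration, and whether any given tool qualifies as an AEDT is ultimately a legal question rather than a purely technical one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Case 1: a human fixes the inputs and their relative importance. Because a
# person, not the model, sets the decision criteria, a tool like this is
# unlikely to qualify as an AEDT.
HUMAN_WEIGHTS = {"years_experience": 0.5, "test_score": 0.3, "interview": 0.2}

def human_defined_score(candidate: dict) -> float:
    return sum(HUMAN_WEIGHTS[k] * candidate[k] for k in HUMAN_WEIGHTS)

print(human_defined_score({"years_experience": 0.8, "test_score": 0.6, "interview": 0.9}))

# Case 2: the tool derives its own decision criteria from past outcomes.
# Because the weights are learned from data rather than set by a person, this
# is the kind of predictive, self-adjusting behavior the law targets.
rng = np.random.default_rng(0)
past_features = rng.random((200, 3))                  # historical applicant data
past_outcomes = (past_features @ [0.6, 0.3, 0.1] > 0.5).astype(int)

model = LogisticRegression().fit(past_features, past_outcomes)
print(model.predict_proba(rng.random((1, 3)))[0, 1])  # learned hiring score
```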
However, possessing a technology that meets the definition of an AEDT is not, by itself, enough to bring a company within the scope of the law. The technology must also be used for the purposes and in the manner specified in the legislation.
In this context, the outputs the system produces, such as scores, classifications, recommendations, or final decisions, must play a significant role in the employment process and effectively drive business decisions. If a machine learning tool has a decisive impact on screening out a job applicant or on evaluating an employee for promotion, the company will therefore be subject to NYC 144.
Once within the scope of the law, companies will need to take three different actions:
1- Conducting an independent bias audit
Companies currently using AI systems will be required to have a bias audit conducted on those systems as soon as possible. Under the law, the bias audit must be performed annually by an independent auditor: an impartial person or group that has never been involved in the system’s use, development, or distribution; has never been employed by the employer, the vendor, or the system developer; and holds no direct or indirect financial interest in the vendor responsible for developing or distributing the system.
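The core of such an audit is a comparison of outcomes across demographic categories. The sketch below, with invented numbers, shows the selection-rate and impact-ratio arithmetic that the DCWP rules center on; it illustrates the calculation only and cannot substitute for an audit performed by an independent auditor as defined above.

```python
# Invented example data: category -> (number selected, total applicants).
applicants = {
    "Female": (120, 400),
    "Male": (180, 450),
    "Nonbinary": (5, 20),
}

# Selection rate: share of applicants in a category who were selected.
selection_rates = {cat: sel / total for cat, (sel, total) in applicants.items()}
best_rate = max(selection_rates.values())

# Impact ratio: a category's selection rate relative to the most selected
# category; 1.0 means parity with the highest-rate group.
impact_ratios = {cat: rate / best_rate for cat, rate in selection_rates.items()}

for cat in applicants:
    print(f"{cat}: selection rate {selection_rates[cat]:.1%}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
```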
2- Publishing a summary of the audit results
According to the law, companies are obliged to publish a summary report of the bias audit they have conducted. This summary should cover the categories assessed, such as sex, race/ethnicity, and intersectional categories, the number of applicants in each category, and the selection rates and impact ratios for candidates in those categories. Additionally, an independent auditor may exclude a category representing less than 2% of the data used in the audit, provided a rationale is given for doing so. The purpose of the summary report is to ensure transparency and accountability in the audit results.
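As a companion to the audit sketch above, the snippet below illustrates the small-category exclusion just described. The 2% threshold comes from the rules, but the data and structure are invented, and the decision to exclude a category, with its rationale, rests with the independent auditor.

```python
# Split categories into those reported in the summary and those an auditor
# could exclude because they make up less than 2% of the audited data.
def partition_categories(counts: dict[str, int], threshold: float = 0.02):
    total = sum(counts.values())
    included, excludable = {}, {}
    for category, n in counts.items():
        (included if n / total >= threshold else excludable)[category] = n
    return included, excludable

included, excludable = partition_categories(
    {"Female": 400, "Male": 450, "Nonbinary": 12}
)
print("Reported in summary:", included)
print("Excludable (<2% of data, rationale required):", excludable)
```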
3- Notifying applicants and employees about the use and operation of the system, and informing individuals affected by it that they can demand compliance with the law or request an alternative decision-making process
Before using an AI system on job applicants or employees, companies are required to clearly and prominently disclose the date of the most recent bias audit, the source and explanation of the data used, selection or scoring rates, and impact ratios for all categories on the employment section of their website. Additionally, the type of data collected for the AI system, its source, and the organization’s data retention policy must also be published.
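To make these website disclosures concrete, here is one hypothetical way to structure them. The law prescribes the content of the disclosure, not this or any particular format, and every value below is invented.

```python
import json

# Hypothetical disclosure payload covering the items listed above.
disclosure = {
    "most_recent_bias_audit_date": "2023-06-01",
    "data_source_and_explanation": "Application-form responses and resume text",
    "results_by_category": {
        "Female": {"selection_rate": 0.30, "impact_ratio": 0.75},
        "Male": {"selection_rate": 0.40, "impact_ratio": 1.00},
    },
    "data_collected": ["resume text", "assessment scores"],
    "data_retention_policy": "Deleted 24 months after the hiring decision",
}

# Published on the employment section of the company website.
print(json.dumps(disclosure, indent=2))
```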
Under NYC Local Law 144, candidates for hiring or promotion must be provided with a mandatory notice at least 10 business days before the use of the system. This notice should include:
(1) The fact that an AI system will be used in the candidate’s evaluation.
(2) The job qualifications and characteristics that the AI system will analyze.
(3) If not disclosed on the website or elsewhere, the data source and type of the AI system, as well as the employer’s personal data retention and deletion policy.
(4) The candidate’s ability to request an alternative selection process or to demand compliance with the law.
This notification can be made through clear and prominent disclosure in the employment section of the employer’s website, in a job posting, or by post or email to the candidate. In practice, timely notices added to existing career websites are likely to be the preferred method; the sketch below shows one way to compute the 10-business-day deadline.
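This is a minimal sketch assuming weekends are the only non-business days; public holidays are ignored here and would require a real calendar in practice.

```python
from datetime import date, timedelta

def latest_notice_date(use_date: date, business_days: int = 10) -> date:
    """Walk backwards from the planned date of AEDT use, counting only
    weekdays, to find the latest date the candidate notice may be sent."""
    d = use_date
    counted = 0
    while counted < business_days:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return d

# Example: for a tool first used on the July 5, 2023 enforcement date,
# the notice must go out no later than June 21, 2023.
print(latest_notice_date(date(2023, 7, 5)))
```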
Under NYC Local Law 144, a person who violates the law may be subject to a civil penalty of up to $500 for a first violation and for each additional violation occurring on the same day, and of $500 to $1,500 for each subsequent violation, with each day of noncompliant use counting as a separate violation. It should also be noted that failure to meet the applicant-notification requirements or to fulfill the bias audit obligation constitutes a separate violation in itself.
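To see how quickly these amounts accumulate, here is a toy calculation assuming the statutory figures above; it is a rough illustration of exposure, not a prediction of what the DCWP would actually assess.

```python
def penalty_range(violations: int) -> tuple[int, int]:
    """Aggregate (minimum, maximum) penalty for a run of violations:
    the first capped at $500, each subsequent one $500 to $1,500."""
    if violations <= 0:
        return (0, 0)
    subsequent = violations - 1
    return (subsequent * 500, 500 + subsequent * 1500)

# Example: 30 days of use without a valid bias audit, one violation per day.
low, high = penalty_range(30)
print(f"Exposure: ${low:,} to ${high:,}")  # Exposure: $14,500 to $44,000
```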
Machine learning-based autonomous decision-making systems can make life-changing decisions about individuals. Where decision-making about humans is delegated entirely to AI, it is crucial to continuously test and audit these systems so that existing real-world discrimination is not perpetuated. NYC Local Law 144 provides a concise yet comprehensive regulation in this regard. Because the law combines legal and technological definitions, it is critically important for companies that may be affected to consult experts to determine whether they fall within its scope and what steps they must take to ensure compliance.