Employment Law Report
Recent Developments on AI in the Workplace: What Employers Need to Know When Using AI in the Hiring, Promotion, and Disciplinary Process

Written by Isaac Keller
Artificial intelligence (“AI”) has become increasingly integrated into business operations, and recent developments in federal AI policy have raised questions for employers. President Trump has issued a vast array of executive orders aimed at reducing government oversight of AI and expanding its use. But while AI promises efficiency, consistency, and cost savings, its use also raises significant legal and ethical concerns under existing employment laws. Additionally, as the use of AI expands, employers can expect state and federal lawmakers to promulgate more rules and regulations governing AI in the workplace. Employers who use AI should take the steps necessary to comply with federal and state anti-discrimination laws and track developments in local, state, and federal AI laws.
Former EEOC Guidance on AI Use and Workplace Discrimination Remains Relevant
Since 2021, the Equal Employment Opportunity Commission (“EEOC”) has expressed concern that AI may unintentionally perpetuate bias, especially if algorithms were trained on historical data that reflect prior discriminatory practices. This led the EEOC to launch an agency-wide “Artificial Intelligence and Algorithmic Fairness Initiative” to ensure that the use of AI and other emerging technologies complies with federal civil rights law.
Throughout 2022-2023, and consistent with the EEOC’s agenda under the Biden Administration, the EEOC issued guidance regarding employer use of AI and compliance with federal non-discrimination laws. For example, the EEOC published guidance on AI and the Americans with Disabilities Act (“ADA”), titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” The guidance informed employers that they could be liable for violating the ADA if their use of software, algorithms, or AI resulted in, for example, a failure to properly respond to an employee’s reasonable accommodation request or the intentional or unintentional screening out of applicants with disabilities who could perform the job with a reasonable accommodation.
The EEOC also published a technical assistance document, titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” which explained how federal non-discrimination laws apply to an employer’s use of AI. The guidance specifically emphasized the prohibition on “disparate” or “adverse” impact discrimination, which could potentially result from the use of algorithmic decision-making tools.[1] Additionally, the EEOC issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems with officials from the Department of Justice, the Consumer Financial Protection Bureau, and the Federal Trade Commission, promising to “monitor the development and use of automated systems,” “promote responsible innovation,” and “vigorously” use their collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.
Amid this flurry of guidance, the EEOC also settled what appeared to be its first case against an employer that allegedly used AI in a discriminatory manner while vetting applicants for employment. See EEOC v. iTutorGroup, et al., Case No. 1:22-CV-02565-PKC (E.D.N.Y. 2023).
Today, much of the above-mentioned guidance has been rescinded by the Trump Administration. President Trump’s rollback, however, does not remove an employer’s obligation to comply with existing employment laws. Make no mistake, employers can still be liable for discrimination that results from their use of AI.
Increased State and Local Regulation of AI in the Workplace
States and localities are moving forward with regulations on the use of AI in the workplace. In 2023, New York City was one of the first jurisdictions to regulate the use of AI in the employment context by enacting Local Law 144. The law prohibits employers and employment agencies from using an automated employment decision tool unless the tool has undergone a bias audit within one year of its use.
More than 20 states have since considered legislation to address AI-related discrimination in the workplace, and both Colorado and Illinois have enacted laws regulating AI systems used to make employment decisions.
The Colorado AI Act (effective February 1, 2026) regulates the use of AI that makes “consequential decisions” and requires, among other things, that employers use reasonable care to avoid algorithmic discrimination, establish risk management policies, complete annual impact assessments, provide notice when certain AI is used, and give employees an opportunity to appeal adverse decisions that result from AI.
Illinois HB 3773 (effective January 1, 2026) prohibits employers from using AI in any way that results in discrimination against an employee on the basis of a class protected under the Illinois Human Rights Act. The law also requires employers to notify workers when AI is used in recruitment, hiring, promotion, renewal of employment, training or apprenticeship decisions, termination, discipline, and other employment decisions.
While there is currently no Kentucky law specifically regulating the use of AI in the private workplace, a law governing AI use in state government was passed in March of this year (Senate Bill 4). SB 4 and the wave of laws proposed and passed in other jurisdictions demonstrate that lawmakers are wary of the largely unregulated use of AI and its potential consequences.
How Employers Using AI Can Avoid Litigation
In short, employers would do well to track and comply with local, state, and federal laws on the use of AI in the workplace. Employers in jurisdictions that do not specifically regulate the use of AI in employment should understand that their AI systems must still comply with federal and state anti-discrimination laws. Oversight and regular internal audits are essential to prevent bias and ensure compliance. Moreover, employers should review any AI-liability provisions in their vendor agreements to understand what liability they may face if a vendor fails to comply with federal and state employment laws.
[1] Although the Trump Administration has taken action against the enforcement of disparate impact discrimination policies, employers should avoid implementing or using AI in a way that could disproportionately impact individuals with characteristics protected under Title VII, such as race, color, religion, sex, or national origin. Employers remain liable for workplace discrimination, and the elimination of government guidance does not alter existing anti-discrimination laws such as Title VII and the ADA.
