Artificial Intelligence (AI) is rapidly transforming workplaces across industries. From hiring to daily management, companies are adopting AI tools to streamline decisions and improve efficiency. In fact, according to the Society for Human Resource Management, about 1 in 4 employers now use AI to support HR activities, and interest in these tools has surged in the past year. For employees, this AI revolution brings both promise and risk. On one hand, AI can reduce bias in mundane tasks and increase productivity; on the other, it can also introduce new legal and ethical concerns. This post explores common uses of AI in employment and the risks they pose – particularly from the perspective of employees and the plaintiff-side employment lawyers who represent them.
Automated decision-making in hiring
One of the most prevalent uses of AI is in recruitment. Many employers deploy AI-driven software to screen resumes, evaluate applications, and even conduct preliminary interviews. These tools can automatically filter candidates based on keywords or scores, schedule interviews, or administer AI-scored aptitude and personality tests. Some companies have experimented with AI video interview platforms that analyze a candidate’s speech, facial expressions, and word choice to assess fit. This automation helps handle large applicant pools efficiently, but it also means algorithms are making initial decisions about who advances in the hiring process.
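To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of keyword-scoring logic a resume screener might apply. The keywords, threshold, and scoring are invented for illustration and are not drawn from any actual vendor's product.

```python
# Hypothetical keyword-based resume screen; an illustration, not any real vendor's logic.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}  # assumed job-specific terms
SCORE_THRESHOLD = 2                                          # assumed cutoff

def score_resume(resume_text: str) -> int:
    """Count how many required keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)

def advances_to_human_review(resume_text: str) -> bool:
    """Candidates scoring below the threshold are filtered out before a person ever sees them."""
    return score_resume(resume_text) >= SCORE_THRESHOLD
```

Even a toy filter like this decides who a recruiter never sees, which is why the screening criteria themselves matter so much.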
Biometric technology for security and time tracking
Employers are increasingly using biometrics – like fingerprints, facial recognition, or retina scans – to secure the workplace and track time. Biometric time-clock systems let employees clock in and out with a fingerprint or face scan instead of an ID card, using a unique personal identifier to verify identity, while biometric entry systems control access to buildings or rooms by scanning an employee’s face or iris. These AI-powered systems provide convenient security and accurate time records. However, they also collect highly sensitive personal data, raising concerns about privacy and proper consent (as discussed later in this post).
AI-driven workplace monitoring and productivity tracking
Employers today can monitor workers more closely than ever using AI. Advanced surveillance software can log keystrokes and mouse movements, take intermittent screenshots or webcam photos, track GPS location for field employees, and analyze emails or chat messages for sentiment. Some warehouses and delivery services use AI cameras and wearables to track employees’ movements and behaviors in real time. The data from these tools feed algorithms that flag “unproductive” periods or rule violations, and even automate managerial actions (like warnings for falling behind quota). AI can also aggregate performance metrics (calls handled, sales made, etc.) to rank or rate employees. While employers tout these systems as a way to boost output and accountability, constant algorithmic monitoring can create an atmosphere of surveillance and stress for workers.
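As a rough illustration of how “unproductive” periods might be flagged, the sketch below marks gaps between logged activity events that exceed a threshold; the 15-minute limit and the event format are assumptions, not any monitoring vendor's actual logic.

```python
# Hypothetical idle-time flagging from an activity log; threshold and data format are assumed.
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(minutes=15)  # assumed cutoff for an "unproductive" gap

def flag_idle_periods(activity_events: list[datetime]) -> list[tuple[datetime, datetime]]:
    """Return gaps between consecutive keystroke/mouse events that exceed the idle limit."""
    ordered = sorted(activity_events)
    flagged = []
    for earlier, later in zip(ordered, ordered[1:]):
        if later - earlier > IDLE_LIMIT:
            flagged.append((earlier, later))  # this window could trigger an automated warning
    return flagged
```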
“Pricing” AI for wage setting and resource allocation
Companies are beginning to apply AI to how they allocate resources and even determine pay. In gig and hourly work, for instance, algorithms set dynamic pay rates, bonuses, or schedules based on real-time data (demand, performance, location, etc.). This algorithmic wage-setting means two workers might be offered different pay for similar work, because an AI has calculated a personalized rate to incentivize each of them. Ride-share and delivery platforms have used such dynamic pricing models – sometimes leading workers to complain of unpredictable or opaque pay cuts. Employers may also use AI to decide how to distribute shifts or assign projects by predicting productivity, or to optimize budgeting and staffing levels across departments. While data-driven resource allocation can improve efficiency, it blurs the line between wage optimization and wage discrimination, as discussed below.
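The sketch below shows, in deliberately simplified form, how a “personalized” rate could be computed from demand and a worker's past acceptance behavior; the base rate, multipliers, and inputs are all invented for illustration.

```python
# Hypothetical dynamic wage-setting; every number here is invented for illustration.
BASE_RATE = 18.00  # assumed base hourly rate, in dollars

def personalized_rate(demand_index: float, acceptance_history: float) -> float:
    """demand_index: 0-1 measure of current demand for workers.
    acceptance_history: 0-1 share of low offers this worker has accepted in the past."""
    demand_bonus = 1.0 + 0.25 * demand_index               # surge-style upward adjustment
    concession_discount = 1.0 - 0.10 * acceptance_history  # lower offers to workers predicted to accept them
    return round(BASE_RATE * demand_bonus * concession_discount, 2)

# Two workers on the same shift can be offered different rates for the same work:
# personalized_rate(0.5, 0.0) is higher than personalized_rate(0.5, 1.0).
```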
AI in performance evaluations and promotions
AI has also entered the arena of employee evaluations, promotions, and terminations. Some HR systems use algorithms to analyze performance data (sales figures, customer reviews, error rates, etc.) and score employees or even recommend personnel actions. For example, AI tools might rank employees against their peers or predict who is “high potential” for promotion. In other cases, algorithms may identify which employees are likely to quit (so management can intervene) or flag those viewed as low performers for additional training or termination. In theory, using AI in evaluations can help standardize assessments and base decisions on data. However, if the underlying data reflect workplace biases or if the algorithm isn’t carefully audited, these AI-driven evaluations can unfairly impact careers. Notably, the same types of automated tools used in hiring are now being repurposed internally – AI systems are available to analyze pay equity, guide layoffs (“reductions in force”), or screen candidates for promotion. This means the potential biases of AI can permeate all stages of employment, not just hiring.
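As a simplified picture of how such scoring might work, the sketch below weights a few performance metrics into a composite score and ranks employees; the metric names and weights are hypothetical.

```python
# Hypothetical composite performance score; metric names and weights are assumptions.
def composite_score(metrics: dict) -> float:
    """Blend raw metrics into a single number used to compare employees."""
    return (0.5 * metrics["sales"]
            + 0.3 * metrics["customer_rating"]
            - 0.2 * metrics["error_rate"])

def rank_employees(employees: dict[str, dict]) -> list[str]:
    """Highest score first; those at the bottom might be flagged for training or termination."""
    return sorted(employees, key=lambda name: composite_score(employees[name]), reverse=True)
```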
While AI tools offer efficiency, they also introduce some legal and ethical risks. Employees may face discrimination or privacy invasions carried out by algorithms rather than humans – but the law can still hold employers accountable. Below are some common risk areas:
Bias and discrimination in hiring
Perhaps the biggest concern is that AI hiring tools can replicate or even amplify human biases. If an algorithm is trained on biased data (for example, past hiring decisions that favored certain demographics), it may systematically screen out women, older applicants, people of color, or other protected groups. For instance, the EEOC’s first AI-bias enforcement action involved recruiting software that automatically rejected older applicants. In that case, an AI tool used by iTutorGroup was allegedly programmed to disqualify women over 55 and men over 60, in violation of the Age Discrimination in Employment Act. Hundreds of qualified older candidates were never even considered because of their birthdates. In another closely watched lawsuit, Mobley v. Workday, a job seeker claimed that an applicant-screening AI had a disparate impact on Black applicants, older workers, and those with disabilities. He argued the algorithm’s personality test unfairly filtered out candidates (including himself) with traits related to anxiety and depression, effectively acting as a gatekeeper biased against mental health conditions. These cases illustrate that discrimination via algorithm is still discrimination under the law. As the court in the Workday case observed, an employer is “no less responsible” for biased hiring decisions simply because it uses AI. In short, companies cannot hide behind “the computer did it” – if an AI tool has a discriminatory effect, employers (and even software vendors in some cases) may be liable.
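To see how easily such a rule can be written into software, here is a simplified illustration of a birthdate cutoff of the kind alleged in the iTutorGroup case; it is reconstructed from public descriptions of the allegations and is not the actual code.

```python
# Simplified illustration of the age cutoff alleged in EEOC v. iTutorGroup; not the actual software.
from datetime import date

def passes_screen(birthdate: date, gender: str, today: date) -> bool:
    age = (today - birthdate).days // 365  # rough age calculation
    if gender == "female" and age > 55:
        return False  # applicant auto-rejected, never seen by a recruiter
    if gender == "male" and age > 60:
        return False  # applicant auto-rejected, never seen by a recruiter
    return True
```

A few lines like these, applied to thousands of applications, can produce discrimination at a scale no individual hiring manager could.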
Privacy concerns with biometric data collection
The use of biometric clocks and ID systems brings serious privacy risks. Biometric identifiers (fingerprints, facial scans, etc.) are highly sensitive personal data, and misuse or breaches can harm workers. Several laws regulate workplace biometric data, most notably the Illinois Biometric Information Privacy Act (BIPA). BIPA requires companies to obtain written consent before collecting an individual’s biometric data and to have a public policy on how that data will be used, stored, and destroyed. Uniquely, Illinois law gives employees the right to sue if their biometric privacy is violated – which has fueled a “boom in class action lawsuits” by workers over fingerprint scans and face recognition at work. Employers from warehouses to healthcare providers have been hit with massive BIPA lawsuits for using fingerprint timekeeping without the required notice and consent. (Illinois’s law is so employee-friendly that it’s led to over 2,000 suits in the last few years, whereas other states like Texas and Washington have biometric laws but no private right of action, resulting in far fewer cases.) The takeaway: if companies implement biometric entry or time-tracking, they must rigorously comply with privacy laws. Failure to do so not only violates employee privacy but can also result in hefty statutory damages and settlements.
Algorithmic wage discrimination and “pricing” AI
When AI starts setting pay or allocating work, it opens the door to new forms of unfair treatment. Algorithms that personalize wages or bonuses could end up paying equally qualified workers different rates for the same work, based on data quirks or predictions about their behavior. Scholars have dubbed this phenomenon “algorithmic wage discrimination,” likening it to price discrimination by AI – but applied to workers’ pay. Such practices “run afoul of … the spirit of equal pay for equal work laws,” potentially undermining longstanding labor standards. For example, if an AI system decides to pay Worker A a lower hourly rate than Worker B (for the same role) because Worker A is “likely to accept less” based on her location or past behavior, that raises obvious fairness concerns. It may also violate laws like the Equal Pay Act or Title VII if the algorithm indirectly ties pay to protected characteristics (even unintentionally). Additionally, there are questions of transparency – workers often have no insight into how an AI determined their pay or schedule. Labor advocates report that “by all accounts, this is a prevalent practice” now, and they expect government agencies to take action against unfair algorithmic pay schemes in the near future (indeed, this problem extends beyond gig platforms into industries like retail, healthcare, and logistics). Regulators are also watching for anti-competitive uses of AI in pricing – for instance, the Department of Justice’s recent case against RealPage alleges that the company’s software helped landlords collude on rent prices, hinting that similar scrutiny could fall on wage-setting algorithms that suppress pay. In sum, companies must ensure that AI-driven compensation or resource decisions don’t discriminate against or exploit employees, or they could face legal challenges.
AI-driven workplace surveillance and its impact on labor rights
The expansion of AI monitoring tools has raised red flags with privacy advocates and labor regulators. Constant surveillance can chill employees’ exercise of workplace rights. For example, if every email, chat, or movement is tracked by AI, workers may fear that engaging in protected activities – like discussing workplace issues, union organizing, or whistleblowing – will be noticed and punished by an algorithm. Regulators are taking note. The General Counsel of the National Labor Relations Board (NLRB) has warned that intrusive electronic monitoring and automated management practices can violate employees’ Section 7 rights (the right to organize and act concertedly) by deterring them from engaging in protected activity. In an NLRB memo, the agency vowed to crack down on “abusive” monitoring that interferes with organizing – for instance, AI tools that watch warehouse workers so closely that any attempt to privately discuss a union is impossible. Beyond labor law, AI surveillance can also lead to claims for invasion of privacy or violations of electronic communications laws if employers overstep boundaries. Companies using these tools should ensure they’re not crossing the line into unlawful surveillance (e.g. recording employees without consent, or using productivity scores to retaliate against legally protected conduct). As a rule of thumb, monitoring should be transparent, proportionate, and not destructive of workplace rights.
Emerging litigation trends and government scrutiny
As AI becomes ingrained in employment decisions, litigation is following. We are seeing novel legal theories being tested – such as holding AI vendors liable as “employment agencies” or “agents” of the employer (as in the Workday case) – and more class actions attacking systemic bias in algorithms. The plaintiff-side bar is actively looking for cases where an AI may have caused widespread harm (for example, a class of applicants wrongly rejected by the same algorithm) and bringing those claims to court. At the same time, government agencies have made AI in the workplace a priority. The Equal Employment Opportunity Commission (EEOC) has launched an AI initiative and is “on the lookout for potential bias,” even suing companies for algorithmic discrimination. The EEOC’s first settlement in this area (iTutorGroup) put employers on notice that delegating decisions to an algorithm is no defense under anti-discrimination laws. Likewise, the Federal Trade Commission (FTC) has indicated it will use its consumer protection powers against unfair or deceptive AI practices. (The FTC has issued business guidance warning that selling or using biased AI tools could be an unfair practice, inviting enforcement action.) State attorneys general are also tuning in – a coalition of states and the DOJ’s Antitrust Division filed the RealPage case noted above, and we may soon see states invoke consumer protection laws for AI-driven employment discrimination or privacy violations. Bottom line: The legal system is adapting to AI, and employers who rely on these tools without due care may find themselves in court or under regulatory investigation.
As AI becomes more embedded in workplace decisions, it’s critical to adopt best practices that mitigate risks and protect employees’ rights. Below are some steps and tips for employers, employees, and plaintiff-side employment attorneys navigating AI in the workplace:
For Employers
Companies utilizing AI in HR should be proactive in ensuring compliance and fairness. First, implement internal oversight: form an AI governance committee to vet any new AI tools and continuously monitor their use and outcomes. This team (which should include HR, legal, and IT professionals) can inventory all AI systems being used – from hiring software to monitoring apps – and evaluate where there might be bias or legal issues. Conduct regular bias audits of AI tools, even beyond what laws require, to check for disparate impact on protected groups. If an algorithm is making recommendations on hiring or promotion, test its results for equity and document those audits. It’s also wise to involve outside experts or use third-party audit services for an impartial review.
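A common starting point for such an audit is a selection-rate comparison based on the four-fifths (80%) rule of thumb from the EEOC’s Uniform Guidelines on Employee Selection Procedures; the sketch below walks through that calculation with placeholder numbers.

```python
# Disparate-impact screening using the four-fifths rule of thumb; the counts are placeholders.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(group_rate: float, highest_group_rate: float) -> float:
    """Ratio of one group's selection rate to the most-selected group's rate.
    A ratio below 0.8 is a common red flag that warrants closer review."""
    return group_rate / highest_group_rate

women_rate = selection_rate(selected=30, applicants=100)  # 0.30
men_rate = selection_rate(selected=50, applicants=100)    # 0.50
ratio = adverse_impact_ratio(women_rate, men_rate)        # 0.60, below 0.8, so flag the tool for review
```

If the ratio falls short, the next step is to investigate why the tool selects one group at a lower rate, and to fix or retire it before it becomes a liability.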
Next, update your policies and agreements: ensure you provide clear notice to employees and applicants about AI usage (transparency builds trust and is required by laws in some places), and obtain consent where needed (especially for biometric or video/voice AI). Any AI vendor contracts should include provisions that require the vendor to comply with anti-discrimination laws and data privacy standards – and ideally give you the right to access information about how the algorithm works (or at least the results of their bias testing). Basically, don’t use “black box” tools blindly; demand accountability from vendors. Employers should also establish a process for human review and override of AI decisions. For example, if an AI flags an employee for termination, have a human manager review the case to avoid errors or unjust outcomes. Providing an avenue for employees to appeal or question AI-driven decisions can prevent legal headaches and improve morale.
Lastly, stay educated on the law: assign someone (or a team) to keep up with the fast-changing legal requirements around AI, so your company can quickly adapt to new rules or guidance. By taking these steps – auditing algorithms, training staff on AI issues, and building compliance checks – employers can reap the benefits of AI while reducing the risk of lawsuits or regulatory violations.
For Employees
Workers and job applicants should be aware of how AI might be affecting them and know their rights in these situations. If you’re applying for jobs, understand that many companies use AI to filter applications or even to conduct video interviews. In some jurisdictions, you have specific rights: for instance, New York City job applicants must be informed if an AI hiring tool is being used and can request an alternative, non-AI evaluation. Similarly, Illinois applicants must give consent before AI is used to analyze their video interviews and are entitled to an explanation of how the AI works. Even outside these areas, it’s reasonable to ask an employer about their hiring process – you might inquire, “Will any automated systems be used in evaluating my application?” Knowing this can help you tailor your approach (for example, optimizing your resume with keywords so the AI doesn’t wrongly screen you out). If you are an employee, pay attention to company policies on monitoring and data collection.
Employers should inform you if they’re recording biometric data (fingerprints, face scans) or electronically monitoring your work. In states like Illinois, you must be given notice and asked for written consent before your employer takes your biometrics – if that didn’t happen and you’re being asked to use a fingerprint scanner, that’s a red flag. You have the right to refuse in such cases and consult a lawyer, since the law may protect you from retaliation for asserting your privacy rights.
More generally, if you suspect that an AI decision was unfair or discriminatory (maybe you keep getting passed over for jobs or promotions that you believe you’re qualified for, or you were terminated based on an enigmatic “productivity score”), you should document what happened. Save emails, screenshots of any automated notices, or anything that might shed light on the criteria used. You can consider raising the issue through internal channels – e.g. ask HR if a human can review your application or performance evaluation. And remember that anti-discrimination laws still protect you: if you believe an algorithm at work is disadvantaging you because of your race, gender, age, disability, or other protected trait, you can file a complaint with the EEOC or your state civil rights agency just as you would for a human decision-maker.
Likewise, if you feel overly surveilled or that your privacy is being invaded (say, a webcam monitoring you at home all day), you might have legal protections under state law or even the NLRA if the surveillance impedes collective action. Don’t be afraid to ask questions – sometimes simply bringing up concerns will prompt an employer to re-examine an AI practice. And if something feels off or you’re not getting good answers, reaching out to an employment attorney can help you understand your options.
AI is undeniably changing the landscape of employment – it can make hiring and management more efficient, but it also carries significant risks of bias, privacy invasion, and unfair treatment. As we’ve discussed, algorithms can discriminate just like humans (sometimes in more insidious ways), and constant surveillance can erode trust and rights in the workplace. The legal system is catching up: new laws and cases are emerging to address these challenges. For employees, the key message is that your rights do not disappear just because a decision was made by an AI. You have the right to fair treatment and privacy, and there are avenues to seek recourse if those rights are violated by an automated system. For employers, the message is to be proactive – implement AI carefully, audit it, and correct issues before they become legal problems.