“AI Is Telling My Employer I’m Not Productive Enough, and Now I’m Being Written Up!” Is That Legal in California?
AI is officially in the workplace.
And for a lot of California employees, it doesn’t feel like “helpful technology.” It feels like a silent supervisor watching every move… and punishing people for being human.
If you’ve ever heard something like, “The system says you were idle too much,” or “Your productivity score is too low,” or “You have too much downtime,” or “Your numbers don’t match the AI,” you’re not alone.
Employers across California are using AI and automated tools to track productivity, rank employees, and even decide who gets coached, written up, or fired. But here’s the issue: AI doesn’t know why you slowed down. It only knows that you did.
And when companies treat AI metrics like the ultimate truth, that’s when things can cross the line into discrimination, retaliation, or wage and hour violations.
This article breaks down what AI productivity monitoring is, why it can be misleading, and when it may become illegal under California employment law.
What does “AI productivity tracking” actually mean?
AI productivity tracking is basically a digital scoring system. Employers use software and automated tools to measure how much you work and how fast you work, usually by tracking activity patterns.
Depending on the job, these systems can track how long you’re considered “active” versus “idle,” how many tasks you complete per hour or per shift, how quickly you respond to assignments, how often you move around or change locations based on GPS, or how long you stay between jobs. In some workplaces, the tracking can be even more aggressive, such as monitoring response time to messages, call activity, or computer use.
Some companies use AI tools to rank employees against each other, as if it’s a competition. That is how you end up in a workplace where an AI “productivity score” becomes more important than safety, common sense, or basic human needs.
The problem: AI doesn’t measure reality. It measures movement.
AI tools don’t understand context. They do not know that you slowed down because you were helping a customer, dealing with traffic, handling a safety concern, learning a new system, or simply using the restroom. They usually interpret all of that as the same thing: “downtime.”
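To make that concrete, here is a hypothetical, minimal sketch (in Python, with invented data and an invented threshold) of how a naive activity scorer works: any gap between logged events longer than a cutoff is counted as “idle,” and the system has no input at all for the reason behind the gap.

```python
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=5)  # hypothetical cutoff

def idle_minutes(event_times):
    """Sum the gaps between consecutive activity events that exceed
    the threshold. Note what is missing: there is no field for *why*
    a gap occurred, so a restroom break, a safety stop, and genuine
    time-wasting all score identically."""
    total = timedelta()
    for prev, curr in zip(event_times, event_times[1:]):
        gap = curr - prev
        if gap > IDLE_THRESHOLD:
            total += gap
    return total.total_seconds() / 60

# Two very different 12-minute gaps...
restroom_break = [datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 6, 9, 12)]
ignored_tasks  = [datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 6, 9, 12)]

# ...produce the exact same "idle" score.
print(idle_minutes(restroom_break))  # 12.0
print(idle_minutes(ignored_tasks))   # 12.0
```

The point of the sketch is the missing input: nothing in the data distinguishes a protected break from slacking off, which is why the score alone cannot fairly support discipline.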
That’s why employees get disciplined for “being idle” even when there was a valid reason for what happened. AI can make a normal human workday look like a performance problem on paper, especially in jobs involving driving, dispatch, customer service, fieldwork, or remote work.
When AI productivity discipline becomes illegal in California
Using AI at work is not automatically illegal.
But it becomes a legal problem when employers use AI-generated productivity scores to punish employees in ways that violate California employment laws, especially where disability, medical conditions, or protected complaints are involved.
Disability discrimination and failure to accommodate
One of the biggest legal risks for employers is disability discrimination under California’s Fair Employment and Housing Act (FEHA).
Productivity dips for health reasons are inevitable. Many medical conditions can affect pace, stamina, pain levels, bathroom needs, or the ability to work continuously without breaks.
If an employer knows an employee has a medical condition, and then disciplines or terminates them for “downtime” without considering the real-world reason for it, that can support a disability discrimination claim. It can also support a failure-to-accommodate claim if the employee needed basic adjustments, such as flexibility around breaks or other reasonable modifications to allow them to do the job.
California has also finalized regulations clarifying that discrimination laws can apply when employers use automated decision systems, including AI tools, to make employment decisions like discipline and termination. Those regulations took effect October 1, 2025.
The bottom line is simple: you do not lose your rights just because a computer is involved.
AI can “target” certain groups more than others
Employers sometimes argue that AI is fair because it treats everyone “the same.” But treating everyone the same can still create discrimination if the impact falls more heavily on certain groups of workers.
For example, a worker who is pregnant may need more breaks. A disabled employee may need flexibility. An older employee may not move as quickly. A worker assigned harder routes or more demanding tasks may look “less productive” on paper even though they are doing more difficult work. A worker learning the job may have slower metrics early on. In those situations, AI scoring can punish workers unevenly, even if no one intends for that to happen.
The U.S. Equal Employment Opportunity Commission has warned that anti-discrimination laws still apply when employers use AI tools, including where automated systems cause discriminatory results.
AI can be used as a cover story after someone complains
Another common pattern is that AI productivity discipline suddenly becomes an issue after an employee speaks up.
Many employees report that they complained about harassment, wage theft, unsafe conditions, discrimination, or other unlawful conduct, and then shortly after, they were written up or fired for “performance” or “productivity.”
The AI score becomes the excuse. That timing matters, and it can support a retaliation claim if the employer’s real motive was punishing the worker for speaking up.
“The AI said so” isn’t a legal defense
Employers often talk about AI scores like they are objective truth. But legally, AI tools are not the decision-maker. The employer still is.
If the AI data is flawed, incomplete, misleading, or applied unfairly, the company may still be responsible for the harm caused. Employers cannot escape liability by pointing to a system and acting like the outcome was automatic or unavoidable.
Even AI tools used in hiring and screening have faced legal scrutiny related to discrimination concerns.
AI pressure can lead to break violations and off-the-clock work
AI productivity systems can create a “never stop moving” culture.
Employees may feel like they cannot take meal breaks, rest breaks, or even restroom breaks without being punished for downtime. That becomes a serious legal problem in California if workers are discouraged from taking legally protected breaks or are pressured to cut breaks short to satisfy AI performance standards.
AI productivity monitoring can also lead to off-the-clock work, where employees feel forced to respond to messages, calls, dispatch instructions, or coaching communications after hours or after clocking out. If you are working, you should be paid, even if the work is “just a quick call” or “just a quick text.”
Signs the AI productivity discipline may be unfair or unlawful
There are certain warning signs that suggest an employee may be getting targeted unfairly through AI metrics. For example:

- The employee is written up for downtime without a clear explanation.
- The manager cannot explain what the productivity standard actually is.
- The employee is penalized for restroom use.
- The employee is disciplined for delays outside their control, like traffic or a lack of assignments.
- The discipline begins after the employee discloses a medical issue.
- The company refuses to show the actual data behind the performance accusation.
What to do if your job is using AI to punish you
If you are worried that AI metrics are being used unfairly against you, it helps to get specific.
- Ask what exact productivity metric you are being judged by and what number is required.
- Request copies of any write-ups, coaching records, or performance reports.
- Document real-world reasons for downtime such as restroom breaks, safety issues, traffic delays, equipment problems, or supervisor-initiated interruptions.
- Save schedules, messages, and screenshots when possible.
If health is involved, it is often important to put it in writing that you are experiencing a medical issue and may need accommodations. And if you are terminated, request your personnel file because it can contain important documents and the employer’s stated reasoning.
When to talk to a California employment lawyer
If you were written up, suspended, or fired because of AI productivity tracking, especially after you disclosed a medical condition or made a complaint about illegal conduct, you may have claims under California law. That can include disability discrimination, failure to accommodate, retaliation, wrongful termination, wage and hour violations, and unpaid wages for off-the-clock work.
If you were recently terminated, acting quickly can also help preserve evidence such as app logs, performance dashboards, internal coaching notes, and call records that may be critical later.
AI may be the newest workplace tool, but California workers are still human and California law still applies.
If your employer expects you to perform like a machine, and punishes you for breaks, medical needs, safety decisions, or real-life limitations, you may have legal options. Email me at Emilia@antonyanmiranda.com to discuss whether you have claims. The first consultation is always free!
Or call us at 619-696-1100 to speak with one of our concierge attorneys.