How AI scheduling cuts hours, pay, and worker stability
Automated scheduling is quietly shrinking paychecks, destabilizing routines, and shifting control from managers to software. Workers like Valerus are seeing the cost first, but the policy gap is much wider.

The paycheck shock behind the software
Valerus felt the change in the simplest place possible: her bank account. In 2025, after her employer adopted new scheduling software and business conditions worsened, her hours became fragmented and unpredictable; by year's end her pay was almost 20% lower than the year before. A single mother supporting three children in Brooklyn, she was forced into hard tradeoffs, including paying the internet bill ahead of utilities and stretching food budgets further than before.
Her case captures the hidden damage of automated management. Scheduling software is often sold as a tool for efficiency, but for hourly workers it can mean fewer hours, unstable shifts, and more time spent trying to plan family life around a moving target. When a paycheck depends on software-generated assignments instead of a stable schedule, the financial hit is not just lower wages. It is lost predictability, lost bargaining power, and a household life built around uncertainty.
How software turns labor into a cost target
The logic behind algorithmic scheduling is straightforward: cut labor costs and maximize productivity. Employers use software to match staffing levels to expected demand, but the same tools can also shift risk from companies to workers. Hours can be split into shorter blocks, changed at the last minute, or reduced without a clear human explanation, leaving workers to absorb the instability.
That pattern is not limited to one workplace or one industry. The Aspen Institute has described stable and predictable schedules as a defining feature of a good job, and has warned that unstable scheduling is a systemic problem affecting millions of workers in retail, food service, and healthcare. At a November 29, 2023 Aspen event, Daniel Schneider of Harvard University's SHIFT project joined Terrysa Guerra of United for Respect, Silvija Martincevic of Deputy, Elizabeth Wagoner of the New York City Office of Labor Policy and Standards, and journalist Shalene Gupta to discuss how companies often cut hours or pay to transfer risk onto workers and their families.
The pressure is especially sharp at larger employers. Panelists at that event noted that companies may recruit workers with promises of 20 or 40 hours a week, then reduce hours later to avoid benefits or trim costs. That practice matters because many workers build childcare, transit, school pickup, and second jobs around the hours they are told to expect. When the schedule moves, the entire household absorbs the shock.
A worker story tied to a much larger system
Valerus works for LanguageLine Solutions, a company whose clients include the UK’s National Health Service and multiple New York City agencies. She and some colleagues are trying to unionize with the Communications Workers of America, a sign that workers are trying to regain some control over the technologies shaping their jobs. The company has also announced that it is experimenting with using AI to do basic interpretation work, which raises a deeper concern: workers are not only being scheduled by software, they may also be asked to compete with software for the work itself.
That is why labor organizers see Valerus’s experience as part of a broader warning. If workers do not get a say in how new technologies are introduced, the harms can mirror the worst parts of algorithmic scheduling. The issue is not just whether AI can perform a task. It is whether employers use AI to fragment work, suppress hours, and redesign jobs in ways workers cannot see or contest.
The company’s parent, Teleperformance, has also faced scrutiny over surveillance. It was accused of surveilling remote workers and later reached an agreement with a labor union federation over those practices. That history matters because scheduling software rarely stands alone. In many workplaces, the same digital systems that assign work can also track performance, monitor attendance, and shape discipline.
Why the problem is spreading beyond platform jobs
The concern no longer belongs only to app-based gig work. The International Labour Organization counted more than 777 active digital labor platforms globally in 2021, up from 142 in 2010, showing how quickly this model has expanded. Human Rights Watch says workers are increasingly hired, compensated, disciplined, and fired by algorithms, and that many of these practices were normalized, if not pioneered, by platform companies.
The scale is not theoretical. Human Rights Watch said 16% of people in the United States had worked for a digital labor platform at least once, and 31% of current or recent workers said it was their main source of income. Those numbers show that platform-style management has become a major part of the labor market, not a niche corner of it.
Fairwork’s 2025 U.S. report goes further, saying algorithmic management is fundamental to digital labor platforms and that AI-powered tools are now used for hiring, scheduling, paying, managing, and surveilling workers. In healthcare, for example, scheduling software can approve a shift, notify the worker and facility, handle clock-in and clock-out, and then send a paycheck. That chain gives the appearance of seamless efficiency, but it also concentrates decision-making in systems that workers may not understand or be able to challenge.
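To make the scale of that automation concrete, the shift chain described above can be sketched as a short pipeline: approval, automatic notification, clock-in and clock-out, then pay calculation, with no human decision point in between. This is a minimal illustration under assumed names; every class, method, and rate here is hypothetical and does not reflect any real vendor's software.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, List

# Hypothetical sketch of an end-to-end automated shift pipeline.
# All names and logic are illustrative, not any real scheduling product.

@dataclass
class Shift:
    worker: str
    facility: str
    hourly_rate: float
    approved: bool = False
    clock_in_at: Optional[datetime] = None
    clock_out_at: Optional[datetime] = None
    notifications: List[str] = field(default_factory=list)

class ShiftPipeline:
    def approve(self, shift: Shift) -> None:
        # Approval triggers notifications to both parties automatically,
        # with no human dispatcher in the loop.
        shift.approved = True
        shift.notifications.append(f"notified worker: {shift.worker}")
        shift.notifications.append(f"notified facility: {shift.facility}")

    def clock_in(self, shift: Shift, when: datetime) -> None:
        shift.clock_in_at = when

    def clock_out(self, shift: Shift, when: datetime) -> None:
        shift.clock_out_at = when

    def pay(self, shift: Shift) -> float:
        # Pay is computed directly from logged timestamps; the worker
        # never sees how the number was produced.
        hours = (shift.clock_out_at - shift.clock_in_at).total_seconds() / 3600
        return round(hours * shift.hourly_rate, 2)

# One shift, start to finish, handled entirely by software.
pipeline = ShiftPipeline()
shift = Shift(worker="A. Rivera", facility="Midtown Clinic", hourly_rate=20.0)
pipeline.approve(shift)
pipeline.clock_in(shift, datetime(2025, 3, 3, 9, 0))
pipeline.clock_out(shift, datetime(2025, 3, 3, 15, 30))
print(pipeline.pay(shift))  # 6.5 hours at $20/hr -> 130.0
```

The point of the sketch is the absence of steps: nowhere in the chain does a person review the assignment, the hours, or the resulting paycheck, which is exactly the concentration of decision-making the Fairwork report describes.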
Fairwork warns that this technology can displace traditional management relationships, transparency, and accountability. It can also hyper-quantify work to improve investor-facing KPIs and cut operational costs. In practice, that means the software is not only recording work. It is actively redefining what counts as productive, affordable, and dispensable.
The European warning and the push for transparency
The United States is not the only place wrestling with these questions. Reuters reported in March 2024 that drivers and delivery riders in Europe said opaque algorithmic management could lead to random job assignments, performance ratings, and even account deactivation, with direct effects on earnings and morale. That reporting came alongside the European Union’s Platform Work Directive, which was presented as a step toward greater transparency, human oversight, and access to information about AI-driven workplace decisions.
That approach points to the core policy question: who gets to see, explain, and challenge the decisions made by workplace algorithms. Without disclosure rules, workers may never know why hours were cut, why a shift disappeared, or why pay fell. Without human oversight, software can become the default manager even when the consequences are deeply personal.
Why researchers are calling for a new labor standard
A 2026 Equitable Growth brief adds another warning sign. It says an audit of 500 AI labor-management vendors found that traditional employers in healthcare, customer service, logistics, and retail are using automated systems to set compensation structures and calculate individual wages. The brief warns that, without policy intervention, these practices could become normalized, increasing income uncertainty, bias, and wage-setting opacity.
That is the heart of the problem. When software controls schedules, pay, and discipline at once, workers may face a system that is efficient for employers but difficult to audit, challenge, or regulate. The danger is not only lower pay in a single week. It is the slow creation of a labor market where instability becomes standard operating procedure.
Valerus’s lost hours are therefore more than one worker’s story. They are a preview of what happens when scheduling, surveillance, and compensation are folded into the same algorithmic system. The stakes now are whether labor protections can catch up before that model becomes the default way work is managed.