The real danger that artificial intelligence poses to work is not just job loss – it is the growing divide between people who use AI to extend their skills and those whose working lives are increasingly shaped by opaque, AI-powered systems of surveillance and control.
The debate about artificial intelligence and how it will affect workers is stuck in the wrong place. On one side are warnings that machines are coming for millions of jobs. On the other are claims that AI will turbocharge productivity. Both stories miss what is already happening in workplaces across the world, from Britain to Kenya to the United States.
For some, AI can help remove the drudgery from daily work. These are often people in better-paid, higher-autonomy roles: analysts, consultants, lawyers, academics, managers. In these jobs, provided AI is being rolled out to augment workers rather than replace them, it can feel like a copilot. It can support human judgment, speed up routine tasks and create space for more creative thinking.
For many others, though, AI is not an assistant. It is a boss.
It appears in scheduling and monitoring tools, route optimisation software and automated performance dashboards – all systems that decide who gets what shift, how long a task should take and whether someone is performing at their maximum capacity. In these workplaces, AI is not something you use. It is something that watches and rules you.
That is the new divide we should all be paying attention to.
A third of UK employers are already using “bossware” technology to monitor workers’ online activity. Such surveillance, already widespread, is a glimpse of what is to come.
This is why the question of whether AI is “good” or “bad” is pointlessly crude. The truth is more nuanced. Employers are using AI to empower some workers while subjecting others to more intensive, inhumane forms of oversight. It is creating new opportunities at the top of the labour market while tightening control lower down.
And further down the line, the same methods of algorithmic management and surveillance that are being honed in warehouses, delivery vans and gig work platforms are likely to spread to corporate headquarters, hospitals and schools. We’re already seeing this at companies including Amazon, where software engineers say they’re being surveilled and pressured to use AI to boost productivity, even when it actually slows them down. And Meta plans to track and capture its employees’ keystrokes, mouse movements and clicks to train its AI models. Some of the workers benefiting from the rise of AI now are poised to eventually lose that advantage.
My own research over the past decade on worker-AI coexistence, which was cited in the 2024 White House economic report, suggests that the most pressing issue about AI’s impact on work is not immediate mass unemployment. It is the widening gap in skills, autonomy and wellbeing between those who get to work with AI and those who are finding themselves managed by it. Many jobs will remain in the future, but they will be more pressured, more fragmented and less human.
That matters because work is not just about income. It is also about dignity, trust and control.
During the pandemic, many people became acutely aware of how deeply work affects mental wellbeing. AI-managed workplaces are only intensifying the pressures of work. When every click, step, call or pause a worker makes can be measured and graded by a system that they cannot fully see or challenge, the effect is stress.
For people in warehousing, retail, hospitality, logistics, customer service or the gig economy, it can mean being pushed harder by systems that are presented as neutral, objective or efficient, even when they are anything but.
This is not just a technical problem. It is a social, political and moral one.
Take Britain, which likes to present itself as being ambitious about AI. There are now major plans to expand AI skills across the workforce. All of that sounds positive. But beneath the rhetoric lies a more uncomfortable reality: many organisations are still poorly prepared to introduce AI fairly.
A recent global survey of business leaders found that although most say AI skills are now a source of competitive advantage, relatively few dedicate a meaningful budget to developing their employees’ AI skills. Even fewer have strong governance in place. Many managers still have little real responsibility for helping their teams adapt. That is how inequality hardens.
If better-paid workers are trained to use AI while lower-paid workers are simply exposed to it through surveillance and automated management, then this will not be a story of shared progress. It will be a story of deepening imbalance.
Workers across the economy need access to meaningful training, not just in using digital tools but in building the wider skills that matter even more in an AI age: judgment, communication and critical thinking.
We also need basic democratic principles in the workplace. Systems that affect pay and performance should be transparent and contestable. Most of all, workers need a voice in how these technologies are introduced. AI should not be something used on people behind closed doors and then justified in the language of efficiency. It should be shaped by the people whose lives it will affect – and research has found that involving workers in the process improves their job quality and allows employers to integrate AI more effectively.
The choice about how AI will reshape work is not being made in Silicon Valley boardrooms or summit speeches. It is being made right now, workplace by workplace, across Britain and around the world. And unless we pay attention, the new AI divide will become one more inequality that arrives quietly, embeds itself deeply and is only recognised once it is already everywhere.