Investigating how AI is understood by, and is impacting, Welsh workers and trade unionists
Wales TUC are looking into how AI is affecting workers across different sectors, and how trade unionists are managing the increasing rollout of data-driven technologies
Wales TUC - supported by Dr Juan Grigera from King's College London and Adam Cantwell-Corn from Connected by Data - are investigating how workers in a range of sectors are responding to digitalisation and AI at work. This write-up is intended to capture the workers' key comments, which will later inform a full report.
Artificial intelligence is having a dehumanising effect on workers as they are continuously monitored. What’s more, it’s leading to workers being deskilled, having their tasks restructured and, in some cases, being managed out of their jobs.
These were the views of an experienced group of public sector trade union representatives at the first in a series of workshops.
In their role as volunteers supporting colleagues at work, the reps identified several concerns related to AI and digitalisation. These included the underlying motivations of management in rolling out software, a creeping continuous surveillance of workers, evident flaws in the software used to automate tasks, and the irreplaceable need for human labour in many jobs.
“It’s being driven by a lack of trust in staff”
M, a civil servant, said that software-based surveillance is an issue in the civil service. She said:
“Our employer is interested in using Microsoft’s ‘Viva Insights’ to look at how staff’s time is spent. […] I believe that it’s being driven by a lack of trust in staff and pressures on budgets which require everyone to work even harder.”
Privacy campaigners have raised major concerns about the capability of Microsoft’s 365 platform to monitor workers intensively without their knowledge. Microsoft has urged employers to focus on enhancing productivity and the worker experience rather than surveillance. But M was sceptical:
“Senior managers have said that the data can help them work out where workloads are excessive, which they can then address”. M added that “at the moment, nothing actually happens when you do raise excessive workloads on behalf of staff. The idea that data from Microsoft would answer the problem is unpersuasive.” In effect, management’s case for rolling out a sophisticated software system seems misleading, and amounts to telling workers that “listening to you doesn’t count”.
Surveillance is also a live issue for delivery workers. The way the data is collected could fail to account for differences among workers in age and physical ability, and could be used to sanction those who ‘under-perform’.
For instance, J who works for a large delivery company said:
“Tech should benefit workers. It shouldn’t be used as a stick to beat people with. My colleagues have experienced the overbearing use of technology to push for better performance. Workers are now issued with electronic devices with GPS trackers. If we stop for one minute a yellow dot comes up on a map – and it is reported to a manager.”
“There may be very good reasons for a one-minute stop. For example, you could be talking to a customer. The new system can lead to people being hauled in for a conversation and told ‘you stopped for fifteen minutes over the course of a week; effectively you weren’t working then.’ They are asked to justify themselves. I think the new app dehumanises the workplace. […] we are treated like robots. We all have different abilities and ages, but the performance app won’t take that into account.”
Algorithmic surveillance and the intensification of work have been hot topics recently, with Amazon and Royal Mail grilled by parliamentarians and in the press over their practices. At an event jointly hosted by the TUC and Connected by Data in June 2023, workers from both companies were in direct conversation with cross-party MPs about how law and policy can protect workers’ rights.
“The tools are flawed”
Workers are also concerned about flaws in the design and implementation of technology. In particular, programmes which measure productivity fail to account for the different processing times required to handle issues of varying complexity.
G, a civil servant managing sensitive casework issues raised by citizens, reflected that:
“There have always been performance targets and surveillance. But the tools now are flawed. They pick up when you start a piece of work and when you end it but not the middle bit when you’re considering the detailed legal side. People are being dismissed for their performance … things are not great.”
G has seen how AI has been used to make automated decisions on simpler cases. However, this hasn’t been positive for workers, nor for the citizens making the complaints:
“The less complex jobs are gone, so those people who dealt with those are being performance-managed out. It’s worrying. It’s hard when you have to account for every minute of your day. There have always been people who take the mickey – managers knew that – and could deal with it. But this is different.”
G stated that middle managers are then deciding to manage colleagues out of employment – with dire consequences for relationships and workplace culture.
Aside from the impact on jobs, G also reported that flaws in a critically important programme had caused serious issues for her department and, in turn, citizens:
“As part of our work we pay fees to expensive expert external service providers. The system is now automated – ‘untouched by human hands.’ But it has a fault rate of 19%! That’s compared to a 3% fault rate for those payments which are manually inputted.”
“You need human involvement”
Scepticism about AI’s ability to fully replace human work was also raised. D, a civil servant who works on Welsh language translation said:
“As a translator, I use tech a lot. In effect I’ve become an editor. I can now produce more text easily and quickly. I’m seeing the advantages. There was a nasty time about 10 years ago – when it was said ‘we should be paying Welsh language translators less if they are using technology’ – but that went away.”
D also reflected on the other types of documents produced at his place of work, whether AI could be used to produce them, and how it would interact with human input.
“AI couldn’t be relied on to produce press releases, letters and strategies. You need human involvement. AI could never assess politically whether something should be said in a complex changing world.”
C is a radiographer at a hospital, responsible for various x-ray-based studies to visualise internal body parts. Recently, their department received AI-enabled diagnostic equipment, although it is not yet in use. C explained concerns about the potential for AI to misdiagnose and about staff being deskilled in the medium term, and highlighted the urgent need for legislation to govern its safe use.
The Welsh Workforce Partnership Council and ‘Principles of Digitalisation’
The Workforce Partnership Council is “a tripartite social partnership structure of the trade unions, employers and Welsh Government covering the devolved public services in Wales”. The council has produced principles for ‘Managing the Transition to Digital Workplace’ that set out a framework for a worker-centred approach to digitalisation.
However, the reps in the workshop had not heard of or seen the principles previously. Though generally agreeing that the principles could be useful, the reps queried some of the details. For example, the first principle of Employee Voice and Participation states that workers should be “consulted at an early stage over the introduction of digitalisation”. However, as M pointed out, “we are already down the road of digitalisation – so when is it new [and introduced]?”
D agreed that the sequencing is important, and that worker participation needs to be ongoing, not just at the beginning. D said that sometimes:
“It is difficult to foresee when something’s introduced what the problems might be. We can’t just be Luddites – we have to go with them. But we have a clear mandate to protect members when things go wrong. It goes without saying that unions should also be consulted from the start.”
What can these experiences tell us about a worker-centred approach to digitalisation and AI?
The contributions from these workers showed how varied the experience of AI and digitalisation is. What the participants had in common was a concern that technological change would worsen the workplace.
In 2022, the TUC warned that intrusive worker surveillance tech and AI risk “spiralling out of control” without stronger regulation to protect workers. As part of a broader effort to ensure digitalisation improves rather than degrades the worker experience, the TUC called for a statutory duty to consult trade unions before an employer introduces AI and automated decision-making systems.
The TUC have also called for an employment bill which includes the right to disconnect, digital rights to improve transparency around use of surveillance tech, and a universal right to human review of high-risk decisions made by technology.
Unfortunately, the UK government’s Data Protection and Digital Information Bill actively undermines the already insufficient protections within the UK GDPR. The TUC, Connected by Data and others are campaigning to amend the Bill, while the TUC is also running a project to advance a new AI and workers’ rights Bill.
Initiatives such as the Workforce Partnership digitalisation principles are crucial steps towards proactively ensuring technological change benefits workers. Yet the responses from this group reveal the need to further engage shop-floor workers in training and awareness of how to understand and negotiate technological changes in the workplace.
This industrial-level organising would be supported by the legislative and regulatory changes recommended by the TUC.
This write-up is from the first workshop in a project investigating the impact and understanding of AI and digitalisation among Welsh workers, led by Wales TUC and supported by Dr Juan Grigera from King’s College London and Adam Cantwell-Corn from Connected by Data.