Connected Conversation: Generative AI and Worker Rights

Jeni Tennison

Across industries, companies are seeking to exploit content generated by their workers, past and present, to create customised generative AI. The same pattern is appearing across journalism, education, the creative industries, the legal profession, the public sector, research and consultancy: wherever people write documents, the organisations that own the rights to that text – often their employers – are aiming to reuse it to build or customise AI language models.

We’re also seeing organisations introduce general-purpose generative AI tools thanks to their incorporation into office suites, such as Microsoft Copilot and Google’s Gemini, or simply expect workers to use ChatGPT (or block them from doing so).

Some workers welcome AI as a new tool that can help them in their work, but many are concerned about the use of content they have created as training data. They’re worried about the impacts of AI on the kind of work they do, the quality of their experience of work, and how much they’re paid. We’ve also heard concerns about reduced opportunities to build skills for those earlier in their careers.

And that’s just the end users of these systems. We also know that data workers earlier in the supply chain are being used as cheap labour, labelling data and fine-tuning models in sweatshop conditions.

In this Connected Conversation we brought together people advocating for worker rights from across different sectors to learn from and build solidarity with each other. We explored questions like:

  • What aspects of the adoption of generative AI do workers and their representatives need to guard against?
  • Where are unions putting in place red lines, and what are they negotiating for?
  • How and why does this differ across sectors and contexts?
  • What kinds of rights do workers need in law to enable them to protect their interests?
  • How can workers show solidarity across sectors and across the supply chain to achieve change?

Our three speakers shared their provocations around generative AI and workers’ rights.

Janis Wong, Data & Technology Law Policy Advisor at The Law Society

There is a history of technology being deployed in the legal system (Law Tech), and AI is now starting to be implemented in areas such as case management and research. This is raising questions for the profession about client confidentiality, legal privilege and responsibilities towards regulators and clients. AI’s impact differs across the profession, as there are many areas of practice and different organisations in the space – from law firms that are innovating and developing AI systems, to SMEs and in-house counsel. Some are merely seeking to get to grips with ‘off the shelf’ generative AI, and The Law Society is seeking to ensure that a two-tier system (the haves and have-nots) is avoided. There is also a ‘service’ nature to the legal profession, where solicitors (for example) are in service to their clients. This raises questions about client needs and wishes in relation to AI being used.

From a workers’ rights perspective there are many different roles within the legal profession (e.g. barristers, solicitors, paralegals, court staff), and all are directly or indirectly impacted by the “billable hours” business model. Those employed in the legal profession are not traditionally considered to be “data workers”, but increasingly, as technology developments are implemented, they are.

The UK’s Master of the Rolls, our second most senior judge, is a strong advocate for technology and has offered the provocation that, in a future legal system, the profession may be liable for not using technology if it is proven to be more accurate and efficient.

Adio Dinika, Research Fellow at DAIR

There is a large-scale invisible workforce (sometimes referred to as ‘ghost workers’) being exploited in the development of AI, with humans having to annotate content and label data to train LLMs. The public face of AI is the billionaires who own the big companies and promote its deployment, not the large numbers of displaced individuals on the economic and social margins, such as Syrian refugees in Lebanon and Venezuelans being paid in Amazon vouchers. There is global exploitation of data workers, and the lines of the data supply chain map onto those of the slave trade. There will be more on this topic in a forthcoming book:

Where does AI come from? A global case study across Europe, Africa, and Latin America, New Political Economy (Antonio Casilli, Paola Tubaro, Maxime Cornet, Clément Le Ludec, Juana Torres Cierpe) [forthcoming 2024]

The exploitation is greatest for those already disadvantaged but it can be argued that we are all being exploited. We are acting like these data annotators when completing the image-based CAPTCHA tests. Our data - including our social media content - is also increasingly being used to train AI systems so we should not think this is an issue that only affects other people.

This leads to a question of content ownership: who has the right to do what with what? Many films include a disclaimer (as part of the film studios’ accountability) that “no animals were harmed in the making of this film”. Should we be calling for similar accountability around the use of AI, and require a similar disclaimer stating that no humans were hurt? Companies need to be held accountable for their use of AI and our data.

Aparna Surendra, Manager at AWO

AWO is named with a nod to Richard Brautigan’s poem “All Watched Over by Machines of Loving Grace.” The slides for this provocation are available here.

The Trades Union Congress (TUC) and AWO have partnered to create a generative AI policy toolkit. It covers policy and legal perspectives and seeks to support collective bargaining for union members. AWO has also been working with the professional footballers’ union (FIFPRO) and Workers Info Exchange. They also supported the development of an AI choral model for exhibition at the Serpentine, ensuring that all choristers whose voices were used in the development were appropriately engaged and had ownership of their data.

When considering the deployment (rather than development) of generative AI, there are a number of elements to consider:

  1. Concerns vary by sector – the creative industries in particular have concerns about IP and labour displacement; at the same time, there are individuals who want to use and embrace AI and are seeking training to best understand it.
  2. Changing job descriptions / degrading of roles – understanding this impact and managing it most effectively requires mapping across kinds of displacement and across sectors.
  3. The entry point for generative AI in the workplace – some organisations will be purchasing or developing bespoke AI, but generally it will be enterprise software, such as the Microsoft Office suite, that adds AI and then rolls it into workplaces. PwC is already demonstrating the adoption and integration of generative AI (e.g. ChatGPT) into service delivery. It is anticipated that generative AI will fairly rapidly become part of our day-to-day IT stack.
  4. What is it good for? – Generative AI is useful, and does work, in some specific situations but, as of now, there is little applied research. Workers are likely to be the first to identify where it is “working” and where it isn’t.
  5. Accuracy – it is well known that generative AI can fabricate information, and this can lead to unclear liability issues. In an early example, Air Canada was found liable for inaccurate policy information provided by its chatbot. There are questions about where workers sit in this process. Are they expected to check every chatbot response? If workers don’t correctly identify an LLM-produced error, will they be protected? Fact-checking may be manageable with small amounts of text, but what if generative AI is producing longer reports or is integrated into spreadsheets?
  6. Data is being used to train AI systems – Whether it be through lawsuits or partnerships, employers are setting the terms of how content can be used by GenAI providers. Often, this content is generated by workers through the course of their employment contracts. Where are workers within these negotiations?
  7. Solidarity – different sectors are being impacted differently but there are common themes. Professional footballers and Deliveroo drivers are both sets of workers generating large amounts of data about their workload and performance and are then judged and managed accordingly.

Discussion

Participants expressed concern that human critical thinking is being pushed to the side, that there is a risk AI is “making us dumber”, creating false “facts” and forming an AI-to-AI loop. This concern links to findings such as an experiment showing that AI boosts creativity at the individual level but lowers it collectively. This technological development currently risks driving individualisation.

Considering the question “what should we be asking, and what should the negotiating red lines be?”, participants noted that a lot of AI / technology is implemented as a “pilot” or “trial”, which can inhibit union engagement (depending on the union, sector and workplace) and can make it difficult to challenge further down the implementation process. Participants felt it was important that unions engage earlier in these processes.

Adam talked about some of the work we’ve done at Connected by Data around ‘work’ and with unions. There was agreement that there is a plurality of views around AI deployment, particularly in less exploitative workplaces. Some workers are concerned, but some are happy to embrace AI and are merely seeking training to better understand and use it. As a result, workers may need to get more specific about the parameters within which AI is going to be implemented, to identify whether it will deskill or displace them. In turn, collective bargaining agreements should pick up these granular distinctions. Similarly, there is variation between employers regarding transparency about the technology being used and procured. How was the vendor selected? How will the pilot be measured? Will worker productivity be measured as part of ‘success’? If the drive for implementation is cost cutting, then this often risks reducing transparency or meaningful reflection on success factors.

Participants noted that some organisations argue “there’s a human in the loop” to mitigate concerns about ‘robots taking over’. In practical terms, however, it is often the person who bears the liability, even though we know the AI produces inaccurate results. This is bad for the human as well as for the final output. The degrading nature of the work that increased use of AI may result in is also a concern. Using AI to triage or deal with ‘easy’ requests, so that only harder cases are passed to the human, results in a workload that is more draining. Participants observed that where this is being implemented currently, productivity demands on workers aren’t being adjusted: they remain at the average target (based on a combination of simple and complex cases) rather than acknowledging that workers are now dealing with complex cases that take longer.

The discussion explored the role of data governance in these labour and AI governance issues. Could collective data rights or licensing help? It was felt that there is a need to keep repeating “no AI governance without data governance”. There may need to be a shift from consent-based to affirmative/enabling legal frameworks to ensure more meaningful engagement. There is a model being explored by some, including FIFPRO, to see whether unions could become stewards of data for workers (like data trusts).

A side discussion acknowledged that discussions about AI are framed in majority languages; many people are excluded simply by being ‘minority language’ users, and minority-language work requires humans to continue doing tasks that are being automated elsewhere. TUC Cymru has commissioned research on the impact of AI on workers using minority languages, which will be published in the spring.

Finally, participants asked if there are lessons workers can learn across sectors, and not just from moments of crisis. Can journalists – and others in the early waves of being impacted – warn workers in other sectors about where generative AI is going to come in? We must try to learn from a full range of approaches and not just one policy. We can look at things like the right to do mechanical tasks to give your brain a break, for example. We are ultimately all data workers, and participants observed that generative AI is “like a beast that will come for us all”.
