“Negotiated Change Not Automation” – Gina Neff speaking at the ODI

How AI and Workers Can or Should Unite



Professor Gina Neff is not a cat. This is Pepper, my cat. Contemplating her data rights as a key worker. And a shameless attempt to grab attention to tell you more about this fascinating webinar.

Gina Neff, of the Minderoo Centre for Technology and Democracy at the University of Cambridge, gave her presentation at the ODI’s data-centric webinar.

It focused on workers’ access to data to make AI work properly. Gina Neff opened by saying that generative AI is the biggest global social experiment humans have experienced collectively, and one unfolding at lightning speed. She highlighted that, amid this excitement, we miss how people make choices around these systems and how those choices transform daily life.

Gina Neff asked, ‘Will AI outsource misery? Where is the solidarity with people whose labour we can’t see in the system but benefit from?’

She then mentioned Mary L. Gray’s work, Ghost Work, which immediately led me to look it up.

Neff’s own research has investigated the neglected agency of people who work with AI tools. What choices do they make with, and to adapt to, these tools? How does AI operate within the existing rules around them?

She makes three key arguments for opening data to workers. These are:

  1. Increased sociotechnical risks for AI systems
  2. Ability to maintain existing legal protections
  3. Innovation and adoption at the coalface of change

Focusing on the human in the AI chain of practice means we preserve accountability. Just as important is preserving the skill of ‘humans-in-the-loop’ and bringing oversight to shifts in data and models.

With long supply chains of data, services, apps and models, Gina Neff argues that people must continue to be accountable for decisions that cause harm, and that we must avoid ‘the moral crumple zone’. Just as a car’s crumple zone absorbs the force of a collision, the human in a complex automated system ends up absorbing the impact of moral and legal responsibilities.

Skilled workers-in-the-loop unlock the power of AI to make assessments and judgement calls. Gina Neff cited a Cornell University study which highlighted the importance of maintaining traditional training routes alongside robotic-surgery training: the traditional methods build the foundational skills and experience that are crucial for making informed judgement calls during surgery.

There are also evolving roles, like the ‘paraclinical’ who manages and develops data systems in healthcare. They bring oversight to data or model drift, making sure the data is right, patient access is right, and the system stays on course to deliver the benefits intended.
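To make ‘oversight of data or model drift’ concrete, here is a minimal sketch (my illustration, not from the talk) of the kind of automated check such a role might supervise: a two-sample Kolmogorov-Smirnov test that flags when live model inputs stop resembling the data the model was validated on. The function name, threshold and data are all hypothetical.

```python
# Hypothetical illustration of a basic data-drift check: compare live
# model inputs against the reference data the model was validated on,
# using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, recent: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Return True if `recent` looks drawn from a different
    distribution than `reference`, i.e. possible data drift."""
    _statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

# Illustrative usage: a lab-value feed whose distribution has shifted.
reference = np.random.normal(loc=5.0, scale=1.0, size=5_000)  # validation-era data
recent = np.random.normal(loc=5.6, scale=1.0, size=500)       # live data
if drift_alert(reference, recent):
    print("Possible data drift: escalate for human review.")
```

The human role here is exactly what Neff describes: deciding whether a flagged shift is a real problem, a seasonal pattern, or simply a change in how the data is recorded.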

Automated processes are not automatic. They require people to design, install, adapt and maintain them, and to make judgements based on their knowledge of the operational context. People see the hidden connections in a system.

Professor Neff spent 16 years studying large-scale digital change in the construction sector. She observed that negotiated innovation is better than automation. When we watch workers with new tech, there are four distinct stages: sensemaking, expectation, practice and negotiating change. These allow the benefits and costs of change to be balanced more fairly.

What can we do in our projects and workplaces? Gina Neff makes four recommendations:

  1. Plan for workplace impact, not just worker displacement
  2. Embrace that AI sociotechnical safety relies on workers-in-the-loop
  3. Data are plural; workers need collective data sets
  4. Capacity building for AI relies on people being able to innovate

I found this session fascinating, as it linked so many aspects that are crucial to business success but are often siloed: workforce futures, project delivery, and therefore how we tender for work as end-clients themselves, and the support they need, change.

Need help with proposals? Data ethics? Projects? Policy? Workforces?

Let's talk
