We had a Connected by Data team retreat day last week, in which we got into some provocative discussion, not least the question of whether generative AI is an inherently harmful technology that should be resisted. I think broadly yes. But this also prompted me to think a little more about the kind of AI that made me interested in the field in the first place: robotics. I think there are interesting ways this is very different from generative AI (in my view the Roomba vastly outstrips ChatGPT in importance and usefulness), but there are also commonalities in the discourse across AI types.
In our team session we also discussed public participation: who wants to participate, why, and when, versus the notion that the public really just want to leave important decisions to “experts” and find being consulted boring. For an organisation whose purpose is to increase participation in data and AI decision making, having these discussions helps us articulate our rationale. It’s also made me think more about public participation as a goal in and of itself.
I’ll pick up both threads in my notes for this week.
On Robots
This week I discovered that ‘Flippy’ is back: a failed burger-flipping robot from circa six years ago has resurfaced in today’s headlines. It’s important to note that Flippy is no longer attempting to flip burgers, but has been downgraded to shaking fry baskets. I’m loosely interested in the ongoing story of Flippy because:
a) The discourse around Flippy, and what it means for workers, is revealing of how socially acceptable this type of technology is (or is not).
b) It demonstrates the belligerent persistence of the idea that (fast, if not all) food can and should be made by robots.
Point a) indicates some change. Back in the 2018 BBC article, the head of tech at Cali Burger claimed the robot’s failure came about because kitchen staff needed to learn to “choreograph” their movements around it. In today’s Guardian piece, Cali Burger makes claims about freeing up staff time to do more “customer engagement” work, and offers the vapid “learn more skills” argument. I think this points to public sympathies aligning with the worker over this timeframe, pushing companies to change how they frame the introduction of these types of tech.
Point b) is, I think, indicative of a deeply held, sci-fi-inspired vision of the future (flying cars, superintelligent AI, and a burger produced for you entirely by shining chrome with no human hands involved). It is noteworthy that Flippy’s developers have had to scale down their ambitions for what the robot can do, but without conceding that perhaps human skill is actually necessary in the making of a meal.
It’s important to note that the safety concerns and unpleasantness of working a chip fryer are now cited as reasons why Flippy is good, but these were not what drove its development. This post hoc rationale for welcoming tech that isn’t delivering on its actual aims is, I think, a feature of the discourse we will see more and more of.
I think we may be seeing the real beginnings of a clash between the sci-fi vision and reality. Last week saw the story break that Amazon’s automated shops were in fact run by Indian data labellers and not by AI. This was not surprising in the slightest: just another case in the “Mechanical Turk” genre of trick. But what’s interesting is how ready people seem to believe that AI/machines could do these things, perhaps even inevitably will do these things, despite the repeated evidence of how much better humans are at them. I’m interested to see whether there will be a shift in the believability of these sci-fi visions, and whether over time they will come to seem rather dated and silly, like personal jetpacks.
Coupling these two points, I can see a possible coalescence of a different narrative, one which centres the importance of the human worker, and maybe even challenges thinking about the worthiness of work in service, retail and food production. In Resisting AI, Dan McQuillan calls for methods of critical pedagogy, as both learning and unlearning are needed to re-evaluate what is possible and what is desirable. I think these kinds of cases point to junctures where that can start to happen, and I hope that just as we’ve seen the social acceptability of certain AI narratives shift, we will see critical consciousness around AI building too.
Flippy (especially now he’s been downgraded from flipper to shaker) also makes me think of Shakey, the robot that fascinated me in my master’s robotics seminars. Those seminars also heavily featured Rodney Brooks, who recently tweeted his own three laws of robotics; like good students, we can apply them here:
- Flippy is designed for show, advertising to those who look forward to sci-fi futures, and vastly underperforms.
- Flippy in its original conception was meant to take away agency from workers (i.e. replace them), and the only reason it is not doing so now is that it cannot.
- Interestingly, Brooks stipulates that over 10 years of steady development are needed for a robot to deliver 99.9% of the time, but I think Shakey might have done alright jiggling a fry basket back in the 70s.
I think here we can see an interesting divergence and intersection between robotics and generative AI. I’ve thought about whether Brooks’ laws could apply to AI across the board, but I think there are important differences when it comes to robotics. Essentially, I think Flippy’s developers are talking the generative AI talk that centres future visions over reality, but because they’re operating in the physical realm of robotics, it’s much harder to hide the lack of walk.
I don’t expect to be sampling Flippy’s chips any time soon. I do expect to see more AI hype-meets-reality news.
On Public Participation and Field Building
Last week I also attended a workshop about field building hosted by Place Matters and delivered by Bridgespan Group. This made me reflect on my role at Connected by Data as Field Building Lead. I understand my job to be facilitating connection-making across the participatory AI space, broadly conceived: linking people, sharing ideas, and fostering work towards increasing public voice in data and AI decision making. The workshop framed field building more tightly, oriented around specific change goals (for example, changing public health policy around smoking). This has made me wonder: is “more public participation” too loose a goal?
I’ve spent a lot of time thinking about the arguments for public participation in AI, mostly being wary of claims that it is a good in and of itself. I’ve made the case for the various better outcomes achieved when the public are involved, framing public participation pragmatically. But reading Resisting AI, and thinking about how certain visions of the future (see Flippy) are driving technology development, I increasingly think that public participation in shaping the narratives, visions, hopes and aims for AI is inherently important. I think that taking part in deliberation processes with other members of the public, and collectively engaging in learning (and unlearning) about AI, is an important way of building critical consciousness. Yes, the outcomes of a deliberation process are valuable for practical applications. But the process itself is, for me, potentially of more value, even if that value is of a kind that is not easy to measure and thus to advocate for.
I think “more public participation” is a goal I can happily work towards. Maybe the challenge I will set myself is articulating its value in and of itself.