Weeknotes

Emily Macaulay

What I’ve been doing

I’ve attended three webinars this week:

  • the second ONS one, on ‘citizens and environmental statistics’, which was interesting for how data from counts are gathered and extrapolated (amongst other insights);
  • Social warming with Charles Arthur (hosted by SCVO), an interesting exploration of the parallels between the damage being caused by social media and that caused by global warming, and, perhaps more worryingly, how little change or action is being taken on either despite us all knowing there is a problem. Charles noted that we live in the attention economy, and studies show that the most attention-grabbing content is the most polarising and mostly shows people doing bad (rather than good), so this is what social media platforms will continue to devote themselves to;
  • Platformland, with Richard Pope & Emer Coleman in conversation (lots of people in my current professional circle are talking about the book), in which I enjoyed the focus on the public sector from a supportive perspective (rather than a privatisation one).

What I need to take care of

Tomorrow is our next team monthly meetup. Always nice to be in the same physical space. The timings are different to usual though, so I need to take care of an early night!

At the moment my diary next week has a clear day - which I want to use for focus time to progress some chunkier items. To achieve that I’ll need to have cleared the more administrative / weekly actions earlier in the week to reduce the distraction.

I’m also getting married one month today! Probably ought to keep an eye on taking care of that.

What I’ve been inspired or challenged or moved by

The article I read this week about AI Chatbots (see below) was challenging.

What I’ve been reading

Adam recently shared a ‘long’ read with the team, headlined ‘Horny Robot Baby Voice’ and written by James Vincent on AI chatbots. When sharing it Adam suggested I might like it - and I did. I found my brain going in so many different directions with it:

  • wondering if AI will bring about a fundamental shift in how we interact (the parallel with our phone addictions resonates here)
  • the sociological impacts (research showing that if a machine can give the sense of being needed we can bond with it very quickly - again, parallels with things like Tamagotchis, or simply ‘streaks’ such as Duolingo’s, or even just how we care about our favourite stuffed animal)
  • an echo chamber concern (the article starts with an example of a relationship AI chatbot affirming to someone that they should try to kill the queen)
  • a gender violence concern (when AI chatbots ‘replicate’ women who will do whatever men want, whenever they want it, how will that spill into real life, where our interactions with other humans aren’t programmable?)
  • the impact on our mental health (as chatbots are developed that use data from a dead loved one to generate their ‘voice’ - so you can still have conversations with them)
  • the educational role (I felt massively uncomfortable about people being able to chat to an AI bot of Hitler, but would love to chat with Nelson Mandela - and I guess morally we can’t pick one or the other…but I do refer back here to the individual who was in an echo chamber with a chatbot and acted on its affirmation: what could a white supremacist ‘chatting’ their ideas through with Hitler lead to?).
  • AI improving the sophistication of scams / cons (including reference to the example where someone was scammed in a Zoom call that was made up entirely of AI generated colleagues!)

Overall I think my concern is less about what it is to be human (a theme the article threads through) and more about the lack of critical thinking in potential future engagements, and the impact of that. It is the drive towards individualism, when history proves (repeatedly) that we are ‘best’ (most resilient, most productive, most kind etc) when we embrace diversity and are a broad collective. To borrow from the Americans … “we the people”.

It’s never possible to predict the future. We’re often reminded of things people predicted in the ’80s that we’d have by now (and don’t), and there have been numerous examples of tech that people thought would rapidly change our way of life (virtual reality, for instance) and hasn’t. But there’s something about AI, and how people (Government, businesses large and small, developers, civil society, the general public in all its glorious diversity) are talking about it - and already interacting with it - that makes this feel a bit different. One of the reasons I’m proud to work for Connected by Data is our goal to ensure people have a strong voice in decisions about AI, because one thing I am sure of is that whatever the AI future brings, we shouldn’t be sleepwalking into it or abstaining from any engagement with the change.

Loosely related - Tim wrote in his weeknotes about a ‘trial’ we did with a People’s Advisory Panel we’re currently running where expert testimony was provided by an AI generated video - it’s worth a read.

And - I attended our recent Connected Conversation on generative AI and workers’ rights, and it was fascinating to get insight into legal liability (and how that is starting to be tested) as well as many other elements of exploitation. Worth catching the write-up of that when we publish it (I’ll try to remember to share it in later weeknotes; it’ll definitely go on our socials).
