Tim has spent the last 20 years working at the intersection of technology, participation and governance as both a researcher and practitioner. From piloting digital tools that bring youth voice into local decisions, to developing data standards that enable community scrutiny of billions of dollars of public spending, to writing about the political dynamics of open data initiatives, his work has explored how shared social challenges need participatory, collaborative and collective responses.
Tim was lead for the World Wide Web Foundation’s Open Data in Developing Countries research network (2013–2015), and led development of the Open Data Barometer. He was co-editor of The State of Open Data: Histories and Horizons (2019), and founding director of the Global Data Barometer project. From 2015 to 2018 Tim was a co-founder and director of Open Data Services Co-operative, a worker-owned team providing the technical backing to initiatives including 360Giving, the Open Contracting Data Standard and OpenOwnership.
Tim is a former fellow of the Harvard Berkman Klein Center for Internet and Society, and a senior fellow of the Datasphere Initiative. He is a graduate of the Oxford Internet Institute (Social Science of the Internet), and Oriel College, Oxford (Politics, Philosophy and Economics).
He lives in the People’s Republic of Stroud, where he is involved in Green politics.
This is a brief follow-up bulletin from the People’s Panel on AI, sharing the full summary findings report by panel facilitators Hopkins Van Mil, as well as updates on next steps for the People’s Panel. You can find earlier bulletins from the People’s Panel here, and in case you missed them in the AI Summit week rush, the Panel’s recommendations are in Bulletin 5.
This is the fifth People’s Panel on AI Bulletin, and our last daily update - covering the Panel’s presentation event. You can find earlier bulletins from the week here. Look out for one more update soon with the final report, and please do take a few minutes to provide your feedback through the anonymous evaluation survey.
This is the fourth bulletin from the People’s Panel on AI, sharing observations and insights from Connected by Data as an observer. You can find earlier bulletins here.
Today, the panel focus turned from exploring AI, to thinking about the ways industry, academia, government and civil society can respond.
This is the third bulletin from the People’s Panel on AI, sharing observations and insights from Connected by Data as an observer. You can find earlier bulletins here.
Today, as well as joining plenary sessions at the AI Fringe, the People’s Panel members have been engaged in lots of small-group work: from hands-on learning with generative AI tools, to one-to-one conversations with scientists at the Hopes and Fears lab.
This is the second People’s Panel on AI Bulletin. You can find earlier bulletins here. In this update, reflections from our first half-day on:
But first, a quick summary of the day.
On Wednesday, the 12 members of the People’s Panel on AI met on Zoom for the first time, in a two-hour introduction and context-setting session. Next week they’ll spend four days in London attending the AI Fringe, engaging with experts, and following updates from the AI Safety Summit.
Earlier, we held a briefing session to provide more background about the panel. You can find the recording here.
From Tuesday to Friday next week we aim to share a daily bulletin with you - providing insights, updates and questions coming from panel deliberations.
Challenge 9 of the Open Gov Challenge calls for a focus on digital governance: strengthening transparency and public oversight of AI and data protection frameworks.
It is abundantly clear that, as uses of data and AI continue to grow in power - both in the public and private sectors - protecting and extending democratic control over these technologies requires a concerted effort.
One month ago, after listening to an episode of the Facilitating Public Deliberation podcast on Citizens’ Juries and the Oregon Citizens Initiative Review, I put together and shared a rough concept note proposing a deliberative citizens’ review of the upcoming AI Safety Summit and AI Fringe. The idea is simple: if the impacts of AI are going to be felt by everyone, then we need more than just industry, government or elite institutional voices to be shaping the debate.
How should the toolkit of open government be applied to the governance of data and AI? That’s the question we set out to ask with our design lab workshop on the fringes of the Open Government Partnership (OGP) summit earlier this week.
The answer we arrived at: we need policy commitments that move beyond transparency alone, to centre the informed voice of citizens and affected communities in deliberating on and setting out the social licence for data and AI systems to operate, and in monitoring their procurement, implementation and impacts.
At the Data Justice Conference in Cardiff a few weeks ago we ran the first public play test of a card game designed to support conversations about collective and participatory data governance.
It’s the first iteration of an output from our participation design lab process, which is exploring game design both as a method for researching how to involve communities in data governance, and as a way of generating resources that might help inspire and embed new ways of working, particularly within private sector contexts.
This is the second post in a series produced as part of the analysis for the Measuring Data Values Around the World project.
We have previously scoped out how existing primary data collected from the Global Data Barometer might map to the Data Values framework. As a multi-dimensional composite index, the Global Data Barometer is based on both primary and secondary data sources.
In this post, we consider if there are elements of data values measurement which could be addressed by drawing on existing secondary indicators or by incorporating additional secondary data sources. These could feed into future iterations of the GDB, or be used in Data Values measurement products, tools or analysis based partially on the GDB.
As part of our project exploring how the Global Data Barometer might be used to provide insights and metrics for measurement against the Data Values framework, I’ve been looking into how Large Language Models (LLMs) like ChatGPT might impact upon the methodology of expert survey studies like the Barometer.
This post contains some initial notes from this exploration.