Meeting the digital governance challenge - prospective open government actions
How should the toolkit of open government be applied to the governance of data and AI? That's the question we set out to answer at our design lab workshop on the fringes of the Open Government Partnership (OGP) summit earlier this week.
The answer we arrived at: we need policy commitments that move beyond transparency alone, to centre the informed voice of citizens and affected communities in deliberating on and setting out the social licence for data and AI systems to operate, and in monitoring their procurement, implementation and impacts.
We’ll be sharing a full workshop report soon, but in the meantime, I wanted to share a sketch of the three draft ‘model commitments’ we put together, and some of the considerations that went into them.
For each commitment we generated a set of key features, points making the case for why the commitment matters, and supporting notes on how it might be delivered in practice in different contexts.
Commitment 1: Oversight bodies for technology procurement
Public sector organisations should create local oversight groups to supervise procurement processes that involve technology and AI platforms.
This group should include representatives of different stakeholder groups, including government IT teams, information governance teams, workers, civil society, and informed citizens.
- An oversight group would be kept informed about technology procurement, be able to advise, be empowered to ask questions, and be able to issue public reports on procurement processes.
- Publishing oversight minutes would provide a degree of transparency.
- Training could be provided so that members feel equipped to engage with the technical, environmental and substantive/impact aspects of any technology and AI systems being procured.
- The group would be able to take a deep-dive look at strategic contracts at planning, procurement or implementation stages.
This model commitment draft was designed with city and local government specifically in mind. It is intended to cover all technology procurement, as technology products increasingly have AI features added (including post-procurement) that need to be evaluated and considered. For example, the introduction of Copilot in Office 365 may affect how public servants write reports or assess applications.
By bringing together multiple stakeholders and including elements of training, an oversight group contributes to capacity building and to breaking down the silos between technology teams, delivery teams, and affected communities or public service users.
A national government, or international campaign, could support adoption of this commitment by providing advice, training and knowledge sharing support to local technology procurement oversight groups.
The concept of a procurement oversight board is not specific to technology procurement, but there is value in having a group with the right combination of members, skills and training to critically support the adoption of technology and AI within a public agency.
Commitment 2: Participatory development of national data and AI strategies
Governments should adopt a robust co-creation approach in the development of any national data and AI strategies.
This must involve a transparent multi-stakeholder approach that allows the country to develop a shared vision for AI development, and to establish ongoing and participatory mechanisms that can oversee its implementation.
It is important that data and AI policy is shaped by citizen voice, not just by industry or efficiency agendas that can have a limited view of both the harms and the potential benefits of data and AI.
A broad participatory process around data can contribute to national capacity building.
Policy co-creation should involve stakeholders from different sectors, recognising that data and AI are cross-cutting issues that affect the whole of society.
Processes should have a focus on inclusion, and on identifying the third-party oversight needed to monitor policy implementation.
This model commitment draft was developed in recognition that many governments are actively developing data and AI strategies, but often rely on a narrow range of stakeholders, inputs and ideas.
Thinking about data and AI as national infrastructure can help to highlight the importance of multi-stakeholder participatory processes. In many countries there are already well established precedents and models for formal public engagement in the planning, delivery and monitoring of physical infrastructure projects that might provide lessons to apply to the development of data and AI infrastructures.
Selecting participation processes that involve elements of capacity building, such as deliberative fora, can support engagement that is better able to address different trade-offs.
Commitment 3: Regulators and the social licence for AI
Working in partnership with civil society, the government should carry out a mapping of existing regulatory tools and capabilities for governing the impacts of AI.
The government should support a broad public deliberation focussed on the terms that should be in the social licence to operate granted to AI firms. The outcomes of this should be shared with, and used by, regulators to guide their work.
Each regulator should identify and establish at least one standing public engagement mechanism to make sure they are able to detect and respond to emerging impacts and opportunities related to AI.
The impacts and opportunities of AI arise in many different sectors, and sectoral regulators have an important role to play. It is also important to understand the capacity building needs of regulators.
Understanding how different communities are being impacted by AI, and setting regulatory priorities, might require a range of approaches, including, but not limited to:
- Consumer and community feedback processes - allowing regulators to hear and act on reports directly from affected individuals and communities;
- Peer-research function - drawing on a network of peer-researchers to gain bottom-up insights into how technologies are impacting regulated issues;
- Standing citizens' jury - embedding a trained group within the regulator to input on different topics of concern or focus;
- Joint and participatory audits - where regulators either provide a gateway to respond to audits initiated by civil society or academia (particularly useful when regulators have low capacity), or co-deliver audits with the involvement of affected communities.
This model commitment draft is focussed on how open government practices can be applied to the regulation of private sector applications of AI (as opposed to regulating how the public sector directly uses data and AI, as in commitment draft 1).
The concept of social licence to operate (SLO) is well established in a number of industries, and Verhulst and Saxena have championed its application to activities around data. Investopedia defines it as "the ongoing acceptance of a company or industry's standard business practices and operating procedures by its employees, stakeholders, and the general public". A suitably inclusive public deliberation, such as a deliberative citizens' assembly, has the potential to generate a set of legitimate principles to inform judgements of how far firms have social licence. In some cases, this may be able to inform existing regulatory action; in other cases, comparing citizen views on social licence with the current regulatory toolkit might reveal gaps that need future legislative reform.
Shared assessment tools and donor-supported assessments (e.g. AI regulatory readiness assessments) may have an important role to play in supporting states to deliver the first part of this commitment: a multi-stakeholder mapping of current governance and regulation capability.
We specifically explored how this commitment might be applied in the context of **election integrity**, where there are significant concerns about the use of generative AI to produce misinformation, or to fragment public debate through micro-targeting. By asking for a mapping of regulatory capacity, this commitment should enable the identification of existing tools that could be adapted to bring transparency to the use of AI in elections (e.g. campaign disclosure and finance rules). By advocating for public dialogue, it could generate clearer consensus on the appropriate uses of AI in election campaigning. And by calling for the creation of 'bottom up' sensing and research capacity for the regulator, it should increase opportunities for abuses of AI to be detected. For example, regulators might run, or partner in, the crowdsourcing of election adverts that individuals have seen in order to identify inappropriate micro-targeting or misinformation.
We also explored how appropriately designed and governed AI might itself be used to help deliver regulation at scale, for example by supporting the analysis of citizen reports received through a feedback mechanism.
Building better digital governance commitments
Photo: Participants at the workshop 'Data and AI Governance - securing meaningful commitments on transparency, participation and redress' (Design Lab).
The model commitment drafts above were developed from a process that explored current transparency, participation, redress and capacity building methods that have been put forward in the context of data and AI. However, we quickly realised that measures such as algorithmic transparency registers, or one-off work on public opinions of AI, are too divorced from the contexts in which data and AI are applied, and from the triggers for powerful public voice.
The drafts above emerged by building on insights and advice from OGP experts: that effective commitments need owners, require collaboration in both design and implementation, should build on existing laws and frameworks, and should be SMART and goal oriented. In our discussions, we drew as much upon other areas of the open government field, such as open contracting, social audit and extractives industry transparency, as upon ideas rooted in the data and technology field. The concept of social licence particularly resonated and informed a number of the ideas above, as did a recognition that AI is not just being introduced through dedicated 'AI tools', but is finding its way into private and public sector delivery through upgrades and add-ons to existing products and services already in use.
We were also reminded that, in thinking about data and AI, we should not be restricted to a 'harm frame' or to safeguarding the status quo: we should have the ambition to see data and AI used as part of efforts to challenge and overcome current biases, discriminatory practices and inequalities of opportunity. As past OGP commitments have shown, when used critically, data and technology can be powerful tools for delivering greater accountability.
Of course - the drafts above need a lot more development. They are shared here as an open resource for anyone to build on. We're looking forward to talking with workshop participants, OGP partners and others about whether and how they might be taken forward. If you've got an idea on that - do get in touch!