Building trust through civil engagement in data and AI: some lessons from Brazil
With the UK’s Data Protection and Digital Information Bill back in the Commons, and given our concerns that it misses an opportunity to build public trust in technology and to give people and communities a powerful say in the matter, we have been looking to other countries to learn how they address these issues.
In this post, São Paulo-based research associate Maria Luciano outlines Brazil’s ups and downs in regulating data and AI, and how civic engagement and inclusive processes can contribute to a more robust rights-based approach to technology governance.
Opening the Brazilian Congress to the people
Efforts to regulate artificial intelligence systems in Brazil began with the launch of the Brazilian Strategy for Artificial Intelligence (BSAI/EBIA) by the Federal Government’s Ministry of Science, Technology, and Innovation in April 2021. The document aimed to guide “the Brazilian government in the development of actions, in its various aspects, that stimulate research, innovation and development of AI solutions, as well as their conscious and ethical use.” However, the strategy was criticised for not recognising the importance of public input and meaningful participation in AI governance structures.
A few months later, in September 2021, Brazil’s public debate on AI suffered another setback: the House of Representatives approved draft bill 21/20 after only a few weeks of internal discussion, with no public hearings and little public attention, which staggered many observers. Although the bill aimed to “establish principles, rules, guidelines, and foundations to regulate the development and application of artificial intelligence” in the country, it established a regulatory regime in which meaningful regulation of AI was the exception rather than the default. Aligned with private-sector interests, the bill also created a decentralised model in which each economic sector would regulate its own applications of AI, and it was silent on specific obligations or sanctions for the private companies developing or deploying these systems. Following its approval in the House of Representatives, the bill was sent to the Senate for another round of discussions (committee stage) and voting.
In the Senate, following the backlash against such opaque and rapid processing, proceedings slowed down. In February 2022, the President of the Senate established the Commission of Jurists, composed of eighteen experts, including prominent figures from the legal profession, academia, and the public sector.1 Chaired by Villas Bôas Cueva, a justice of the Superior Court of Justice, the Commission was responsible for drafting a substitute for three bills already under discussion in the National Congress: bill 5,051/2019, proposed by Senator Styvenson Valentim (PODEMOS/RN); bill 872/2021, presented by Senator Veneziano Vital do Rêgo (MDB/PB); and the aforementioned bill 21/2020, proposed by Federal Deputy Eduardo Bismarck (PDT/CE). Installed on March 30, the group presented a work plan organised in three stages running from April to December 2022: installation of the Commission and public participation; working meetings and international inputs; and drafting and consolidation of the substitute text.
Brazilian civil society considered the Commission a victory, but the lack of racial and regional diversity in its composition, as well as of different areas of knowledge, did not go unnoticed, even acknowledging that the Senate’s Internal Regulations stipulate that only individuals with legal training may be appointed.
The Commission responded by convening twelve public hearings and a public consultation, which received 102 contributions. Sixty people from different areas of knowledge and regions of the country participated in the hearings. For many of them, this was their first contact with the legal debate around AI, and it held a distinct cultural significance.
The Commission submitted its report in December 2022, and in May 2023 the President of the Senate presented a new AI regulation bill replicating the Commission’s proposal. The formal legislative process will now continue in the Senate, where a series of thematic committees will analyse the text before it is debated and voted on in the Plenary.
At the end of this entire process, any bill approved by the Senate will need to return to the House of Representatives, since it is up to the revising House to decide which text to submit for presidential approval.
The Brazilian Commission of Jurists’ Bill on AI
Following the inclusion of the right to data protection in the list of constitutional fundamental rights in 2022, the first pillar of the draft bill proposed by the Commission is a list of rights of “individuals who are affected by AI systems” (a rights-based approach), associated with a precautionary approach to risk analysis. These rights must be observed by the suppliers and operators of such systems, can be asserted before the competent administrative and judicial authorities (“individually or collectively”), and apply independently of the technology’s risk classification.
Article 5. Individuals affected by artificial intelligence systems have the following rights, to be exercised in the manner and under the conditions described in this Chapter:
I – the right to prior information regarding their interactions with artificial intelligence systems;
II – the right to an explanation of the decision, recommendation, or prediction made by artificial intelligence systems;
III – the right to contest decisions or predictions made by artificial intelligence systems that have legal effects or significantly impact the interests of the affected party;
IV – the right to determination and human participation in decisions made by artificial intelligence systems, taking into account the context and the state of the art in technological development;
V – the right to non-discrimination and correction of direct, indirect, illegal, or abusive discriminatory biases; and
VI – the right to privacy and data protection, in accordance with relevant legislation.
Overall, the bill establishes two main mechanisms to empower individuals to exercise their rights: open data and impact (or risk) assessments.
As reducing information asymmetry is one of the main goals of the proposed regulation, the Brazilian proposal also calls for a public database of impact assessments of high-risk artificial intelligence systems.
Article 43. It is the responsibility of the competent authority to create and maintain a database of high-risk artificial intelligence accessible to the public, containing public documents of impact assessments, while respecting trade secrets and industrial secrets, as defined by the regulations.
The initial idea of some of the jurists was to propose a centralised public database. But after concerns were raised about the difficulties faced by similar initiatives (lack of budget, and lack of coordination among the country’s more than 3,000 municipalities), the final version of the bill adopted a decentralised transparency requirement, allowing municipalities to publish the information on their own websites.
The bill also establishes impact assessments as the tool for classifying a system’s risk category. By spelling out the assessment’s methodological steps and making its conclusions public, the text opens risk classifications to scrutiny and, ultimately, frames them as the outcome of public deliberation.
Article 24. The methodology for impact assessment shall include, at least, the following stages:
I – preparation;
II – risk awareness;
III – mitigation of identified risks;
IV – monitoring.
§ 1. The impact assessment shall consider and record, at least:
a) known and foreseeable risks associated with the artificial intelligence system at the time of its development, as well as risks reasonably expected from it;
b) benefits associated with the artificial intelligence system;
c) probability of adverse consequences, including the number of potentially affected individuals;
d) severity of adverse consequences, including the effort required to mitigate them;
e) logic of operation of the artificial intelligence system;
f) process and results of tests and evaluations, and mitigation measures conducted to verify potential impacts on rights, with special emphasis on potential discriminatory impacts;
g) training and awareness actions regarding the risks associated with the artificial intelligence system;
h) mitigation measures and indication and justification of the residual risk of the artificial intelligence system, accompanied by frequent quality control tests;
i) transparency measures to the public, especially to potential system users, regarding residual risks, particularly when involving a high degree of harm or danger to the health or safety of users, in accordance with articles 9 and 10 of Law No. 8,078, dated September 11, 1990 (Consumer Protection Code);
§ 2. In compliance with the precautionary principle, when using artificial intelligence systems that may generate irreversible or difficult-to-reverse impacts, the algorithmic impact assessment will also take into account incipient, incomplete, or speculative evidence.
§ 3. The competent authority may establish other criteria and elements for the preparation of impact assessments, including the participation of different social segments affected, according to risk and economic size of the organisation.
§ 4. The competent authority shall regulate the periodicity of updating impact assessments, considering the life cycle of high-risk artificial intelligence systems and the fields of application, and may incorporate sectoral best practices.
§ 5. Artificial intelligence agents that, after their introduction to the market or use in service, become aware of unexpected risks that pose a threat to the rights of natural persons, shall immediately communicate this fact to the competent authorities and to the individuals affected by the artificial intelligence system.
Article 26. While preserving industrial and commercial secrets, the conclusions of the impact assessment will be public, containing at least the following information:
I – description of the intended purpose for which the system will be used, as well as its context of use and territorial and temporal scope;
II – risk mitigation measures, as well as their residual level once such measures are implemented;
III – a description of the participation of different affected segments, if it has occurred, in accordance with the provisions of Article 24, Paragraph 3 of this Law.
Lastly, it is worth mentioning how the draft bill addresses one of the most debated topics in the Brazilian public debate on AI: the use of facial recognition systems by the public sector.
While civil society has advocated for the complete ban of facial recognition systems in public spaces, especially for public security purposes, the bill ended up creating a type of moratorium, establishing that these uses will only be allowed “when there is a specific federal law provision and judicial authorization in connection with the activity of individualised criminal prosecution” (art. 15). This provision has been highly criticised, with the Rights in Network Coalition stating that “systems of artificial intelligence used for the purposes of criminal investigation and public security, as well as analytical study of crimes related to natural persons” should not be considered high-risk, but of “unacceptable” risk.
Participation changes policy
Although the draft bill in Brazil still has some way to go before it becomes law, the journey outlined above demonstrates the vital importance, and impact, of a process that looks beyond industry interests when regulating data and AI.
Footnotes
1. The creation of this type of commission is provided for in the internal regulations of the House of Representatives (art. 205) and the Senate (art. 374) whenever a new code is under discussion. Within tech, a similar effort was made in 2019 by the then President of the House of Representatives, Rodrigo Maia (DEM-RJ), who created a Commission of Jurists of fifteen legal experts to draft a preliminary bill on the processing of personal data for purposes of public security, national defence, and investigation of criminal offences. It is up to the presidents of the two Houses to choose the members of these commissions, and no justification for the appointments is needed (nor has any been given). ↩