If you want to join this letter, fill out this form!
Access the Portuguese version here!
Spanish version here!
In Brazil, a human rights-based approach to the regulation of artificial intelligence systems is urgently needed.
It is undeniable that artificial intelligence (AI) systems have the potential to benefit society, particularly in promoting the 17 UN Sustainable Development Goals.
However, the lack of binding rules to regulate AI development, implementation, and use has enormous potential to exacerbate known risks and harms to people and communities. AI is already facilitating and generating concrete harms and violations, for instance, by reinforcing discriminatory practices, excluding historically marginalized groups from access to essential goods and services, supporting misinformation, undermining democratic processes, facilitating surveillance, exacerbating climate change, accelerating the epistemicide of Indigenous and local languages and cultures, and intensifying job insecurity.
To ensure AI systems promote innovation based on human rights, ethics, and responsibility, it is crucial to establish minimum rules to safeguard the rights of affected individuals, obligations for AI agents, governance measures, and the definition of a regulatory framework for oversight and transparency. This does not impede development and innovation; on the contrary, effective regulation that protects rights is an indispensable condition for the flourishing of responsible AI products and services that enhance human potential and the democratic rule of law.
Bill 2338/2023, which focuses on risks and rights, is a good guide for AI regulation, taking into account what is being developed in other international contexts. Although there is still space for improvement, as we show below, this approach would facilitate dialogue between the legislation of different countries (regulatory interoperability/convergence), reducing the effort of organizations to adapt to the Brazilian context. Moreover, AI regulation based on rights and attuned to its risks would help to position Brazil as a pioneer in providing and adopting responsible technologies.
Debunking myths and false trade-offs: regulation as a driver of responsible innovation and inclusive economic development
The actors who oppose comprehensive AI regulation in Brazil are precisely those who benefit from the current unregulated scenario, and they advance arguments and narratives that do not hold up in practice.
- Regulation vs. Innovation
There is no conflict between regulation and innovation; both can and should coexist, as seen in the Brazilian Consumer Protection Code and the General Data Protection Law. In addition, rights-based regulation of AI systems allows the promotion of responsible innovation, fostering economic, technological, and social development that prioritizes well-being and the promotion of fundamental rights. The Brazilian Academy of Sciences published a report on AI regulation, confirming that stimulating the national AI industry and protecting fundamental rights are perfectly compatible agendas.
- “Increased need for dialogue”
Civil society has advocated for a more inclusive and systematic dialogue. However, those opposed to prescriptive regulation use the same argument to obstruct and slow down the legislative process. Delaying the adoption of responsible regulation allows risky technologies to continue being developed and deployed.
- Unknown technology
The argument that AI is inherently disruptive and unmanageable for regulatory purposes does not hold up because (a) academic and private-sector studies on the subject have accumulated eight decades of experimentation and analysis of social impacts; and (b) AI agents, especially developers, have agency over decisions about the ideation, development, and implementation of the technology, including the option not to deploy it when mechanisms for transparency, quality control, and accountability are deemed inadequate.
In addition to these fallacious arguments and narratives, productive sectors and technology companies have mobilized strongly to prevent a vote on the Bill, whether through a flurry of last-minute amendments, requests for public hearings, or direct lobbying of parliamentarians. The industry lobby is massive, including international trips and private events organized by big tech companies for the senators most involved in the debate.
After successful lobbying to postpone the vote on the bill, a new round of public hearings was called. The first list of names proposed for the hearings included only individuals from the private sector, disproportionately white men from the southeast of the country, disregarding other sectors, especially civil society, as well as social markers of race, gender, and territory. It fell to civil society to fight for the inclusion of some of its representatives.
In these latest hearings, private-sector representatives insisted on the (fallacious) argument that AI regulation in Brazil would impede innovation and impose enormous costs on startups and small and medium-sized companies, defending innovation at any cost (even to the detriment of basic fundamental rights). This disregards the bill's specific chapter on stimulating innovation and growth for these companies, including through milder and more proportional obligations. Censorship arguments with no legal basis were also deployed, such as equating the preliminary assessment of AI systems (whose only purpose is to verify a given system's degree of risk) with prior censorship of developers.
As the CTIA vote on the bill approached (a vote the private lobby has already blocked on more than one occasion), an intensified campaign was observed on social networks, especially X, disseminating false content about Bill 2338, branding it a "censorship bill" and associating its risk classification and governance measures with the government's political strategies. This strategy of equating bills that impose legitimate governance obligations on platforms with censorship has been seen before, such as during the legislative process of Bill 2630/2020.
Brazilian Regulatory Landscape: June-July 2024
As the final work of the Brazilian Federal Senate's Temporary Commission on Artificial Intelligence (CTIA), a report was published on June 7, 2024, containing a new proposal for Bill 2338, which was updated again on June 18 and July 4, 2024. It is important to highlight that the latest proposal includes elements considered essential for the proper regulation of AI systems in Brazil, namely:
- Guaranteeing basic rights for individuals potentially affected by AI;
- Defining unacceptable uses of AI that pose significant risks to fundamental rights;
- Creating general governance guidance and obligations, with specific requirements for high-risk systems and the public sector;
- Maintaining algorithmic impact assessments to identify and mitigate risks and to evaluate opportunities;
- Paying special attention to the Brazilian context of structural racism by incorporating measures throughout the text to prevent and combat different forms of direct and indirect discrimination, as well as to protect vulnerable groups;
- Establishing an oversight framework where the competent authority works together with sectorial regulators.
Also, we would like to highlight important improvements that were added to the original text of Bill 2338 by the latest proposal:
- Explicit prohibition of autonomous weapons systems;
- Creation of specific governance measures for general-purpose and generative AI, which is critical because such systems may not fit neatly into risk-level categorizations;
- Provision for societal participation in governance processes;
- Definition of the civil liability regime for operators, suppliers, and distributors of artificial intelligence systems: the Consumer Protection Code (strict liability) applies to consumer relations, and the Civil Code applies to other cases. The text also guarantees the reversal of the burden of proof in cases of vulnerability or lack of understanding and resources on the part of the victim, or when the characteristics of the AI system make it excessively burdensome for the victim to prove the requirements of civil liability;
- Direct designation of the Brazilian Data Protection Authority as the competent authority to harmonize the supervisory system, in collaboration with other actors.
Despite the advances outlined above, the latest version maintains or exacerbates critical issues that contradict the central goal of regulating AI to protect rights and enable responsible innovation.
What can be done to improve the Brazilian regulation?
→ Excessive Risks of AI Systems (Prohibitions)
First, it is important to highlight that the prohibited uses of AI systems must not be tied to the agent's intent. The expression "with the purpose of" in the list of prohibited uses (Art. 13, I) must therefore be removed, so that the prohibition covers all harmful technologies and their effects, regardless of whether AI agents can be shown to have acted with deliberately harmful intent.
Likewise, the prohibitions must not be conditioned on causation, such as causing or being likely to cause damage. It is therefore necessary to amend Art. 13, I (a) and (b) to remove the causal link of "causing or being likely to cause harm to the health, security or other fundamental rights of oneself or of a third party" from the prohibitions on the use of techniques to induce behaviour and to exploit vulnerabilities.
Furthermore, the use of facial recognition technologies for public security and criminal justice must be banned, considering that these are highly sensitive areas given their potential to restrict fundamental rights such as freedom of expression and assembly and to reverse the presumption of innocence. These uses also reaffirm the discriminatory potential of such technologies, which often produce errors, known as false positives, leading to unacceptable situations such as the unjust imprisonment of a man in Sergipe, the aggressive treatment of a young man after a mistaken identification in Salvador, and the unjust arrest of a woman in Rio de Janeiro. Cases like these are serious and recurrent.
Constant, far-reaching, and indiscriminate surveillance violates people's rights and freedoms and shrinks civic and public space. Facial recognition systems are known to be inefficient and to create unnecessary costs for public administration due to their high error rates: a recent study indicated an average cost to public coffers of 440,000 reais per arrest made through facial recognition. Beyond their inefficiency, such systems have been consistently denounced for their discriminatory impact, disproportionately affecting black populations and, to an even greater extent, women.
We see the authorization to use facial recognition systems as a violation of the most cherished fundamental rights of the Brazilian people. Moreover, the manner in which their use is allowed, without specific safeguards and protections given the nature of these systems, exacerbates the already recognized problem. Equally worrying is the lack of specific data protection legislation for public security activities.
→ High-Risk AI Systems
There is also a need to change the high-risk category provisions, mainly regarding (a) the assessment of debt capacity and the establishment of credit scores and (b) harmful uses of AI.
a. Assessment of debt capacity and the establishment of credit scores
A credit score is a tool used by banks and financial institutions to assess whether an individual is a reliable borrower based on their risk of default, informing decisions such as whether to grant credit.
Access to financial credit is fundamental as a prerequisite for the exercise of a range of constitutionally guaranteed rights, which underscores the importance of robust safeguards in AI-defined credit scoring. This is especially crucial due to its proven potential for discrimination, as evinced by research, including studies conducted in the Brazilian context. It is important to note that in addition to the critical importance of credit as a prerequisite for access to essential goods and services such as healthcare and housing, credit models are fed with a large amount of personal data, which in itself requires enhanced security protection.
Including this application in the high-risk classification is consistent with other international regulations, such as the European Union's Artificial Intelligence Act (AI Act).
b. Harmful uses of AI
Article 14, IX, X, and XI list uses of AI that various national and international studies have shown to have more negative than positive effects and that may be contrary to international human rights law. These include the analytical study of crimes involving natural persons to identify behavioral patterns and profiles; the assessment of the credibility of evidence; predictive policing; and emotion recognition.
This is mainly because such uses are associated with techno-solutionism and Lombrosian theories, which reinforce structural discrimination and historical violence against certain groups in society, particularly black people. In 2016, ProPublica published a study showing that an AI system used to generate predictive scores of individuals' likelihood of committing future crimes produced results biased by race and gender that did not match whether those individuals actually offended or reoffended. Subsection X of Article 14 would thus potentially allow AI to be used in a similar manner, to "predict the occurrence or recurrence of an actual or potential crime based on the profiling of individuals," an approach that has not been scientifically validated and that, on the contrary, has been shown to potentially increase discriminatory practices.
Subsection XI, on biometric identification and authentication for emotion recognition, is also of concern, as there is no scientific consensus that emotions can be identified from facial expressions alone, which led Microsoft, for example, to discontinue its AI tools for such purposes.
Accordingly, we consider it necessary to classify the uses provided for in Article 14, IX, X, and XI as unacceptable risks.
→ Civil society participation in the governance and oversight system
The current version of the bill improved the definition of the oversight system. The text now proposes the creation of the National AI System (SIA), composed of the competent authority, designated as the Brazilian Data Protection Authority (ANPD), sectoral authorities, the new Council for Regulatory Cooperation on AI (CRIA), and the Committee of AI Experts and Scientists (CECIA).
In an important step forward, the Council for Regulatory Cooperation, which will serve as a permanent forum for collaboration, now includes the participation of civil society. However, it is important to ensure that this participation is meaningful, guaranteeing civil society an active and effective role in AI governance and regulation.
→ Other modifications
As a result of the big tech and industry lobbying mentioned above, the latest version of July 4 weakened important governance obligations for AI systems, especially those linked to transparency, and reduced the rights of potentially affected people. Therefore, for the regulation to remain coherent with its primary function of protecting fundamental rights (especially those of vulnerable groups), combating discrimination, and ensuring processes for the development, use, and implementation of more responsible systems, it is essential to restore Articles 6, 8, and 17 of the version of the bill published on June 18, 2024.
For the approval of Bill 2338/2023 with the inclusion of these improvements
In light of the above, the undersigned organizations express support for advancing the legislative process of Bill 2338/2023 within the Temporary Commission on AI (CTIA) and the Senate plenary towards its approval, provided that the following improvements are made:
- Amendment of the heading of Article 13, I to remove the expression "with the purpose of" from the prohibited uses, so as to encompass all harmful technologies and their effects, regardless of whether AI agents acted with deliberately harmful intent;
- Amendment of Article 13, I (a) and (b) to remove the causal link of "causing or being likely to cause damage" from the prohibitions on the use of subliminal techniques and the exploitation of vulnerabilities;
- Amendment of Article 13, VII to remove the exceptions for the remote, real-time use of biometric identification systems in spaces accessible to the public, thereby banning the use of these systems for public security and criminal prosecution; or, at a minimum, a moratorium that authorizes the listed exceptions only after approval of a federal law that specifies the purposes of use and guarantees compliance with sufficient safeguard measures (at least those guaranteed for high-risk AI systems);
- Return of credit scoring and other AI systems intended to evaluate the creditworthiness of natural persons to the high-risk list in Article 14, with the possibility of an exception for AI systems used to detect financial fraud;
- Reclassification of the systems mentioned in Article 14, IX, X, and XI into the category of unacceptable risks;
- Assurance that civil society participation in the National Artificial Intelligence Governance and Regulation System (SIA), through the composition of the Artificial Intelligence Regulatory Cooperation Council (CRIA), is effective and meaningful;
- Restoration of Articles 6, 8, and 17 of the version of the bill published on June 18, 2024.
If you want to join this letter in critical defense of Bill 2338/2023 and of the clear need for regulation that protects fundamental rights in Brazil, fill out this form.
Organizations that sign this open letter:
- ABRA – Associação Brasileira de Autores Roteiristas
- Access Now
- AFROYA TECH HUB
- Amarc Brasil
- ANTE- Articulação Nacional das Trabalhadoras e dos Trabalhadores em Eventos
- AQC – ESP
- Assimetrias/UFJF
- Associação Brasileira de Imprensa
- BlackPapers
- Câmara dos Deputados- Liderança do PT
- Câmara Municipal do Recife
- Centro de Direitos Humanos de Jaraguá do Sul – SC
- Centro de Estudos de Segurança e Cidadania (CESeC)
- Centro de Pesquisa em Comunicação e Trabalho
- Coalizão Direitos na Rede
- Coding Rights
- Coletivo artistico – Guilda dos Quadrinhos
- Complexos – Advocacy de Favelas
- data_labe
- Data Privacy Brasil
- Digital Action
- DiraCom – Direito à Comunicação e Democracia
- FaleRio- Frente Ampla pela Liberdade de Expressão do Rio de Janeiro
- Fizencadeando
- FNDC – Fórum Nacional pela Democratização da Comunicação
- FONSANPOTMA SP
- Fotógrafas e Fotógrafos pela Democracia
- FURG
- GNet (Grupo de Estudos Internacionais em Propriedade Intelectual, Internet & Inovação)
- Grupo de Pesquisa Economia Política da Comunicação da PUC-Rio/CNPq
- Grupo de Pesquisa e Estudos das Poéticas do Codiano – EPCO/UEMG
- Idec – Instituto de Defesa de Consumidores
- IFSertãoPE – Campus Salgueiro
- Iniciativa Direito a Memória e Justiça Racial
- INSTITUTO AARON Swartz
- Instituto Bem Estar Brasil
- Instituto Lidas
- Instituto Minas Programam
- Instituto de Pesquisa em Internet e Sociedade (IRIS)
- Instituto Sumaúma
- Instituto Soma Brasil
- Instituto Telecom
- IP.rec – Instituto de pesquisa em direito e tecnologia do Recife
- Intervozes – Coletivo Brasil de Comunicação Social
- Irmandade SIOBÁ/Ba
- Laboratório de Políticas Públicas e Internet – LAPIN
- MediaLab.UFRJ
- Movimento Dublagem Viva
- MST
- Niutechnology
- OBSCOM-UFS (Observatório de Economia e Comunicação da Universidade Federal de Sergipe)
- Open Knowledge Brasil
- PAVIC – Pesquisadores de Audiovisual, Iconografia e Conteúdo
- Rebrip – Rede Brasileira pela Integração dos Povos
- Rede Justiça Criminal
- Rede LAVITS
- SATED- SÃO PAULO
- sintespb
- Transparência Brasil
- Voz da Terra
- UFRN
- ULEPICC-Brasil (União Latina de Economia Política da Informação, da Comunicação e da Cultura, Capítulo Brasil)
- UNIDAD
Individuals signing this open letter:
- Admirosn Medeiros Ferro Júnior
- Alê Capone
- Aliete Silva Mendes
- Alvaro Bastoni Junior
- Ana Carolina Sousa Dias
- Ana Gretel Echazú Böschemeier, PhD
- Andreza Rocha
- Angela Couto
- Aurora Violeta
- Bianca Soares Monteiro
- Breno Augusto Silva de Moura
- Camila Silva Lima
- César Ricardo Siqueira Bolaño
- César Saraiva Lafetá Júnior
- Delmo de Oliveira Torres Arguelhes
- Diego Rabatone Oliveira
- Dirce Zan
- Emanuella Ribeiro
- Erika Gushiken
- Flávia da Silva Fernandes
- Fran Raposo
- Francisco S Costa
- Fransergio Goulart
- Frido Claudino
- Giovanna Mara de Aguiar Borges
- Haydée Svab
- Huan Pablo da Silva dos Santos
- Inaê Batistoni
- Ivan Moraes
- Janaina Souto Barros
- Janaina VIsibeli Barros
- Jane Caraça Schneider
- Jolúzia Batista
- Jonas Cardoso Borges
- José Eduardo De Lucca
- José Maria Rodrigues Nunes
- Juliana Vieira
- Lais Azeredo Rodrigues
- Larissa de Medeiros
- Larissa Vieira
- Lauro Accioly
- León Fenzl
- Livia Maria Santos Silva
- Louise Karczeski
- Luciano Félix Ferreira
- Luis Gonçalves
- Luís Gustavo de Souza Azevedo
- Luiza Vieira
- Luli Hata
- Maíra Vilela Francisco
- Marcelo Rodrigues Saldanha da Silva
- Marcelo de Freitas Rigon
- Maria José Braga
- Mariana Alves Araújo Lopes
- Mateus Mendes
- Melissa Cipriano Vanini Tupinambá
- Mônica Martins
- Mônica Zarattini
- Naiara Lopes
- Nilo Sergio Silva Gomes
- Olga Teixeira de Almeida
- Ogan Valdo Lumumba
- Paula Guedes Fernandes da Silva
- Paulo Rená da Silva Santarém
- Pedro Silva Neto
- Poema Portela
- Raíssa Trevisan
- Raquel de Moraes Alfonsin
- Ricardo Calmona Souza
- Rodrigo da Silva Guerra
- Rosana Moreira da Rocha
- Samuel Consentino de Medeiros
- Sérgio Luiz Homrich dos Santos
- Taís Oliveira
- Tânia Regina Homrich dos Santos
- Terezinha de Fátima dos Santos
- Thamires Orefice
- Vera Lucia Freitas
- Victor Sousa Barros Marcial e Fraga
- Wagner Moreira Da Silva
- Wallace de Oliveira de Souza
- Ydara de Almeida, Advogada