Open Letter: Advocating for Brazilian AI regulation that protects human rights

If you want to join this letter, fill out this form!

Access the Portuguese version here!
Spanish version here!

In Brazil, a human rights-based approach to regulation for Artificial Intelligence Systems is urgent.

It is undeniable that artificial intelligence (AI) systems have the potential to benefit society, particularly in promoting the 17 UN Sustainable Development Goals.

However, the lack of binding rules to regulate AI development, implementation, and use has enormous potential to exacerbate known risks and harms to people and communities. AI is already facilitating and generating concrete harms and violations, for instance, by reinforcing discriminatory practices, excluding historically marginalized groups from access to essential goods and services, supporting misinformation, undermining democratic processes, facilitating surveillance, exacerbating climate change, accelerating the epistemicide of Indigenous and local languages and cultures, and intensifying job insecurity.

To ensure AI systems promote innovation based on human rights, ethics, and responsibility, it is crucial to establish minimum rules to safeguard the rights of affected individuals, obligations for AI agents, governance measures, and the definition of a regulatory framework for oversight and transparency. This does not impede development and innovation; on the contrary, effective regulation that protects rights is an indispensable condition for the flourishing of responsible AI products and services that enhance human potential and the democratic rule of law.

Bill 2338/2023, which focuses on risks and rights, is a good guide for AI regulation, taking into account what is being developed in other international contexts. Although there is still space for improvement, as we show below, this approach would facilitate dialogue between the legislation of different countries (regulatory interoperability/convergence), reducing the effort of organizations to adapt to the Brazilian context. Moreover, AI regulation based on rights and attuned to its risks would help to position Brazil as a pioneer in providing and adopting responsible technologies.

Debunking myths and false trade-offs: regulation as a driver of responsible innovation and inclusive economic development

The actors who oppose comprehensive AI regulation in Brazil are precisely those who benefit from the unregulated scenario, and they advance arguments and narratives that do not hold up in practice.

  1. Regulation vs. Innovation 

There is no conflict between regulation and innovation; both can and should coexist, as seen in the Brazilian Consumer Protection Code and the General Data Protection Law. In addition, rights-based regulation of AI systems allows the promotion of responsible innovation, fostering economic, technological, and social development that prioritizes well-being and the promotion of fundamental rights. The Brazilian Academy of Sciences published a report on AI regulation, confirming that stimulating the national AI industry and protecting fundamental rights are perfectly compatible agendas.

  2. “Increased need for dialogue”

Civil society has advocated for a more inclusive and systematic dialogue. However, opponents of prescriptive regulation use the same argument to obstruct and slow down the legislative process. Delaying the adoption of responsible regulation allows risky technologies to continue being developed and deployed.

  3. Unknown technology

The argument that AI is inherently disruptive and unmanageable for regulatory purposes does not hold because (a) studies on the subject, both in academia and the private sector, have accumulated eight decades of experimentation and analysis of social impacts; (b) AI agents, especially developers, have agency in making decisions about the ideation, development, and implementation of technology, including the option not to implement it if mechanisms for transparency, quality control, and accountability are deemed inadequate.

In addition to these fallacious arguments and narratives, productive sectors and technology companies have mobilized strongly to prevent a vote on the Bill, whether by filing a flurry of last-minute amendments, requesting public hearings, or lobbying parliamentarians directly. The industry lobby is massive, including international trips and private events organized by big tech companies for the senators most involved in the debate.

After successful lobbying postponed the vote on the bill, a new round of public hearings was called. The first request naming participants for the hearings included only individuals from the private sector, disproportionately white men from the southeast of the country, disregarding other sectors, especially civil society, as well as social markers of race, gender, and territory. It fell to civil society to fight for the inclusion of some of its representatives.

In these latest hearings, private-sector representatives insisted on the (fallacious) argument that AI regulation in Brazil would impede innovation and impose enormous costs on startups and small and medium-sized companies, advocating innovation at any cost (even to the detriment of basic fundamental rights). This disregards the bill’s specific chapter on stimulating innovation and growth for these companies, including milder and more proportional obligations. They also deployed censorship arguments with no legal basis, such as comparing the preliminary assessment of AI systems (whose sole purpose is to verify an AI system’s degree of risk) to prior censorship of developers.

As the CTIA vote on the bill approached (a vote the private lobby has already blocked on more than one occasion), social networks, especially X, were increasingly used to spread false content about PL 2338, branding it a “censorship bill” and associating its risk classification and governance measures with the government’s political strategies. This strategy of equating bills that impose legitimate governance obligations on platforms with censorship has been observed before, for example during the legislative process of Bill 2630/2020.

Brazilian Regulatory Landscape: June-July 2024

As the final work of the Brazilian Federal Senate’s Temporary Commission on Artificial Intelligence (CTIA), a report was published on June 7, 2024, containing a new proposal for Bill 2338, which was updated again on June 18 and July 4, 2024. It is important to highlight that the latest proposal includes elements considered essential for the proper regulation of AI systems in Brazil, namely:

  • Guaranteeing basic rights for individuals potentially affected by AI;
  • Defining unacceptable uses of AI that pose significant risks to fundamental rights;
  • Creating general governance guidance and obligations, with specific requirements for high-risk systems and the public sector;
  • Maintaining algorithmic impact assessments to identify and mitigate risks and to evaluate opportunities;
  • Paying special attention to the Brazilian context of structural racism by incorporating measures throughout the text to prevent and combat different forms of direct and indirect discrimination, as well as to protect vulnerable groups;
  • Establishing an oversight framework where the competent authority works together with sectorial regulators.

Also, we would like to highlight important improvements that were added to the original text of Bill 2338 by the latest proposal:

  • Explicit prohibition of autonomous weapons systems;
  • Creation of specific governance measures for general-purpose and generative AI, which is critical because such systems may not fit neatly into risk-level categorizations;
  • Provision for societal participation in governance processes;
  • Definition of the civil liability regime for operators, suppliers, and distributors of artificial intelligence systems, following the Consumer Protection Code (strict liability) for consumer relations and the Civil Code for other cases. At the same time, it guarantees reversal of the burden of proof when the victim is vulnerable and lacks the understanding and resources to litigate, or when the characteristics of the AI system make it excessively burdensome for the victim to prove the requirements of civil liability;
  • Direct designation of the Brazilian Data Protection Authority (ANPD) as the competent authority to harmonize the supervisory system, in collaboration with other actors.

Despite the advances outlined above, the latest version maintains or exacerbates critical issues that contradict the central goal of regulating AI to protect rights and promote responsible innovation.

What can be done to improve the Brazilian regulation?

→ Excessive Risks of AI Systems (Prohibitions)

First, it is important to highlight that the prohibited uses of AI systems must not be linked to the intention of the agent. Thus, the expression ‘with the purpose of’ must be excluded from the prohibited uses (art. 13, I) so that the rules encompass all harmful technologies and their effects, regardless of whether the AI agents intended harm.

Nor should the prohibitions be conditioned on causality, such as causing or being likely to cause damage. It is therefore necessary to amend art. 13, I (a) and (b) to exclude the causal requirement of ‘causing or being likely to cause harm to the health, security or other fundamental rights of the person or a third party’ from the prohibitions on using techniques to induce behaviour and to exploit vulnerabilities.

Furthermore, the use of facial recognition technologies for public security and criminal justice must be banned, considering that these are highly sensitive areas due to their potential to restrict fundamental rights such as freedom of expression and assembly and to reverse the presumption of innocence. These uses also confirm the discriminatory potential of such technologies, which often produce errors – known as false positives – with unacceptable consequences, such as the unjust imprisonment of a man in Sergipe, the aggressive treatment of a young man after a mistaken identification in Salvador, and the unjust arrest of a woman in Rio de Janeiro. Cases like these are serious and recurrent.

Constant, far-reaching, and indiscriminate surveillance constitutes a violation of people’s rights and freedoms and limits civic and public space. What is known about facial recognition systems is that they are inefficient and create unnecessary costs for public administration due to their high error rates. A recent study indicated an average cost to public resources of 440,000 reais per arrest through facial recognition. In addition to their inefficiency, such systems have been consistently denounced for their discriminatory impact, disproportionately affecting black populations and, to a greater extent, women.

We see the authorization to use facial recognition systems as a violation of the most cherished fundamental rights of the Brazilian people. Moreover, the manner in which their use is allowed, without specific safeguards and protections given the nature of these systems, exacerbates the already recognized problem. Equally worrying is the lack of specific data protection legislation for public security activities.

→ High-Risk AI Systems

Changes are also needed in the high-risk category provisions, mainly regarding (a) the assessment of indebtedness capacity and the establishment of credit scores and (b) harmful uses of AI.

a. Assessment of indebtedness capacity and the establishment of credit scores 

A credit score is a tool used by banks and financial institutions to assess whether an individual is a reliable borrower based on their risk of default, for example, when making decisions about access to credit in these institutions.

Access to financial credit is a fundamental prerequisite for exercising a range of constitutionally guaranteed rights, which underscores the importance of robust safeguards for AI-defined credit scoring. This is especially crucial given its proven potential for discrimination, as evidenced by research, including studies conducted in the Brazilian context. Beyond the critical importance of credit as a prerequisite for access to essential goods and services such as healthcare and housing, credit models are fed with large amounts of personal data, which in itself requires enhanced security protection.

Including this application in the high-risk classification is consistent with other international regulations, such as the European Regulation on Artificial Intelligence (the AI Act).

b. Harmful uses of AI

Article 14, IX, X, and XI lists uses of AI that various national and international studies have shown to have more negative than positive effects and that may be contrary to international human rights law. These are: the analytical study of crimes involving natural persons to identify behavioral patterns and profiles; assessing the credibility of evidence; predictive policing; and emotion recognition.

This is mainly because such uses are associated with techno-solutionism and Lombrosian theories, which reinforce structural discrimination and historical violence against certain groups in society, particularly black people. In 2016, ProPublica published a study showing that an AI system used to generate a predictive score of an individual’s likelihood of committing future crimes produced biased results based on race and gender that did not match whether those individuals actually offended or reoffended. Thus, subsection X of Article 14 would potentially allow AI to be used in a similar manner to “predict the occurrence or recurrence of an actual or potential crime based on the profiling of individuals,” an approach that has not been scientifically validated and, on the contrary, has been shown to potentially increase discriminatory practices.

Subsection XI, on biometric identification and authentication for emotion recognition, is also of concern, as there is no scientific consensus that emotions can be identified from facial expressions alone, which led Microsoft, for example, to discontinue its AI tools for such purposes.

Accordingly, we consider it necessary to classify the uses provided for in Article 14, IX, X, and XI as unacceptable risks.

→ Civil society participation in the governance and oversight system

The current version of the bill improved the definition of the oversight system. The text now proposes the creation of the National AI System (SIA), composed of the competent authority, designated as the Brazilian Data Protection Authority (ANPD), the sectoral authorities, the new Council for Regulatory Cooperation on AI (CRIA), and the Committee of AI Experts and Scientists (CECIA).

An important step forward is that the Council for Regulatory Cooperation, which will serve as a permanent forum for collaboration, now includes the participation of civil society. However, it is important to ensure that this participation is meaningful, guaranteeing civil society an active and effective role in AI governance and regulation.

→ Other modifications

As mentioned previously, lobbying by big tech and industry led to a reduction of important governance obligations for AI systems in the latest version of July 4, especially those linked to transparency, in addition to a reduction of the rights of potentially affected people. Therefore, for the regulation to remain coherent in its primary function of protecting fundamental rights (especially those of vulnerable groups), combating discrimination, and ensuring processes for the development, use, and implementation of more responsible systems, it is essential to restore Articles 6, 8, and 17 of the version of the bill published on June 18, 2024.

For the approval of Bill 2338/2023 with the inclusion of the improvements

In light of the above, the subscribing organizations express support for advancing the legislative process of Bill 2338/2023 within the Temporary Commission on AI (CTIA) and the Senate plenary towards its approval, provided that the following improvements are made:

  1. Amendment of the Article 13, I header to exclude the expression ‘with the purpose of’ from the prohibited uses, so as to encompass all harmful technologies and their effects, regardless of whether the AI agents intended harm;
  2. Amendment of Article 13, I (a) and (b) to exclude the causal requirement of ‘causing or being likely to cause damage’ from the prohibitions on the use of subliminal techniques and the exploitation of vulnerabilities;
  3. Amendment of Article 13, VII to exclude the exceptions for the remote, real-time use of biometric identification systems in spaces accessible to the public, thereby banning the use of these systems for public security and criminal prosecution; or, at a minimum, a moratorium that authorizes the listed exceptions only after approval of a federal law specifying the purposes of use and guaranteeing compliance with sufficient safeguard measures (at least those guaranteed for high-risk AI systems);
  4. Return of credit scoring and other AI systems intended to evaluate the creditworthiness of natural persons to the high-risk list in Article 14, with the possibility of an exception for AI systems used to detect financial fraud;
  5. Reclassification of the systems mentioned in Article 14, IX, X, and XI into the category of unacceptable risks;
  6. Guarantee of effective and meaningful civil society participation in the National Artificial Intelligence Governance and Regulation System (SIA), through the composition of the Artificial Intelligence Regulatory Cooperation Council (CRIA);
  7. Restoration of Articles 6, 8, and 17 of the version of the bill published on June 18, 2024.

If you want to join this letter in critical defense of PL 2338/2023 and of the clear need for regulation that protects fundamental rights in Brazil, fill out this form.

Organizations that sign this open letter:

  1. ABRA – Associação Brasileira de Autores Roteiristas
  2. Access Now
  3. AFROYA TECH HUB
  4. Amarc Brasil
  5. ANTE- Articulação Nacional das Trabalhadoras e dos Trabalhadores em Eventos
  6. AQC – ESP
  7. Assimetrias/UFJF
  8. Associação Brasileira de Imprensa
  9. BlackPapers
  10. Câmara dos Deputados- Liderança do PT
  11. Câmara Municipal do Recife
  12. Centro de Direitos Humanos de Jaraguá do Sul – SC
  13. Centro de Estudos de Segurança e Cidadania (CESeC)
  14. Centro de Pesquisa em Comunicação e Trabalho
  15. Coalizão Direitos na Rede
  16. Coding Rights
  17. Coletivo artistico – Guilda dos Quadrinhos
  18. Complexos – Advocacy de Favelas
  19. Data Privacy Brasil
  20. Digital Action
  21. DiraCom – Direito à Comunicação e Democracia
  22. FaleRio- Frente Ampla pela Liberdade de Expressão do Rio de Janeiro
  23. Fizencadeando
  24. FNDC – Fórum Nacional pela Democratização da Comunicação
  25. FONSANPOTMA SP
  26. Fotógrafas e Fotógrafos pela Democracia
  27. FURG
  28. GNet (Grupo de Estudos Internacionais em Propriedade Intelectual, Internet & Inovação)
  29. Grupo de Pesquisa Economia Política da Comunicação da PUC-Rio/CNPq
  30. Grupo de Pesquisa e Estudos das Poéticas do Codiano – EPCO/UEMG
  31. Idec – Instituto de Defesa de Consumidores 
  32. IFSertãoPE – Campus Salgueiro
  33. Iniciativa Direito a Memória e Justiça Racial
  34. INSTITUTO AARON Swartz
  35. Instituto Bem Estar Brasil
  36. Instituto Lidas
  37. Instituto Minas Programam
  38. Instituto de Pesquisa em Internet e Sociedade (IRIS)
  39. Instituto Sumaúma
  40. Instituto Soma Brasil
  41. Instituto Telecom
  42. IP.rec – Instituto de pesquisa em direito e tecnologia do Recife 
  43. Intervozes – Coletivo Brasil de Comunicação Social 
  44. Irmandade SIOBÁ/Ba
  45. Laboratório de Políticas Públicas e Internet – LAPIN
  46. MediaLab.UFRJ
  47. Movimento Dublagem Viva
  48. MST
  49. Niutechnology
  50. OBSCOM-UFS (Observatório de Economia e Comunicação da Universidade Federal de Sergipe)
  51. Open Knowledge Brasil
  52. PAVIC – Pesquisadores de Audiovisual, Iconografia e Conteúdo
  53. Rebrip – Rede Brasileira pela Integração dos Povos
  54. Rede Justiça Criminal
  55. Rede LAVITS
  56. SATED- SÃO PAULO
  57. sintespb
  58. Transparência Brasil
  59. Voz da Terra
  60. UFRN
  61. ULEPICC-Brasil (União Latina de Economia Política da Informação, da Comunicação e da Cultura, Capítulo Brasil)
  62. UNIDAD

Individuals signing this open letter:

  1. Admirosn Medeiros Ferro Júnior
  2. Alê Capone
  3. Aliete Silva Mendes
  4. Alvaro Bastoni Junior
  5. Ana Carolina Sousa Dias
  6. Ana Gretel Echazú Böschemeier, PhD
  7. Andreza Rocha
  8. Angela Couto
  9. Aurora Violeta
  10. Bianca Soares Monteiro
  11. Breno Augusto Silva de Moura
  12. Camila Silva Lima
  13. César Ricardo Siqueira Bolaño
  14. César Saraiva Lafetá Júnior
  15. Delmo de Oliveira Torres Arguelhes
  16. Diego Rabatone Oliveira
  17. Dirce Zan
  18. Emanuella Ribeiro
  19. Erika Gushiken
  20. Flávia da Silva Fernandes
  21. Fran Raposo
  22. Francisco S Costa
  23. Fransergio Goulart
  24. Frido Claudino
  25. Giovanna Mara de Aguiar Borges
  26. Haydée Svab
  27. Huan Pablo da Silva dos Santos
  28. Inaê Batistoni
  29. Ivan Moraes
  30. Janaina Souto Barros
  31. Janaina VIsibeli Barros
  32. Jane Caraça Schneider
  33. Jolúzia Batista
  34. Jonas Cardoso Borges
  35. José Eduardo De Lucca
  36. José Maria Rodrigues Nunes
  37. Juliana Vieira
  38. Lais Azeredo Rodrigues
  39. Larissa de Medeiros
  40. Larissa Vieira
  41. Lauro Accioly
  42. León Fenzl
  43. Livia Maria Santos Silva
  44. Louise Karczeski
  45. Luciano Félix Ferreira
  46. Luis Gonçalves
  47. Luís Gustavo de Souza Azevedo
  48. Luiza Vieira
  49. Luli Hata
  50. Maíra Vilela Francisco
  51. Marcelo Rodrigues Saldanha da Silva
  52. Marcelo de Freitas Rigon
  53. Maria José Braga
  54. Mariana Alves Araújo Lopes
  55. Mateus Mendes
  56. Melissa Cipriano Vanini Tupinambá
  57. Mônica Martins
  58. Mônica Zarattini
  59. Naiara Lopes
  60. Nilo Sergio Silva Gomes
  61. Olga Teixeira de Almeida
  62. Ogan Valdo Lumumba
  63. Paula Guedes Fernandes da Silva
  64. Paulo Rená da Silva Santarém
  65. Pedro Silva Neto
  66. Poema Portela
  67. Raíssa Trevisan
  68. Raquel de Moraes Alfonsin
  69. Ricardo Calmona Souza
  70. Rodrigo da Silva Guerra
  71. Rosana Moreira da Rocha
  72. Samuel Consentino de Medeiros
  73. Sérgio Luiz Homrich dos Santos
  74. Taís Oliveira
  75. Tânia Regina Homrich dos Santos
  76. Terezinha de Fátima dos Santos
  77. Thamires Orefice
  78. Vera Lucia Freitas
  79. Victor Sousa Barros Marcial e Fraga
  80. Wagner Moreira Da Silva
  81. Wallace de Oliveira de Souza
  82. Ydara de Almeida, Advogada
