Many civil society organizations and advocates monitor how public institutions in Latin America have conceived and deployed algorithmic systems to support critical determinations affecting people's lives and rights. They play a vital role in shedding light on flaws and fighting to put human rights and constitutional guarantees at the center of government use of AI.

We asked a small but representative group of them to share challenges they have encountered on the ground and to highlight how leveraging Inter-American human rights standards can be instrumental in facing those challenges. You can find their takes below.

  1. Jamila Venturini (Derechos Digitales, Brazil/LatAm)
  2. Priscilla Ruiz Guillén (Artículo 19 México y Centroamérica, México)
  3. Tomás Pomar (Observatorio de Derecho Informático Argentino - ODIA, Argentina)
  4. Clarice Tavares (InternetLab, Brazil)
  5. Juan Diego Castañeda (Fundación Karisma, Colombia)

Jamila Venturini (Derechos Digitales, Brazil/LatAm)

Since 2021, Derechos Digitales has been analyzing the impacts of artificial intelligence (AI) on human rights in Latin America, with a particular focus on government use of AI in public policies. Our main concern has been how AI deployment has served to reinforce structures of dependency and oppression in our region, and how regulatory frameworks have failed to protect groups in situations of vulnerability from discrimination, surveillance, and exclusion. Some of our publications are: "AI & Inclusion in Latin America"; "Towards a feminist framework for AI development: from principles to Practice"; and "Latin America facing artificial intelligence: mapping of regulatory initiatives in the region".

As the use of AI advances stealthily in sensitive areas of public policy, Latin American governments are proposing to discuss its regulation, in line with global trends in the field. It is essential that the region actively participate in these discussions and reflect on how it wants AI to develop, based on the territory, history, context, and social demands of each country. However, we cannot forget that AI is not implemented in a vacuum. The new EFF report is crucial in recalling the human rights obligations that Latin American states have assumed at the Inter-American level. It also highlights how existing frameworks must be taken into account and can provide important clues about tensions that persist when thinking about regulating AI. At the same time, it reiterates that existing commitments and obligations are not altered by technological change and must be honored regardless of whether specific rules exist.

At Derechos Digitales we use the principles of legality, necessity, and proportionality to analyze how States do or do not comply with their human rights obligations when implementing AI systems. We have developed six case studies in the region and are about to launch four new ones. Our findings so far show that multiple shortcomings persist. We have seen, for example, how the very task of monitoring this type of use from civil society is hindered by the absence of transparency and accountability, or even by non-compliance with requirements for access to public information. In the face of this troubling scenario, we welcome EFF's thorough analysis of the Inter-American standards and trust that it will serve as a basis for strengthening public pressure for uses of technology that favor and respect rights in Latin America. We hope that Latin American States and the Inter-American System will receive this contribution, along with the many studies developed in the region on the subject, as a key tool for strengthening the existing framework.

Priscilla Ruiz Guillén (Artículo 19 México y Centroamérica, México)

ARTICLE 19's Mexico and Central America office is an independent NGO promoting people's right to access information and to express themselves in an environment of freedom, security, and equality. ARTICLE 19 works to link the promotion of public policies, the accompaniment of local processes, and the promotion of the highest international human rights standards in the development, use, acquisition, and implementation of emerging technology tools, in order to contribute to strengthening democracy.

The Inter-American Human Rights System provides a point of reference for governments in the Americas to act in accordance with, and apply, the Inter-American corpus iuris, which throughout its existence has articulated the fundamental pillars of democracy. The guiding principles established in Articles 1 and 2 of the American Convention on Human Rights (ACHR) guide member states in fulfilling their commitments, such as the obligations to respect, protect, guarantee, and promote human rights.

At this moment, global discussions on the development, acquisition, and use of emerging technologies by the public sector, particularly generative AI, call for the deliberate design of public policies. Regulations, to name one type of public policy, must be necessary and proportionate: they should not only enhance countries' economies but also adopt human rights standards such as transparency, responsibility, accountability, and the protection of all human rights in the implementation of emerging technologies.

Considering both the Inter-American corpus iuris and current contexts in the Americas, it is important to assess how far public policies have been created and implemented to provide clear AI policies, protocols, and efficiencies for the public sector, in order to avoid incidents that put the full exercise of human rights at imminent risk. Finally, it is clear that collaboration between the public sector, civil society organizations, and the private sector is still needed to ensure ethical and responsible AI practices that respect international human rights standards and thus continue to promote technological innovation for the welfare of society.

Tomás Pomar (Observatorio de Derecho Informático Argentino - ODIA, Argentina)

The Observatorio de Derecho Informático Argentino (ODIA) is an NGO founded by lawyers from the postgraduate program in Computer Law at the University of Buenos Aires together with several technology specialists. Its work is oriented toward the development of full digital citizenship and the defense of digital sovereignty in Argentina. Its activities notably include the research, design, and implementation of strategic litigation in cases involving the defense of human rights in contexts mediated by the use of new technologies.

The report prepared by EFF is a fundamental tool for researchers and activists dedicated to the defense of human rights in digital environments. It presents and develops essential human rights principles that citizens can invoke when automated decision-making systems are implemented. The paper offers a comprehensive approach, analyzing the legal standards applicable to these technologies: it not only explores the limits that human rights impose, but also proposes ways in which public policies should be implemented while ensuring social participation. It is also important to highlight the report's focus on enforceable limitations, grounded in the Inter-American framework, on the exceptions that States generally invoke in the area of transparency.

One of the great merits of the report is that it systematizes, in a practical way, aspects deeply related to some of the first judicial pronouncements on these technologies. Many of its proposed lines of analysis echo the debate generated by the amparo action brought by ODIA against the City of Buenos Aires, which sought the shutdown of the AI-based Fugitive Facial Recognition system used on public roads. In that case, civil society action began after a unilateral announcement by the City Government about the system's implementation. In line with the report's analysis, a key point motivating the decision to order the system's shutdown was the lack of transparency inherent to this type of technology. Also worth highlighting is what the report says about bodies to control and audit these systems, an aspect that was fundamental in the Buenos Aires debate. Finally, it is crucial to underline the report's treatment of exceptions to the public information access regime, especially the application of industrial secrecy to AI systems used by the public sector. This aspect has been central to the debate in Buenos Aires, where, despite a favorable court ruling, the manufacturer refuses to disclose the datasets and other elements of the system provided to the State. This point is today a crucial aspect in the determination of citizens' rights in a reality increasingly mediated by computer systems.

Clarice Tavares (InternetLab, Brazil)

InternetLab is an independent interdisciplinary think tank promoting academic debate and the production of knowledge in the areas of law and technology. Established as a non-profit organization, InternetLab acts as a point of articulation between academics and representatives of the public, private, and civil society sectors, encouraging projects that address the challenges of drafting and implementing public policies on new technologies, including privacy, human rights, and issues linked to social markers of difference.

The relationship between state and citizen is, quintessentially, an asymmetrical one. This asymmetry operates in multiple layers: power, informational capacity, and punitive capacity, among others. With the introduction of digital technologies into the sphere of public power and increasingly complex methods of collecting and processing data, information asymmetries deepen every day, as the Brazilian public administration profoundly expands the datafication of public policies and the automation of decisions. On the one hand, these datafication and automation processes can simplify analyses, speed up the granting of benefits, or mitigate fraud; on the other, they can deepen inequalities and asymmetries between citizens and the state. However, these two concerns do not always go hand in hand among those responsible for formulating public policies. Concerns about optimization, speed, security, and fraud prevention are often put ahead of safeguards for rights such as privacy, non-discrimination, and due process.

InternetLab research has identified challenges in safeguarding citizens' rights in public policy formulation. To illustrate a small part of these challenges, consider the case of Emergency Aid, a cash transfer program aimed at alleviating the economic and social effects of the Covid-19 pandemic. There, the absence of human decision-making, or of administrative processes to challenge automated decisions, had an unequal impact on people denied benefits due to inconsistencies in their registration databases. The Emergency Aid case sheds light on the need for public policies that adopt automated systems to embed due process guarantees throughout, in light of Inter-American Human Rights Standards, so as to prevent mistaken or arbitrary decisions and to ensure that people affected by a decision have procedures to challenge it. This case illustrates the importance of drawing up an operational framework based on the Inter-American Human Rights Standards, offering paths and some answers to the new challenges facing governments that increasingly invest in automation as a response to social inequalities.

Juan Diego Castañeda (Fundación Karisma, Colombia)

Fundación Karisma is a civil society organization in Colombia that works to ensure that digital technologies protect and advance human rights and social justice. We have analyzed data-intensive systems supporting social programs in Colombia. Some of these reports are: "Experimentando con la pobreza: el Sisbén y los proyectos de analítica de datos en Colombia" and "Datos y dignidad: Guía para el uso justo de datos en la protección social desde el caso del Sisbén".

The report outlines several Inter-American human rights standards applicable to Colombia's system for identifying potential beneficiaries of social programs (SISBEN) and the Universal Income Registry (RUI). To observe these standards, the authorities must establish mechanisms for active and effective participation in the design of this classification system, and it should be verified that such participation actually results in changes and reformulations of the process. Finding solutions and mechanisms for people's participation in the design of the SISBEN and the RUI, which are highly biased systems, would help reduce the negative impact they have on individuals and communities.

Other standards that need to be incorporated into the SISBEN relate to due process and informational self-determination. Beneficiaries have the right to be heard by the corresponding administrative authorities in order to contest the classification they receive, since access to social programs depends on it. To date, beneficiaries have no procedures to be heard, nor do they know the procedure by which their classifications are assigned or by which the changes that result in the loss of social benefits are decided. This opacity extends to the disregard of informational self-determination guarantees, such as the right to know what data the authorities hold and the right to request the rectification, modification, and updating of that data when it is erroneous, incomplete, or outdated.