Brazil stands at a critical juncture in AI regulation, with its landmark Artificial Intelligence (AI) Act moving through the legislative process. After two years of deliberation, the Senate passed a modified version of the bill in December 2024, which is now headed to the lower house of Congress. The proposed law aims to establish a comprehensive regulatory framework for the development, deployment, and use of AI while protecting fundamental rights and ensuring secure, reliable AI systems that serve human dignity, democratic values, and scientific and technological development.
Following a risk-based approach similar to the EU AI Act, it establishes three tiers of AI systems:
• excessive risk (prohibited),
• high risk (heavily regulated), and
• other systems (basic requirements).
The Act will be enforced by a designated authority with powers to impose significant penalties – up to R$50 million per violation or 2% of company revenue in Brazil. If enacted, it will take effect one year after publication, giving organizations time to adapt to the new requirements. The law emphasizes human-centric values, mandates algorithmic impact assessments for high-risk systems, establishes civil liability frameworks, and promotes innovation through regulatory sandboxes. Notable provisions include mandatory human oversight, rights to explanation and contestation of AI decisions, and strict requirements for public sector AI use.
This act places Brazil among the first Latin American countries to introduce comprehensive AI regulation, joining the global conversation on AI governance alongside the EU and other jurisdictions developing AI regulatory frameworks.
The journey began in February 2022 when Senate President Rodrigo Pacheco formed a committee of legal experts to draft guidance for AI legislation. The experts delivered their report in December 2022, recommending a risk-based approach. This led to Pacheco’s bill proposal in May 2023, followed by creating a temporary commission for artificial intelligence that conducted 24 meetings and 14 public hearings over 19 months.
The Senate’s December 2024 approval came with notable modifications from the original proposal. While Data Privacy Brazil affirmed that “The legislation remains protective of fundamental rights,” several key provisions were altered or removed due to industry pressure. Most notably, algorithms for social media content curation and recommendation were removed from the high-risk category, reducing platform scrutiny requirements.
Recent political shifts in the United States, including changes in tech industry alignment with the Trump administration, may influence the bill’s journey through Brazil’s lower house. Meta’s announced return to “free expression” and pledge to “work with President Trump to push back on governments around the world” signals potential increased resistance to regulation.
Scope of the AI Act
The Brazilian AI Act adopts a risk-based approach, similar to the EU AI Act, categorising AI systems based on the risks they pose to individuals, society, and institutions. The law applies to all stages of the AI lifecycle, covering development, deployment, and post-market monitoring. It targets a broad audience, including:
• Domestic and international companies offering AI services in Brazil.
• Developers, deployers, and operators of AI systems.
• Organizations using AI to make decisions that affect individuals or the public interest.
The scope of the Act is comprehensive, covering both public and private entities, including natural persons and legal entities involved in the development, provision, deployment, or use of AI systems. However, it does not apply to AI used for non-professional personal activities.
Additionally, the law introduces the concept of regulatory sandboxes, encouraging innovation by allowing controlled experimentation under regulatory oversight. It also incorporates provisions for a risk-based approach, meaning different obligations apply based on the level of risk posed by AI systems – ranging from minimal to high risk. Special provisions are set for government use, and additional requirements are outlined for high-risk AI applications to ensure they are deployed safely and ethically.
This broad and flexible scope aims to ensure the regulation of AI in a way that is adaptable to emerging technologies while maintaining fundamental rights, human oversight, and public safety.
The proposed Brazilian AI Act emphasizes a comprehensive framework that balances innovation with the protection of rights and management of risks. It covers key areas such as risk management, transparency, human oversight, accountability, and consumer protection.
Risk Management: AI systems must undergo a preliminary risk assessment and be categorized into three levels:
• excessive risk (prohibited applications),
• high risk (additional requirements), and
• other risks (basic requirements).
The law mandates risk assessment and mitigation processes based on these classifications to ensure that AI systems are safe for public use.
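To make the three-tier classification concrete, here is a minimal sketch in Python. It is purely illustrative: the Act does not define code, and the category names and example use cases below are hypothetical placeholders chosen to mirror the three statutory tiers.

```python
# Illustrative sketch only; category contents are hypothetical examples,
# not the Act's actual lists of prohibited or high-risk applications.
EXCESSIVE_RISK = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"recruitment_screening", "credit_scoring", "critical_infrastructure"}

def classify_ai_system(use_case: str) -> str:
    """Map an AI use case to one of the Act's three risk tiers."""
    if use_case in EXCESSIVE_RISK:
        return "excessive risk: prohibited"
    if use_case in HIGH_RISK:
        return "high risk: additional requirements (e.g. impact assessment)"
    return "other risk: basic requirements"

print(classify_ai_system("recruitment_screening"))
```

In practice, classification would follow the definitions in the final statutory text and the competent authority's regulations, not a hard-coded lookup; the sketch only shows the tiered structure of obligations.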
Transparency: The Act requires AI systems to be explainable and auditable. Companies must ensure that users can understand how decisions are made by AI systems. This includes:
• the right to prior information,
• the right to an explanation of AI decisions, and
• the right to contest decisions.
Human Oversight: The law mandates effective human supervision throughout the AI lifecycle, including the right to human intervention, and emphasizes non-discrimination in the use of AI systems.
Accountability: The legal persons behind AI systems are held accountable for compliance with the law and for addressing any resulting harm. This includes liability frameworks, with strict liability for high-risk systems, as well as defences and exceptions for responsible actors.
Consumer Protection: Users are granted privacy, data protection, and the right to challenge AI-driven decisions. The Act also sets out specific individual rights, such as the right to non-discrimination and the right to privacy and data protection.
In addition to these key principles, the Act includes specific provisions for governance, civil liability, oversight, innovation support, and compliance mechanisms. These include documentation requirements, incident reporting, and the promotion of best practices.
Special provisions for public sector AI use and regulatory sandboxes allow for controlled experimentation under oversight. Through these measures, the Brazilian AI Act ensures the responsible use of AI systems while fostering innovation and protecting fundamental rights.
The Brazilian AI Act strongly emphasizes foundational values, which form the backbone of AI development, deployment, and usage in Brazil.
These values include:
1. Human-centric approach
2. Respect for human rights and democratic values
3. Free development of personality
4. Environmental protection and sustainable development
5. Equality, non-discrimination, plurality, and respect for labour rights
6. Technological development and innovation
7. Free enterprise, free competition, and consumer protection
8. Privacy, data protection, and informational self-determination
9. Promotion of research and development to stimulate innovation in productive sectors and public administration
10. Access to information, education, and awareness about AI systems and their applications
By explicitly outlining these values, the Act offers clear guidance for legal professionals, businesses, and society at large on the ethical priorities that AI systems should align with. These values are critical for establishing an AI ecosystem that is not only innovative and competitive but also responsible, inclusive, and respectful of individual rights.
The importance of these values lies in their role in combating the growing trend of ‘AI exceptionalism,’ where AI technologies are often treated as exceptions to traditional legal frameworks. By prioritizing these core values, the Brazilian Act sets a strong ethical foundation for AI governance and ensures that AI systems are deployed in a way that benefits society as a whole, rather than merely serving commercial interests. Unlike the EU AI Act, which implies similar values, Brazil’s Act offers a more explicit and detailed list, which can help businesses align their AI policies with these ethical guidelines more clearly.
The principles outlined in Brazil’s AI Act are key guidelines for how AI systems should be designed, implemented, and operated.
These principles include:
1. Inclusive growth, sustainable development, and well-being
2. Self-determination and freedom of decision and choice
3. Human participation and effective human supervision throughout the AI lifecycle
4. Non-discrimination
5. Justice, equity, and inclusion
6. Transparency, explainability, intelligibility, and auditability
7. Reliability, robustness of AI systems, and information security
8. Due process, contestability, and the adversarial principle (contraditório)
9. Traceability of decisions as a means of accountability
10. Accountability, responsibility, and full reparation of damages
11. Prevention, precaution, and mitigation of systemic risks
12. Non-maleficence and proportionality
They emphasize the need for fairness, reliability, and robustness in AI systems, alongside human oversight throughout the AI lifecycle. By focusing on these principles, the Act ensures that AI systems are not only safe and effective but also just, transparent, and accountable.
These principles are significant because they help provide a concrete framework for interpreting and applying the law in various AI-related scenarios. They serve as a practical guide for businesses and AI developers to align their operations with the regulatory requirements of the Act. Moreover, the principles are meant to instil trust in AI technologies by ensuring that they are deployed in a way that respects individual rights and societal values.
When compared to the EU AI Act, the Brazilian Act’s principles are more comprehensive and specific. While the EU AI Act includes similar principles, it is less detailed and does not provide the same level of explicit guidance on what constitutes responsible AI. Brazil’s inclusion of detailed principles ensures a more actionable framework for AI governance, particularly for businesses looking to ensure their AI systems are compliant with both legal and ethical standards.
For companies operating in Brazil, the AI Act presents both challenges and opportunities. Consider a recruitment firm utilizing AI-driven tools to screen job applicants. Under the new legislation, the firm must:
• Ensure Transparency: Inform candidates when AI is used in the evaluation process.
• Provide Human Oversight: Offer applicants the option to have decisions reviewed by a human recruiter.
• Mitigate Bias: Regularly assess AI systems to identify and eliminate any discriminatory biases.
By adhering to these requirements, businesses can enhance trust with stakeholders and demonstrate a commitment to ethical AI practices.
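The recruitment example above can be sketched in code. This is a hypothetical illustration of how a screening tool might surface the Act's transparency and human-oversight duties at the point of decision; every name here is ours, not drawn from the Act or any real product.

```python
# Hypothetical compliance sketch: disclose AI involvement to the candidate
# and guarantee a path to human review. All identifiers are illustrative.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_score: float
    ai_disclosure: str            # transparency: candidate told AI was used
    human_review_available: bool  # oversight: recruiter review on request

def screen_candidate(candidate_id: str, ai_score: float) -> ScreeningResult:
    """Wrap an AI score with the disclosures a compliant workflow might attach."""
    return ScreeningResult(
        candidate_id=candidate_id,
        ai_score=ai_score,
        ai_disclosure="This application was evaluated with the help of an AI system.",
        human_review_available=True,
    )

result = screen_candidate("c-101", 0.87)
```

Bias mitigation, the third duty, would sit outside this request path, in periodic audits of the model's outcomes across demographic groups.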
To stay ahead, companies should consider the following steps:
• Comprehensive AI Audits: Identify all AI systems in use and assess their risk classifications under the new law.
• Policy Development: Establish clear guidelines that align with the AI Act's provisions, ensuring all AI applications adhere to ethical standards.
• Employee Training: Educate staff on the implications of the AI Act and best practices for responsible AI use.
• Stakeholder Engagement: Foster open communication with stakeholders, including customers and partners, about the company's AI practices and compliance efforts.
A dedicated regulatory body will oversee the enforcement of the Brazilian AI Act, ensuring compliance through audits, reporting mechanisms, and investigation of violations. The Executive Branch will designate a competent authority to carry out these roles, which include regulation development, implementation oversight, and sanction application. Non-compliance with the provisions of the Act could result in substantial administrative sanctions, including fines of up to R$ 50 million (approximately US$ 10 million) per violation. For private companies, fines may reach up to 2% of revenue in Brazil, excluding taxes.
The enforcement of these provisions is designed to ensure accountability and adherence to the law, similar to the financial deterrence measures seen in the EU’s GDPR. In addition to fines, violators may face publication of the violation, prohibition or restriction from participating in regulatory sandboxes for up to five years, temporary or permanent suspension of AI system operations, and prohibition of processing certain databases.
The application of sanctions follows an administrative due process and considers factors such as the severity of the violation, the economic benefit gained, the cooperation level, and the promptness in corrective measures.
Special provisions are in place for excessive risk systems, which are subject to mandatory minimum penalties, including a fine and suspension of activities for legal entities. Sanctions do not exclude other civil or criminal liabilities, and consumer protection law sanctions remain applicable. An appeals process is available, and the regulatory authority will be required to provide detailed justification for the sanctions, ensuring transparency and fairness in the enforcement process.
Brazil’s AI Act draws heavily from the EU AI Act, reflecting the “Brussels Effect.” Both laws emphasize a risk-based approach, human oversight, and transparency. However, there are notable differences:
• Principle-Based Framework: Brazil's act provides a comprehensive list of values and principles, offering clarity and interpretative guidance. The EU AI Act, while implying similar principles, lacks an explicit and enforceable list.
• Simplicity and Accessibility: The Brazilian draft is more succinct and organized, avoiding the verbosity of the EU AI Act, which can lead to complexity and enforcement challenges.
• Local Adaptation: While inspired by the EU framework, Brazil's Act integrates domestic priorities, such as environmental sustainability and respect for labour rights.
Brazil’s AI Act represents a pivotal step in shaping the future of AI governance in Latin America. By blending global best practices with local priorities, the legislation provides a robust foundation for trustworthy and ethical AI. For businesses operating in Brazil, aligning with these new regulations is not just a compliance issue – it’s an opportunity to contribute to a human-centric, innovative, and sustainable AI ecosystem.
As countries worldwide draft their AI regulations, Brazil’s principle-based approach offers valuable insights, demonstrating how to balance global inspirations with domestic needs effectively.
The coming months will be crucial in determining whether Brazil can maintain its comprehensive approach to AI regulation or if industry pressure and international political shifts will lead to further modifications. The outcome may serve as a key indicator of how US tech companies, with renewed political backing, approach global AI regulation.
Disclaimer: This blog post is intended solely for informational purposes. It does not offer legal advice or opinions. This article is not a guide for resolving legal issues or managing litigation on your own. It should not be considered a replacement for professional legal counsel and does not provide legal advice for any specific situation or employer.