GDPR and Artificial Intelligence

According to some estimates, developments in artificial intelligence (AI) could boost the global GDP in 2030 by 14 percent—or in absolute terms, $15.7 trillion. In attempting to capture gains from this economic growth, governments worldwide have been competing to support AI development and adoption.

But that growth may be affected by the way governments regulate AI and the large volumes of digitally stored data on which AI depends. In 2018, the European Union (EU) introduced what has been described as “the toughest privacy and security law in the world,” the General Data Protection Regulation (GDPR). The GDPR enshrines a series of data protection principles and regulates entities that “process the personal data of EU citizens or residents” or “offer goods or services to such people,” regardless of whether those entities are located within the EU. To encourage compliance, the GDPR allows each EU member state’s data protection authority—one of the “independent public authorities that supervise” the regulation’s application—to fine violators the greater of 20 million euros “or 4 percent of the firm’s worldwide annual revenue from the preceding financial year.”

As AI and machine learning evolve, regulators seek to protect the public without stifling innovation. Because these technologies rely on ever-growing volumes of data, laws such as the GDPR could limit AI development. In a recent paper, Joel Thayer of Phillips Lytle LLP and Bijan Madhani of the Computer & Communications Industry Association consider whether compliance with the GDPR is even possible for companies developing and using machine learning and AI. They argue that the GDPR articulates four rights that could pose a significant challenge to AI development: the right against automated decision-making, the right to erasure, the right to data portability, and the right to explanation.

Although the GDPR and its companion Directive on Data Protection in Criminal Matters “clearly give the right to the data subject not to be subjected to a fully automated decision, including profiling, the exceptions to this right hollow it out to the extent that the exceptions themselves become a rule,” Maastricht University’s Maja Brkan argues. Brkan suggests that these weaknesses become even more apparent where “the member states or the Union itself might provide for further exceptions to allow for a broader use of automated decision-making.” Brkan further argues that “data subjects should have the right to familiarize themselves with the reasons why a particular decision was taken” to protect themselves using the GDPR, but the Directive on Data Protection in Criminal Matters “does not provide for such a right, which puts into question the compatibility of its provision on automated decision-making with the EU Charter of Fundamental Rights.”

Focusing on the GDPR’s Article 22 and the right to an explanation, the University Carlo Cattaneo’s Elena Falletti argues that, to be appropriate, the measures called for by this provision require human intervention—that is, “someone who has the necessary authority, ability, and competence to modify or revise the decision disputed by the user.” Falletti also addresses the idea that, in striving to provide transparency, explanations of technical subject matter such as AI “may not be sufficient if the information received is not comprehensible to the recipient.” Falletti asserts that, instead of explaining how an algorithm works, it would be appropriate to provide comprehensible information and to describe the relative emphasis placed on different pieces of information.