“Personal data breaches? Like, someone gaining unauthorised access to personal data? Why? I guess it makes sense to be prepared, but I doubt that this happens often, right?”
Great questions, and ones many people ask, but no, it happens more often than one might think. Over 21 million records were breached in October 2025 alone. One might also assume this mostly affects smaller companies; however, it regularly “shocks” major industry leaders too. In 2023, T-Mobile suffered a personal data breach in which a malicious actor stole the personal data of more than 37 million customers, including names, billing addresses, phone numbers, and dates of birth. Although T-Mobile contained and publicly disclosed the breach, critics pointed to its history of repeated incidents, casting a shadow over the company’s approach to personal data protection.
You may be blissfully unaware that a data breach could happen to you, or equally unaware that you may never be fully prepared to stop one from happening; neither position is a safe one to rely on. Not all data breaches warrant the same approach: not all of them are reportable, and not all security incidents are personal data breaches, among the many other variables that factor into what constitutes a personal data breach and how to act accordingly.
Of course, professionals such as our team at GDPRLocal can always lend a helping hand with a timely and accurate response when a data breach does occur; however, as with all regulatory matters, it is better to have at least a solid foundation and a strong level of preparedness beforehand. In other words, instead of assuming a data breach won’t occur, plan as though it could happen tomorrow.
So, let’s take a step back – what is a personal data breach?
Most data protection legislation defines personal data breaches similarly. The GDPR (and UK GDPR), for instance, provides a definition in Article 4(12): “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed”.
Moreover, the Article 29 Working Party, in its Opinion 03/2014 on Personal Data Breach Notification, categorised personal data breaches as (i) confidentiality breaches, (ii) integrity breaches and (iii) availability breaches. Whatever the category, personal data breaches are security incidents that negatively impact, or threaten to negatively impact, personal data. This can manifest in a range of adverse effects on individuals, including physical, material or non-material damage, and affected individuals may seek compensation for the damage suffered.
Personal data breaches may occur due to cyberattacks, lost or stolen devices, or insider threats, with research showing that human error is among the top contributors to data breaches, pressuring organisations to focus more on employee training.
If a personal data breach does occur, your first step should be to contain it as quickly and as fully as possible. Then assess its impact – this helps you determine its severity and your recovery options. Once the initial assessment is complete, investigate how the breach occurred, what happened to the compromised data and what effect it has on individuals. If you conclude that the breach is likely to result in a risk to individuals’ rights and freedoms, the GDPR, for instance, mandates that you notify the relevant supervisory authority without undue delay and, where feasible, no later than 72 hours after becoming aware of it; where that risk is high, you must also inform the affected data subjects without undue delay. As a final step, identify and implement measures to prevent this type of incident from recurring.
While we can acknowledge that this is easier said than done, and additional or fewer actions may be required in different jurisdictions, having procedures in place that address the above can greatly assist in these circumstances.
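To make the sequence above a little more concrete, here is a minimal, purely illustrative Python sketch of how an internal tool might track the key decision points of such a procedure. The BreachAssessment fields, the simplified risk labels and the 72-hour calculation are our own assumptions for the sake of the example; a real assessment is a legal and organisational judgement, not something a script can make for you.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical illustration only: fields and risk labels are simplified assumptions.
@dataclass
class BreachAssessment:
    detected_at: datetime      # when the organisation became aware of the breach
    risk_to_individuals: str   # "none", "risk", or "high_risk" (simplified)
    contained: bool            # whether the breach has been contained

def notification_plan(b: BreachAssessment) -> list[str]:
    """Return a simplified, GDPR-style action list for a breach assessment."""
    actions = []
    if not b.contained:
        actions.append("Contain the breach and preserve evidence")
    if b.risk_to_individuals in ("risk", "high_risk"):
        deadline = b.detected_at + timedelta(hours=72)
        actions.append(
            "Notify the supervisory authority without undue delay "
            f"(no later than {deadline:%Y-%m-%d %H:%M})"
        )
    if b.risk_to_individuals == "high_risk":
        actions.append("Inform affected data subjects without undue delay")
    actions.append("Document the breach, its effects and the remedial action taken")
    return actions

if __name__ == "__main__":
    assessment = BreachAssessment(
        detected_at=datetime(2025, 10, 1, 9, 30),
        risk_to_individuals="high_risk",
        contained=False,
    )
    for step in notification_plan(assessment):
        print("-", step)
```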
The development of new technologies has given rise to increasingly innovative methods of conducting cyberattacks. One striking way in which they have “evolved” over the years is their entanglement with geopolitics.
For instance, spikes in the Geopolitical Risk Index are followed by a 35–45% increase in cyber incidents targeting U.S. government systems and critical infrastructure. In other words, heightened global tensions lead to an increase in cyberattacks. This becomes more alarming when we consider that these attacks target crucial sectors such as healthcare, energy and transportation, to name a few, with the potential to expose enormous amounts of sensitive data and to affect energy grids, disrupting transport operations, supply chains and the like.
And what about the use of AI? Researchers have found that the number of users of generative AI apps has tripled over the past year. During this period, prompt activity surged by a staggering 500%, with organisations now sending an average of 18,000 prompts per month – a significant increase from the prior 3,000. The same report shows that individuals, including employees, treat AI tools as a secure and trusted place to share information; often, however, they are also sharing highly protected data (e.g. bank details or medical records).
As for the effect data breaches have on large industry leaders, the recent Mixpanel incident, which resulted in a breach of OpenAI data, sparked a conversation about the data protection risks that come with AI, including whether deploying such systems changes how those risks are handled. The question may have been lingering for some time, but seeing one of the largest AI deployers issue a statement addressing a data breach brought even more concerns to the surface.
IBM’s recent “Cost of a Data Breach” report, analysing companies’ approach to access controls for AI systems, concluded that “a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it”. This, in turn, increases the risk of personal data being exposed. The research highlighted that 63% of breached organisations either do not have an AI governance policy or are still developing one, while, among organisations that do have a policy in place, only 34% perform regular audits for unsanctioned AI. Furthermore, one in five organisations reported a breach due to shadow AI, and only 37% have policies to manage AI or detect shadow AI. Lastly, 16% of the breaches studied involved attackers using AI tools, most often for phishing or deepfakes.
However, the study also highlights some positive findings: thanks in part to AI-assisted defences, data breaches are being uncovered and contained faster, with the breach lifecycle hitting a “record low”, and the global average cost of a data breach has also declined. Nevertheless, healthcare breaches remain among the most “expensive”, as breaches in this sector take the longest to identify and contain – not to mention that they may expose some of the most sensitive categories of personal data.
Another review of more than 40 articles found that “25.87% of studies identify human factors as critical vectors for AI privacy breaches, challenging traditional security approaches that prioritise technical controls over organisational and behavioural considerations”.
It is important to note that it is not only organised hacker groups that exploit AI’s weaknesses. Between 2020 and 2021, an individual exploited weaknesses in ID.me’s automated identity verification system to file at least 180 fraudulent unemployment claims in California. Using stolen personal data and forged driver’s licenses bearing his own image, he bypassed biometric checks and obtained verification under stolen identities. Dozens of claims were approved, resulting in approximately $3.4 million in improper benefit payments. The scheme was later detected by ID.me and the California Employment Development Department, and in May 2023 the individual was sentenced to six years and nine months’ imprisonment for wire fraud and aggravated identity theft. The case underscores the risks of relying on automated and AI-assisted identity verification systems without sufficient safeguards.
As with all emerging technologies, the rise and development of AI has its pros and cons. One of the biggest concerns is the risk it poses to individual privacy: organisations collect large volumes of data to train these systems, and such large datasets make them a tempting target for hackers, in turn increasing the possibility of a data breach.
There are several ways in which AI can also help in responding to data breaches, such as better detection of threats and anomalies, or support in determining whether a breach is reportable, and so on. On the other hand, the ways in which AI can assist in engineering or otherwise effectuating a data breach are far more numerous.
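To give a flavour of the detection side, the sketch below uses scikit-learn’s IsolationForest to flag unusual access events in a toy log of download volumes and login times. The features, figures and threshold are illustrative assumptions only, not a production design or a method any particular vendor uses.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: each row is one access event, described by two toy
# features: [megabytes downloaded, hour of day of the login].
normal_events = np.array([
    [12, 9], [8, 10], [15, 11], [10, 14], [9, 16], [14, 15], [11, 10],
])
new_events = np.array([
    [13, 11],   # looks like ordinary working-hours activity
    [950, 3],   # large download at 3 a.m. -- worth a closer look
])

# Learn what "normal" access looks like, then score new events.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_events)

for event, label in zip(new_events, detector.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {status}")
```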
AI development opens a gateway to easier, automated ways for cyber-attackers to breach companies’ data and, in doing so, abuse privacy rights. Obvious examples include phishing, malware installation, data exfiltration, DoS/DDoS attacks and the like. AI can even be used to carry out security incidents in which personal data is not the primary target, although it remains a vulnerable asset that can still come to harm.
Statistics from the past year show that 87% of organisations worldwide have dealt with an AI-powered cyberattack. It is also worth mentioning that AI helps cyber-attackers perform highly individualised attacks by profiling and predicting individual behaviour.
One prominent issue with the increased volume of “AI data breaches” is the lack of transparency in AI decision-making, commonly described as the “black box” problem. Because the internal logic of many AI systems is not readily explainable, it can be difficult to determine how specific outcomes are produced. This opacity complicates both audits and regulatory compliance, as oversight bodies require clear and interpretable decision processes, particularly in high-risk sectors such as healthcare and finance. Black box AI systems can obscure how personal data is processed, combined or inferred, making it difficult to detect when a data breach has occurred or to understand its scope and impact.
Expanding on the above, with regulations and attempts at regulating AI already underway, one question that arises is whether personal data protection is taken into account when these frameworks are conceived. The EU AI Act introduces a risk-based framework and regulates how companies develop, deploy and use AI systems in the EU. Depending on the risk level, organisations may be required to implement safeguards (such as risk management, data governance, transparency, human oversight and conformity assessments), with additional obligations for high-risk and general-purpose AI systems. The EU AI Act currently requires that serious incidents be promptly reported, but it neither explicitly defines personal data breaches nor gives them the special attention the GDPR does.
It is safe to say that AI can be used both for good and for ill. In terms of personal data breaches, AI can play a vital preventive role, especially in industries that handle predominantly sensitive personal data (e.g. healthcare). Maintaining strong, secure defence mechanisms helps ensure such personal data is adequately protected. AI does not substitute for the technical and organisational measures you should implement; rather, it reinforces and amplifies them.
As a final note, AI is not the “enemy”; it reminds us to be vigilant and align our operations with innovative and contemporary challenges. While at the start of this millennium it was difficult to imagine that service provision would become more digital than in-person, 26 years later, it is nearly impossible to envision technological advancements without the presence of AI.
Although there is no one-size-fits-all approach, we can offer guidance on how to deal with personal data breaches, including those powered by AI. To start with, we help organisations prevent data breaches by implementing robust procedures, such as staff training and system audits, and by putting proactive safeguards in place. If a breach does occur, we ensure an effective incident response plan is ready to minimise harm and manage the situation efficiently. Where reporting thresholds are met, we assist with notifications to supervisory authorities and affected data subjects, while also reviewing and improving the processes that contributed to the breach, to reduce future risk and strengthen compliance. We highly encourage you to be well prepared: at the very least, have measures and procedures in place before a data breach occurs, equip your employees with skills in the appropriate use of AI, and comply with the relevant regulations to avoid heavy fines. We can assist you with all of this – just drop us a message, and we will happily support you with your data protection and AI compliance needs!