Does GDPR Apply to AI Systems?
Yes. GDPR applies to any AI system that processes personal data belonging to EU residents, regardless of where the system or company is located. Key obligations include lawful basis for processing, data minimization, transparency about automated decisions, and the right to human review under Article 22.
Why this question keeps coming up
Most SMBs asking this question already know GDPR exists. What they don't know is whether their specific AI setup (a chatbot, a recommendation engine, a document summarizer) actually triggers it. The answer isn't always obvious, because GDPR was written before large language models existed.
The regulation doesn't mention AI by name. It regulates the processing of personal data. So the right question isn't 'does this tool use AI?' It's 'does this tool touch personal data belonging to EU residents?' If yes, GDPR applies, full stop, regardless of the AI label on the box.
What GDPR actually requires from AI systems
Any AI system that ingests, stores, analyzes, or outputs personal data tied to EU residents is covered. That includes customer service chatbots reading order histories, HR tools screening CVs, healthcare systems summarizing patient notes, and recommendation engines building behavioral profiles. The data doesn't have to sit in Europe. If your users are in Germany and your servers are in Dallas, you're still subject to GDPR.
Article 22 is the provision that catches most AI systems off guard. It gives individuals the right not to be subject to decisions made solely by automated processing that produce legal or similarly significant effects on them: loan approvals, insurance quotes, job screenings, or medical triage. If your AI makes or heavily influences those decisions, you need explicit user consent, a contractual-necessity argument, or a human in the loop who can genuinely override the output. Rubber-stamping an AI recommendation doesn't count as human review.
Beyond Article 22, GDPR's core obligations apply in full: you need a lawful basis to process the data (consent, legitimate interest, or contract), you must document what data flows into and out of the model, you can't retain personal data longer than necessary, and you must be able to delete a specific person's data on request. That last one is particularly thorny with fine-tuned models, because training data can be difficult to surgically remove once baked in.
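To make the retention and erasure obligations concrete, here is a minimal sketch of what a deletion pathway can look like in application code. All names are hypothetical and this is not a compliance implementation, just an illustration of a store that enforces a retention window and can delete one person's data on request:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class PersonalRecord:
    subject_id: str      # the EU resident this data belongs to
    payload: str         # e.g. a chat transcript fed to the model
    stored_at: datetime

@dataclass
class DataStore:
    retention: timedelta                              # keep nothing longer than necessary
    records: list[PersonalRecord] = field(default_factory=list)

    def purge_expired(self, now: datetime) -> int:
        """Drop records past the retention window; return how many were removed."""
        before = len(self.records)
        self.records = [r for r in self.records
                        if now - r.stored_at < self.retention]
        return before - len(self.records)

    def erase_subject(self, subject_id: str) -> int:
        """Right-to-erasure request: delete everything tied to one person."""
        before = len(self.records)
        self.records = [r for r in self.records
                        if r.subject_id != subject_id]
        return before - len(self.records)
```

The hard part in practice is the model itself, as noted above: a fine-tuned model has no equivalent of `erase_subject`, which is one argument for keeping personal data out of training sets entirely.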
When the answer gets more complicated
If your AI system processes no personal data at all, such as a model that only answers questions about your product catalog using static content, GDPR doesn't apply to that specific function. That's rare in practice, because even logging IP addresses or session identifiers can constitute personal data under GDPR's broad definition.
The stakes escalate with special category data: health records, biometric data, political opinions, religious beliefs. Processing these through an AI system requires explicit consent or a specific legal exemption, not just a legitimate interest claim. Healthcare and HR applications built on public-API tools like ChatGPT or Gemini are especially exposed here, because those vendors' data retention and training policies may conflict directly with GDPR's minimization and purpose-limitation requirements.
How we handle GDPR compliance in the systems we build
We build private LLM deployments, not wrappers around public APIs. That means personal data stays inside a customer's controlled infrastructure rather than passing through a third-party model provider's servers. For EU-facing clients, we structure data flows so that personal data is either anonymized before it reaches the model or processed in a region-specific deployment that satisfies GDPR's data transfer rules without relying on Standard Contractual Clauses as a band-aid.
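As an illustration of the "anonymize before it reaches the model" step, here is a deliberately simplified, hypothetical sketch. It uses regexes, which only catch the easy cases; production pipelines typically rely on NER-based PII detection, but the shape of the step is the same:

```python
import re

# Redact obvious identifiers before a prompt ever reaches the model.
# Regex patterns here are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "PHONE": re.compile(r"\+\d{1,3}[\d\s-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The design point is that redaction happens in your own infrastructure, before the prompt crosses any boundary, so the model provider never sees the raw identifiers.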
For any system that touches Article 22 territory, we build an explicit human-review step into the workflow architecture before deployment. We also document the lawful basis, retention periods, and deletion pathways as part of our standard delivery package, not as an add-on. GDPR compliance isn't a legal afterthought you bolt on after launch. It has to be designed into the system from the start, which is exactly what Secure-by-Design means in practice.
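A human-review step can be enforced structurally rather than by policy. This hypothetical sketch models the AI output as advisory only: nothing takes effect until a named reviewer records an independent outcome, which is what separates genuine human review from rubber-stamping:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    """An AI recommendation that cannot take effect on its own."""
    subject_id: str
    ai_recommendation: str             # e.g. "reject_application"
    reviewer: Optional[str] = None
    final_outcome: Optional[str] = None

    def review(self, reviewer: str, outcome: str) -> None:
        # The reviewer supplies the outcome independently; defaulting it
        # to the AI recommendation would be exactly the rubber-stamping
        # that does not count as human review.
        self.reviewer = reviewer
        self.final_outcome = outcome

    def is_effective(self) -> bool:
        """Only reviewed decisions may be acted on downstream."""
        return self.reviewer is not None and self.final_outcome is not None
```

Downstream systems check `is_effective()` before acting, so an unreviewed AI output simply cannot produce an effect on the individual.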
Ready to see it working for your business?
Book a free 30-minute strategy call. We will scope your use case and give you honest numbers on timeline, cost, and ROI.