Is ChatGPT HIPAA Compliant?
No. The consumer ChatGPT product and the standard OpenAI API are not HIPAA compliant, and OpenAI does not sign Business Associate Agreements (BAAs) for those products. Using them to process protected health information (PHI) puts your practice in violation of HIPAA regardless of how careful the prompts are.
Why this question matters
Healthcare practices hear "ChatGPT" everywhere and reasonably ask whether they can plug it into clinical workflows. Much of the confusion comes from vendors marketing AI features as "HIPAA-ready" or "HIPAA-compatible" without specifying what either term actually means.
HIPAA compliance is not a feature you can enable. It is a contractual and architectural state that requires three things simultaneously: a signed Business Associate Agreement with the vendor, technical safeguards that actually protect PHI, and administrative controls documenting who can access what. ChatGPT satisfies none of those out of the box.
What this means in practice
Any use of the consumer ChatGPT web app or the standard OpenAI API sends data to OpenAI's shared infrastructure. Even if OpenAI never looks at the data or trains on it, the company's contractual position is that it is not a Business Associate under HIPAA for those products. Without a BAA, sharing PHI with OpenAI is an impermissible disclosure, and the HHS Office for Civil Rights (OCR) treats that as a reportable breach.
This has real teeth. OCR fines for HIPAA violations range from $100 per violation for unknowing mistakes up to $50,000 per violation for willful neglect, with annual caps in the millions. A single prompt containing a patient's name and diagnosis, pasted into ChatGPT by a well-meaning office manager, could theoretically trigger enforcement. In practice it usually does not, because OCR is stretched thin and prioritizes larger breaches. But the liability is real.
Two products do offer a BAA covering OpenAI's models: the ChatGPT Enterprise plan and Microsoft's Azure OpenAI Service, when configured correctly. Both are enterprise-grade offerings with separate pricing. Neither is what SMB healthcare practices typically adopt when they say they are "trying out ChatGPT."
When ChatGPT can be used safely in a healthcare setting
ChatGPT is perfectly fine for non-PHI work. Drafting internal policy documents, researching billing codes, brainstorming marketing copy, writing staff training materials, summarizing public research papers: all safe. The rule is simple. If the prompt or the expected response involves any information that could identify a specific patient, ChatGPT is off the table. If neither involves PHI, use it freely.
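As a backstop for honest mistakes, that rule can also be enforced in code before anything reaches a public API. Below is a minimal sketch of such a pre-flight gate; the regex patterns are illustrative assumptions, not a complete PHI detector, and no pattern list substitutes for staff training.

```python
import re

# Illustrative patterns only -- a real screen needs a proper
# de-identification tool; regexes alone will miss names and free-text PHI.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # SSN-shaped strings
    re.compile(r"\b\d{3}[.\- ]\d{3}[.\- ]\d{4}\b"),    # US phone numbers
    re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),  # medical record numbers
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),        # dates (DOB, visit dates)
]

def looks_like_phi(text: str) -> bool:
    """Coarse pre-flight check: flag text matching any known PHI shape."""
    return any(pattern.search(text) for pattern in PHI_PATTERNS)

prompt = "Draft a reminder policy for patients who miss appointments."
if looks_like_phi(prompt):
    raise ValueError("Possible PHI detected; do not send to a public LLM.")
# ...otherwise the prompt is at least free of obvious identifiers
```

Treat a gate like this as a safety net, not as the compliance control itself.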
For PHI workflows, the working alternatives are ChatGPT Enterprise with a signed BAA, Azure OpenAI Service configured for HIPAA, or a private LLM deployment that keeps inference inside your own environment. The last option is what most regulated SMBs end up choosing once they see the compliance posture clearly.
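For illustration, here is roughly what the private-deployment option looks like at the code level, assuming a self-hosted server (vLLM, Ollama, and similar tools expose OpenAI-compatible endpoints). The hostname, model name, and auth arrangement below are placeholders, not a prescribed setup.

```python
from openai import OpenAI

# Point the standard client at an endpoint inside your own network.
# "llm.internal.example" is a hypothetical host; in practice this is a
# vLLM or Ollama server running in your VPC, so prompts never leave it.
client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",
    api_key="unused",  # assumption: auth enforced at the network layer instead
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder for whatever model you host
    messages=[{"role": "user", "content": "Summarize today's schedule gaps."}],
)
print(response.choices[0].message.content)
```

Note that the application code is identical to calling the public API; the compliance difference lies entirely in where the endpoint lives and who can see the traffic.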
How Usmart handles this for healthcare clients
We do not send PHI to public LLM APIs. When we build voice agents or agentic workflows for dental, medical, or specialty practices, inference runs on private infrastructure where the LLM provider has no access to prompts or responses. We sign BAAs as a matter of course, log every action the agent takes, and retain call audio only as long as your practice's policy dictates.
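To make "log every action" concrete, here is a minimal sketch of a per-action audit record written as append-only JSON lines. The field names and the hashing of caller identifiers are illustrative assumptions, not our production schema.

```python
import json
from datetime import datetime, timezone

def log_agent_action(action: str, detail: dict,
                     path: str = "agent_audit.jsonl") -> None:
    """Append one audit record per agent action; the file is never rewritten."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "detail": detail,  # keep identifiers hashed or tokenized, never raw PHI
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the voice agent looked up an appointment for a (hashed) caller.
log_agent_action("appointment_lookup", {"caller_hash": "a1b2c3d4", "outcome": "found"})
```

An append-only trail like this is what lets a practice answer "who accessed what, and when" during an audit.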
The practical difference for a practice owner is that the vendor question stops being "which AI is HIPAA safe?" and starts being "does this vendor's architecture match my regulatory surface area?" That is a much easier question to answer, because the right vendor will explain their architecture in plain English during the first strategy call.
Ready to see it working for your business?
Book a free 30-minute strategy call. We will scope your use case and give you honest numbers on timeline, cost, and ROI.