The same properties that make AI valuable in healthcare (speed, pattern recognition, and synthesis across large volumes of information) are the ones that create the most significant compliance exposure. Moving faster with patient-adjacent data is an asset if the governance is in place. It is a liability if it is not.

This article is not legal advice. It is a practical orientation for healthcare operators who want to adopt AI thoughtfully, understand where the real risks sit, and build workflows that can survive a compliance review.

The Fundamental Question: Does This Touch PHI?

Protected Health Information is the organizing concept for HIPAA compliance in AI deployments. PHI is any information that can identify an individual and relates to their health condition, the provision of healthcare, or payment for healthcare. That definition is broader than most operators realize.

A patient's name alone is PHI in a healthcare context. An appointment time linked to a clinic name is PHI. A document that contains a diagnosis code attached to an insurance identifier is PHI. If your AI workflow processes, generates, stores, or transmits any of these, HIPAA rules apply to that workflow and to every vendor that touches it.

Before any AI deployment in a healthcare environment, the first question is not "which tool should we use?" It is "does this workflow touch PHI?" That answer determines the entire compliance structure that follows.

Many healthcare AI use cases can be designed to avoid PHI entirely. An AI that summarizes general clinical protocols, drafts staff training materials, or automates scheduling communications for non-specific appointment types may not touch PHI at all. Those workflows operate under a much lighter compliance burden. Design for PHI avoidance wherever possible before adding the governance layers required when PHI is present.

The Five Questions Before You Deploy

Question 1: Is this vendor HIPAA-eligible and willing to sign a BAA?

A Business Associate Agreement is the contractual mechanism by which a vendor accepts responsibility for HIPAA compliance as it relates to the PHI you share with them. If an AI vendor will not sign a BAA, you cannot share PHI with their system, period. This eliminates many consumer-grade AI tools from healthcare deployments even when those tools look technically capable.

Some major AI vendors offer BAA-eligible tiers of their products, often at enterprise pricing, with specific data handling commitments. Others do not offer BAAs at any price point. Ask for a BAA early in vendor evaluation; doing so saves significant time and prevents the compliance exposure created when a tool is deployed before anyone asks the question.

Question 2: Where is the data processed and stored?

Cloud-based AI tools process data on vendor infrastructure. For PHI, you need to know the physical and logical location of that processing, who has access to the data within the vendor organization, how long the data is retained, and whether it is used for model training. Several major AI providers use submitted data for model improvement by default, with opt-out mechanisms buried in enterprise agreements. That default is incompatible with HIPAA for most healthcare workflows.

Question 3: What is the minimum necessary PHI for this workflow?

HIPAA's minimum necessary standard requires that PHI shared with any system or vendor be limited to what is actually required to accomplish the task. An AI that summarizes patient intake forms does not need the patient's full insurance identifier. An AI that drafts appointment reminder communications does not need the patient's diagnosis history.

HIPAA identifies 18 specific categories of PHI identifiers. A single one of these appearing in an AI workflow input triggers full HIPAA compliance obligations for that workflow and every vendor that touches it.

Designing workflows to use the minimum necessary PHI is both a compliance requirement and good security practice. The less PHI flows through any AI system, the smaller the blast radius if that system is compromised.
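One way to make minimum necessary enforceable rather than aspirational is to deny by default: maintain an explicit allow-list of the fields each workflow is approved to send, and drop everything else before it reaches the vendor. The sketch below illustrates the pattern; the workflow names, field names, and approval lists are hypothetical assumptions, not a compliant de-identification tool.

```python
# Minimum-necessary filtering via an explicit allow-list: only the
# fields a workflow has been approved to see are forwarded to the AI
# vendor; every other field is dropped by default.
# Workflow and field names below are illustrative assumptions.

APPROVED_FIELDS = {
    "appointment_reminder": {"first_name", "appointment_date", "clinic_phone"},
    "intake_summary": {"first_name", "age", "reason_for_visit"},
}

def minimum_necessary(workflow: str, record: dict) -> dict:
    """Return only the fields this workflow is approved to send."""
    allowed = APPROVED_FIELDS.get(workflow)
    if allowed is None:
        # An unreviewed workflow gets nothing, not everything.
        raise ValueError(f"No approved field list for workflow: {workflow}")
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "first_name": "Dana",
    "insurance_id": "XZ-4411-889",   # never needed for a reminder
    "diagnosis_code": "E11.9",       # never needed for a reminder
    "appointment_date": "2025-03-14",
}
payload = minimum_necessary("appointment_reminder", record)
# payload keeps first_name and appointment_date; the insurance
# identifier and diagnosis code never leave your environment.
```

The deny-by-default shape matters: a blocklist of known identifiers fails silently when a new field appears, while an allow-list fails closed.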

Question 4: Who reviews the AI output before it reaches a patient or clinical context?

AI systems make errors. In most business contexts, an error means a wrong answer that a human catches and corrects. In a patient-adjacent context, an AI error could affect clinical communication, scheduling, or documentation in ways that have direct patient safety implications. Human-in-the-loop review is not just a governance best practice in healthcare; it is a risk management necessity.

Every AI workflow that produces patient-facing output or informs clinical documentation needs a defined human review step before that output is used. That review step needs to be documented, assigned, and monitored. An AI that drafts a post-visit summary is not ready to send that summary directly to the patient or the medical record. A clinician reviews and approves it first.
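The review step can be enforced in the workflow itself rather than left to procedure: AI output starts in a draft state and cannot be released until a named reviewer signs off, which also produces the who-and-when record the audit trail needs. A minimal sketch, with hypothetical class, state, and field names:

```python
# A minimal review-gate sketch: AI output is held in a "draft" state
# and can only be released after a named clinician approves it.
# Class, state, and field names here are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DraftOutput:
    content: str
    workflow: str
    status: str = "draft"            # draft -> approved
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        """Record the human sign-off: who approved, and when."""
        self.status = "approved"
        self.reviewer = reviewer
        self.reviewed_at = datetime.now(timezone.utc)

    def release(self) -> str:
        """Refuse to release anything a human has not approved."""
        if self.status != "approved":
            raise PermissionError("Output not approved by a human reviewer")
        return self.content

summary = DraftOutput(content="Post-visit summary ...", workflow="visit_summary")
# Calling summary.release() here would raise: no clinician has signed off.
summary.approve(reviewer="dr.alvarez")
text = summary.release()  # now permitted, with reviewer and time recorded
```

The point of the sketch is the failure mode: skipping review is an exception, not a quiet default.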

Question 5: How does the workflow get audited?

HIPAA requires audit controls that record and examine activity in systems containing PHI. If PHI flows through an AI workflow, you need audit logs that capture what data was submitted, what output was generated, who reviewed it, and when. These logs need to be retained according to your organization's retention policies and available for review in the event of a breach or complaint.

Most AI tools do not generate HIPAA-grade audit logs by default. If your workflow requires them, they need to be built into the architecture intentionally, not retrofitted after deployment.
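One intentional design that captures those four elements (input, output, reviewer, timestamp) without turning the log itself into a second store of PHI is to record content hashes rather than content. The schema and function below are a hypothetical sketch, not a reference implementation:

```python
# Sketch of one append-only audit entry per AI transaction, capturing
# what was submitted, what was generated, who reviewed it, and when.
# Payloads are stored as SHA-256 hashes so the log can prove what was
# processed without duplicating PHI. Schema names are assumptions.

import hashlib
import json
from datetime import datetime, timezone

def audit_record(workflow: str, input_text: str, output_text: str,
                 reviewer: str) -> str:
    """Build one JSON-lines audit entry for an AI transaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "reviewed_by": reviewer,
    }
    return json.dumps(entry)

line = audit_record("intake_summary", "patient intake text ...",
                    "generated summary ...", reviewer="dr.alvarez")
# Append `line` to a write-once log, retained per your retention policy.
```

Whether hashing alone is sufficient for your workflows is a question for your compliance counsel; the structural point is that the log entry is written at transaction time, not reconstructed later.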

What This Looks Like in Practice

One of the most common healthcare AI deployments we support involves clinical documentation support, specifically the administrative workload around intake forms, referral letters, and prior authorization requests. These workflows touch PHI directly and require every element described above: BAA-eligible vendor, minimized PHI input, human clinician review before output is finalized, and audit logging of every transaction.

When all five questions are answered before deployment begins, the governance structure takes two to three weeks to build alongside the workflow itself. When those questions are skipped and a workflow is deployed on a consumer tool without a BAA, the remediation process is significantly more expensive than the original build would have been. We have seen both scenarios.

The Opportunity Is Real

None of the above should read as an argument against AI in healthcare. The opportunity is substantial. Administrative burden in healthcare is a documented crisis: burnout, turnover, and reduced time for patient care are all downstream of the paperwork volume that clinical and administrative teams carry. AI can genuinely reduce that burden, and the workflows that do so compliantly are achievable for practices of almost any size.

The prerequisite is asking the five questions before the build begins, not after the tool is already in production. Healthcare AI governance is a design problem, not an afterthought, and the good news is that designing for it upfront adds very little to the cost or timeline of a well-run implementation.