
HIPAA Compliance for AI Agents: The Complete Checklist

I have reviewed more HIPAA audit reports than I care to admit. The pattern is always the same: an organization deploys an AI agent to handle patient intake, appointment scheduling, or clinical note summarization. Six months later, an auditor asks how the agent handles Protected Health Information, and nobody has a good answer. This post is the checklist I wish those teams had used from the start.

## What HIPAA Actually Requires for AI Systems

HIPAA does not mention AI agents. It does not mention machine learning, large language models, or autonomous systems. What it does mention are "covered entities" and "business associates" that create, receive, maintain, or transmit PHI electronically. If your AI agent does any of those things, HIPAA applies fully.

The relevant rules are the Privacy Rule (who can access PHI and under what conditions), the Security Rule (technical safeguards for electronic PHI), and the Breach Notification Rule (what happens when something goes wrong). For AI agents, the Security Rule is where most of the work lives.

## How Agents Handle PHI: The Risk Surface

An AI agent's PHI risk surface is larger than most teams realize. The obvious touchpoint is the conversation itself: a patient types symptoms, medications, or insurance details into a chat. But there are less obvious ones. Your agent's memory or context window may retain PHI across sessions. Logs capture the full conversation. LLM API calls transmit the conversation to a third party. Skill invocations may query EHR systems and pull back records.

Map every point where PHI enters, moves through, or exits your agent. I am not being dramatic when I say this is the single most important step. You cannot protect what you have not inventoried.
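One lightweight way to force that inventory to happen is to write it down as data, so the BAA and encryption checks below fall out of it mechanically. A minimal sketch, with hypothetical touchpoints for a patient-intake agent:

```python
from dataclasses import dataclass

@dataclass
class PhiTouchpoint:
    """One point where PHI enters, moves through, or exits the agent."""
    name: str
    direction: str      # "inbound", "internal", or "outbound"
    data_at_rest: bool  # does PHI persist here (logs, memory, databases)?
    third_party: bool   # does PHI leave your environment here?

# Illustrative inventory for a patient-intake agent; yours will differ.
inventory = [
    PhiTouchpoint("chat input", "inbound", data_at_rest=False, third_party=False),
    PhiTouchpoint("conversation log", "internal", data_at_rest=True, third_party=False),
    PhiTouchpoint("LLM API call", "outbound", data_at_rest=False, third_party=True),
    PhiTouchpoint("EHR skill query", "outbound", data_at_rest=False, third_party=True),
]

# Every third-party touchpoint needs a signed BAA; every at-rest one needs encryption.
needs_baa = [t.name for t in inventory if t.third_party]
needs_encryption = [t.name for t in inventory if t.data_at_rest]
```

Even this toy version surfaces the two findings that dominate real audits: PHI leaving your environment without a BAA, and PHI sitting unencrypted in logs.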

## Business Associate Agreements

If your AI agent is operated by a vendor (not the covered entity itself), that vendor is a business associate under HIPAA. A Business Associate Agreement (BAA) must be in place before any PHI is processed. This applies to the agent platform provider, the LLM API provider, the hosting provider, and any third-party skill or integration that touches PHI.

A common mistake: teams sign a BAA with their cloud provider but forget that their LLM provider also receives PHI through API calls. OpenAI, Anthropic, and Google all offer BAAs, but you have to request them explicitly. Without a signed BAA, every API call containing PHI is a HIPAA violation.
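A simple way to catch that gap is to track BAA status for every vendor in the chain and refuse to go live while any is unsigned. A trivial sketch with illustrative vendor names:

```python
# Hypothetical vendor chain for an agent deployment; names are illustrative.
baa_signed = {
    "cloud provider": True,
    "agent platform": True,
    "LLM API provider": False,  # the one teams most often forget
}

missing = [vendor for vendor, signed in baa_signed.items() if not signed]
if missing:
    print(f"Do not process PHI: no BAA with {', '.join(missing)}")
```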

## The Checklist

Here is the checklist, broken into the categories that auditors actually use:

### Administrative Safeguards

1. **Designate a security officer** responsible for the agent's HIPAA compliance
2. **Conduct a risk assessment** covering all PHI touchpoints in the agent architecture
3. **Document policies** for agent access to PHI, including which skills can query which systems
4. **Train staff** who configure, deploy, or monitor the agent on HIPAA requirements
5. **Establish incident response procedures** specific to agent-related breaches

### Technical Safeguards

1. **Encrypt PHI at rest** using AES-256 or equivalent in all storage (logs, memory, databases)
2. **Encrypt PHI in transit** using TLS 1.2+ for all API calls, including to LLM providers
3. **Implement access controls** so the agent can only reach PHI sources it needs for its defined tasks
4. **Enable audit logging** that records every agent action involving PHI with timestamps and user context
5. **Deploy PII/PHI detection** on all agent inputs and outputs to catch and handle PHI before it leaks into unprotected systems
6. **Set automatic session expiration** so PHI does not persist in agent memory beyond the required timeframe
7. **Use unique authentication** for the agent's service accounts, not shared credentials
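Of the safeguards above, PHI detection on agent inputs and outputs is the one teams most often ask how to start. A minimal sketch using illustrative regex patterns only; a production deployment should use a dedicated PHI detection service, since regexes miss names, diagnoses, and most free-text PHI:

```python
import re

# Illustrative patterns only: real PHI detection needs NER and context,
# not just regexes for structured identifiers.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace matched PHI with placeholders; return redacted text and hit types."""
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, hits

redacted, hits = redact_phi("Patient MRN: 12345678, call 555-867-5309")
```

Run this on both directions: inbound (before the conversation reaches logs or the LLM API) and outbound (before the agent's response leaves your environment).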

### Physical Safeguards

1. **Control data center access** if hosting on-premises
2. **Verify cloud provider compliance** with SOC2 and HIPAA if hosting in the cloud
3. **Implement workstation security** for any machines used to configure or access the agent

## Audit Logging: What Auditors Actually Check

Auditors do not just want to know that you have logs. They want to see that your logs capture the right information and that the logs themselves are protected. For an AI agent, your audit log for each interaction should include: a timestamp, the user or patient identifier (hashed or tokenized), which PHI fields were accessed, what action the agent took, which external systems were called, and whether any PHI was transmitted outside your environment.
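The fields listed above map naturally onto a structured log entry. A sketch, assuming a hypothetical `audit_entry` helper; the HMAC key is a placeholder and would come from your secrets manager, never from source code:

```python
import hashlib, hmac, json
from datetime import datetime, timezone

# Placeholder key for illustration; load from a secrets manager in practice.
AUDIT_HASH_KEY = b"replace-with-key-from-secrets-manager"

def audit_entry(patient_id: str, phi_fields: list[str], action: str,
                external_systems: list[str], phi_transmitted: bool) -> str:
    """One audit log entry with the fields auditors look for. The patient
    identifier is keyed-hashed so the log is not itself a PHI store."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_hash": hmac.new(AUDIT_HASH_KEY, patient_id.encode(),
                                 hashlib.sha256).hexdigest(),
        "phi_fields_accessed": phi_fields,
        "action": action,
        "external_systems": external_systems,
        "phi_transmitted_externally": phi_transmitted,
    })

entry = audit_entry("patient-4821", ["medications", "insurance_id"],
                    "summarize_intake", ["llm_api"], phi_transmitted=True)
```

A keyed hash (or tokenization) matters here: a plain unkeyed hash of a low-entropy patient identifier can be reversed by brute force, which would make the log itself a PHI store.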

Store logs in an append-only system. Auditors will specifically check whether logs can be modified or deleted. ClawPine's audit logging writes to an immutable store by default, which is one less thing to worry about during the audit.
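To see what append-only buys you, here is a toy hash-chained log: each entry's digest covers the previous entry's digest, so modifying or deleting any entry breaks verification from that point on. This only illustrates the tamper-evidence property auditors are checking for; a managed immutable store is the production answer.

```python
import hashlib

class HashChainedLog:
    """Toy tamper-evident log: each digest covers the previous digest."""

    def __init__(self):
        self.entries: list[tuple[str, str]] = []  # (entry, chained digest)

    def append(self, entry: str) -> None:
        prev = self.entries[-1][1] if self.entries else "genesis"
        digest = hashlib.sha256((prev + entry).encode()).hexdigest()
        self.entries.append((entry, digest))

    def verify(self) -> bool:
        prev = "genesis"
        for entry, digest in self.entries:
            if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = HashChainedLog()
log.append('{"action": "ehr_query", "patient_hash": "ab12"}')
log.append('{"action": "llm_call", "phi_transmitted": true}')
```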

## What to Do Before Your First Audit

Run a mock audit. Pull your audit logs for a random week. Can you trace every PHI access back to a specific patient interaction? Can you show that your access controls prevented unauthorized access? Can you produce your BAAs for every vendor in the chain? If the answer to any of these is no, you have work to do before the real auditor arrives.
