Anthropic launches privacy-first health AI

By Nat Rubio-Licht

Jan 13, 2026, 12:30pm UTC

As OpenAI angles its way into healthcare, Anthropic is hot on its tail.

On Sunday, it announced Claude for Healthcare, a suite of tools that allow healthcare providers and consumers to leverage the company’s chatbot for medical purposes through “HIPAA-ready products.”

Claude can now review prior authorization requests, appeal insurance claims, triage patient messages, and support healthtech development for startups, Anthropic said in its announcement.

For patients, Claude can be granted access to lab results and health records to summarize medical history, explain test results, and recognize patterns in fitness and health metrics. Anthropic said the “integrations are private by design,” noting that users can choose exactly what information they share with Claude and must explicitly opt in before the chatbot can access their records, and that the data will not be used to train models.

In a livestream on Monday, Anthropic CEO Dario Amodei said the AI and medical fields need to work together to deploy the technology safely, ethically, and quickly. “Healthcare is one place you do not want the model making stuff up,” he said.

“It's not a replacement for a doctor … it's a second opinion, and that is usually very helpful,” Amodei added. “Not everyone is getting the quality of care that they could get if they had the help of these systems.”

The release comes days after OpenAI debuted ChatGPT Health, which provides users with personalized health and wellness insights on topics like workouts, diets, and test results, based on their medical records and synced data from fitness apps. And on Monday, OpenAI announced it was acquiring a one-year-old startup, Torch Health, to bolster health record ingestion in ChatGPT Health.

Beyond personal health recommendations, several tech giants are turning to biotech and life sciences to put AI to better use. On Monday, Nvidia and Eli Lilly announced a $1 billion investment over five years in a lab that will use AI to aid drug discovery. Additionally, Nvidia and Microsoft researchers, working on an international team, used AI to discover new gene-editing and drug therapies.

But getting AI involved in personal health can be a risky endeavor, as Google showed when it withdrew AI health summaries that had served up inaccurate and misleading healthcare information, putting users at risk.

Our Deeper View

AI in healthcare is a double-edged sword. The technology has fundamental flaws that make its use in health problematic: these systems still hallucinate, offering false information with full confidence, and AI models are parrots, ready to spill out their training data when prompted in just the right way. But the US has a healthcare problem, with more than 26 million people, roughly 8% of the population, currently uninsured. And with 40 million people a day already asking ChatGPT for healthcare advice, these tech firms face the challenge of making their models as safe and accurate as possible when health and safety are at stake.