As OpenAI angles its way into healthcare, Anthropic is hot on its tail.
On Sunday, it announced Claude for Healthcare, a suite of tools that allow healthcare providers and consumers to leverage the company’s chatbot for medical purposes through “HIPAA-ready products.”
Claude can now review prior authorization requests, appeal insurance claims, triage patient messages, and support healthtech development for startups, Anthropic said in its announcement.
On the patient side, users can grant Claude access to their lab results and health records so it can summarize their medical history, explain test results, and spot patterns in fitness and health metrics. Anthropic said the “integrations are private by design,” noting that users can choose exactly what information they want to share with Claude, must explicitly opt in to allow the chatbot access to their records, and that the data will not be used to train models.
In a livestream on Monday, Anthropic CEO Dario Amodei said the AI and medical fields need to work together to deploy the technology safely, ethically, and quickly. “Healthcare is one place you do not want the model making stuff up,” he said.
“It's not a replacement for a doctor … it's a second opinion, and that is usually very helpful,” Amodei added. “Not everyone is getting the quality of care that they could get if they had the help of these systems.”
The release comes days after OpenAI debuted ChatGPT Health, which provides users with personalized health and wellness insights on topics like workouts, diets, and test results, based on their medical records and synced data from fitness apps. And on Monday, OpenAI announced it was acquiring a one-year-old startup, Torch Health, to bolster health record ingestion in ChatGPT Health.
Beyond personal health recommendations, several tech giants are eyeing biotech and life sciences to make better use of AI. On Monday, Nvidia and Eli Lilly announced a $1 billion investment over five years into a lab that would use AI to aid in drug discovery. Additionally, Nvidia and Microsoft researchers, working on an international team, used AI to discover new gene-editing and drug therapies.
But getting AI involved in personal health can be a risky endeavor, as evidenced by Google's withdrawal of AI health summaries after the feature served up inaccurate and misleading healthcare information that put users at risk.