AI’s Alarming Health Misinformation Risks: Eye-Opening Findings

The study evaluates how effectively safeguards prevent large language models (LLMs) from generating health disinformation and assesses how transparent developers are about their risk-mitigation practices. GPT-4, PaLM 2, and Llama 2 applied protections inconsistently, permitting substantial disinformation to be generated, while Claude 2 maintained stronger safeguards. The results highlight vulnerabilities in the controls governing AI-generated health content.