AI’s Alarming Health Misinformation Risks: Eye-Opening Findings

The study evaluates the effectiveness of safeguards that prevent large language models (LLMs) from generating health disinformation and assesses how transparent developers are about risk mitigation. Safeguards in GPT-4, PaLM 2, and Llama 2 were applied inconsistently, allowing substantial disinformation to be generated, while Claude 2 maintained stronger protections. The results highlight vulnerabilities in the controls on AI-generated health content.

Breakthrough AI Rules: A Must-Know for Journal Authors

The study investigates how prevalent generative artificial intelligence (GAI) is in academic publishing and what guidance journals provide on its use. It finds that while most journals prohibit listing GAI as an author, guidelines for its use vary widely, particularly around disclosure. The analysis emphasizes the need for consistent standards across publishers.