Summary
Creates disclosure requirements for large generative AI providers (≥1M monthly users), obligating them to provide a visible "manifest" disclosure and latent provenance data for AI-generated image, audio, or video content, and to offer a public AI detection tool. Enforcement is through civil penalties pursued by the Attorney General or local prosecutors.
Healthcare Implications
Healthcare communications, patient education, and marketing that use GenAI media must account for disclosure and provenance requirements when relying on covered tools. Vendors supplying GenAI components to health systems should assess whether licensing or product changes are needed to preserve the required disclosures. The measure supports the authenticity of health information and reduces the risk of deepfake-related patient harm and misinformation.