From CEO Deepfakes to AI Slop, AI Incident Tracking Ramps Up

After years of eye-opening statistics about cybersecurity attacks, it is artificial intelligence (AI) incidents’ turn to be tracked and tallied. The AI Incident Database (AIID), a research effort, has compiled reports on 1,140 publicly disclosed AI-related incidents, classified into 23 types of harms and risks. The Organisation for Economic Co-operation and Development (OECD) runs another, mostly automated tracker that has added roughly 330 AI incident reports per month to its database this year. Additionally, in April, the non-profit MITRE launched an AI incident-sharing site that incentivizes companies to confidentially report model tampering, adversarial data injections, voice cloning and other malicious acts targeting AI systems. This article presents observations by leaders at MITRE and AIID on the maturity of AI incident tracking, how to define what counts as an AI incident, incident trends and the benefits of AI incident information sharing. See “Cybersecurity and AI Are Top Global Business Challenges Identified in Kroll Study” (Jul. 16, 2025).
