On January 13, 2026, eight United States Senators sent a letter to Alphabet, Meta, Reddit, Snap, TikTok, and X stating that they “are alarmed by reports of users exploiting generative AI tools to produce sexualized ‘bikini’ or ‘non-nude’ images of individuals without their consent and distributing them on platforms including X and others.” The senators requested

In December 2025, courts in New Delhi and Mumbai tackled a new breed of lawsuit: leading Indian cinema celebrities fighting back against unauthorized deepfakes and AI-generated impersonations. Nandamuri Taraka Rama Rao (NTR Jr.), R. Madhavan, and Shilpa Shetty, all prominent Indian actors, filed suit and won sweeping court orders aimed at blocking the spread of synthetic images.

In an excellent blog post, “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” ISACA’s Mary Carmichael draws on MIT’s AI Incident Database and its risk domains to summarize lessons from the year’s top incidents. According to Carmichael, an analysis of the incidents revealed recurring patterns across different risk domains, including privacy and security.

OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies and to build malware and phishing campaigns. The use of deepfakes has also increased, including audio and video spoofs deployed in fraud campaigns.

Although “most organizations are aware of the danger,” they “lag behind in [implementing]

Last year, the Illinois Judicial Conference (IJC) created a Task Force on Artificial Intelligence to develop recommendations for how the Illinois Judicial Branch should regulate and use artificial intelligence (AI) in the court system. The IJC made recommendations to the Illinois Supreme Court, which adopted a policy on AI effective January 1, 2025.

The policy