OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies and create malware and phishing campaigns. In addition, the use of deepfakes has increased, including audio and video spoofs used for fraud campaigns.

Although “most organizations are aware of the danger,” they “lag behind in [implementing] …”

On October 6, 2025, Bloomberg reported that the Securities and Exchange Commission (SEC) has launched an investigation into AppLovin Corporation’s data-collection practices, following an alleged whistleblower complaint and a series of short-seller reports. We previously covered the shareholder class action against AppLovin in another blog post. The company is a mobile advertising technology business that …

U.S. District Judge Amit P. Mehta sanctioned an attorney who filed a brief in which every case citation was erroneous, after the attorney admitted to relying on generative AI to write the brief. The attorney had used Grammarly, ProWritingAid, and Lexis’ cite-checking tool. The attorney was ordered to pay sanctions, including opposing …

On July 24, 2025, during a public meeting and following public comment, the California Privacy Protection Agency (CPPA) Board unanimously approved amendments to the regulations under the California Consumer Privacy Act (CCPA). These substantial changes include new obligations for businesses subject to the CCPA. Significantly, the updates reflect the CPPA’s new regulatory focus on AI decision-making and cybersecurity, in addition …

On August 4, 2025, Illinois Governor JB Pritzker signed into law the Wellness and Oversight for Psychological Resources Act, which took effect immediately and “prohibits anyone from using AI to provide mental health and therapeutic decision-making, while allowing the use of AI for administrative and supplementary support services for licensed behavioral health professionals.” The …

On July 24, 2025, the White House released the “White House AI Action Plan,” which includes over 90 policy actions focused on accelerating innovation, building AI infrastructure, and increasing international diplomacy around artificial intelligence (AI). The Plan focuses on removing regulatory barriers and requires that systems be free from ideological bias and “woke” policies.


Finally, in this last part of the series, after providing the building blocks for a strong Information Governance (IG) program and operationalizing that framework, we discuss how to sustain your IG program. An effective IG program powered by the ARMA IGIM framework isn’t static. To remain relevant in an AI-driven world, it must be scalable …

Last week, we outlined the building blocks for a strong IG program. Now that you’ve laid the groundwork, it’s time to bring your IG program to life. The ARMA IGIM framework emphasizes operational execution in three key areas:

  1. Procedural Framework
  2. Capabilities
  3. Information Lifecycle

These domains are where your framework tangibly interacts with AI systems.