In an excellent blog post, “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” ISACA’s Mary Carmichael summarizes lessons learned from the top AI incidents of 2025, drawing on MIT’s AI Incident Database and its risk domains. According to Carmichael, an analysis of the incidents showed recurring patterns across different risk domains, including privacy, security, reliability, and human impact, and most of the problems were predictable and avoidable.

Carmichael notes that her blog post “reviews where those patterns appeared and what needs to change in 2026 so organizations can use AI with greater confidence and control.”

Consider reading the article, but in a nutshell, her lessons are:

  1. Treat AI systems like core infrastructure—enforce MFA, unique administrative accounts, privileged access reviews, and security testing, particularly where personal information is included.
  2. To combat discrimination and toxicity, facial recognition technology can be used to support investigations but should not be “the deciding evidence.” Require corroborating evidence, publish error rates by race and other characteristics, and log every use.
  3. Deepfakes are on the rise: “Organizations should monitor for misuse of their brands and leaders. This includes playbooks for rapid takedowns with platforms and training employees and the public to ‘pause and verify’ through secondary channels before responding.”
  4. Attackers are using AI models for cyber-espionage. “Assume attackers have an AI copilot. Treat coding and agent-style models as high-risk identities, with least-privilege access, rate limits, logging, monitoring, and guardrails. Any AI that can run code should be governed like a powerful engineer account, not a harmless chatbot.” (See the sketch after this list.)
  5. Chatbots and AI companion apps have engaged in harmful conversations. Build AI products with safety-by-design: “clinical input, escalation paths, age-appropriate controls, strong limits and routes to human help. If it cannot support these safeguards, it should not be marketed as an emotional support tool for young people.”
  6. AI providers are alleged to be adding air pollution, noise, and industrial traffic to neighborhoods. Due diligence information, including “energy mix, emissions and water use,” should be collected “so AI procurement aligns with climate and sustainability goals.”
  7. AI tools are confident, but often incorrect. Hallucinations are frequent and pose safety risks. “Design every high-impact AI system with the assumption it will sometimes be confidently wrong. Build governance around that assumption with logging, version control, validation checks and clear escalation so an accountable human can catch and override outputs.”
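
Lesson 4 lends itself to a concrete illustration. Below is a minimal sketch, in Python, of what treating an agent-style model as a least-privilege identity with rate limits and logging might look like; the permission names, limits, and actions are hypothetical, not drawn from Carmichael’s post:

```python
# Illustrative only: governing an agent-style model as a least-privilege identity.
# The permission names, rate limits, and actions are assumptions for this sketch.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

class AgentIdentity:
    def __init__(self, name, permissions, max_calls_per_minute):
        self.name = name
        self.permissions = set(permissions)  # least privilege: explicit allowlist
        self.max_calls = max_calls_per_minute
        self.call_times = []

    def authorize(self, action):
        """Allow an action only if it is allowlisted and under the rate limit."""
        now = time.monotonic()
        self.call_times = [t for t in self.call_times if now - t < 60]
        if action not in self.permissions:
            log.warning("%s denied: %s not in allowlist", self.name, action)
            return False
        if len(self.call_times) >= self.max_calls:
            log.warning("%s rate-limited on %s", self.name, action)
            return False
        self.call_times.append(now)
        log.info("%s allowed: %s", self.name, action)  # every use is logged
        return True

coder = AgentIdentity("code-agent", {"read_repo", "open_pr"}, max_calls_per_minute=30)
coder.authorize("open_pr")      # allowed and logged
coder.authorize("deploy_prod")  # denied: not in the allowlist
```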

Carmichael outlines strategic goals to consider in 2026 to leverage the lessons learned in 2025. Her final thought, near and dear to my heart, is that having an AI governance program will give organizations a competitive advantage in 2026. “Organizations that maintain visibility, clear ownership and rapid intervention will reduce harm and earn trust. With the right oversight, AI can create value without compromising safety, trust or integrity.” I couldn’t have said it better. If you have not yet developed and established an AI governance program, Q1 of 2026 is a perfect time to get started.

There are many factors to consider when assisting clients with assessing the use of artificial intelligence (AI) tools in an organization and developing and implementing an AI Governance Program. Although adopting an AI Governance Program is a no-brainer, no single form of governance program fits every organization. Each organization has to evaluate how it will use AI tools, whether (and how) it will develop its own, whether it will allow third-party tools to be used with its data, the associated risks, and what guardrails and guidance to provide to employees about their use.

Many organizations don’t know where to start when thinking about an AI Governance Program. I came across a guide that I thought might be helpful in kickstarting your thinking about the process: Syncari’s “The Ultimate AI Governance Guide: Best Practices for Enterprise Success.”

Although the article scratches the surface of how to develop and implement an AI Governance Program, it is a good start to the internal conversation regarding some basic questions to ask and risks that may be present with AI tools. The article mentions AI regulations, including the EU AI Act and GDPR, but it is also important to consider the state AI regulations being introduced and passed daily in the U.S. In addition, when considering third-party AI tools, it is important to question the third party on how it collects, uses, and discloses company data, and whether company data is being used to train the AI tool.

Now is the time to start discussing how you will develop and implement your AI Governance Program. Your employees are probably already using AI tools, so assess the risks and put some guardrails around their use.

A recent report published by Cyera entitled “State of AI Data Security: How to Close the Readiness Gap as AI Outpaces Enterprise Safeguards,” based on a survey of 921 IT and cybersecurity professionals, finds that although 83% of enterprises “already use AI in daily operations…only 13% report strong visibility into how it is being used.” The report concludes:

The result is a widening gap: sensitive data is leaking into AI systems beyond enterprise control, autonomous agents are acting beyond scope, and regulators are moving faster than enterprises can adapt. AI is now both a driver of productivity and one of the fastest expanding risk surfaces CISOs must defend.

The survey results show that although AI adoption in companies is rapid, most enterprises are “blind to how AI interacts with their data.” This is complicated by the fact that autonomous AI agents are difficult to secure and very few organizations have prompt or output controls, including the ability to block risky AI activity by employees.

In addition, most of the respondents acknowledged that AI tools used in the organization are “over-accessing data.” This is further complicated by the fact that a small minority of those surveyed (7%) have a “dedicated AI governance team, and just 11% feel fully prepared for regulation.”

The conclusion is: “the enterprise risk surface created by AI is expanding far faster than the governance and enforcement structures meant to contain it.”

We have previously commented on how important AI Governance Programs are in mitigating the risks associated with AI use in an organization. The Cyera report reiterates that conclusion. If you are one of the vast majority of organizations that have not yet developed an AI Governance Program, it’s time to make it a top priority.

AI hype is everywhere. The 15th Annual AI & Data Leadership Executive Benchmark Survey shows what nearly 110 Fortune 1000 companies and global brands are actually doing with AI. Once a future bet, AI is now a business mandate, and most companies are already seeing results.

Investment is essentially universal: an overwhelming 99.1% of surveyed leaders say data and AI are a top organizational priority, and 90.9% are increasing their level of investment. Executives also connect the AI boom to a renewed focus on fundamentals, with 92.7% saying intensified AI interest is driving stronger attention to data.

Leadership models are tightening as AI scales. The Chief Data Officer (CDO) role is now standard, with 90% of companies reporting a CDO in place and nearly 70% describing the role as successful and well-established, up from 47.6% the prior year. Just as importantly, the CDO mandate has shifted toward growth, with 85.5% saying the role is focused on “offense,” meaning innovation and value creation, rather than primarily defensive or compliance work.

At the same time, the Chief AI Officer (CAIO) role is emerging to formalize AI accountability. Roughly 38.5% of organizations now report a CAIO or equivalent, up from 33.1% last year. Reporting lines are still settling, but in most companies without a CAIO, AI leadership continues to sit with the CDO or Chief Digital and Artificial Intelligence Office function, which 69.1% say currently carries the remit.

AI adoption has moved decisively from experiments to production. Two years ago, just 4.7% of firms reported AI in production at scale—this year, that figure is 39.1%. When added to the 54.5% running AI in limited production, 93.6% of organizations now have active AI capabilities in production, signaling that pilots are no longer the dominant mode.

Value is showing up alongside deployment: 97.3% of organizations report measurable business value from data and AI investments, and 54% say they are realizing a high or significant degree of value, improving on last year’s results and reinforcing that AI programs are increasingly tied to tangible outcomes.

The biggest constraint is not the technology, it is the human side of transformation. A record 93.2% of executives cite cultural challenges and change management as the top barrier to AI success. The work that slows companies down is shifting processes, building skills, changing decision habits, and creating an environment where teams trust data and adopt new ways of working.

Looking forward, leaders view AI as a once-in-a-generation shift. Almost 83% believe AI is likely to be the most transformational technology in a generation. Governance is rising with the stakes, with nearly 80% naming Responsible AI as a top corporate priority and 88.7% saying they have safeguards and guardrails in place.

The takeaway is simple and urgent—the “should we invest” era is over. The winners will be the organizations that align leadership, operating models, culture, and governance fast enough to convert rapidly expanding adoption into sustained business value.

The rise of large language models (LLMs) such as ChatGPT has created novel legal implications surrounding the development and use of such artificial intelligence (AI) systems. One of the most closely watched AI cases currently is New York Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023), in which the New York Times (NYT) has alleged that OpenAI, the developer of ChatGPT, impermissibly used NYT-copyrighted works to train the ChatGPT LLM. Though the case is centered on questions of intellectual property, a recent development in the case has raised significant data privacy concerns as well.

The Preservation Order

In the course of the ongoing litigation, NYT asserted that, if ChatGPT saved its user data, such data could preserve evidence to support NYT’s position. In a May 13, 2025, preservation order, U.S. Magistrate Judge Wang for the Southern District of New York agreed with NYT and instructed OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court.”

This is a sweeping demand, as ChatGPT receives and hosts a vast volume of data: the service logs over 4.5 billion visits per month and approximately 2.5 billion prompts each day. In its opposition, OpenAI asserted that the order would require the retention of 60 billion conversations that would be nearly impossible to search, adding that less than 0.010% of the data would be relevant to NYT’s copyright assertions.

Moreover, OpenAI explained that ChatGPT users expect their deleted chats to be unavailable. Indeed, prior to receiving the preservation order, OpenAI had touted its data retention policies, whereby if a user deletes a chat, the chat is removed from the user’s account immediately and scheduled for permanent deletion from OpenAI’s systems within 30 days (absent a legal or security reason to preserve it). However, after a late June hearing, U.S. District Judge Stein denied OpenAI’s objections to Magistrate Judge Wang’s preservation order. Judge Stein concluded that OpenAI’s terms of use allow for preservation of data for legal requests, a category into which this case falls.

The preservation order applies to ChatGPT Free (along with some other versions) but not to ChatGPT Enterprise. This means that the inputted data of individuals who use the free version of ChatGPT might be subject to NYT’s review. However, for organizations that use ChatGPT Enterprise, this data is outside of the preservation order’s scope. Still, if an employee has used the personal version of ChatGPT to input any employer information, that information could now potentially be subject to this order, too.

What’s Next?

OpenAI continues to raise concerns about privacy in response to the preservation order. In a June tweet, OpenAI CEO Sam Altman introduced the concept of an “AI privilege,” suggesting that AI conversations should receive the same protections as conversations with lawyers and medical providers. Of course, this is not a legally recognized privilege, and OpenAI has not raised it in any legal briefs. Even if the company did so, it is unlikely any court would be willing to create this new category of privilege for generative AI interactions.

While many people following the case are alarmed at larger constitutional privacy concerns, it is also unlikely that all 60 billion conversations subject to the preservation order will become available to the public at large. For now, NYT lawyers are expected to gain access to and begin searching OpenAI logs in support of NYT’s copyright case.

OpenAI will surely bolster its security practices in response to the preservation order, but the fact that data that would have otherwise been deleted is now being maintained is a heightened risk in itself.

Considerations for Organizations

From a data governance perspective, this preservation order raises several considerations:

  • Review your own internal data retention clauses – OpenAI’s terms of use state: “We may share your Personal Data, including information about your interaction with our Services, with government authorities, industry peers, or other third parties in compliance with the law…if required to do so to comply with a legal obligation…”

Such language is common in most businesses’ privacy policies and terms of use. Companies often need data retention clauses that carve out legal exceptions for certain situations, but this case has demonstrated how such clauses could be turned on their head.

Organizations should be aware that “maintaining data for legal purposes” could also include court orders such as the OpenAI preservation order, which, in OpenAI’s case, ran counter to its privacy promises to its users.

  • Segregate data where technically feasible – Organizations should segregate their data into distinct buckets based on the data’s sensitivity and/or purpose. For OpenAI, one hurdle in arguing against the preservation order was that the high volume of data was not flagged in any manner, so the company was unable to determine which output logs were relevant to the matter.

It is quite possible that OpenAI’s assertion is correct and only a small fraction of the total data is relevant to NYT’s case. Yet, the court had no viable means of making such a determination. If organizations can separate or flag data based on sensitivity and usage, this would help them to isolate relevant data for specific issues rather than having to include all organizational data in evaluating every issue that may arise.
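
To make the segregation point concrete, here is a minimal sketch, in Python, of tagging records at ingestion so a legal hold or discovery request can be scoped to one bucket rather than to everything. The field names and categories are illustrative assumptions, not OpenAI’s actual schema:

```python
# Hypothetical illustration: tag records at ingestion so that a legal hold
# can be scoped to the relevant bucket instead of all organizational data.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Record:
    content: str
    sensitivity: str  # e.g., "public", "internal", "confidential"
    purpose: str      # e.g., "support_chat", "billing", "training"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def records_subject_to_hold(records, purposes):
    """Return only the records whose purpose falls within a hold's scope."""
    return [r for r in records if r.purpose in purposes]

logs = [
    Record("user chat transcript", "confidential", "support_chat"),
    Record("invoice PDF", "internal", "billing"),
]
# A court order scoped to chat output logs would then touch only that bucket:
held = records_subject_to_hold(logs, {"support_chat"})
print(len(held))  # 1
```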

  • Evaluate your vendor contracts – Though ChatGPT Enterprise user data is not affected by this preservation order, the matter serves as a reminder to all organizations to review their vendor contracts. Businesses might consider zero data retention agreements for certain vendors so that these vendors don’t store data – even for a legal purpose – after it has been used for its originally intended purpose. Generally, data minimization limits the likelihood of information exposure overall.
  • Further raise employee awareness – Organizations should remind their employees that, because of the preservation order, personal ChatGPT conversations are no longer permanently deleted after 30 days and the “temporary chat” feature no longer results in deletion.

That means that any personal ChatGPT input could become a part of the discovery in this matter. Employees should maintain heightened caution in using tools such as ChatGPT to input any proprietary, business, confidential, or sensitive information.

  • Establish an AI Governance Program – It is also prudent for organizations’ legal and IT teams to understand AI use across their organizations. Often, individuals or departments are using AI without their legal/IT departments knowing. Usually, this is not nefarious activity; people are simply unaware of the risks of AI tools and the need for organizational awareness regarding their usage.

Once businesses understand which departments are using which AI tools, they can circulate an AI use questionnaire to encourage responsible and informed use of such tools. Employee AI use is likely to happen regardless, but a strong AI governance program and institutionalized policies can increase employee awareness and serve as an organizational safeguard to mitigate AI-related risks.

The reality is that no business can entirely prevent unauthorized AI use, but with a robust governance program and related AI policies, it can at least train staff at the individual level and manage the risk holistically as an organization.

Finally, after providing the building blocks for a strong Information Governance (IG) program and operationalizing that framework, this last part of the series discusses how to sustain your IG program. An effective IG program powered by the ARMA IGIM framework isn’t static. To remain relevant in an AI-driven world, it must be scalable, adaptable, and future-proof. Three domains are critical here:

  1. Architecture
  2. Infrastructure
  3. Continuous Improvement

1. Strengthening Architecture

Your information and technology architecture is the structural foundation for both IG and AI tools. Integration between systems of record and systems of engagement is key to maintaining data quality and accessibility.

Key for AI Adoption:
High-quality architecture supports real-time AI applications like predictive analytics by ensuring the latest data is always available.

Example: A logistics company improved delivery route optimization using an AI-powered tool. By standardizing taxonomy across data platforms, drivers received accurate real-time recommendations powered by up-to-date information.

2. Building Resilient Infrastructure

Infrastructure ensures that your IG program has the technological underpinnings needed to scale AI initiatives.

Key for AI Adoption:
Cloud-based storage solutions offer scalability for large training datasets, while encryption and robust APIs ensure those datasets remain secure.

Actionable Tip: Invest in automated tools to monitor both IG program performance and AI algorithm accuracy.

3. Continuous Improvement

AI tools and data environments are constantly evolving, requiring continuous updates to your IG practices to ensure alignment.

Key for AI Adoption:
A regular review cycle ensures AI tools remain effective when regulations change or business needs evolve.

Actionable Tip: Schedule twice-yearly program reviews to assess shifts in regulatory requirements or advancements in AI capabilities, and adjust IG policies accordingly.

By understanding how Architecture and Infrastructure fuel IG and AI success, your organization will stay competitive in an AI-driven future.

What can you do now? Conduct an infrastructure audit to ensure your current technology can support scalable AI solutions.

Series Wrap-Up

The ARMA IGIM 2.1 framework does more than streamline governance; it enables the technologies of tomorrow. By adopting strong IG practices, you:

  1. Create a data foundation ready for AI-powered insights.
  2. Build trust, efficiency, and reliability into your operations.
  3. Maximize business value from cutting-edge tools.

Take the first step in transforming your organization’s approach to information governance today, and unlock the full potential of AI innovation.

Last week, we outlined the building blocks for a strong IG program. Now that you’ve laid the groundwork, it’s time to bring your IG program to life. The ARMA IGIM framework emphasizes operational execution in three key areas:

  1. Procedural Framework
  2. Capabilities
  3. Information Lifecycle

These domains are where your framework tangibly interacts with AI systems, ensuring tools like machine learning models work with clean, structured data.

1. Procedural Framework

Your Procedural Framework establishes consistent policies, roles, and accountability measures. For AI, having standardized processes ensures that models produce reliable outputs.

Key for AI Adoption:
Without uniform procedures, AI systems can misinterpret data. For example, inconsistent naming conventions in datasets can skew analytics or predictions.

Actionable Tip: Create a policy requiring metadata tagging for all incoming data to improve accessibility for AI models.
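
As a purely illustrative sketch of that tip, an intake pipeline might quarantine records that arrive without the required tags. The tag names below are assumptions, not IGIM requirements:

```python
# Minimal sketch: enforce a metadata-tagging policy at data intake.
# The required tags are illustrative assumptions, not IGIM mandates.
REQUIRED_TAGS = {"source", "owner", "classification", "retention_class"}

def missing_tags(metadata: dict) -> list:
    """Return the required tags missing from an incoming record's metadata."""
    return sorted(REQUIRED_TAGS - metadata.keys())

incoming = {"source": "crm_export", "owner": "sales-ops", "classification": "internal"}
gaps = missing_tags(incoming)
if gaps:
    # Quarantine rather than ingest: untagged data is invisible to AI models.
    print(f"Rejecting record; missing tags: {gaps}")
```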

2. Capabilities

Capabilities refer to the tools and technologies that power your IG program. AI tools are only as good as the systems they connect with.

Key for AI Adoption:
Role-based access controls prevent sensitive data from being used irresponsibly in AI training, while metadata management enhances the searchability of training datasets.

Example: A retail company equipped its e-commerce platform with AI product recommendations. By integrating IGIM-driven policies on access control and metadata, they ensured only accurate, permissible data informed the algorithms.
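
To show what such role-based filtering could look like in code, here is a minimal sketch with hypothetical roles and classification labels (not the retailer’s actual system):

```python
# Illustrative sketch: a role-based gate in front of an AI training export.
# The roles, labels, and policy table are hypothetical.
ALLOWED_CLASSIFICATIONS = {
    "ml_engineer": {"public", "internal"},  # may not train on restricted data
    "recommendations_service": {"public"},  # the production model sees even less
}

def permitted_rows(rows, role):
    """Return only the rows whose classification the role may use."""
    allowed = ALLOWED_CLASSIFICATIONS.get(role, set())
    return [row for row in rows if row["classification"] in allowed]

catalog = [
    {"sku": "A1", "classification": "public"},
    {"sku": "B2", "classification": "restricted"},  # e.g., contains customer data
]
print(permitted_rows(catalog, "ml_engineer"))  # only the public row survives
```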

3. Information Lifecycle

AI relies on data that evolves through its lifecycle—from creation to disposition. The Information Lifecycle ensures that outdated or incorrect data doesn’t compromise AI tools.

Key for AI Adoption:
By defining retention schedules, organizations ensure AI models are trained on relevant data, reducing errors and increasing trust in outputs.
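
As an illustration only, with assumed record classes and retention windows, a training pipeline might check each record against its retention schedule before including it:

```python
# Minimal sketch: apply a retention schedule before training data selection.
# The record classes and retention windows are illustrative assumptions.
from datetime import date, timedelta

RETENTION = {
    "transaction": timedelta(days=365 * 2),  # keep two years
    "web_log": timedelta(days=90),           # keep ninety days
}

def within_retention(record_class: str, created: date, today: date) -> bool:
    """True if the record is still inside its retention window."""
    window = RETENTION.get(record_class)
    return window is not None and today - created <= window

today = date(2025, 6, 1)
assert within_retention("web_log", date(2025, 5, 1), today)       # recent: keep
assert not within_retention("web_log", date(2024, 12, 1), today)  # stale: exclude
```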

Next week, we’ll discuss how to sustain your IG program, enabling continuous innovation with AI.

What can you do now? Make sure that your data policies, tools, and lifecycle management strategies are aligned to support your AI-driven initiatives.

Last week, we introduced you to the ARMA IGIM Framework. What’s next? Every successful Information Governance (IG) program starts with a strong base. The ARMA IGIM framework outlines three critical building blocks:

  1. Steering Committee
  2. Authorities
  3. Support Functions

Implementing these foundational pieces not only gets your IG program off the ground but also creates a system where artificial intelligence (AI) tools can thrive.

1. Forming a Steering Committee

An effective Steering Committee ensures that your IG program has direction, accountability, and cross-functional collaboration. Including representatives from IT, Legal, Privacy, and Compliance ensures diverse perspectives and safeguards alignment with organizational goals.

Key for AI Adoption:
A cross-functional Steering Committee ensures your AI initiatives don’t operate in silos. For example, while the IT team oversees AI tools, Legal ensures adherence to privacy laws, preventing risks related to automated decision-making.

Actionable Tip: Assign an AI-focused subcommittee to oversee how governance policies interact with algorithmic applications.

2. Understanding Authorities

Clear Authorities, such as internal policies and external regulations, guide AI initiatives responsibly. Policies must address AI-specific issues like ethical data use, bias mitigation, and legal compliance.

Key for AI Adoption:
Organizations can confidently deploy AI when they know their tools and models comply with data regulations (e.g., GDPR or CCPA). Structured Authorities build the trust needed for employees and stakeholders to adopt AI-powered processes.

3. Activating Support Functions

Support Functions like training, project management, and communications are essential for building both an IG program and an AI-ready culture.

Key for AI Adoption:
Train employees not only on IG principles but also on effectively using and understanding AI tools. An educated workforce handles AI outputs responsibly, avoiding misuse or confusion.

Example in Action: A health care provider formed a Steering Committee to integrate an AI tool for patient scheduling. By aligning IG policies, they ensured the tool used anonymized data and complied with HIPAA, boosting efficiency without risking non-compliance.

Next week, we’ll explore operationalizing your IG program and how it empowers AI to deliver valuable business insights.

What can you do now? Host a workshop with your Steering Committee to map out how your IG policies can enable AI initiatives.

Today, organizations face unprecedented data challenges. The sheer volume of information, evolving regulations, and the rising momentum of artificial intelligence (AI) revolutionizing industries make it clear that information governance (IG) is not optional. The ARMA IGIM 2.1 framework provides organizations with a practical, structured approach to manage data effectively, enabling them to meet these challenges head-on.

The IGIM Framework and Its Importance

At its core, the IGIM framework breaks down IG into eight domains:

  • Steering Committee
  • Authorities
  • Support Functions
  • Procedural Framework
  • Capabilities
  • Information Lifecycle
  • Architecture
  • Infrastructure

These eight domains work to ensure that every piece of information within your organization is secure and usable throughout its lifecycle. Adopting IGIM benefits organizations by streamlining workflows, reducing compliance risks, and increasing operational efficiency. But the advantages don’t end there.

Why IG is Indispensable for AI Adoption

AI thrives on high-quality, well-governed data. AI tools rely on accurate, accessible, and structured information to generate actionable insights. Without a proper IG framework, organizations often struggle with:

  • Data Silos: Making it difficult to consolidate or analyze data.
  • Dirty Data: Leading to inaccurate AI outputs.
  • Compliance Risks: Exposing organizations to penalties from data misuse.

By establishing effective governance practices, organizations create the foundation needed for AI to perform optimally. For example, banks using the IGIM framework to organize customer data see faster AI-driven credit risk assessments because the information is clean, structured, and easily retrievable.

Through this series, you’ll discover how the IGIM framework enables not only effective governance but also maximizes the value of AI investments. Next week, we’ll discuss laying the foundation for your IG program.

What can you do now? Assess your current data governance practices and consider how well-structured data could drive your AI initiative forward.

A new survey from Intapp, titled “2025 Tech Perceptions Survey Report,” summarizes findings from a survey of fee-earners and reports a “surge in AI usage.” The professions surveyed spanned the accounting, consulting, finance, and legal sectors. Findings include that “AI usage among professionals has grown substantially, with 72% using AI at work versus 48% in 2024.” AI adoption among firms increased to 56%, with firms utilizing it for data summarization, document generation, research, error-checking, quality control, voice queries, data entry, consultation (decision-making support), and recommendations. That said, AI adoption is highest in finance, where 89% of professionals use AI at work, followed by 73% of accounting professionals, 68% of consulting professionals, and 55% of legal professionals.

A significant conclusion is that when firms do not provide AI tools, professionals often find their own: over 50% of professionals have used unauthorized AI tools in the workplace, which increases risk for companies. Professionals are reallocating the time saved with AI tools toward improving work-life balance, higher-level client work, strategic initiatives and planning, cultivating client relationships, and increasing billable hours.

The survey found that professionals want and need technology to assist with tasks; only 32% believe they have the optimal technology to complete their jobs effectively. The conclusion is that professionals who are given optimal technology are more satisfied and more likely to stay at their firms: optimal tech “powers professional- and firm-success, and AI is becoming non-negotiable for future firm leaders.”

AI tools are developing rapidly and being adopted across all industries, including the professional sectors. As the Intapp survey notes, if firms do not provide AI tools to enhance workers’ jobs, workers will use them anyway. The survey reiterates how important it is to have an AI Governance Program in place that provides sanctioned tools and reduces the risks associated with unauthorized AI tools. Developing and implementing an AI Governance Program and acceptable use policies should be high on the priority list for all industries, including professional services.