The California Attorney General published two legal advisories this week.

These advisories seek to remind businesses of consumer rights under the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, CCPA), and to advise entities that develop, sell, or use artificial intelligence (AI) of their obligations under the CCPA.

Attorney General Rob Bonta said, “California is an economic powerhouse built in large part on technological innovation. And right alongside that economic might is a strong commitment to economic justice, workers’ rights, and competitive markets. We’re not successful in spite of that commitment — we’re successful because of it [. . .] AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.” 

Advisory No. 1: Application of Existing California Laws to Artificial Intelligence

This advisory:

  • Provides an overview of existing California laws (consumer protection, civil rights, competition, data protection, and election misinformation laws) that may apply to companies that develop, sell, or use AI;
  • Summarizes new California AI laws that went into effect on January 1, 2025, covering:
    • Disclosure Requirements for Businesses
    • Unauthorized Use of Likeness
    • Use of AI in Election and Campaign Materials
    • Prohibition and Reporting of Exploitative Uses of AI

Advisory No. 2: Application of Existing California Law to Artificial Intelligence in Healthcare 

AI tools are used for tasks such as appointment scheduling, medical risk assessment, and medical diagnosis and treatment decisions. This advisory:

  • Provides guidance under California law (consumer protection, civil rights, data privacy, and professional licensing laws) for healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, or use AI and other automated decision systems;
  • Reminds such entities that AI carries significant risks of harm and that all AI systems must be tested, validated, and audited to ensure their use is safe, ethical, and lawful; and
  • Informs such entities that they must be transparent about using patient data to train AI systems and must tell patients how AI is being used to make decisions affecting their health and care.

This is yet another example of how the safe and ethical use of AI is likely to remain at the forefront for regulators across many industries.

Last week, New Jersey Attorney General Matthew Platkin announced new guidance clarifying that the New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination, i.e., when automated systems treat people differently or negatively based on protected characteristics. This can happen when algorithms are trained on biased data or when systems are designed with built-in biases. LAD prohibits discrimination based on protected characteristics such as race, religion, national origin, sex, pregnancy, and gender identity, among others.

According to the guidance, employers, housing providers, and places of public accommodation that make discriminatory decisions using automated decision-making tools, such as artificial intelligence (AI), would violate LAD. LAD is not an intent-based statute, so a party can violate it even if it uses an automated decision-making tool with no intent to discriminate or relies on a discriminatory algorithm developed by a third party. The guidance does not create any new rights or obligations; however, by confirming that the law covers automated decision-making, it encourages companies to carefully design, test, and evaluate any AI system they intend to deploy so as to avoid discriminatory impacts.

The rapid advancement of artificial intelligence (AI) technologies is reshaping the corporate landscape, offering unparalleled opportunities to enhance customer experiences and streamline operations. At the intersection of this digital transformation lie two key executives—the Chief Information Officer (CIO) and the Chief Marketing Officer (CMO). This dynamic duo, when aligned, can drive ethical AI adoption, ensure compliance, and foster personalized customer engagement powered by innovation and responsibility.

This blog explores how the collaboration between CIOs and CMOs is essential in balancing ethical AI implementations with compelling customer experiences. From data governance to technology infrastructure and cybersecurity, below is a breakdown of the critical aspects of this partnership and why organizations must align these roles to remain competitive in the AI-driven world.

Understanding Ethical AI: Balancing Innovation with Responsibility

Ethical AI isn’t just a buzzword; it’s a guiding principle that ensures AI solutions respect user privacy, avoid bias, and operate transparently. To create meaningful customer experiences while addressing the societal concerns surrounding AI, CIOs and CMOs must collaborate to design AI applications that are both innovative and responsible.

CMOs focus on delivering dynamic, real-time, personalized interactions to meet rising customer expectations. However, achieving this requires vast amounts of personal data, potentially risking violations of privacy regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Enter the CIO, who ensures the technical infrastructure adheres to these laws while safeguarding the organization’s reputation. Together, the CIO and CMO can strike a delicate balance between leveraging AI for customer engagement and adhering to responsible AI practices.

The Role of Data Governance in AI-Driven Strategies

Data governance is the backbone of ethical AI and compelling customer engagement. CMOs rely on customer data to craft hyper-personalized campaigns, while CIOs are charged with maintaining that data’s security, accuracy, and ethical use. Without proper governance, organizations risk breaches, regulatory fines, and, perhaps most damagingly, a loss of consumer trust.

Collaboration between CIOs and CMOs is necessary to establish clear data management protocols; this includes ensuring that all collected data is anonymized as needed, securely stored, and utilized in compliance with emerging AI content labeling regulations. The result is a transparent system that reassures customers and consistently delivers high-quality experiences.
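
To make this concrete, below is a minimal sketch of the kind of pseudonymization step such a protocol might include. It is illustrative only: the field names, the key handling, and the choice of Python’s standard-library hmac module are assumptions, not a prescribed implementation.

```python
import hashlib
import hmac

# Hypothetical secret held by the security team; in practice it would come
# from a key vault, not source code.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash, so records can still be
    joined for analytics without exposing the underlying identity."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical marketing record; only the direct identifier is transformed.
record = {"email": "customer@example.com", "campaign": "spring-launch"}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)
```

A keyed hash (HMAC) rather than a bare hash matters here: without the key, anyone could re-identify a customer simply by hashing a guessed email address.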

Robust Technology Infrastructure for AI-Powered Customer Engagement

For AI to deliver on its promise of customer engagement, organizations require scalable, secure, and agile technology infrastructure. A close alignment between CIOs and CMOs ensures that marketing campaigns are supported by IT systems capable of handling diverse AI workloads.

Platforms driven by machine learning and big data analytics allow marketing teams to create real-time, omnichannel campaigns. Meanwhile, CIOs ensure these platforms integrate seamlessly into the organization’s technology stack without sacrificing security or performance. This partnership allows marketers to focus on innovative strategies while IT supports them with reliable and forward-thinking infrastructure.

Cybersecurity Challenges and the Integrated Approach of CIOs and CMOs

Customer engagement strategies powered by AI rely heavily on consumer trust, but cybersecurity threats lurk around every corner. According to Palo Alto Networks’ predictions, customer data is central to modern marketing initiatives. However, without an early alignment between CIOs and CMOs, the organization is exposed to risks like data breaches, compliance violations, and AI-related controversies.

A proactive collaboration between CIOs and CMOs ensures that potential vulnerabilities are identified and mitigated before they evolve into full-blown crises. Measures such as end-to-end data encryption, regular cybersecurity audits, and robust AI content labeling policies can protect the organization’s digital assets and reputation. This integrated approach enables businesses to foster lasting customer trust in a world of increasingly sophisticated cyber threats.
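
As one illustration of the encryption measure mentioned above, the sketch below encrypts customer data before it reaches storage. It assumes the third-party Python cryptography package is available, and the key handling is deliberately simplified; a real deployment would keep keys in a managed vault.

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# In production the key would live in a managed key vault, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt customer data before it is written to storage...
ciphertext = cipher.encrypt(b"customer@example.com,loyalty-tier-gold")

# ...and decrypt it only inside the trusted application boundary.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == b"customer@example.com,loyalty-tier-gold"
```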

Case Studies: Successful CIO-CMO Collaborations

  • Case Study 1: A Retail Giant’s Transformation
    One of the world’s largest retail chains successfully transformed its customer experience through close CIO-CMO collaboration. The CIO rolled out a scalable AI-driven recommendation engine, while the CMO used this tool to craft personalized shopping experiences. The result? A 35% increase in customer retention within a year and significant growth in customer lifetime value.
  • Case Study 2: Financial Services Leader
    A financial services firm adopted an AI-powered chatbot to enhance its customer service. The CIO ensured compliance with strict financial regulations, while the CMO leveraged customer insights to refine the chatbot’s conversational design. Together, they created a seamless, trustworthy digital service channel that improved customer satisfaction scores by 28%.
These examples reinforce the advantages of partnership. By uniting their expertise, CIOs and CMOs deliver next-generation strategies that drive measurable business outcomes.

Future Trends in AI, Compliance, and Executive Collaboration

The evolving landscape of AI, compliance, and customer engagement is reshaping the roles of CIOs and CMOs. Here are a few trends to watch for in the coming years:

  • AI Transparency: Regulations will increasingly require companies to disclose how AI models were trained and how customer data is used. Alignment between CIOs and CMOs will be vital in meeting these demands without derailing marketing campaigns.
  • Hyper-Personalization: Advances in machine learning will allow marketers to offer even more granular personalization, but this will require sophisticated data-centric systems designed by CIOs.
  • AI Content Labeling: From machine-generated text to synthetic media, organizations must adopt clear labeling practices to distinguish AI-generated from human-generated content (one simple labeling approach is sketched after this list).
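
As promised above, here is one way to picture machine-readable AI content labeling. This is not an industry standard, just a hypothetical sketch: the field names and the model identifier are invented for illustration.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> str:
    """Wrap generated text in a machine-readable provenance label."""
    return json.dumps({
        "content": text,
        "provenance": {
            "generator": model_name,   # hypothetical model identifier
            "ai_generated": True,
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    })

print(label_ai_content("Spring sale starts Friday!", "acme-marketing-llm-v2"))
```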

By staying ahead of these trends, organizations can cement themselves as leaders in ethical AI and customer engagement.

Forging a Path to Sustainable AI Innovation

The digital transformation of business will continue to deepen the interconnected roles of the CIO and CMO. These two leaders occupy the dual pillars required for success in the AI era: technology prowess and customer-centric creativity. By aligning their goals and strategies early on, they can power ethical AI innovation, ensure compliance, and elevate customer experiences to new heights.

TikTok users are seeking alternative platforms to share and view content as the U.S. is set to ban the popular social media app on January 19, 2025. Instead of turning to U.S.-based companies like Facebook or Instagram, users are flocking to another Chinese app called Xiaohongshu, also known as RedNote. The app, which previously had little presence in the U.S. market, became the most downloaded app in Apple’s app store this week. RedNote shares similarities with Yelp, where users share recommendations, but it also allows users to post short clips, similar to the soon-to-be-banned TikTok.

While some of these TikTok users choose to switch to RedNote because of the similar short-form video format, other users appear to be purposefully choosing another Chinese-owned app as a form of protest. Either way, ordinary American and Chinese citizens can easily interact in new ways on the internet through RedNote.

However, RedNote raises many of the same privacy and national security issues that the U.S. government raised concerning TikTok. Although many users ordinarily ignore privacy policies, RedNote’s privacy policy is written in Mandarin, making it even more difficult (and in some cases impossible) for users to understand. A translation of the privacy policy indicates that RedNote collects sensitive data like a user’s IP address and browsing habits. As a China-based app, RedNote is also subject to the same Chinese data laws that led U.S. lawmakers to ban TikTok. The TikTok ban could eventually be extended to RedNote and other Chinese (or other foreign) apps for which national security and privacy concerns exist. With other short-form video services (e.g., Instagram Reels and YouTube Shorts) provided by U.S. companies, users do not need to expose their personal data to China-based companies. Additionally, using RedNote to circumvent the TikTok ban could be problematic, particularly for government workers with security clearances. RedNote is not worth these risks, and Americans should avoid downloading it.

At the close of 2024, the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS) issued a Notice of Proposed Rulemaking (the Proposed Rule) to amend the Security Rule regulations established for protecting electronic health information under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). The updated regulations would increase cybersecurity protection requirements for electronic protected health information (ePHI) maintained by covered entities and their business associates to combat rising cyber threats in the health care industry.

The Proposed Rule seeks to strengthen the HIPAA Security Rule requirements in various ways, including:

  • Removing the “addressable” standard for security safeguard implementation specifications and making all implementation specifications “required.”
    • This, in turn, will require written documentation of all Security Rule policies and encryption of all ePHI, except in narrow circumstances.
  • Requiring the development or revision of technology asset inventories and network maps that illustrate the movement of ePHI through an entity’s electronic information systems on an ongoing basis, to be updated at least annually and in response to changes in the entity’s environment or operations that may affect ePHI.
  • Setting forth specific requirements for conducting a risk analysis, including identifying all reasonably anticipated threats to the confidentiality, integrity, and availability of ePHI, identifying potential vulnerabilities, and assigning a risk level for each threat and vulnerability identified.
  • Requiring prompt notification (within 24 hours) to other healthcare providers or business associates with access to an entity’s systems upon a change in or termination of a workforce member’s access to ePHI; in other words, entities would be obligated to communicate promptly when an employee’s or contractor’s access to patient data is altered or revoked, to mitigate the risk of unauthorized access to ePHI.
  • Establishing written procedures for restoring relevant electronic information systems and data within 72 hours of a loss.
  • Testing and revising written security incident response plans.
  • Requiring encryption of ePHI at rest and in transit (a brief sketch of the in-transit piece follows this list).
  • Requiring specific security safeguards on workstations with access to ePHI and/or storage of ePHI, including anti-malware software, removal of extraneous software from ePHI systems, and disabling network ports pursuant to the entity’s risk analysis.
  • Requiring the use of multi-factor authentication (with limited exceptions).
  • Requiring vulnerability scanning at least every six (6) months and penetration testing at least once every year.
  • Requiring network segmentation.
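
As a concrete, purely illustrative picture of the encryption-in-transit requirement noted above, the Python sketch below opens a connection that refuses anything weaker than verified TLS 1.2. The hostname and endpoint are hypothetical; nothing here is drawn from the Proposed Rule’s text.

```python
import socket
import ssl

# Hypothetical internal service holding ePHI; host and endpoint are illustrative.
HOST, PORT = "ephi.internal.example", 443

# Refuse anything weaker than verified TLS 1.2 for ePHI in transit.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True          # already the default; stated for clarity
context.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        tls_sock.sendall(b"GET /health HTTP/1.1\r\nHost: ephi.internal.example\r\n\r\n")
```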

The Proposed Rule notably includes some requirements specific to business associates. These include a proposed new requirement for business associates to notify covered entities (and for subcontractors to notify business associates) within 24 hours of activating their contingency plans. Business associates would also be required to verify to their covered entity customers, at least once a year, that they have deployed the required technical safeguards to protect ePHI. This verification must be performed by a subject matter expert who provides a written analysis of the business associate’s relevant electronic information systems and a written certification that the analysis has been performed and is accurate.

The Proposed Rule even includes a specific requirement for group health plans, requiring such plans to include in their plan documents requirements for their plan sponsors to: comply with the administrative, physical, and technical safeguards of the Security Rule; require any agent to whom they provide ePHI to implement those safeguards; and notify their group health plans no more than 24 hours after activating their contingency plans.

Ultimately, the Proposed Rule seeks to implement a comprehensive update of mandated security protections and protocols for covered entities and business associates, reflecting the significant changes in health care technology and cybersecurity in recent years. The Proposed Rule’s changes are also a tacit acknowledgment that current Security Rule standards have not kept up with threats or operational changes.

The government is soliciting comments on the Proposed Rule, and all public comments are due by March 7, 2025. Given the scope of the proposed changes and the heightened obligations for all individuals and entities subject to HIPAA, there will likely be many comments from various stakeholders. We will continue to follow the Proposed Rule and reactions thereto.

Adobe recently issued a patch for a high-severity vulnerability affecting ColdFusion versions 2023.11 and 2021.17 and earlier. According to the National Institute of Standards and Technology (NIST), “an attacker could exploit this vulnerability to access files or directories that are outside of the restricted directory set by the application. This could lead to the disclosure of sensitive information or the manipulation of system data.” The patch, ColdFusion (2023 release) Update 12 (released December 23, 2024), “resolves a critical vulnerability that could lead to arbitrary file system read, if the pmtagent package is installed on your ColdFusion server.”

The vulnerability, tracked as CVE-2024-53961, is considered critical, and Adobe has marked it Priority 1, “warning that it has a high risk of being targeted in attacks.” Adobe recommends that companies using ColdFusion install the patches as soon as possible.
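
The flaw NIST describes is a classic path traversal. The Python sketch below is not Adobe’s fix, just a generic illustration of the defensive pattern: resolve the requested path and reject anything that escapes the permitted root directory. The paths and names are hypothetical.

```python
from pathlib import Path

# Hypothetical restricted directory the application is allowed to serve from.
ALLOWED_ROOT = Path("/var/app/public").resolve()

def safe_read(user_supplied: str) -> bytes:
    """Reject paths such as '../../etc/passwd' that escape the allowed root."""
    candidate = (ALLOWED_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):  # Path.is_relative_to: Python 3.9+
        raise PermissionError(f"path escapes restricted directory: {user_supplied}")
    return candidate.read_bytes()
```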

We previously reported that Ascension Health detected a cyber-attack on May 8, 2024, that affected clinical operations in Ascension facilities in six states.

On December 20, 2024, Ascension notified the Maine Attorney General in a regulatory filing that the attack compromised the personal information of 5.6 million individuals. According to Ascension, the incident occurred on February 29, 2024, but was not detected until May 8, 2024. The data compromised included individuals’ names, insurance information, Social Security numbers, and payment details. The incident began when “an employee accidentally downloaded a malicious file disguised as legitimate…an honest mistake.”

Ascension is notifying the individuals and providing 24 months of credit monitoring, a $1,000,000 insurance reimbursement policy, and identity theft recovery services for those affected by the incident.

American Addiction Centers (AAC) has notified 422,424 individuals that their personal information was stolen in a cyber-attack attributed to the Rhysida criminal organization. The incident was discovered on September 26, 2024, and the notification letter to affected individuals confirmed that the information exfiltrated included names, Social Security numbers, and health insurance information. AAC is offering individuals 12 months of credit monitoring.

The criminal organization Rhysida has claimed responsibility for the attack and added AAC to its leak site. The group claims to have stolen around 2.8TB of data and says it “has made most of it available publicly.”

Generative Artificial Intelligence (Gen AI) is transforming industries at an unprecedented pace, unlocking new possibilities in automation, creativity, and problem-solving. However, as we look toward 2025, the success and sustainability of Gen AI will depend on one critical element: information governance. Governance frameworks will provide the foundation for ethical AI development and ensure compliance, accountability, and collaboration in a rapidly evolving AI landscape. Without these frameworks, the potential of Gen AI could be overshadowed by risks such as data misuse, algorithmic bias, and regulatory challenges. Below are five key predictions about how information governance will shape Gen AI projects in 2025.

1. Increased Emphasis on Ethical AI

The conversation around ethical AI is growing louder, with concerns about bias, discrimination, and lack of accountability taking center stage. By 2025, ethical AI will no longer be an optional feature for organizations—it will become a core requirement. Information governance frameworks will be crucial in defining and implementing guidelines for the ethical use of data and developing AI models. These guidelines will ensure that AI systems are fair, transparent, and aligned with societal values, reducing the risk of reputational harm, public backlash, or hefty regulatory fines. Organizations must prioritize fairness audits, explainability protocols, and inclusivity metrics to keep their AI systems in line with ethical standards. Ethical AI will require ongoing oversight and a shift in mindset, treating governance as an enabler of trust rather than a bureaucratic hurdle.

2. Transformative Role of Data Management

Gen AI thrives on vast, high-quality datasets, making data management more critical than ever. As datasets grow in scale and complexity, information governance will take center stage in ensuring proper data collection, storage, and usage. Organizations will need strong governance strategies to maintain data integrity, prevent bias, and mitigate risks like data breaches, misuse, or non-compliance. In 2025, expect to see advancements in data labeling, cleaning technologies, and privacy-preserving methods like differential privacy and federated learning. These innovations will enhance AI model performance while safeguarding sensitive information. Additionally, organizations must implement robust data retention policies, ensuring they only store what is needed while meeting legal and ethical obligations for data disposal. By placing data management at the heart of AI projects, businesses can make smarter, safer, and more impactful use of their data.
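
For a flavor of what a privacy-preserving method like differential privacy means in practice, here is a minimal sketch of the classic Laplace mechanism for releasing a noisy count. The query and the epsilon value are hypothetical, and a real deployment would use a vetted library rather than hand-rolled sampling.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse CDF
    (ignoring the measure-zero edge case where random() returns exactly 0)."""
    u = random.random() - 0.5                      # u in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.
    A counting query changes by at most 1 per individual (sensitivity 1),
    so the required noise scale is 1 / epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many users opted in, released with epsilon = 0.5.
print(private_count(true_count=10_432, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the released count is useful in aggregate while any single individual’s presence is statistically masked.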

3. The Rise of Regulatory Technologies

The regulatory environment for AI continues to intensify with the introduction of new data privacy laws and accountability frameworks worldwide. From GDPR and CCPA to AI-specific legislation in regions like the EU, navigating compliance will become increasingly complex by 2025. This will lead to a rise in regulatory technologies (RegTech) designed to automate compliance tasks and streamline information governance processes. These tools will integrate directly into AI development workflows, enabling organizations to monitor data usage, track model decisions, and ensure adherence to global data protection laws. RegTech solutions will also be critical in generating real-time insights into compliance risks, helping teams address issues proactively. As compliance becomes a key driver of AI adoption, companies that embrace these technologies will not only accelerate their AI deployments but also establish themselves as trusted leaders in the field.

4. Shifting Responsibilities in AI Development

As information governance becomes more central to AI success, traditional roles in AI development will undergo significant changes. Teams must embrace a more collaborative approach, with compliance officers, data scientists, and developers working to ensure ethical data use and model transparency. Shared accountability will become the norm, with each stakeholder upholding governance standards at every stage of the AI lifecycle. For example, compliance officers must understand technical workflows, while developers must prioritize privacy and explainability in their code. Clear governance practices will help define these evolving roles, ensuring teams are aligned with technological goals and organizational objectives. Additionally, organizations will invest in cross-disciplinary training to bridge knowledge gaps and foster greater collaboration across departments. This shift will create a more cohesive and accountable AI development ecosystem that is better prepared to navigate the challenges of 2025.

5. Collaboration and Standardization Efforts

The increasing complexity of governing AI will drive organizations to collaborate more closely across industries and sectors. By 2025, we’ll see a more significant push toward developing standardized governance frameworks, best practices, and shared tools that promote trust and transparency. Industry consortia, academic institutions, and regulatory bodies will collaborate to create unified guidelines, helping organizations navigate the fragmented regulatory landscape more effectively. Open-source initiatives will also play an important role, enabling organizations to share insights, frameworks, and technologies that address common governance challenges. These efforts will provide a clearer roadmap for responsible AI development and foster greater trust among consumers, investors, and regulators. Collaboration will be key to scaling AI systems in an ethical, compliant, and sustainable way.

The Path Forward

Gen AI in 2025 will face higher expectations for ethics, transparency, and compliance. As AI becomes more integrated into our daily lives, information governance will be the foundation for responsible and innovative development; organizations that fail to prioritize governance risk falling behind and facing reputational damage or regulatory penalties. However, those who embrace robust governance practices will gain a strategic advantage by building AI systems that are trustworthy, efficient, and impactful.

Whether you’re a compliance officer, a data scientist, or a software developer, integrating governance into your projects is no longer optional—it’s essential. The interplay between Gen AI and information governance will continue to evolve, requiring adaptability and proactive planning. By adopting strong governance frameworks today, organizations can ensure their AI projects flourish responsibly and sustainably in the years to come. Together, we can shape an AI-driven future that benefits everyone.

2024 was a year chock-full of data breaches and privacy violations. Many new data privacy and cybersecurity regulations were introduced (and became effective), and regulators sent a strong message to businesses that privacy must be at the forefront of their strategy and goals and that robust security controls are required to protect employee and consumer personal information. Plaintiffs also sent a strong message to businesses that breaches will likely result in class action lawsuits.

This year, financial settlements with regulators and data breach victims were particularly prominent. Here are the top data protection fines and settlements in the U.S. last year, according to Infosecurity’s 2024 report:

  • Meta’s $1.4 billion settlement with the Texas Attorney General for unlawful collection of biometric data in violation of the Texas Capture or Use of Biometric Identifier Act and the Deceptive Trade Practices Act (the largest privacy settlement ever in the U.S.).
  • Lehigh Valley Health Network’s $65 million class action settlement after a data breach involving 600 patients and employees (the data accessed included addresses, email addresses, dates of birth, Social Security numbers, and passport information, as well as various medical data and some nude photos) (the largest settlement on a per-patient basis for a healthcare ransomware breach case).
  • Marriott’s $52 million settlement with 50 U.S. states related to a multi-year data breach that affected over 131 million users of the Starwood guest reservation database (allegations were related to failure to comply with consumer protection laws, privacy laws, and data security standards).
  • 23andMe’s $30 million settlement of a class action over a data breach affecting ancestry data (the affected accounts were not protected by multi-factor authentication; 23andMe denied any wrongdoing in the settlement agreement and contends that the breach resulted from users reusing credentials across multiple websites).
  • T-Mobile’s $15.75 million settlement with the Federal Communications Commission (FCC) for several security incidents (in 2021, 2022, and 2023) that resulted in millions of consumers’ personal data being accessed by cyber criminals (T-Mobile must also invest an equal amount, $15.75 million, in updating its cybersecurity practices and safeguards).
  • AT&T’s $13 million FCC settlement over a supply chain breach that led to cyber criminals exfiltrating customer personal information (AT&T agreed to update its data governance and supply chain integrity practices).

As we head into the new year, the landscape of data privacy laws in the U.S. will continue to change. Eight new consumer privacy laws will become effective throughout the year, and companies should be prepared for more rulemaking that could expand compliance obligations and enforcement.