The Cybersecurity and Infrastructure Security Agency (CISA) confirmed on Tuesday, March 11, 2025, that the Multi-State Information Sharing and Analysis Center (MS-ISAC) will lose its federal funding and cooperative agreement with the Center for Internet Security. MS-ISAC’s mission “is to improve the overall cybersecurity posture of U.S. State, Local, Tribal, and Territorial (SLTT) government organizations through coordination, collaboration, cooperation, and increased communication.”

According to its website, MS-ISAC is a cybersecurity partner for 17,000 SLTT government organizations. It offers its “members incident response and remediation support through our team of security experts” and develops “tactical, strategic, and operational intelligence, and advisories that offer actionable information for improving cyber maturity.” Its services also include a Security Operations Center, webinars addressing recent threats, cybersecurity maturity evaluations, advisories and notifications, and weekly reports on top malicious domains.

These services assist government organizations that lack the resources to respond to cybersecurity threats on their own. Information sharing through MS-ISAC has been essential to keeping government entities from becoming victims, and state and local governments have relied on it for resilience: it gives them an organized forum to share information about cyber threats and attacks, to learn from others’ experiences, and to obtain timely threat information for preparedness. Dismantling MS-ISAC will make all of that harder.

According to CISA, dismantling MS-ISAC will save $10 million. But state and local governments rely on the information MS-ISAC shares, and that minimal federal savings will be dwarfed by the taxpayer dollars spent responding to future cyberattacks on governments that no longer have MS-ISAC’s assistance. This is a short-sighted shift by the administration, and one that will leave state and local governments in the dark and at increased risk.

According to Security Week, X (formerly Twitter) was hit on March 10, 2025, with a distributed denial-of-service (DDoS) attack that disrupted tens of thousands of users’ ability to access the platform.

According to Reuters, the attack traffic came from IP addresses in the U.S., Vietnam, Brazil, and Ukraine. The threat group Dark Storm Team, which “claims to be a pro-Palestine hacktivist group which may have links to Russia,” has claimed responsibility and posted screenshots on Telegram to substantiate its claim. Security Week notes that this has not been verified and that threat actors are known to “falsely take credit for major attacks or outages.”

The incident is still under investigation.

Last week, we explored a recent data breach class action and the litigation risk posed by such lawsuits. Companies need to be aware of litigation risk arising not only from data breaches, but also from shareholder class actions related to privacy concerns.

On March 5, 2025, a securities class action lawsuit was filed against AppLovin Corporation and its Chief Executive Officer and Chief Financial Officer (collectively, the defendants). AppLovin is a mobile advertising technology business whose software platform and app connect mobile game developers to new users. The plaintiff alleges that the defendants misled investors regarding AppLovin’s artificial intelligence (AI)-powered digital ad platform, AXON.

According to the complaint, the defendants made material representations through press releases and statements on earnings calls about how an upgrade to AppLovin’s AXON AI platform would improve on the platform’s earlier version. The complaint further alleges that the defendants made numerous statements indicating that AppLovin’s financial growth in 2023 and 2024 was driven by improvements to the AXON technology, and that they attributed AppLovin’s increases in net revenue per app installation and in installation volume to the improved technology.

The complaint further states that on February 25, 2025, two short-seller reports were published that attributed AppLovin’s digital ad platform growth not to AXON, but to exploitative app permissions that carried out “backdoor” installations without users noticing. According to the reports, AppLovin used code that purportedly allowed it to bind to consumers’ permissions for AppHub, Android’s centralized Google repository where app developers can upload and distribute their apps. The complaint claims that by attaching itself to AppHub’s one-click direct installations, AppLovin downloaded apps directly onto consumers’ phones without their knowledge.

The research reports also state that AppLovin was reverse-engineering advertising data from Meta platforms and using manipulative practices, such as having ads click on themselves and forcing shadow downloads, to inflate its installation and profit figures. One of the research reports states that AppLovin was “intentionally vague about how its AI technology actually works,” and that the company used its upgraded AXON technology as a “smokescreen to hide the true drivers of its mobile gaming and e-commerce initiatives, neither of which have much to do with AI.” The reports further assert that the company’s “recent success in mobile gaming stems from the systematic exploitation of app permissions that enable advertisements themselves to force-feed silent, backdoor app installations directly onto users’ phones.” The complaint details the findings from the reports and alleges that AppLovin’s misrepresentations led to artificially inflated stock prices, which materially declined because of the research report findings.

In a company blog post responding to the research reports, the CEO wrote that “every download [of AppLovin] results from an explicit user choice—whether via the App Store or our Direct Download experience.”

As organizations begin integrating AI into their operations, they should be cautious in making representations regarding AI as a profitability driver. Executive leaders responsible for issuing press releases and leading earnings calls about a company’s technology practices should understand how those technologies function and ensure that any statements they make are accurate. Whether or not such allegations are true, litigation over materially false representations can prove costly to an organization, from both a financial and a reputational perspective.

Edison Electric Institute (EEI), an association that represents all U.S. investor-owned electric companies, petitioned the Federal Communications Commission (FCC) to permit calls and texts under the Telephone Consumer Protection Act (TCPA) without prior express consent for “demand response” communications. A prior FCC ruling clarified the FCC’s policies towards the types of calls and texts from utilities that require prior express consent; EEI now urges the FCC to provide additional guidance on allowable “demand response” calls and texts. “Demand response” refers to non-marketing communications related to “temporary, strategic adjustments to electricity usage during peak demand periods.” EEI has asked the FCC to “recognize how essential demand response programs are to ensuring customer safety and to managing increasing demand for electricity more effectively.” EEI seeks FCC clarification on whether such calls and texts are permissible without prior express consent from customers so that utilities can save customers money and prevent outages.

Violations of the TCPA could result in fines and lawsuits against utilities. Thus, in 2016, the FCC clarified that when a customer provides a telephone number to a utility, doing so constitutes prior express consent for certain communications “closely related” to the utility service. EEI is asking that the FCC’s ruling be expanded to include non-telemarketing, informational demand response calls and texts. EEI’s petition states, “Demand response programs target short-term, intentional modification of electricity usage by end-user customers during peak times or in response to market prices. They help keep the electricity grid stable and efficient and can save customers money.” EEI further states that customer survey data “indicates widespread satisfaction among participants in demand response programs utilizing calls or texts, demonstrating positive impacts on customer experience with low opt-out rates.” EEI hopes that the FCC will clarify the applicability of the utility-customer presumption of consent and allow utilities to engage customers in these essential demand response programs.

A federal district court has denied a motion by Johnson & Johnson Consumer Inc. (JJCI) to dismiss a second amended complaint alleging that it violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric information through its Neutrogena Skin360 beauty app without consumers’ informed consent or knowledge. The plaintiffs also allege that the biometric information collected through the app is then linked to their names, birthdates, and other personal information.

Plaintiffs alleged that the Skin360 app is depicted as “breakthrough technology” that provides personalized at-home skin assessments by scanning faces and analyzing skin to identify concerns like wrinkles, fine lines, and dark spots. The app then uses that data to recommend certain Neutrogena products to eliminate those concerns. JJCI argued that because the Skin360 app recommends products designed to improve skin health, consumers should be considered patients in a healthcare setting, making BIPA inapplicable.

The court disagreed, however, citing Marino v. Gunnar Optiks LLC, 2024 IL App (1st) 231826 (Aug. 30, 2024), which held that a customer trying on non-prescription sunglasses using an online “try-on” tool is not a patient in a healthcare setting. In Marino, the court defined a patient as an individual currently waiting for or receiving treatment or care from a medical professional. Skin360, by contrast, uses artificial intelligence software to compare a consumer’s skin to a database of images and provides an assessment based on that comparison. Notably, JJCI did not dispute that no medical professionals are involved in providing the service through the Skin360 app.

The court stated that “[e]ven assuming Skin360 provides users with this AI assistant and ‘science-backed information’ the court finds it a reach to consider these services ‘medical care’ under BIPA’s health care exemption; [i]ndeed, Skin360 only recommends Neutrogena products to users of the technology, which suggests it is closer to a marketing and sales strategy rather than to the provision of informed medical care or treatment.”

The California Privacy Protection Agency (CPPA), the agency responsible for implementing and enforcing the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) (collectively, the CCPA), protecting consumer privacy, and ensuring compliance with data privacy regulations, has announced an investigative sweep into companies’ collection of sensitive location data. The CPPA has already sent inquiries to “advertising networks, mobile app providers, and data brokers that appear to be in violation” of the CCPA.

California Attorney General Rob Bonta said, “Every day, we give off a steady stream of data that broadcasts not only who we are, but where we go. This location data is deeply personal, can let anyone know if you visit a health clinic or hospital, and can identify your everyday habits and movements.” The CPPA is concerned that this sensitive location data will be used to target vulnerable populations, and it urges businesses to take their responsibility as stewards of this sensitive data seriously and to affirmatively protect location data.

The CPPA’s investigation will focus on how companies are informing consumers about their right to opt out of the sale and sharing of their data (as required under the CCPA), including geolocation data and other types of personal information collected by businesses. Additionally, the CPPA will investigate how companies actually apply this opt-out requirement when a consumer asserts that right.

If your company hasn’t assessed its opt-out processes and procedures lately, now is the time to confirm that consumers are clearly notified of this right and can readily opt out of such tracking and collection and the subsequent sale and/or sharing of that data with third parties.
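
For teams reviewing those processes, here is a minimal, hedged sketch of how a web application might honor an opt-out preference signal such as Global Privacy Control (GPC), which California regulators treat as a valid request to opt out of the sale and sharing of personal information. It assumes a Flask application; record_opt_out() and the "user_id" cookie are hypothetical placeholders for whatever consent store and session mechanism a given site actually uses.

```python
# Minimal sketch (not legal advice): honoring a Global Privacy Control signal.
# Assumes Flask; record_opt_out() and the "user_id" cookie are hypothetical
# placeholders for your own consent store and session handling.
from flask import Flask, jsonify, request

app = Flask(__name__)


def record_opt_out(user_id: str) -> None:
    """Persist the consumer's opt-out of sale/sharing (placeholder)."""
    print(f"Recorded opt-out of sale/sharing for {user_id}")


@app.route("/page")
def page():
    user_id = request.cookies.get("user_id", "anonymous")

    # Browsers that assert GPC send the "Sec-GPC: 1" request header;
    # California treats such signals as a valid opt-out of sale/sharing.
    opted_out = request.headers.get("Sec-GPC") == "1"
    if opted_out:
        record_opt_out(user_id)

    # Suppress third-party ad/analytics tags for opted-out consumers.
    return jsonify({
        "third_party_tags_enabled": not opted_out,
        "opt_out_recorded": opted_out,
    })


if __name__ == "__main__":
    app.run(port=5000)
```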

With the proliferation of artificial intelligence (AI) usage over the last two years, companies are developing AI tools at an astonishing rate. When pitching those tools, companies often overpromise and exaggerate what their products can do. AI washing “is a marketing tactic companies employ to exaggerate the amount of AI technology they use in their products. The goal of AI washing is to make a company’s offerings seem more advanced than they are and capitalize on the growing interest in AI technology.”

Isn’t this mere puffery? No, according to the Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), and investors.

The FTC released guidance in 2023 outlining questions companies can ask themselves to determine whether they are AI washing. It urges companies to consider whether they are overpromising what their algorithm or AI tool can deliver. According to the FTC, “You don’t need a machine to predict what the FTC might do when those claims are unsupported.”

In March 2024, the SEC charged two investment advisers with AI washing by making “false and misleading statements about their use of artificial intelligence.” The cases settled for a combined $400,000 in civil penalties. The SEC found that the two firms had “marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.”

Investors are joining the hunt as well. In February and March 2025, investors filed securities litigation against two companies alleging AI washing. In the first case, the company allegedly made statements to investors about its AI capabilities and reported “impressive financial results, outlooks and guidance.” It subsequently became the subject of short-seller reports alleging that it was using “manipulative practices” to inflate its numbers and profitability. The litigation alleged that, as a result, the company’s shares declined.

In the second case, the named plaintiff in the class action alleged that the company overstated “its position and ability to capitalize on AI in the smartphone upgrade cycle,” which caused investors to buy shares at an artificially inflated price.

Lessons learned from these examples? Review the FTC’s guidance and assess whether your sales and marketing plan takes AI washing into consideration.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.

Although some members of Parliament called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright law. The album, currently streaming on Spotify, includes 12 tracks of only ambient sound. According to the musicians, the silent tracks represent empty recording studios and the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the titles of the 12 tracks, when combined, read, “The British government must not legalize music theft to benefit AI companies.”

High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 

The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.

“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.

Key Takeaways:

  • AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to and process vast amounts of sensitive data to function effectively. This makes robust data security, privacy measures, and strong information governance (IG) frameworks absolutely paramount. Any existing vulnerabilities or weaknesses in your current IG framework can be significantly amplified by the introduction and use of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
  • Data Lifecycle Management is Crucial: From the initial data ingestion and collection stage, through data processing, storage, and analysis, all the way to data archival or disposal, a comprehensive understanding and careful management of the AI’s entire data lifecycle is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purposes within the AI system.
  • Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
  • Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI (a brief illustrative sketch follows this list).
  • Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.
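
To make the transparency and explainability point above concrete, the sketch below shows one common interpretability technique, permutation importance, applied to a toy scikit-learn model. It is an illustrative example only, not drawn from the Iron Mountain paper; the synthetic dataset and model stand in for whatever system an organization is actually evaluating.

```python
# Illustrative sketch of one interpretability technique (permutation importance)
# on synthetic data; real explainable-AI programs involve far more than this.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever features a real model would use.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does shuffling each feature hurt the
# model's accuracy? Larger drops indicate features the model relies on more,
# giving a first, auditable answer to "which inputs drove this decision?"
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop = {drop:.3f}")
```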

The Time to Act is Now:

AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

We have educated our readers about phishing, smishing, QRishing, and vishing scams, and now we’re warning you about what we have dubbed “snailing.” Yes, believe it or not, threat actors have gone retro and are using snail mail to try to extort victims. TechRadar is reporting that, according to GuidePoint Security, an organization received several letters in the mail, allegedly from the BianLian cybercriminal gang, stating:

“I regret to inform you that we have gained access to [REDACTED] systems and over the past several weeks have exported thousands of data files, including customer order and contact information, employee information with IDs, SSNs, payroll reports, and other sensitive HR documents, company financial documents, legal documents, investor and shareholder information, invoices, and tax documents.”

The letter alleges that the recipient’s network “is insecure and we were able to gain access and intercept your network traffic, leverage your personal email address, passwords, online accounts and other information to social engineer our way into [REDACTED] systems via your home network with the help of another employee.” The threat actors then demand $250,000 to $350,000 in Bitcoin within ten days, and the letter even includes a QR code that directs the recipient to the Bitcoin wallet.

It’s comical that the letters have a return address of an actual Boston office building.

GuidePoint Security says the letters and attacks mentioned in them are fake and are inconsistent with BianLian’s ransom notes. Apparently, these days, even threat actors get impersonated. Now you know—don’t get scammed by a snailing incident.