We continue to alert our readers to the uptick in vishing attacks against companies and to their continued success. Threat actors remain creative in developing strategies that use vishing to gain access to systems.

According to Cyberscoop (a publication that I read religiously), Mandiant has confirmed that “multiple cybercrime groups,” including ShinyHunters, are “combining voice calls and advanced phishing kits to trick victims into handing over access” to company systems. The scary thing about this new wave of vishing attacks is that threat actors are using sophisticated vishing campaigns to compromise single sign-on (SSO) credentials, then “enroll threat actor controlled devices into victim multifactor authentication solutions.” This effectively bypasses well-known security tools used by companies to prevent unauthorized access to their systems.

Once threat actors gain access, they move into the company’s SaaS environment to exfiltrate data and then launch extortion campaigns.

In addition, cybercriminals are registering custom domains that mimic legitimate single sign-on portals used by targeted companies, then deploying tailored voice-phishing kits to call victims while remotely controlling which pages appear in the victim’s browser. This lets the attackers sync their spoken prompts with multifactor-authentication requests in real time, increasing the likelihood the victim approves or enters the needed codes on cue.
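
For defenders, one practical implication is that the spoofed portals typically sit on domains that differ from the real SSO hostname by only a character or two. The sketch below is an illustrative, hypothetical check for that pattern; the hostnames, threshold, and function names are assumptions for the example, not indicators published by Mandiant or Okta.

```typescript
// Illustrative only: flag hostnames that closely resemble a known SSO portal.
// The hostnames below are hypothetical placeholders, not real indicators.
const LEGITIMATE_SSO_HOSTS = ["sso.example-corp.com", "login.example-corp.com"];

// Standard dynamic-programming edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// A hostname is suspicious if it is not an exact match but sits within a
// couple of character edits of a legitimate portal (typosquat-style swaps).
function looksLikeSsoSpoof(host: string): boolean {
  const normalized = host.toLowerCase();
  return LEGITIMATE_SSO_HOSTS.some(
    (legit) => normalized !== legit && editDistance(normalized, legit) <= 2
  );
}

console.log(looksLikeSsoSpoof("login.examp1e-corp.com")); // true: one-character swap
console.log(looksLikeSsoSpoof("login.example-corp.com")); // false: the real portal
```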

In response to these attacks, Okta released threat intelligence confirming that it has seen “multiple phishing kits developed” for use with other SSO and cryptocurrency providers. To be clear, this is not a vulnerability in the SSO products, but a scary way for threat actors to dupe users into providing credentials.

Due to the success of these new vishing campaigns targeting SSO, now is the time to remind your users about vishing, how it works, the newest ways threat actors are trying to get users to provide their credentials, and how compromised SSO credentials can give threat actors the keys to the kingdom.

Businesses that run consumer-facing websites have spent the past several years contending with a steady stream of California Invasion of Privacy Act (CIPA) demands and class actions aimed at everyday digital tools such as cookies, pixels, and analytics scripts. A recent decision from the Southern District of California, Camplisson v. Adidas Am., Inc., 2025 WL 3228949 (S.D. Cal. Nov. 18, 2025), suggests that this wave is not fading. If anything, it may pick up further in 2026.

CIPA is a California privacy statute that, among other things, limits the interception of communications and the deployment of certain surveillance-style technologies without proper authorization. In the current round of cases, plaintiffs have increasingly trained their focus on CIPA’s prohibition on using “pen registers” and “trap and trace” devices absent a court order or user consent. They argue that common website tracking technologies function like modern equivalents of these wiretap-adjacent tools. The stakes are high because CIPA allows statutory damages of up to $5,000 per violation, even without proof of actual harm.

In Camplisson, website users brought a putative class action alleging that Adidas violated CIPA by using two tracking pixels on its website: the TikTok pixel and the Microsoft Bing pixel. According to the complaint, the trackers were placed on visitors’ browsers without consent and collected data including IP addresses, browser information, unique identifiers, and other personal information. Adidas moved to dismiss on two primary grounds: first, it argued that the alleged tracking tools do not qualify as a “pen register” as a matter of law, and second, it contended that users had consented.

The court declined to accept either argument at the pleading stage. Emphasizing what it characterized as CIPA’s deliberately broad language, the court reasoned that a narrow reading of “pen register” limited to tools that capture all outgoing information could undermine the statute’s privacy-protective purpose. The court also found the consent argument unpersuasive at this stage, based on how the website presented its terms and privacy disclosures. In particular, visitors allegedly had to scroll to the footer to locate links to the online terms and privacy policy, and the website did not present a pop-up or similar mechanism requiring users to affirmatively consent before the pixels fired.
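
One practical takeaway for businesses reviewing their own sites is that tracking scripts should not load until a visitor affirmatively opts in. The sketch below is a simplified, hypothetical illustration of that gating pattern; the banner callback, storage key, and pixel URL are placeholders, not drawn from the Camplisson record or any vendor’s documentation.

```typescript
// Hypothetical illustration: inject third-party pixel scripts only after the
// visitor affirmatively opts in. Names and URLs below are placeholders.
const CONSENT_KEY = "marketing-consent"; // assumed localStorage key
const PIXEL_SRC = "https://tracker.example.com/pixel.js"; // placeholder URL

function hasMarketingConsent(): boolean {
  // Only a previously recorded, explicit "granted" counts as consent;
  // the absence of any stored value is treated as "no".
  return window.localStorage.getItem(CONSENT_KEY) === "granted";
}

function loadPixel(): void {
  // Guard against double-injection if consent is granted more than once.
  if (document.querySelector(`script[src="${PIXEL_SRC}"]`)) {
    return;
  }
  const script = document.createElement("script");
  script.src = PIXEL_SRC;
  script.async = true;
  document.head.appendChild(script);
}

// Called by the site's consent banner when the visitor makes a choice.
function recordMarketingConsent(granted: boolean): void {
  window.localStorage.setItem(CONSENT_KEY, granted ? "granted" : "denied");
  if (granted) {
    loadPixel();
  }
}

// On page load, the pixel fires only if consent was already granted; otherwise
// nothing loads until recordMarketingConsent(true) is called from the banner.
if (hasMarketingConsent()) {
  loadPixel();
}
```

The point, for Camplisson purposes, is simply that nothing fires before the visitor makes an affirmative choice, rather than relying on footer links alone for notice.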

From a forward-looking perspective, Camplisson hands plaintiffs a new citation for the proposition that standard website pixels can plausibly qualify as pen registers when they capture identifiers such as IP address information and other alleged personal information. It also offers a template for pleading around consent by highlighting the user’s practical path to notice on the website, and whether any meaningful opt-in occurred before tracking began. Together, those concepts are likely to drive additional pre-suit demand letters and new filings, particularly against companies that primarily rely on footer-based links for notice or that allow pixels to run before any affirmative consent. Longer term, unless appellate courts bring greater clarity or the legislature modernizes this decades-old statutory framework, businesses should plan for continued uncertainty and inconsistent results from courts.

This week, the U.S. Supreme Court granted certiorari in Salazar v. Paramount Global, No. 25-459 (cert. granted Jan. 26, 2026), to resolve a circuit split over the scope of the federal Video Privacy Protection Act (VPPA). Enacted in 1988, the VPPA has helped fuel a wave of class actions in recent years, especially suits aimed at digital media and entertainment companies with embedded video content.

This case stems from allegations that the plaintiff subscribed to a newsletter from a sports entertainment website owned by Paramount Global, watched videos on the site, and that Facebook’s Meta pixel caused the browser to transmit the plaintiff’s Facebook ID and the webpage URL to Facebook—allegedly disclosing viewing habits in violation of the VPPA. The district court dismissed the complaint, and the Sixth Circuit affirmed.

The dispute here centers on who qualifies as a “consumer” entitled to sue under the VPPA, which prohibits a “video tape service provider” from knowingly disclosing personally identifiable information concerning any “consumer.” The statute defines “consumer” as “any renter, purchaser, or subscriber of goods or services from a video tape service provider,” and it defines “video tape service provider” in terms of being in the business of rental, sale, or delivery of “prerecorded video cassette tapes or similar audio visual materials.”

The Sixth Circuit applied a narrower approach, holding that a newsletter subscriber was not a “consumer” absent an alleged subscription to video content or other qualifying “audio visual materials,” but other circuits have interpreted the term “consumer” more broadly.

For companies that use embedded video and third-party tracking tools, the Supreme Court’s eventual ruling may help sharpen the VPPA risk picture in the age of digital entertainment.

On January 23, 2026, a bipartisan group of 35 state Attorneys General issued a letter to xAI stating their concern “about artificial-intelligence produced deepfake nonconsensual intimate images (NCII) of real people, including children, wherever it is made or found,” including xAI’s chatbot, Grok. This is in addition to the letter sent on January 13, 2026, to X and other AI companies by eight United States senators requesting information on non-consensual “bikini” and “non-nude” images produced by their products.

The letter “strongly urges [xAI] to be a leader in this space by further addressing the harms resulting from this technology.” The letter further calls for xAI to “immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of NCII.”

The letter outlines the ways Grok can easily be used as a “nudify” tool that can “embarrass, intimidate, and exploit people by taking away their control over how their bodies and likenesses are portrayed.” It alleges that Grok is not only enabling the creation of these images with a mere click, but is also “encouraging this behavior by design.”

Grok is not only being used to alter images of adults; the letter also outlines how the chatbot has “altered images of children to depict them in minimal clothing and sexual situations…including photorealistic images of ‘very young’ people engaged in sexual activity.”

The letter emphasizes the importance of this issue to the Attorneys General, and requests that xAI provide answers on what measures it will take to prohibit Grok from producing NCII, and how it will eliminate existing content, suspend and report to authorities users producing such content, and “grant X users control over whether their content can be edited by Grok.”

We will continue to update you on the information provided by the companies in response to these inquiries.

Years ago, the Federal Trade Commission (FTC) designated the last week of January as Identity Theft Awareness Week. For 2026, this week is devoted to education and awareness about identity theft, which is an ever-present problem.

According to the FTC, “identity theft is one of the most-reported problems to the FTC every year.” The FTC offers free educational opportunities throughout this week, which include podcasts and webinars on how to protect yourself, how identity thieves target military service members, veterans, and their families, financial fraud, recent trends in identity theft, resources for seniors, and how local libraries can assist patrons with prevention tools.

We assume that seniors are hit the hardest by identity theft scams but, in fact, a U.S. News & World Report article states that the vast majority of victims who report identity theft are between the ages of 30 and 40, followed by those aged 40-49, then 20-29. The numbers drop off for those aged 50-80. Apparently, the older you are, the more suspicious you are.

That said, U.S. News & World Report has listed 11 ways to prevent identity theft, all of which are solid tips to implement throughout the year, including this week. Check them out.

The Symantec and Carbon Black Threat Hunter Team recently released its Ransomware 2026 report, which contains helpful intelligence on the state of ransomware attacks and insight into how they are evolving, despite law enforcement’s success in taking down some of the largest ransomware gangs in 2025.

The very first statement is a sobering reality: “Ransomware activity reached record-high levels in 2025 as criminal actors continued to view extortion as one of the most lucrative forms of attack.”

The report notes that even though RansomHub (the number one ransomware operation) collapsed, there was “only a brief drop in ransomware attacks.” The statistics show that there were 6,182 extortion attacks in 2025, a 23% increase from 2024.

The report outlines the ambitious activities of the various ransomware groups in 2025. It highlights that, although new ransomware groups emerged, they all use similar tactics to achieve a single objective: “accessing the victim’s network, obtaining privileges to move laterally across the entire network before exfiltrating data, and delivering an encrypting payload to the maximum number of machines.” Threat actors are able to do this by using legitimate software to evade the security measures organizations have in place. As the report puts it, “An awareness of the TTPs used by attackers will help organizations prepare their defenses and identify malicious behaviors on their networks.”

The report provides a detailed analysis of those tactics, techniques, and procedures (TTPs), as well as the legitimate software threat actors use against victims, both of which should be reviewed by security professionals.

Finally, the report provides mitigation techniques that organizations can deploy to protect against targeted attacks; these are well worth the read.

We know that California has a lot of privacy laws, but the Shine the Light law is one of the oldest in the state, and it still catches businesses off guard because it is not about cookies or ad tech. It’s about who you share customer information with for marketing and what you must disclose when a customer asks. Increasingly, it is also about litigation risk because plaintiffs’ attorneys are now filing claims against companies for alleged Shine the Light violations.

California’s Shine the Light law gives California residents the right to ask a business:

  • Whether the business shared their personal information with third parties for those third parties’ direct marketing purposes; and
  • Who those third parties are, plus what categories of information were shared.

This law is aimed at businesses that:

  • Do business with California residents;
  • Have an established customer relationship with a California resident; and
  • Share certain personal information with third parties for the third parties’ direct marketing.

Of course, there are exceptions and nuances, but the simplest way to think about it is this: if you share customer data (e.g., name, email address, telephone number, or other account-related information) with other companies so they can market their own products or services, you should assume Shine the Light applies and prepare accordingly. That preparation means your business should publish a clear request method (i.e., a simple “Shine the Light” statement in your website privacy policy), make sure customer support and privacy teams know what a Shine the Light request is and how to respond, track marketing-related sharing, and, when you receive a valid request, respond on time with the required disclosures. Don’t improvise; use a vetted template.

Even with newer California privacy frameworks, Shine the Light remains a classic compliance tripwire because it is consumer initiated and simple enough that plaintiffs’ attorneys can test compliance quickly by reading your policy, submitting a request, and then filing claims if the company allegedly lacks the required intake path or fails to provide the required disclosures. These alleged violations are showing up in demand letters and lawsuits, which means the cost of getting it wrong can include legal fees and operational disruption, not just a policy update.

A recent report published by Cyera entitled “State of AI Data Security: How to Close the Readiness Gap as AI Outpaces Enterprise Safeguards,” based on a survey of 921 IT and cybersecurity professionals, finds that although 83% of enterprises “already use AI in daily operations…only 13% report strong visibility into how it is being used.” The report concludes:

The result is a widening gap: sensitive data is leaking into AI systems beyond enterprise control, autonomous agents are acting beyond scope, and regulators are moving faster than enterprises can adapt. AI is now both a driver of productivity and one of the fastest expanding risk surfaces CISOs must defend.

The survey results show that although AI adoption in companies is rapid, most enterprises are “blind to how AI interacts with their data.” This is complicated by the fact that autonomous AI agents are difficult to secure and very few organizations have prompt or output controls, including the ability to block risky AI activity by employees.
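
For readers wondering what a prompt or output control looks like in practice, the sketch below is a minimal, hypothetical example of screening an employee’s prompt for obviously sensitive patterns before it is forwarded to an external AI service. The patterns, names, and threshold are illustrative assumptions, not Cyera’s methodology or a complete control.

```typescript
// Hypothetical prompt control: block a prompt that contains obviously
// sensitive content before it reaches an external AI service.

// Illustrative patterns only; a real deployment would rely on a proper
// data-classification engine rather than a handful of regexes.
const SENSITIVE_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "U.S. Social Security number", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { label: "payment card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { label: "internal codename", pattern: /\bproject\s+aurora\b/i }, // placeholder
];

interface PromptDecision {
  allowed: boolean;
  reasons: string[];
}

function screenPrompt(prompt: string): PromptDecision {
  const reasons = SENSITIVE_PATTERNS.filter(({ pattern }) =>
    pattern.test(prompt)
  ).map(({ label }) => label);
  return { allowed: reasons.length === 0, reasons };
}

// Example: this prompt would be blocked, with the reasons logged for review.
const decision = screenPrompt("Summarize the contract for SSN 123-45-6789");
if (!decision.allowed) {
  console.log("Prompt blocked:", decision.reasons.join(", "));
}
```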

In addition, most of the respondents acknowledged that AI tools used in the organization are “over-accessing data.” This is further complicated by the fact that a small minority of those surveyed (7%) have a “dedicated AI governance team, and just 11% feel fully prepared for regulation.”

The conclusion is: “the enterprise risk surface created by AI is expanding far faster than the governance and enforcement structures meant to contain it.”

We have previously commented on how important AI Governance Programs are in mitigating the risks associated with AI use in an organization. The Cyera Report reiterates that conclusion. If you are one of the vast majority of organizations that have not yet developed an AI Governance Program, it’s time to make it a top priority.

AI hype is everywhere. The 15th Annual AI & Data Leadership Executive Benchmark Survey shows what nearly 110 Fortune 1000 companies and global brands are actually doing with AI. Once a future bet, AI is now a business mandate, and most companies are already seeing results.

Investment is essentially universal: an overwhelming 99.1% of surveyed leaders say data and AI are a top organizational priority, and 90.9% are increasing their level of investment. Executives also connect the AI boom to a renewed focus on fundamentals, with 92.7% saying intensified AI interest is driving stronger attention to data.

Leadership models are tightening as AI scales. The Chief Data Officer (CDO) role is now standard, with 90% of companies reporting a CDO in place and nearly 70% describing the role as successful and well-established, up from 47.6% the prior year. Just as importantly, the CDO mandate has shifted toward growth, with 85.5% saying the role is focused on “offense,” meaning innovation and value creation, rather than primarily defensive or compliance work.

At the same time, the Chief AI Officer (CAIO) role is emerging to formalize AI accountability. Roughly 38.5% of organizations now report a CAIO or equivalent, up from 33.1% last year. Reporting lines are still settling, but in most companies without a CAIO, AI leadership continues to sit with the CDO or Chief Digital and Artificial Intelligence Office function, which 69.1% say currently carries the remit.

AI adoption has moved decisively from experiments to production. Two years ago, just 4.7% of firms reported AI in production at scale—this year, that figure is 39.1%. When added to the 54.5% running AI in limited production, 93.6% of organizations now have active AI capabilities in production, signaling that pilots are no longer the dominant mode.

Value is showing up alongside deployment, with 97.3% of organizations reporting measurable business value from data and AI investments and 54% saying they are realizing a high or significant degree of value, improving on last year’s results and reinforcing that AI programs are increasingly tied to tangible outcomes.

The biggest constraint is not the technology; it is the human side of transformation. A record 93.2% of executives cite cultural challenges and change management as the top barrier to AI success. The work that slows companies down is shifting processes, building skills, changing decision habits, and creating an environment where teams trust data and adopt new ways of working.

Looking forward, leaders view AI as a once-in-a-generation shift. Almost 83% believe AI is likely to be the most transformational technology in a generation. Governance is rising with the stakes, with nearly 80% naming Responsible AI as a top corporate priority and 88.7% saying they have safeguards and guardrails in place.

The takeaway is simple and urgent—the “should we invest” era is over. The winners will be the organizations that align leadership, operating models, culture, and governance fast enough to convert rapidly expanding adoption into sustained business value.

Artificial intelligence (AI) makes it easy to create, remix, and distribute content at scale, and that speed is a significant part of its value. It is also where intellectual property (IP) risk can creep in. That risk is not limited to the end user generating an AI output. It can also extend to the companies that build the tool, host it, integrate it into other products, or deploy it for customers.

A useful legal reference point is MGM Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), a seminal case on secondary liability. Grokster distributed peer-to-peer software with lawful uses, but the case turned on whether the company encouraged infringement. The Supreme Court focused on inducement, finding that even if a product can be used for legal purposes, a company can still face secondary liability if its messaging, product choices, or business model appear designed to drive infringement.

That idea carries over to today’s AI models: they may be general purpose, but disputes often turn on what the product is steering users to do. Once credible warning signs appear, attention shifts to how the company responds.

If you are assessing how an AI secondary liability claim might be framed, consider these questions.

  • What are we encouraging, even indirectly? Marketing copy, tutorials, example prompts, and default workflows can read like a “how-to” guide. If templates aim for near replicas of branded characters, a plaintiff may argue the product is being sold with infringement in mind.
  • Can we tell a strong lawful-use story? “Substantial non-infringing use” matters most when it is real and central to the product. A tool used primarily for internal drafting, meeting summaries, and transforming a company’s own materials is easier to defend than a tool whose primary intended workflow is rewriting paywalled articles.
  • What do we know, and when did we know it? Credible notices, repeated complaints, and internal metrics that point to obvious infringement patterns can make a lack of knowledge argument hard to sustain. After a certain point, inaction can start to be perceived as a decision in itself.
  • How much control do we have, and are we monetizing the risk? If you can supervise use through accounts, moderation, or termination rights, and you profit directly from high-volume usage, claimants may argue you had both the ability to intervene and a financial incentive not to.

To keep the most defensible posture, companies should maintain documented, repeatable governance across the AI lifecycle, including training data traceability, policies for customer fine-tuning on third-party content, monitoring for output patterns that suggest replication, and a clear process for handling repeat users who push high-risk requests. Product features, contract language, and marketing materials should also be aligned so your claims about the tool match what it actually does. The goal is to be able to show that you anticipated foreseeable risks, made reasonable design and operational choices to mitigate them, and improved based on what you observed in production.
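
As one small, concrete piece of that governance picture, the sketch below shows how repeat high-risk users might be surfaced for human review. The threshold, time window, and names are hypothetical illustrations, not a legal standard or a complete program.

```typescript
// Hypothetical sketch: count flagged requests per account so that repeat
// high-risk users surface for human review within a rolling time window.
const REVIEW_THRESHOLD = 3; // flags in the window before escalation (assumed)
const WINDOW_MS = 24 * 60 * 60 * 1000; // 24-hour rolling window (assumed)

const flagsByAccount = new Map<string, number[]>(); // accountId -> flag timestamps

// Record one flagged request (e.g., an output-similarity check tripped) and
// report whether the account has now crossed the escalation threshold.
function recordFlag(accountId: string, now: number = Date.now()): boolean {
  const recent = (flagsByAccount.get(accountId) ?? []).filter(
    (t) => now - t < WINDOW_MS
  );
  recent.push(now);
  flagsByAccount.set(accountId, recent);
  return recent.length >= REVIEW_THRESHOLD;
}

// Example: the third flag within a day sends the account to a reviewer.
["req-1", "req-2", "req-3"].forEach((req) => {
  const escalate = recordFlag("acct-42");
  console.log(`${req}: escalate=${escalate}`); // false, false, true
});
```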