On November 24, 2025, the Cybersecurity and Infrastructure Security Agency (CISA) issued an alert titled “Spyware Allows Cyber Threat Actors to Target Users of Messaging Applications,” which outlines how “multiple cyber threat actors” are “leveraging commercial spyware to target users of mobile messaging applications.”

The threat actors “use sophisticated targeting and social engineering techniques to deliver spyware and gain unauthorized access to a victim’s messaging app, facilitating the deployment of additional malicious payloads that can further compromise the victim’s mobile device.”

According to the alert, the threat actors use tactics including:

  • Phishing and malicious device-linking QR codes to compromise victim accounts and link them to actor-controlled devices;
  • Zero-click exploits, which do not require direct action from the device user; and
  • Impersonation of messaging app platforms, such as Signal and WhatsApp.

The threat actors target “high-value individuals, such as current and former high-ranking government, military and political officials, as well as civil society organizations (CSOs) and individuals across the United States, Middle East and Europe.” CISA “strongly encourages messaging app users to review” its updated Mobile Communications Best Practice Guide and Mitigating Cyber Threats with Limited Resources: Guidance for Civil Society.

On December 1, 2025, the Federal Trade Commission (FTC) approved a proposed complaint and order against Illuminate Education, Inc., an education technology provider, requiring it “to implement a data security program and delete unnecessary data” to settle allegations that the company’s data security failures led to a major data breach, which allowed hackers to access the personal data of more than 10 million students.

The FTC alleges that Illuminate “failed to deploy reasonable security measures to protect student data stored in cloud-based databases. These failures led to a major data breach.” According to the complaint, in late December 2021, a hacker used the credentials of a former employee to access Illuminate’s databases stored in the cloud. The threat actor accessed information including students’ email addresses, mailing addresses, dates of birth, student records, and health information.

The FTC further alleges that Illuminate failed to notify school districts in a timely manner, as “it waited nearly two years to notify some school districts, comprising more than 380,000 students, about the data breach.”

The FTC’s proposed order includes:

  • Deleting personal information no longer needed to provide requested services;
  • Following a publicly available data retention schedule that details why information is collected and establishes a timeframe for its deletion;
  • Establishing and implementing a comprehensive information security program that protects the security, availability, confidentiality, and integrity of personal information it collects; and
  • Notifying the FTC if it has alerted another federal, state, or local government about a data breach involving consumers’ personal information.

There was no monetary settlement included in the proposed order. The FTC voted 2-0 to accept the proposed complaint and order for public comment.

A recent lawsuit filed in the United States District Court for the Western District of North Carolina is spotlighting risks businesses face when using prerecorded telemarketing messages without proper consent. The case, Toledo v. QuoteWizard.com, LLC, No. 3:25-cv-00949 (W.D.N.C. Nov. 24, 2025), alleges that QuoteWizard, an insurance comparison subsidiary of LendingTree, violated the Telephone Consumer Protection Act (TCPA) by making unsolicited prerecorded telemarketing calls to consumers’ cell phones without obtaining express written consent.

According to the complaint, Shelly Toledo received a prerecorded voice message on her cell phone from QuoteWizard. The message purported to follow up on an auto insurance quote request, encouraged her to visit the QuoteWizard website, and offered a callback number. Toledo claims that she never provided QuoteWizard with her express written consent to receive marketing communications via prerecorded messages.

The lawsuit alleges violations of the TCPA’s restrictions on unsolicited telemarketing calls, including those using prerecorded or artificial voices, to mobile devices and residential lines without prior express written consent. Toledo seeks to certify a nationwide class of individuals who received similar prerecorded telemarketing calls from QuoteWizard in the past four years, and asks for statutory and treble damages as well as an order prohibiting such conduct in the future.

The proposed class is defined as all U.S. residents who received artificial or prerecorded voice messages from QuoteWizard within four years preceding the lawsuit, where the purpose was to encourage purchase or rental of the company’s goods or services. The complaint alleges the class may number “in the several thousands, if not more.”

Toledo seeks:

  • An order certifying the case as a class action and appointing her as class representative;
  • Actual, statutory, and treble damages for plaintiffs;
  • A declaration that QuoteWizard’s activities violated the TCPA; and
  • An injunction prohibiting further unsolicited calls without express consent.

The QuoteWizard case is part of a rising national trend of TCPA lawsuits targeting companies for unauthorized calls and texts. Recent months have seen suits against law firms, marketing companies, mortgage lenders, and cannabis dispensaries for similar alleged violations. Courts have frequently upheld the right of consumers to pursue statutory and treble damages for unwanted telemarketing communications.

Consider these key takeaways for your business to avoid TCPA violations and lawsuits:

  • Consent is king: Businesses must obtain clear, prior express written consent before placing prerecorded or automated telemarketing calls to consumers’ mobile or residential lines;
  • Litigation risks, including class actions, for violations of the TCPA are growing, especially as statutory damages can accumulate quickly across many recipients; and
  • Even a single unsolicited prerecorded call may expose a business to liability; widespread campaigns can result in significant damages and court-ordered injunctions.

The California Attorney General (CA AG) has again made waves in the privacy world, this time with a settlement requiring Sling TV to pay a $530,000 fine and make significant operational changes due to alleged violations of the California Consumer Privacy Act (CCPA) and Unfair Competition Law (UCL). This case signals an increase in CCPA enforcement and a clear mandate for companies: If you haven’t revisited your CCPA program lately, now is the time.

The Sling TV resolution is just the latest example of the CA AG pushing for aggressive interpretations and implementations of the CCPA. Essential takeaways include:

  • Demand for “One-Click” Opt-Outs: The CA AG expects companies to provide consumers with direct, frictionless controls to opt out of sales and sharing of their personal information across all channels, including websites, mobile, and TV apps;
  • Crackdown on Market Practices: Many compliance methods that have become standard practice, like cookie-only preference centers or requiring consumers to confirm opt-out requests, are now actively discouraged or seen as insufficient; and
  • Heightened Children’s Privacy Enforcement: With increased scrutiny of how companies treat the data of consumers under 16, the CA AG continues to make children’s privacy an enforcement priority.

The CA AG alleged multiple CCPA and UCL violations by Sling TV, with a focus on “Do Not Sell or Share” compliance and children’s privacy:

  • Fragmented Opt-Out Mechanisms: Sling TV required consumers to use two different methods to opt out of sales and sharing: a cookie preference center for cookies and a separate webform for other data. The CA AG found this “bifurcated” approach inconsistent with the CCPA’s requirements;
  • Barriers for Logged-In Users: Customers who were already logged in had to re-enter their information in a webform to make opt-out requests, instead of Sling TV using existing account details to facilitate the process;
  • No In-App Opt-Outs: Consumers using the TV app (the primary way most people access Sling TV) were not offered an in-app opt-out. Instead, they were sent to a website, which did not cover in-app sales or sharing; and
  • Children’s Data Sold Without Opt-In Consent: Sling TV allegedly collected and shared (or sold) personal information of children under 16 without obtaining the required parental or age-appropriate consent.

As a result of the settlement, Sling TV agreed to:

  • Provide Easy, Universal Opt-Outs: Implement a clear, prominent, and user-friendly opt-out mechanism on all digital properties (i.e., website, mobile app, and TV app);
  • Click-to-Opt-Out for Logged-In Users: Allow logged-in customers to opt out with a single click or link, using data already on file (see the sketch after this list);
  • In-App Opt-Outs: Incorporate a seamless opt-out process directly within the TV app;
  • Better Children’s Data Controls: Allow parents to designate user profiles as a kid’s profile, defaulting to the highest privacy protections (no sale/sharing, no targeted advertising); and
  • Delete Existing Children’s Data: Remove personal data of children known to be under 16 collected without proper consent.
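
To make the single-click requirement concrete, below is a minimal sketch of an opt-out endpoint for logged-in users. Express, the route path, and the in-memory stores are illustrative assumptions for this example, not Sling TV’s actual implementation.

```typescript
// Minimal sketch: a one-click "Do Not Sell or Share" endpoint that
// identifies the consumer from the existing session rather than a webform.
import express from "express";

const app = express();

// Hypothetical stand-ins for a real session store and consent log.
const sessions = new Map<string, string>(); // sessionId -> accountId
const optOuts = new Map<string, Date>();    // accountId -> opt-out time

app.post("/privacy/opt-out", (req, res) => {
  // Use data already on file; the user is not asked to re-enter anything.
  const accountId = sessions.get(req.header("x-session-id") ?? "");
  if (!accountId) {
    res.status(401).send("Not logged in");
    return;
  }

  // One click records a global opt-out covering sales and sharing across
  // every channel (website, mobile app, TV app), not just cookies.
  optOuts.set(accountId, new Date());
  res.status(200).send("You have opted out of the sale and sharing of your personal information.");
});

app.listen(3000);
```

The design point is the absence of friction: the request succeeds on a single authenticated click, with no confirmation step or re-identification.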

The CA AG’s stance is clear: companies must move beyond the bare minimum. Here’s how your organization can stay ahead:

  • Minimize Barriers to Opting Out: Use a single, simple method for consumers to opt out of all sales and sharing of information, covering all data types and channels (not just cookies);
  • Streamline For Logged-In Users: Don’t make logged-in users re-identify themselves; leverage information you already have to honor requests easily;
  • Opt-Outs Where Consumers Interact: Provide opt-out mechanisms on every platform selling or sharing consumer data (i.e., apps, websites, and any other channels);
  • Prioritize Children’s Privacy: Audit your children’s privacy practices now. Age verification, opt-in requirements, and data deletion protocols must be robust and ready for new regulations; and
  • Plan for Development Time: Many of these changes require technical adjustments that can take months. Start planning and implementing now to avoid future enforcement actions.

The Sling TV case is a wake-up call: CCPA compliance isn’t static, and the CA AG is enforcing the letter and spirit of the law more aggressively than ever. Companies should conduct a comprehensive privacy compliance review and look for ways to make consumer rights not just technically available, but truly easy to exercise.

The 2025 California legislative session ended without passing critical reforms to the California Invasion of Privacy Act (CIPA), leaving businesses vulnerable and scrambling to manage escalating compliance challenges and legal exposure on their own.

Why Was Reform Needed?

CIPA, originally enacted in 1967 to protect against telephone wiretapping, has recently been used to challenge how websites collect and process user data using tools like Google Analytics, Meta Pixel, and session replay software. Plaintiffs allege these tools “intercept” online communications without proper user consent, invoking CIPA’s provisions on eavesdropping and signal tracing even though the law predates the digital era by decades.

Despite the uncertainty, most courts have not dismissed these claims early, opening the door to expensive litigation. Statutory damages run $5,000 per violation, so potential exposure can balloon rapidly for businesses with significant web traffic.

What Happened with SB 690?

Senate Bill 690 (SB 690) was introduced as a modernization effort, aiming to exempt routine data collection for business operations or analytics from being treated as illegal wiretapping under CIPA. The bill cleared the Senate but stalled in the Assembly Judiciary Committee amid calls for further negotiation between privacy advocates, industry groups, and consumer-rights organizations.

With SB 690 in limbo, companies must continue to navigate the ambiguities and aggressive lawsuits that have become commonplace since plaintiffs’ firms began targeting legacy tracking technologies and years-old analytics integrations.

Essential Compliance Action Steps for Businesses

Until state lawmakers act, businesses should consider taking the following steps to mitigate risk and demonstrate good faith if challenged:

  1. Conduct a Comprehensive Privacy Audit
    • Inventory all data-collection tools, including analytics, marketing pixels, session replay, chat, and plug-ins; and
    • Determine what information is being collected and who has access to it (including third parties).
  2. Obtain Clear and Affirmative Consent
    • CIPA requires explicit, affirmative opt-in consent before collecting user data. Use action-based consent banners (e.g., “By clicking Accept, you agree…”);
    • Passive consent such as “by continuing to browse” is insufficient; do not collect personal information before explicit consent is given; and
    • Some tools, like Google Analytics, now offer “consent mode” to restrict data collection until consent is given, which can be applied to all California-based IP addresses visiting your website (see the sketch after this list).
  3. Update Privacy Disclosures
    • Accurately describe all data practices and third-party tool usage in easy-to-understand language in your privacy policy and consent pop-ups; and
    • Ensure public disclosures match actual practices; discrepancies can increase liability.
  4. Strengthen Vendor Agreements
    • Technology vendor contracts must require compliance, limit data use, and include indemnification where possible.
  5. Implement Role-Based Data Controls
    • Restrict access to personal data to only necessary personnel and systems; retain records only as long as needed.
  6. Educate and Align Internal Teams
    • Ensure marketing and IT teams understand CIPA risks and consent requirements; many issues stem from misunderstanding rather than intentional disregard.
  7. Insurance, Indemnification, and Reputational Risk
    • Most general liability and cyber insurance policies exclude coverage for statutory privacy violations like CIPA claims. This gap may leave businesses financially exposed to high defense costs and settlements. Review policy language with brokers or counsel and seek possible amendments.
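
To make the consent-mode point concrete, here is a minimal sketch of action-based consent gating with Google Analytics consent mode. The gtag consent calls are the real mechanism; the banner element ID and the California-scoped default are illustrative assumptions, not a definitive implementation.

```typescript
// Minimal sketch: deny analytics/ad storage by default, upgrade only on
// an explicit "Accept" click. gtag is supplied by the gtag.js snippet.
declare function gtag(...args: unknown[]): void;

// 1. Set defaults before any tag fires. The "region" key scopes the
//    denial; here it is assumed to target California visitors.
gtag("consent", "default", {
  analytics_storage: "denied",
  ad_storage: "denied",
  region: ["US-CA"],
});

// 2. Only an affirmative action upgrades consent. Passive signals like
//    "continuing to browse" never trigger this update.
document.getElementById("consent-accept")?.addEventListener("click", () => {
  gtag("consent", "update", {
    analytics_storage: "granted",
    ad_storage: "granted",
  });
});
```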

Beyond direct costs, reputational harm can be significant, as plaintiffs’ firms often publicize lawsuits to exert pressure on companies and attract copycat claims. Transparent, user-friendly communication about data practices is the best defense.

What’s Next?

Many expect SB 690 or similar reform efforts to reappear in the next legislative session, and California courts will continue grappling with conflicting interpretations of CIPA. Until then, regulatory uncertainty will persist, with plaintiffs’ firms actively exploiting it. Preparation and transparency remain businesses’ best shields: proactive audits, updated disclosures, and robust consent mechanics are essential. Audit before you’re accused. Legacy laws like CIPA now pose modern threats. With reforms delayed, compliance is a business-wide mandate, not just a legal question. Companies that act now to align practices, communications, and governance will be best positioned to avoid costly disputes and reputational damage.

Indiana’s new Consumer Data Protection Act (CDPA) takes effect on January 1, 2026. It follows other state consumer privacy laws by providing consumers with rights related to the collection and processing of their information. On November 25, 2025, Indiana’s Attorney General issued a Consumer Data Protection Bill of Rights as “a tool to educate Hoosiers on how the new law works, the rights offered to consumers and the obligations placed on applicable businesses.”

The Bill of Rights details core consumer rights under the CDPA and adds specific clarity, examples, and expectations for how businesses should operationalize them. Though the guide is intended to clarify the law for consumers, it also puts businesses on notice about the Indiana Attorney General’s compliance expectations under the privacy law.

Rights Summarized in the Bill
The Bill of Rights outlines 15 protections for Indiana consumers, including rights to delete personal data held by companies, opt out of targeted advertising and data sales, and request a copy of their information in a portable format. Among other guidelines:

  • Consumers can find out if a business is processing their data and request one free copy of their personal data each year;
  • Consumers can request corrections to inaccurate data, or have their personal data deleted, regardless of its source;
  • Customers can opt out of targeted advertising, profiling, and sale of their personal data;
  • Businesses may not discriminate if a consumer exercises their rights, including by changing services, pricing, or quality;
  • Requests must be addressed within 45 days, with one possible 45-day extension (see the worked example after this list); and
  • Sensitive data, including health, biometric, immigration, religious, and precise location data, and all data on children under 13, cannot be processed unless the consumer or parent/guardian gives opt-in consent.
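
As a worked example of the 45-day response clock noted above, here is a small sketch of the deadline arithmetic; the receipt date is illustrative.

```typescript
// Deadline arithmetic for the CDPA response window: 45 days from receipt,
// extendable once by another 45 days (90 days total).
const DAY_MS = 24 * 60 * 60 * 1000;

function cdpaDeadlines(received: Date) {
  return {
    initial: new Date(received.getTime() + 45 * DAY_MS),  // base response deadline
    extended: new Date(received.getTime() + 90 * DAY_MS), // with the one allowed extension
  };
}

const received = new Date(Date.UTC(2026, 0, 2)); // request received Jan. 2, 2026
const { initial, extended } = cdpaDeadlines(received);
console.log(initial.toISOString().slice(0, 10));  // "2026-02-16"
console.log(extended.toISOString().slice(0, 10)); // "2026-04-02"
```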

A Step-by-Step Consumer Guide
Unlike statutes or regulations, the Bill of Rights gives consumers step-by-step instructions for determining if a business is a “controller” subject to the CDPA, how to exercise their rights, and what to do if their request is denied. Consumers are advised to first confirm whether the CDPA even applies, checking for coverage, exemptions, and whether the business meets the relevant data thresholds. Next, they are encouraged to use the methods described in the company’s privacy notice (such as an online form, email, or mailing address) to submit requests to access, correct, or delete their data. For opt-out requests, such as stopping targeted advertising or the sale of personal data, the Bill of Rights instructs consumers to use specifically provided mechanisms, like opt-out buttons or forms.

If a business denies a request, the Bill outlines how to appeal through the company’s process and reminds consumers that if a company does not comply, the consumer can file a formal complaint with the Attorney General’s office. The document also highlights the need for timely responses and requires companies to provide appeal instructions and reasons for denials.

Takeaways

  • Make your privacy notice easy to find and understand. Given its ease of use and accessibility, consumers are likely to use this document to evaluate companies’ websites for CDPA compliance. Privacy notices should clearly detail what data is collected, with whom it is shared, and how customers can exercise their rights; and
  • Build user-friendly request and appeal processes. The Bill of Rights expects companies to provide a clear, practical mechanism for requests. Companies should test their processes to determine whether they are accessible and available in the way consumers are likely to make requests. Companies should also train legal and compliance teams on timely responses to consumer requests.

Indiana’s Consumer Data Protection Bill of Rights is likely to drive public understanding, customer expectations, and regulator priorities. As January 2026 approaches, companies should review and update their privacy practices, notices, and response procedures accordingly.

Anyone who has purchased a car in the past decade is familiar with the dazzling wave of technology that greets them: giant touchscreens, voice controls, remote start apps. But behind the gleaming infotainment systems and driver-assist cameras, a subtler, more powerful feature has crept into the modern automobile: the ability to observe, record, and report on virtually every aspect of its use and its users.

For years, consumers have worried about smartphone privacy, Alexa eavesdropping, or social media tracking. However, while attention was directed elsewhere, auto manufacturers quietly built an ecosystem that rivals Big Tech in its reach, and, according to a blistering Mozilla Foundation study reported by AP News, completely fails at protecting consumer privacy. Not even one of the 25 major car brands reviewed earned a passing grade.

Why? Because car companies aren’t just making money off vehicle sales anymore. They’re monetizing your data, and the information they scoop up goes well beyond GPS locations or your driving speed. Think:

  • Biological metrics: Weight, heart rate, even facial expressions via sensors and cameras;
  • Personal details: Information from your tethered phone, call logs, text messages, sometimes even biometric or demographic data; and
  • Highly sensitive information: According to some vehicle manufacturers’ own policies, data on “sexual activity” and “intelligence” can be collected.

Unlike a smartphone app, which must explicitly ask for permissions, car makers hide their consent models deep in paperwork signed under pressure in a dealership. Few read these documents and even fewer realize that 84% of cars reviewed by Mozilla share personal data with brokers and service providers, and 76% claim the right to sell your data.

This has transformed cars into ongoing surveillance devices whose output is not for your benefit, but to be shopped around in a shadowy secondary data market, sold to insurers, marketers, and sometimes even government agencies.

What was once private (e.g., how you drive, where you go, who rides with you) can now raise your costs or be used for purposes you never anticipated. The auto industry claims this is about safety or innovation. While crash detection or predictive maintenance require some data, that argument fails when it comes to collecting genetic or intimate personal information.

In the United States, where state-level rules like the California Consumer Privacy Act are only just beginning to probe this problem, most drivers are exposed by default. Federal lawmakers are only now starting to see the domestic, and even national security dangers. Issues range from stalkers misusing connected apps to fears of foreign adversaries accessing U.S. driver data. But for now, self-regulation prevails—and as Mozilla’s findings make clear, it doesn’t work. Consent screens for cars, buried in sales documents and 50-page privacy policies, simply don’t provide real choice or transparency, particularly when a car is used by multiple drivers or passengers.

In an age when car sensors can identify individual drivers or capture pedestrians in external footage, the question of whose privacy is being violated gets murky. Passengers, who never agreed to anything, can have their images, voices, and even biometrics swept up by default. This is uncharted legal territory, with serious implications for consent and wiretapping laws.

The Alliance for Automotive Innovation touts voluntary, non-binding “consumer privacy principles.” In practice, opting out often means disabling mission-critical functions or navigating a maze of settings and customer service calls—hardly a meaningful choice, and often creating a “take it or leave it” arrangement where convenience trumps privacy.

As cars increasingly become platforms for subscriptions and software updates, the industry must realize that trust is everything. Already, lawsuits are hitting data-sharing arrangements. If automakers don’t fix their practices, a harsh regulatory reckoning is inevitable—one that could curtail the very innovation they celebrate.

Today’s car dealerships are not just selling you a car, they’re enrolling you, and everyone who travels with you, into a sprawling, often poorly regulated data marketplace. As drivers and passengers wake up to this reality, demands for transparency, meaningful consent, and real privacy choices will only grow. The road to the future, it turns out, is paved with data. The question is, do we still control the dashboard, or has the car quietly taken the wheel?

On November 18, 2025, the California Privacy Protection Agency (CPPA) announced the formation of a new Data Broker Enforcement Strike Force within its Enforcement Division. The purpose of this new team is to investigate alleged violations of both the California Consumer Privacy Act (CCPA) and the Delete Act’s data broker registration requirements.

According to the CPPA, the Strike Force will:

  • Expand review and oversight of the data broker industry;
  • Support the implementation of the forthcoming Delete Request and Opt-Out Platform; and
  • Ensure that data brokers comply with registration and consumer deletion request rules under the Delete Act and CCPA.

In a key development beginning in January 2026, the Delete Request and Opt-Out Platform will allow consumers to submit a single deletion request that will be sent to all registered data brokers at once, greatly simplifying the process for individuals seeking to exercise their privacy rights.

This announcement follows the 2024 investigative sweep targeting data broker compliance, which, according to the CPPA, has already resulted in a significant number of ongoing enforcement actions. The creation of the Strike Force is intended to provide additional resources and focus for those efforts, ensuring continued and increased scrutiny of data brokers’ compliance with registration, transparency, and consumer-rights obligations.

California is cementing its role as a leader in state-level privacy enforcement, with the Strike Force representing yet another expansion of oversight and enforcement tools. Other states are likely to watch California’s model carefully and may establish similar specialized units or requirements.

Consider these practical steps for your business to stay out of the path of this new enforcement arm:

  • Proactive Compliance: Organizations subject to the CCPA or Delete Act should review their compliance programs, particularly data broker registration status and procedures for responding to consumer deletion requests;
  • Monitoring Developments: Companies operating in multiple states should pay close attention to evolving state-level privacy laws and enforcement trends, as similar requirements may take shape outside of California; and
  • Readiness for the 2026 Platform: Preparation for the Delete Request and Opt-Out Platform should begin now, to ensure technical and process readiness when the platform goes live.

California’s ongoing focus on data broker regulation and consumer privacy rights means that enforcement is likely to intensify, not just through the courts, but via proactive monitoring and investigation. The creation of the Data Broker Enforcement Strike Force underscores the importance of robust privacy compliance in today’s rapidly evolving legal landscape. Now is the time to evaluate your privacy practices and make sure that they meet California’s rigorous and expanding standards.

As platforms like Zoom, Microsoft Teams, and Google Meet have cemented themselves as the backbone of modern collaboration, a quiet revolution has unfolded in our meeting rooms, one where digital notetakers often outnumber the people actually present. Tools like fireflies.ai and Otter.ai promise the magic of effortless, automated meeting transcription. But as reliance on these services grows, a significant legal storm is brewing just out of sight.

What many users don’t realize: these AI notetakers are increasingly finding themselves, and their users, entangled in high-stakes litigation under federal and state wiretapping laws. Otter.ai, in particular, has become a focal point for lawsuits that allege unlawful recording, storage, and use of participants’ conversations, often without the clear, informed consent that state and federal wiretap statutes require.

Here are some recent cases highlighting this very issue:

  • In Brewer v. Otter.ai Inc., No. 5:25-cv-06911 (N.D. Cal. Aug. 15, 2025), plaintiffs allege that Otter’s notetaker automatically joins meetings across Zoom, Teams, and Meet, records conversations—including those of non-users—and uses these discussions to train its machine learning models, all without proper consent or disclosure. Notably, the complaint claims Otter puts the onus on the account holder to obtain permissions, rather than seeking them directly from every participant;
  • Walker v. Otter.ai Inc., No. 5:25-cv-07187 (N.D. Cal. Aug. 26, 2025), raises the stakes even further, alleging that Otter’s software collects and uses “voiceprints” (i.e., unique biometric data) from meeting audio. According to the complaint, Otter does so without notifying participants or obtaining the explicit, written consent required by the Illinois Biometric Information Privacy Act (BIPA);
  • Theus v. Otter.ai Inc., No. 5:25-cv-07462 (N.D. Cal. Sept. 3, 2025), claims that Otter acts as a silent eavesdropper: joining meetings by default, capturing recordings and screenshots, storing data indefinitely, and sending transcripts and promotional emails, all potentially without attendees’ knowledge or consent. The suit also points to Otter’s practice of collecting calendar data and linking personal information to meeting content; and
  • Winston v. Otter.ai Inc., No. 5:25-cv-07712 (N.D. Cal. Sept. 10, 2025), alleges that Otter not only transcribes and stores meeting content but also sends follow-up emails, including partial transcripts and screenshots, to all invitees, even those who never joined the meeting. Critically, the complaint asserts that Otter’s default settings provide no disclosure to non-user participants, unless account holders pay for an expensive “Enterprise” tier.

These cases are now being litigated as a single, consolidated action before Judge Eumi K. Lee in the Northern District of California. While substantive rulings have yet to be issued, the allegations surface a core risk: the use of AI notetakers, without robust notification and consent, could violate federal and state laws, even if a service only transcribes (and does not store) audio.

In California, for example, the California Invasion of Privacy Act (CIPA) is incredibly broad. It prohibits not just recording, but also “reading, attempting to read, or learning” the contents of communications without the consent of all parties (Cal. Penal Code § 631(a)). And, as clarified by the California Supreme Court in Ribas v. Clark, 696 P.2d 637 (1985), even passive “listening in” without explicit disclosure to every participant may violate the law.

If your business uses these tools, here are some key considerations:

  • Don’t assume “AI notetaker” means risk-free convenience. Even silent listeners or entities that simply transcribe, but do not store, audio may fall within the scope of privacy statutes;
  • Consent is critical and complicated. Many state laws, like CIPA and BIPA, demand clear consent from all parties for any type of recording or eavesdropping, whether human or AI-driven;
  • Product defaults matter. Relying on account holders to provide notification, rather than proactive disclosures by the service itself, may not be enough, especially for organizations with participants in “two-party consent” states (a consent-first alternative is sketched after this list); and
  • Watch for legal updates. Substantive rulings in these Otter.ai cases could set important precedents for how AI tools can and cannot participate in digital meetings across the country.
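
Given the default-settings allegations above, it is worth sketching what a consent-first default could look like. The MeetingBot interface below is invented for illustration; actual Zoom, Teams, and Meet bot APIs differ, and this is a design sketch under those assumptions, not a compliance guarantee.

```typescript
// Hypothetical consent-first notetaker: disclose on join and do not
// transcribe until every participant has affirmatively agreed.
interface MeetingBot {
  announce(message: string): void;
  participants(): string[];
  onChatMessage(handler: (from: string, text: string) => void): void;
  startTranscription(): void;
}

function runConsentGate(bot: MeetingBot): void {
  const consented = new Set<string>();

  // Opposite of a "silent eavesdropper" default: the bot identifies
  // itself to all attendees before doing anything else.
  bot.announce(
    "An AI notetaker has joined. Reply CONSENT to allow transcription; " +
      "nothing is recorded until every participant agrees."
  );

  bot.onChatMessage((from, text) => {
    if (text.trim().toUpperCase() === "CONSENT") {
      consented.add(from);
      // All-party consent, as two-party-consent states like California require.
      if (bot.participants().every((p) => consented.has(p))) {
        bot.startTranscription();
      }
    }
  });
}
```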

AI notetakers offer real productivity gains, but they’re ushering in new legal risks that everyone, from end users to corporate IT and compliance leaders, needs to understand. Until the law catches up, the safest course is to disclose, seek consent, and choose tools and configurations that put privacy front and center.

In its 40th anniversary report, Trouble in Toyland 2025, the Public Interest Research Group (PIRG) warns that “[T]oys with artificial intelligence bots or toxics present hidden dangers. Tests show A.I. toys can have disturbing conversations. Other concerns include unsafe or counterfeit toys bought online.”

The report outlines PIRG’s testing of four toys (Curio’s Grok, a stuffed rocket; Folo Toy’s Kumma, a stuffed teddy bear; Miko’s Miko 3, a robot; and Robot MINI, a small plastic robot) that contain AI chatbots and that are marketed to, and interact with, children between the ages of 3 and 12. The report states that:

We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls. We also look at privacy concerns because these toys can record a child’s voice and collect other sensitive data, by methods such as facial recognition scans.

Although the toys that embed AI are marketed for children, they are “largely built on the same large language model technology that powers adult chatbots – systems the companies themselves such as OpenAI don’t currently recommend for children and that have well documented issues with accuracy, inappropriate content generation and unpredictable behavior.” Three of the four toys tested relied in some part on a version of ChatGPT. Although OpenAI has clearly noted that ChatGPT is not for use by children, toy companies are nonetheless embedding the technology into smart toys.

The report details the testing of only three of the four toys; PIRG was unable to test Robot MINI because it could not sustain an internet connection long enough to function. PIRG tested the toys in four categories:

  • Inappropriate content and sensitive topics;
  • Addictive design features that encourage extended engagement and emotional investment;
  • Privacy features; and
  • Parental controls.

The results were alarming across the board: how the toys handled sensitive topics (some did better than others) and religion; their addictive design features and cultivation of engagement and friendship; and how the toys collect, retain, and disclose data about your child.

The conclusion is that “AI toys are more like an experiment on our kids.”

The report points out aspects of AI toys that parents may wish to consider for the safety of their children:

  • At the time of this report, we don’t know what regulation efforts will ultimately lead to. In the meantime, parents need to make decisions about AI toys;
  • With the AI toy market becoming hot, there will be knock-off or faulty devices that do not work as advertised;
  • Parents need to know that the toys can provide dangerous information about using potentially dangerous household items including guns, knives, matches, pills, plastic bags, and bleach and where to find them in the house;
  • AI toys may discuss mature or sexually explicit content with children;
  • AI toys may discuss mature topics with children that parents handle, such as religion;
  • AI toys may be developed with addictive design features or reward systems to increase engagement;
  • Relational AI toys come at a key moment in social development of young children. There’s a lot we don’t know about how AI toys might affect childhood development, especially for young children….Given these potential concerns, it seems prudent to set clear boundaries around how young children engage with AI; and
  • Collection of a child’s data through voice disclosure may “unwittingly disclose a lot of personal information in the course of conversations, not realizing that behind their friend is a company” that is storing the data, sharing it with other companies and increasing the risk of exposure or “ending up in the hands of scammers or other bad actors.”

This holiday season, consider the ramifications of AI toys on your children and the points raised by PIRG.