Former President Joe Biden issued an Executive Order (EO) entitled “Strengthening and Promoting Innovation in the Nation’s Cybersecurity” on January 16, 2025. The EO is designed to:

  • Remove Barriers to Threat Information Sharing Between Government and the Private Sector
  • Modernize and Implement Stronger Cybersecurity Standards in the Federal Government
  • Improve Software Supply Chain Security
  • Establish a Cyber Safety Review Board
  • Create a Standardized Playbook for Responding to Cybersecurity Vulnerabilities and Incidents
  • Improve Investigative and Remediation Capabilities

According to the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA), the EO – which is not posted to the new White House website – aims to “improve accountability for software and cloud service providers, strengthen the security of Federal communications and identity management systems, and promote innovative developments and the use of emerging technologies for cybersecurity.”

The EO charges NIST with:

  • Operationalizing Transparency and Security in Third-Party Software Supply Chains
  • Securing Federal Communications
  • Solutions to Combat Cybercrime and Fraud
  • Promoting Security with and in Artificial Intelligence
  • Aligning Policy to Practice

NIST is to complete these tasks between March and November 2025. 

CISA’s role in implementing the EO includes:

  • Removing Barriers to Threat Information Sharing Between Government and the Private Sector
  • Modernizing and Implementing Stronger Cybersecurity Standards across the Federal Government
  • Improving Software Supply Chain Security
  • Establishing a Cyber Safety Review Board
  • Creating a Standardized Playbook for Responding to Cybersecurity Vulnerabilities and Incidents
  • Improving Detection of Cybersecurity Incidents on Federal Government Networks
  • Improving Investigative and Remediation Capabilities

These goals are all needed and admirable. We will see how this develops throughout the year.

There was bipartisan support for banning TikTok – essentially spyware presenting a national security threat from the People’s Republic of China (PRC) – in the United States, as India has already done, and the Supreme Court upheld the law as constitutional, requiring the app to go dark. Despite all of this, President Trump signed an Executive Order (EO) on his first day in office giving TikTok 75 days to “pursue a resolution.”

TikTok already had several months to “pursue a resolution,” which was to divest itself of its PRC ownership so that the PRC could not collect and use Americans’ sensitive data. TikTok does not want to pursue this resolution because it wants to keep collecting, using, manipulating, and spying on U.S. citizens.

This is a disappointing development, and hopefully, Trump, who originally supported the ban, will come to his senses to protect national security and keep the PRC from spying on unwary citizens.

Singapore-based Chinese video game developer Cognosphere, dba HoYoverse, known for “Genshin Impact,” a role-playing game involving collectible characters with unique fighting skills, has agreed to pay $20 million to settle Federal Trade Commission (FTC) allegations that it violated the Children’s Online Privacy Protection Act (COPPA) and deceived players about the cost of winning certain prizes.

Introduced in the U.S. in 2020, Genshin Impact was one of the first Chinese video games to go viral in this country.

The FTC alleged that the company collected children’s personal information without parental consent as required by COPPA. The FTC’s complaint stated that the company “shares device-related persistent identifier information and records of the player’s engagement, progress, and spending within the game with third-party analytics and advertising providers.”

Additionally, the game’s players pay real money for virtual currency for the chance to win virtual prizes; however, the opportunities to win prizes are confusing and complicated, involving multiple types of in-game virtual currency with different exchange rates. The purchasing process obscures the reality that consumers must spend large amounts of real money to obtain 5-star heroes.

As a result of the settlement, the company will introduce new age-gate and parental consent protections for children and young teens and increase its in-game disclosures related to its virtual currency and rewards for players in the U.S. It will also allow users to directly purchase content from the game’s loot boxes using real money and will cease misrepresenting the odds of winning loot box prizes. The company must also restrict children under the age of 16 from purchasing loot boxes without parental consent.

Cognosphere, the distributor of Genshin Impact, released a statement in response to the settlement that can be viewed here.

Continuing its focus on the collection and use of consumers’ precise geolocation, on January 16, 2025, the Federal Trade Commission (FTC) settled with General Motors (GM) over allegations that it collected, used, and sold drivers’ precise geolocation and driving behavior data from millions of vehicles—data that can be used to set insurance rates—without adequately notifying consumers and obtaining their affirmative consent.

The FTC accepted the proposed order for public comment, which will be open for 30 days.

The complaint against GM alleged that it “used a misleading enrollment process to get consumers to sign up for its OnStar connected vehicle service and the OnStar Smart Driver feature. GM failed to clearly disclose that it collected consumers’ precise geolocation and driving behavior data and sold it to third parties, including consumer reporting agencies, without consumers’ consent.” According to the complaint, GM collected driver data through OnStar as often as every three seconds. As in the previous four cases in 2024, the FTC alleges that “tracking and collecting geolocation data can be extremely [privacy-invasive], revealing some of the most intimate details about a person’s life, such as whether they visited a hospital or other medical facility, and expose their daily routines.”

The proposed order, if accepted, “prohibits GM and OnStar from misrepresenting information about how they collect, use and share consumers’ location and driver behavior data.” In addition, the order:

  • Prohibits them from disclosing consumers’ geolocation and driver behavior data to consumer reporting agencies for five years;
  • Requires them to obtain affirmative express consent from consumers before collecting connected vehicle data;
  • Allows consumers to obtain and delete their data; and
  • Allows consumers to limit data collection from their vehicles.

The Federal Trade Commission (FTC) issued a proposed settlement order against GoDaddy alleging that it “has failed to implement reasonable and appropriate security measures to protect and monitor its website-hosting environments for security threats, and misled customers about the extent of its data security protections on its website hosting services.”

The proposed settlement order requires GoDaddy “to establish a comprehensive data security program that is similar to those in other FTC cases, including the recent settlement with Marriott International.”

The complaint alleged that GoDaddy had unreasonable security measures, including “failing to inventory and manage assets and software updates; assess risks to its shared hosting services; adequately log and monitor security-related events in the hosting environment; and segment its shared hosting from less-secure environments.” These data security failures caused several “major security breaches between 2019 and 2022.”

The order prohibits GoDaddy from misrepresenting its security practices, and requires it to establish and implement a comprehensive security program to be reviewed by an independent third-party assessor.

On January 16, 2025, the Federal Trade Commission (FTC) issued a press release stating, “The updated [Children’s Online Privacy Protection Act (COPPA)] rule strengthens key protections for kids’ privacy online. By requiring parents to opt [into] targeted advertising practices, this final rule prohibits platforms and service providers from sharing and monetizing children’s data without active permission. The FTC is using all its tools to keep kids safe online.”

These changes are the first major updates to the rule since 2013, when it was last amended. COPPA protects the online privacy of children under the age of 13 and imposes specific requirements on operators of websites or online services that are directed to children or that knowingly collect personal information from children.

COPPA requires operators to obtain verifiable parental consent before collecting, using, or disclosing personal information from children under 13 and to provide clear and comprehensive notice of their information practices regarding children, including a link to their children’s privacy policy on their website or online service. The rule also requires operators to take reasonable steps to disclose children’s personal information only to third parties capable of maintaining its confidentiality, security, and integrity. COPPA also mandates that operators retain collected children’s personal information for only as long as necessary to fulfill the purpose of its collection and delete such information using reasonable measures to protect against unauthorized access or use.

COPPA also includes provisions related to the FTC’s ability to approve self-regulatory guidelines (known as Safe Harbor Programs), which allow operators to use alternative methods for obtaining parental consent, provided they meet the requirements of COPPA.

The amendments to the COPPA rule include:

  • An Expanded Definition of Personal Information: Now includes biometric and government-issued identifiers.
  • A New Definition of Mixed Audience Website or Online Service: These are websites or online services that are directed to children but do not target children as their primary audience and do not collect personal information from any visitor before determining whether the visitor is a child.
  • Required Parental Consent for Data Disclosure: Operators must now obtain separate verifiable parental consent for disclosing a child’s personal information to third parties, such as for targeted advertising purposes.
  • New Methods for Verifiable Parental Consent: Expands the permissible methods, including knowledge-based authentications, submitting government-issued photographic identification, and using text messages with additional safeguards.
  • A Required Information Security Program: Operators are required to establish and maintain a written information security program appropriate to the sensitivity of the personal information collected from children. This program must be regularly tested and monitored.
  • Strengthened Data Retention Limitations: Operators must retain children’s personal information for only as long as necessary to fulfill a specific purpose and must maintain a publicly available written data retention policy.
  • More Accountability for Safe Harbor Programs: Comprehensive reviews of the operator’s privacy and security policies are now required.

The FTC did not adopt proposed amendments to the rule related to limitations on using push notifications to children without parental consent or requirements for educational technology in schools. The changes to the rule will take effect 60 days after publication in the Federal Register (which has not yet occurred or been scheduled). Organizations subject to the final rule have one year to comply with the changes; however, compliance is required earlier in relation to COPPA Safe Harbor programs. To review the amendments, click here.

Well, it was good while it lasted. Former President Biden issued an Executive Order (EO) in October 2023 designed to start the discussion and development of guardrails around using artificial intelligence (AI) in the United States. It was a valiant and important effort to try to get ahead of the known risks surrounding the use of AI.

President Trump gutted the AI EO on his first day in office in one fell swoop without providing any meaningful replacement. Some of his supporters even questioned the move. We’ll continue monitoring this situation to see how it develops.

This week, I received a fake text message (a smish) saying my E-ZPass account was overdue and that I urgently needed to pay it. That’s a new one and, apparently, quite effective. Luckily, I knew it was a scam, but others were victimized.

According to the website Krebs on Security, security researchers “say the surge in SMS spam coincides with new features added to a popular commercial phishing kit sold in China that makes it simple to set up convincing lures spoofing toll road operators in multiple U.S. states.”

Residents in multiple states have been targeted, to the point where the Massachusetts Department of Transportation issued a warning about the smishing scheme spoofing its EZDriveMA electronic tolling program. Residents of California, Colorado, Connecticut, Florida, Minnesota, Rhode Island, Texas, and Washington have also been targeted.

According to a reported conversation with a security researcher at SecAlliance, these smishing attacks increased after the New Year, when “at least one Chinese cybercriminal group known for selling sophisticated SMS phishing kits began offering new phishing pages designed to spoof toll operators in various U.S. states.” The purpose is to get consumers’ credit card information.

It has been such a problem that the Federal Trade Commission issued a consumer alert about it last week. If you receive a smish purporting to be from a toll road operator, delete it. Do not click the link or visit the site it directs you to.

In August 2024, the Department of Justice (DOJ) and eight states filed a civil antitrust lawsuit against RealPage Inc., alleging that its software was used to unlawfully decrease competition among landlords and maximize profits. Last week, the DOJ, now joined by ten states, filed an amended complaint alleging that landlords Greystar Real Estate Partners LLC, Blackstone’s LivCor LLC, Camden Property Trust, Cushman & Wakefield Inc., Pinnacle Property Management Services LLC, Willow Bridge Property Company LLC, and Cortland Management participated in the price-fixing scheme. These companies operate over 1.3 million residential units across 43 states and the District of Columbia.

According to the amended complaint, these landlords shared sensitive information through RealPage’s pricing algorithm to decrease competition and increase corporate profits. Jennifer Bowcock, RealPage’s Senior Vice President of Communications, rebutted the allegations, arguing that issues with housing affordability stem from the limited supply of residential units and that the government should “stop scapegoating RealPage – and now [its] customers – for the housing affordability problems.”

The DOJ also announced a proposed consent decree with Cortland Management, under which the claims against Cortland would be resolved in exchange for its agreement to cooperate with the DOJ’s ongoing investigation of the remaining defendants. Under the terms of the proposed agreement, Cortland would be barred from using a competitor’s sensitive data to train a pricing model, pricing units with the assistance of an algorithm without court supervision, and soliciting or disclosing sensitive information with other companies to set rental prices. A spokesman for Cortland indicated that it is pleased with the outcome and is looking forward to “improv[ing the] resident experience” in 2025. Under the Tunney Act, P.L. 93-528, the proposed consent decree will be published in the Federal Register for a 60-day comment period, after which the court can enter final judgment. The case is United States v. RealPage Inc., dkt. no. 1:24-cv-00710 (LCB) (M.D.N.C. filed Aug. 23, 2024).

The California Attorney General published two legal advisories this week, both summarized below.

These advisories seek to remind businesses of consumer rights under the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, CCPA), and to advise developers who create, sell, or use artificial intelligence (AI) about their obligations under the CCPA.

Attorney General Rob Bonta said, “California is an economic powerhouse built in large part on technological innovation. And right alongside that economic might is a strong commitment to economic justice, workers’ rights, and competitive markets. We’re not successful in spite of that commitment — we’re successful because of it [. . .] AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.” 

Advisory No. 1: Application of Existing California Laws to Artificial Intelligence

This advisory:

  • Provides an overview of existing California laws (i.e., consumer protection, civil rights, competition, data protection laws, and election misinformation laws) that may apply to companies that develop, sell, or use AI;
  • Summarizes the new California AI laws that went into effect on January 1, 2025, including:
      • Disclosure Requirements for Businesses
      • Unauthorized Use of Likeness
      • Use of AI in Election and Campaign Materials
      • Prohibition and Reporting of Exploitative Uses of AI

Advisory No. 2: Application of Existing California Law to Artificial Intelligence in Healthcare 

AI tools are used for tasks such as appointment scheduling, medical risk assessment, and medical diagnosis and treatment decisions. This advisory:

  • Provides guidance under existing California law (i.e., consumer protection, civil rights, data privacy, and professional licensing laws) for healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use AI and other automated decision systems;
  • Reminds such entities that AI carries harmful risks and that all AI systems must be tested, validated, and audited for safe, ethical, and lawful use;
  • Informs such entities that they must be transparent about using patient data to train AI systems and alert patients on how they are using AI to make decisions affecting their health and/or care.

This is yet another example of how issues related to the safe and ethical use of AI will likely be at the forefront for many regulators across many industries.