According to statements by the Cybersecurity and Infrastructure Security Agency (CISA), the People’s Republic of China (PRC)-backed hacking group Salt Typhoon, which attacked telecommunications providers last month, is still inside the providers’ networks, and it is “impossible for us to predict a time frame on when we’ll have full eviction.” One reason is that the hackers infiltrated the telecoms in different ways and “each victim is unique.”

In addition, the incident has not been fully mitigated and the number of victims is “evolving.”

As a result of the massive hacking incident, CISA, the Federal Bureau of Investigation, the National Security Agency, and their partners in Australia, New Zealand, and Canada issued a bulletin on December 4, 2024, stating that the PRC-affiliated hackers “compromised networks of major global telecommunications providers to conduct a broad and significant cyber espionage campaign.” The bulletin “highlight[s] the threat and provide[s] network engineers and defenders of communications infrastructure with best practices to strengthen their visibility and harden their network devices against successful exploitation carried out by PRC-affiliated and other malicious cyber actors.”

The bulletin is a substantive and worthwhile read to help mitigate attacks; it “encourage[s] telecommunications and other critical infrastructure organizations to apply the best practices in this guide.”

On December 4, 2024, law enforcement and cybersecurity agencies (the Agencies) from four of the five members of the Five Eyes intelligence-sharing alliance (the United States, Australia, Canada, and New Zealand) published a joint guide for network engineers, defenders of communications infrastructure, and organizations with on-premises enterprise equipment (the Guide). The Agencies strongly encourage applying the Guide’s best practices to strengthen visibility and harden network devices against exploitation by reported hackers, including those affiliated with the People’s Republic of China (PRC). The fifth member, the United Kingdom, released a statement supporting the joint guide but said it had alternative methods of mitigating cyber risks for its telecom providers.

In November 2024, the Federal Bureau of Investigation (FBI) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued a joint statement updating the public on their investigation into the previously reported PRC-affiliated hacks of multiple telecommunications companies’ networks. The FBI and CISA reported that the hacks appeared to focus on the cell phone activity of individuals involved in political or government work and on copies of law enforcement information requests subject to court orders. These U.S. agencies and members of Congress have underscored the broad and significant nature of the breach; at least one elected official stated that the hacks potentially exposed to the hackers any unencrypted cell phone conversation with someone in America.

In particular, the Guide recommends adopting measures that quickly identify anomalous behavior, vulnerabilities, and threats and that enable response to a cyber incident. It also guides telecoms and businesses to reduce existing vulnerabilities, improve secure configuration habits, and limit potential entry points. The Guide’s recommended best practice attracting the most media attention is ensuring that mobile phone messaging and call traffic are end-to-end encrypted to the maximum extent possible. Without fully end-to-end encrypted messaging and calls, the content of calls and messages always has the potential to be intercepted. Android-to-Android messaging and iPhone-to-iPhone messaging are fully end-to-end encrypted, but messaging between an Android and an iPhone currently is not. Google and Apple recommend using a fully encrypted messaging app to better protect the content of messages from hackers.
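
For readers curious what “end-to-end” means mechanically, here is a minimal sketch using the open-source PyNaCl library. It is purely illustrative and not how any particular messaging app is built; the point is that the message is encrypted on the sender’s device and only the recipient’s private key can decrypt it, so a carrier (or a hacker inside the carrier’s network) relaying the traffic sees only ciphertext.

```python
# Purely illustrative end-to-end encryption sketch using PyNaCl
# (pip install pynacl); not the implementation of any real messaging app.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; the private keys never leave the devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Call me at noon")  # all the network sees

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"Call me at noon"
```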

The FBI and CISA are continuing to investigate the hacks and will update the public as the investigation permits. In the interim, telecom providers and companies are encouraged to adopt the Guide’s best practices and to report any suspicious activity to their local FBI field office or the FBI’s Internet Crime Complaint Center. Cyber incidents may also be reported to CISA.

The Federal Trade Commission (FTC) has been on a mission to communicate that it takes seriously companies’ collection, use, and sale of consumers’ sensitive location data and that it is closely watching these practices.

On December 3, 2024, the FTC announced that it entered into a proposed order with Gravy Analytics and its subsidiary Venntel “for unlawfully selling location data tracking consumers to sensitive sites” and that the order “bans the use or sale of data associated with military sites, churches, labor unions and other sensitive locations.” Other locations noted include “religious organizations, correctional facilities, schools or childcare facilities, services supporting people based on racial and ethnic backgrounds, services sheltering homeless, domestic abuse, refugee or immigrant populations, and military installations.”

According to the press release, the proposed order requires Gravy Analytics and Venntel to implement a sensitive location data program, and they are prohibited from “selling, disclosing, or using sensitive location data in any product or service.” The FTC alleged in a complaint that Gravy Analytics “unfairly sold sensitive characteristics, like health or medical decisions, political activities and religious viewpoints, derived from consumers’ location data.”

The FTC alleged that Gravy Analytics “used geofencing, which creates a virtual geographical boundary, to identify and sell lists of consumers who attended certain events related to medical conditions and places of worship and sold additional lists that associate individual consumers to other sensitive characteristics.”
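
Mechanically, a geofence is little more than a distance test against a boundary point. The sketch below is a simplified illustration with invented coordinates and radius; real geofencing products are more sophisticated, but the core idea is flagging device pings that fall within a set distance of a sensitive location.

```python
# Simplified geofencing illustration: flag location pings that fall
# inside a circular boundary. Coordinates and radius are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # Earth radius ~6,371 km

FENCE_CENTER = (41.7658, -72.6734)  # hypothetical place of worship
FENCE_RADIUS_M = 150

def inside_geofence(lat, lon):
    """True if a device ping falls within the virtual boundary."""
    return haversine_m(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_M

print(inside_geofence(41.7660, -72.6735))  # True: ping is inside the fence
```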

The proposed order requires the companies to delete all historic location data, unless the company obtains consent from consumers, or the data is “deidentified or rendered non-sensitive.”

The FTC noted that “this is [its] fourth action taken this year challenging the sale of sensitive location data, and it’s past time for the industry to get serious about protecting Americans’ privacy.” Message heard.

This week, the Federal Trade Commission (FTC) issued a proposed consent order to settle allegations against IntelliVision Technologies Corp. (IntelliVision) for making false, misleading, and unsubstantiated claims that its artificial intelligence (AI) facial recognition software was free of gender and racial bias.

According to the proposed consent order, IntelliVision must cease misrepresenting its facial recognition software’s accuracy and efficacy, as well as its claims that the software was developed with all different genders, ethnicities, and skin tones in mind. The FTC’s complaint alleges that IntelliVision had no evidence to support its claim that the software had “one of the highest accuracy rates on the market and performs with zero gender or racial bias.” Additionally, the complaint alleges that IntelliVision claimed to have trained its AI-powered software on millions of images when, instead, it trained the software on the facial images of roughly 100,000 unique individuals and then created variations of those images.

Lastly, the FTC alleges that IntelliVision did not have evidence to substantiate its advertising claim that the software’s anti-spoofing technology does not allow the system to be “tricked” by a photo or video image.

The Director of the FTC’s Bureau of Consumer Protection, Samuel Levine, warned businesses, “Companies shouldn’t be touting bias-free artificial intelligence systems unless they can back those claims up. Those who develop and use AI systems are not exempt from basic deceptive advertising principles.”

This is only the second case in which the FTC has alleged that AI facial recognition technology was misrepresented. In December 2023, the FTC entered into a consent order with Rite Aid over its failure to implement reasonable procedures around its use of AI facial recognition in its stores and to prevent harm to consumers.

The Consumer Financial Protection Bureau (CFPB) announced this week that it intends to increase scrutiny of data brokers to better protect service members, law enforcement officials, domestic violence victims, senior citizens, and other populations from the surveillance, doxing, fraud, and threats of violence that arise when cyber threat actors purchase personal and financial information from data brokers through legitimate means. The CFPB proposed updated regulations that would subject data brokers to the Fair Credit Reporting Act’s (FCRA) accuracy requirements and restrict the sale of certain data, such as FICO scores and “credit header” information (Social Security numbers, addresses, and telephone numbers), to purposes allowed under the FCRA, i.e., loan application checks and fraud prevention. Currently, the FCRA applies to consumer credit reporting agencies (e.g., Equifax, Experian, and TransUnion), but the CFPB seeks to broaden its reach to data brokers as well.

The proposed regulations would also require data brokers that sell consumer data to obtain consent through explicit disclosures to consumers. Further, the proposed regulations would explicitly prohibit the use of covered data for marketing purposes.

In the press release, CFPB Director Rohit Chopra said, “By selling our most sensitive personal data without our knowledge or consent, data brokers can profit by enabling scamming, stalking, and spying. The CFPB’s proposed rule will curtail these practices that threaten our personal safety and undermine America’s national security.”

We’ll watch this rule closely to see how far it gets. The final decision will be up to the new head of the CFPB, President-elect Donald Trump’s pick for that role. While the incoming administration is expected to loosen regulatory restraints on businesses, the proposed rule is supported by law enforcement, national security officials, and lawmakers from both parties, which may improve its chances of survival. Public comments close on March 3, 2025.

*This post was authored by Daniel Lass, law clerk at Robinson+Cole. Daniel is not admitted to practice law.

Launched in July 2024, Death Clock is an application that uses artificial intelligence (AI) to predict when its users will die. Death Clock trained its AI model using over 1,200 life expectancy studies. It then uses the answers from a questionnaire about the user’s physical health, like diet and exercise, to calculate each user’s date of death. While users of the free version will only receive this date, users of the paid version will receive lifestyle recommendations to help them live longer.

Although the AI model was trained on a large amount of data, the data collected from each user is currently limited: the questionnaire is brief and does not delve deeply into family history or lifestyle habits. Collecting that additional data is likely necessary for more accurate predictions. Better accuracy matters for the economic calculations of organizations such as governments and insurance companies, and for individuals; for example, a person could better determine whether they have saved enough for retirement.

However, increased data collection carries risks, most notably to user privacy and of discriminatory outcomes. Collecting more data for analysis and inclusion in the model makes a damaging leak more likely if proper security and storage procedures are not followed. Additionally, implicit biases in the model may produce harmful outcomes (e.g., higher insurance premiums) for certain consumer groups. It is therefore crucial that models be developed with a diverse group of stakeholders and used in a fair, unbiased, and privacy-conscious way.

Many people do not understand how their geolocation data can be collected and used, or how massive the amount of precise location data collected from our devices really is.

The Federal Trade Commission (FTC) recently filed a complaint against Mobilewalla, Inc., alleging that it violated Section 5 of the FTC Act by selling consumers’ sensitive location information and targeting consumers based on sensitive characteristics without their consent. It further alleged that Mobilewalla engaged in an unfair practice by collecting consumer information from real-time bidding (RTB) exchanges and retaining consumer location information indefinitely.

According to the complaint, Mobilewalla is a data broker “that collects and aggregates huge quantities of consumer information, including precise location information tied to individual consumers that reveals sensitive information about those consumers. Mobilewalla touts its ability, among other things, to ‘create a comprehensive, cross channel view of the customer, understanding online and offline behavior.'” Mobilewalla collects this data from data suppliers, and consumers have no idea that their location information is being collected.

In addition, Mobilewalla has “collected large swaths of consumers’ personal information, including location data from multiple sources such as real-time bidding exchanges and data brokers. These sources may themselves obtain consumer data from other data suppliers, the mobile or online advertising marketplace, or mobile applications.” Most of the data is collected from RTB exchanges; I had never even heard of an RTB exchange until I read the complaint. The complaint explains:

            The primary purpose of RTB exchanges is to enable instantaneous delivery of advertisements and other content to consumers’ mobile devices, such as when scrolling through a webpage or using an app. An app or website implements a software development kit, cookie, or similar technology that collects the consumer’s personal information from their device and passes it along to the RTB exchange in the form of a bid request. In an auction that occurs in a fraction of a second and without consumers’ involvement, advertisers participating in the RTB exchange bid to place advertisements based on the consumer information contained in the bid request. Advertisers can see and collect the consumer information contained in the bid request (even when they do not have a winning bid and successfully place the advertisement).
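
To make that concrete, here is a simplified, hypothetical sketch of the data flow. The field names are invented (real bid requests follow specifications such as OpenRTB and carry far more detail), but it shows why even a losing bidder ends up holding the device identifier and location from the request.

```python
# Hypothetical sketch of an RTB auction; field names are invented.
bid_request = {
    "maid": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile ad identifier
    "lat": 41.7658,
    "lon": -72.6734,                 # precise geolocation from the device
    "app": "example-weather-app",
}

bids = {"advertiser_a": 0.42, "advertiser_b": 0.55}  # bids in the auction
winner = max(bids, key=bids.get)                     # advertiser_b wins

# Every participant received the bid request before the auction resolved,
# so losing bidders can retain the identifier and location it contained.
retained = {advertiser: [bid_request] for advertiser in bids}
```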

The FTC alleges that Mobilewalla collected and retained information contained in bid requests on RTB exchanges even when it did not win the bid, including the consumer’s mobile advertising identifier (MAID) and, if location-based services were turned on, precise geolocation information. Mobilewalla paired this information with other purchased consumer data (e.g., telephone numbers) to build profiles of individual consumers, and it sells access to this data, including raw location data that is not anonymized. The FTC alleges that MAIDs can be used “to identify a mobile device’s user or owner.”
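
The pairing step the FTC describes can be pictured as a join on the shared device identifier. The sketch below uses invented data; it shows how location pings keyed to a MAID stop being anonymous once they are matched to a purchased record containing a phone number.

```python
# Invented data illustrating how a MAID links location pings to identity.
location_pings = [
    {"maid": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "lat": 41.7658, "lon": -72.6734, "ts": "2024-06-01T09:14:00Z"},
]
purchased = {  # e.g., telephone numbers bought from another data supplier
    "38400000-8cf0-11bd-b23e-10b96e40000d": {"phone": "+1-555-0100"},
}

profiles = {}
for ping in location_pings:
    profile = profiles.setdefault(ping["maid"], {"pings": []})
    profile["pings"].append((ping["lat"], ping["lon"], ping["ts"]))
    profile.update(purchased.get(ping["maid"], {}))

# profiles now ties a phone number to a movement history.
```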

The FTC’s concern about this practice is that “Mobilewalla’s location data associated with MAIDs can be used to track consumers to sensitive locations, including medical facilities, places of religious worship, places that offer services to the LGBTQ+ community, domestic abuse shelters, and welfare and homeless shelters. It can also be used to infer sensitive information about those consumers.” In addition, “Mobilewalla’s collection and sale of consumers’ precise geolocation data to its clients to identify and target consumers based on sensitive characteristics causes or is likely to cause substantial injury in the form of stigma, discrimination, physical violence, emotional distress, and other harms.”

Similarly, the FTC recently issued a decision and consent order against Gravy Analytics, Inc. and Venntel, Inc. following an investigation into their collection and sale of precise consumer location data and other sensitive data. Take a look at the complaint if you want to learn more about how your precise location and other data can be collected when the location-based services feature is enabled on your device, and consider keeping it on only when you are using an app that requires it.

The Town of Enfield, New Hampshire, appears to have been the victim of a man-in-the-middle scheme involving the transfer of $742,000 to a fraudulent bank account. The town is constructing a new $7.2 million public safety building. An employee was tricked into sending the payment to a fraudulent bank account instead of the construction company building the facility.

According to a town spokesman, “Basically, a staff member was tricked into changing a bank account number for one of our vendors and then the next payment to that vendor was directed to the fraudulent bank account.”

This is a classic man-in-the-middle scheme, in which a threat actor intercepts a transaction and diverts funds intended for another party. Often, the threat actor impersonates the vendor using a similar email address, tells the victim that the vendor’s bank account has changed, and provides realistic-looking documents that appear to be issued by a legitimate bank. Once the banking instructions are changed, the victim believes the vendor is being paid, but the funds are diverted to the threat actor’s account. It is distressing that banks are opening legitimate accounts that threat actors use to receive these transfers. The threat actor will drain the account, and unless you notify law enforcement (the FBI or Secret Service), chances are the funds will be gone by the time you figure out what happened.

Luckily, here, the town notified the bank where the fraudulent account had been opened, and it appears that some of the funds were frozen and may be recoverable. The hard lesson is to never trust anyone requesting funds over email, especially when they are changing wiring or bank instructions or seeking an urgent payment. These are all red flags, and authenticating instructions through other channels, supplemented by automated checks like the one sketched below, will help thwart such attacks.
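
One lightweight technical control is an automated check for lookalike sender domains. The sketch below is illustrative only, with an invented vendor domain and threshold; it flags an email from a domain that nearly, but not exactly, matches a known vendor, so the payment change can be held for manual, out-of-band verification.

```python
# Illustrative lookalike-domain check; vendor domain and threshold invented.
from difflib import SequenceMatcher

KNOWN_VENDOR_DOMAINS = {"acmeconstruction.com"}  # hypothetical vendor

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that nearly match a known vendor but are not it."""
    for known in KNOWN_VENDOR_DOMAINS:
        similarity = SequenceMatcher(None, sender_domain, known).ratio()
        if sender_domain != known and similarity >= threshold:
            return True
    return False

# A bank-instruction change from this address should be verified by phone:
print(is_lookalike("acme-construction.com"))  # True
```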

The U.S.-China Economic and Security Review Commission released its annual report to Congress this month. The 793-page report responds to the Commission’s mandate to “monitor, investigate, and report to Congress on the national security implications of the bilateral trade and economic relationship between the United States and the People’s Republic of China.” The report is the culmination of a “broad and bipartisan consensus…with all 12 members voting unanimously to approve and submit it to Congress.”

Although the report is detailed and fascinating, one conclusion is particularly relevant to this post: China has a clear advantage over the United States “at each stage of the battery supply chain, ushering in rapid global market share increases for Chinese EV and battery makers.” As a result, “China’s near monopoly on battery manufacturing creates dependencies for U.S. auto manufacturers reliant on upstream suppliers as well as potential latent threats to U.S. critical infrastructure from the ongoing installation of Chinese-made battery energy storage systems throughout U.S. electrical grids and backup systems for industrial users.”

In other words, China’s dominance in manufacturing batteries used in electric vehicles and in energy storage, including storage of renewable energy, poses a cybersecurity risk to the United States. To combat the risk, the Commission recommends:

“To protect U.S. economic and national security interests, Congress [should] consider legislation to restrict or ban the importation of certain technologies and services controlled by Chinese entities, including:

  • Autonomous humanoid robots with advanced capabilities of (i) dexterity, (ii) locomotion, and (iii) intelligence; and
  • Energy infrastructure products that involve remote servicing, maintenance, or monitoring capabilities, such as load balancing and other batteries supporting the electrical grid, batteries used as backup systems for industrial facilities and/or critical infrastructure, and transformers and associated equipment.”

Hopefully, Congress will take these threats and recommendations seriously as U.S. consumers buy electric vehicles and expand uses for renewable energy.

Last week, the California Privacy Protection Agency (CPPA) announced settlements with two data brokers, Growbots, Inc. and UpLead LLC, for failure to register and pay the fees required of a data broker under the California Delete Act.

Growbots is a software company that provides an outbound sales platform to help users find, engage with, and connect with potential customers. UpLead is a business-to-business data provider that offers a platform for businesses to access and find accurate contact information for potential customers.

The Delete Act requires data brokers to register with the CPPA and pay an annual fee to fund the California Data Broker Registry. Data brokers face fines of $200 per day for failing to register by the deadline. Each company has agreed to pay a fine: Growbots will pay $35,400 and UpLead will pay $34,400. Each company has also agreed to pay the CPPA’s legal costs incurred between February and July 2024.
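
At the statutory $200 per day, those amounts imply roughly 177 days of non-registration for Growbots ($35,400 ÷ $200) and 172 days for UpLead ($34,400 ÷ $200).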

The registration fees will be used to develop the Data Broker Requests and Opt-Out Platform (DROP), set to be the first deletion mechanism that allows consumers to request deletion from all data brokers with a single web form. The CPPA anticipates that DROP will be available on its website in 2026. These settlements, along with the newly adopted regulations for data brokers under the Delete Act, indicate that the CPPA will continue to focus its efforts on the privacy practices (and privacy pitfalls) of data brokers.