On May 9, 2024, Governor Wes Moore signed the Maryland Online Data Privacy Act (MODPA) into law. MODPA applies to any person who conducts business in Maryland or provides products or services targeted to Maryland residents and, during the preceding calendar year:

  1. Controlled or processed the personal data of at least 35,000 consumers (excluding personal data solely for the purpose of completing a payment transaction); or
  2. Controlled or processed the personal data of at least 10,000 consumers and derived more than 20 percent of its gross revenue from the sale of personal data.

MODPA does not apply to financial institutions subject to Gramm-Leach-Bliley or registered national securities associations. It also contains exemptions for entities governed by HIPAA.

Under MODPA, consumers have the right to access their data, correct inaccuracies, request deletion, obtain a list of those who have received their personal data, and opt out of processing for targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions. The controller must provide this information to the consumer free of charge once during any 12-month period; if requests are excessive, the controller may charge a reasonable fee or decline to honor the request.

MODPA prohibits a controller – defined as “a person who determines the purpose and means of processing personal data” – from selling “sensitive data.” MODPA defines “sensitive data” to include genetic or biometric data, children’s personal data, and precise geolocation data. “Sensitive data” also means personal data revealing a consumer’s:

  • Racial or ethnic origin
  • Religious beliefs
  • Consumer health data
  • Sex life
  • Sexual orientation
  • Status as transgender or nonbinary
  • National origin, or
  • Citizenship or immigration status

A MODPA controller also may not process “personal data” in ways that violate discrimination laws. Under MODPA, “personal data” is any information that is linked or can be reasonably linked to an identified or identifiable consumer, excluding de-identified data and “publicly available information.” However, MODPA contains an exception if the processing is for (1) self-testing to prevent or mitigate unlawful discrimination; (2) diversifying an applicant, participant, or customer pool; or (3) a private club or group not open to the public.

MODPA has a data minimization requirement as well. Controllers must limit the collection of personal data to that which is reasonably necessary and proportionate to provide or maintain the specific product or service the consumer requested.

A violation of MODPA constitutes an unfair, abusive, or deceptive trade practice (UDAP), which the Maryland Attorney General can prosecute. Violations carry a civil penalty of up to $10,000 each and up to $25,000 for each repeated violation. Additionally, a person committing a UDAP violation is guilty of a misdemeanor, punishable by a fine of up to $1,000, imprisonment of up to one year, or both. MODPA does not allow a consumer to pursue a UDAP claim for a MODPA violation, although it also does not prevent a consumer from pursuing other legal remedies. MODPA will take effect October 1, 2025, with enforcement beginning April 1, 2026.

Anthropic has achieved a major milestone by identifying how millions of concepts are represented within its large language model Claude 3 Sonnet, using a process somewhat akin to a CAT scan. This is the first time researchers have gained a detailed look inside a modern, production-grade AI system.

Previous attempts to understand model representations were limited to finding patterns of neuron activations corresponding to basic concepts like text formats or programming syntax. However, Anthropic has now uncovered high-level abstract features in Claude spanning a vast range of concepts – from cities and people to scientific fields, programming elements, and even abstract ideas like gender bias, secrets, and inner ethical conflicts.

Remarkably, they can even manipulate these features to change how the model behaves and force certain types of hallucinations. Amplifying the “Golden Gate Bridge” feature caused Claude to believe it was the Golden Gate Bridge when asked about its physical form (Claude normally responds with a variation on, “I have no form, I am an AI model.”) Intensifying the “scam email” feature overcame Claude’s training to avoid harmful outputs, making it suggest formats for scam emails.
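
At a technical level, this kind of manipulation resembles what interpretability researchers call activation (or feature) steering: adding a scaled direction vector associated with a learned feature to the model’s internal activations during generation. The toy PyTorch sketch below is purely illustrative and is not Anthropic’s code; the two-layer model, the feature_direction vector, and the steering_strength value are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Toy two-layer network standing in for one block of a much larger model.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))

# Hypothetical unit vector representing a learned feature
# (e.g., a "Golden Gate Bridge" direction identified by a sparse autoencoder).
feature_direction = torch.randn(16)
feature_direction /= feature_direction.norm()

steering_strength = 5.0  # amplify the feature; a negative value would suppress it

def steer(module, inputs, output):
    # Add the scaled feature direction to this layer's activations,
    # biasing everything downstream toward the injected concept.
    return output + steering_strength * feature_direction

# Register the hook on the hidden layer so every forward pass is "steered."
handle = model[0].register_forward_hook(steer)

x = torch.randn(1, 16)
print(model(x))  # output now reflects the amplified feature
handle.remove()  # remove the hook to restore normal behavior
```

In Anthropic’s research, the feature directions were learned by a sparse autoencoder trained on the model’s internal activations rather than chosen by hand, but the basic idea of amplifying or suppressing a feature’s activation is the same.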

Other features corresponded to malicious behavior or content with the potential for misuse, including code backdoors and bioweapons, as well as problematic behaviors like bias, manipulation, and deception. Normally, these features activate when a user asks Claude to “think” about one of these concepts, and Claude’s ethical guardrails keep it from drawing on them when generating content. The steering experiments confirm that these features don’t merely reflect how the model parses user input; they directly shape the model’s responses. That, in turn, points to exactly the kind of malicious capability that hackers and other unauthorized users will undoubtedly try to exploit in pirated models.

While much work remains to fully map these large models, Anthropic’s breakthrough seems like an extremely promising step forward in the burgeoning field of AI auditing. And, given that researchers were able to directly tweak the features to influence Claude’s output, this research may also open the door to the sort of under-the-hood tinkering that has eluded generative AI developers for years. Of course, it may also open the door to direct, feature-level regulation as well as creative plaintiffs’ arguments as the standard of care for AI developers takes shape.

Read the full blog post from Anthropic here.

To add to TikTok’s legal woes in the U.S., Nebraska Attorney General Mike Hilgers (AG) filed suit against TikTok on May 22, 2024, alleging that TikTok violated Nebraska’s consumer protection laws and engaged in deceptive trade practices by “designing and operating a platform that is addictive and harmful to teens and children.” In addition, the AG alleges that “TikTok’s features fail to protect kids and regularly expose underage users to age-inappropriate and otherwise harmful content.”

The AG alleges that the use of TikTok by children in Nebraska “has fueled a youth mental health crisis in Nebraska.” The AG further alleges that TikTok is “addictive, that compulsive use is rampant, and that its purported safety features, such as age verification and parental controls, are grossly ineffective.”

The AG filed suit after his office conducted an investigation into TikTok’s practices, which included creating TikTok accounts of fictitious minors. According to the AG, TikTok’s algorithm directed the fictitious minors to inappropriate content “within minutes” of opening an account.

TikTok denies the allegations. We will continue to follow and update our readers on developments in the TikTok debate.

On May 10, 2024, CISA, along with the FBI, HHS, and MS-ISAC, issued a joint Cybersecurity Advisory relating to Black Basta ransomware affiliates “that have targeted over 500 private industry and critical infrastructure entities, including healthcare organizations, in North America, Europe, and Australia.”

The Black Basta Advisory provides information on how the threat actors gain initial access to victims’ systems, primarily through spearphishing. In addition, “starting in February 2024, Black Basta affiliates began exploiting ConnectWise vulnerability (CVE-2024-1709). In some instances, affiliates have been observed abusing valid credentials.”

The affiliates use various tools for lateral movement, including Remote Desktop Protocol, Splashtop, ScreenConnect, and Cobalt Strike. They also use credential-scraping tools like Mimikatz to escalate privileges and have exploited previously disclosed vulnerabilities for local and Windows Active Directory privilege escalation.

The Advisory lists indicators of compromise, file indicators, and suspected domains used by Black Basta, which IT professionals can compare against company systems (a simple illustration of that kind of comparison appears below). Recommended mitigations include keeping systems patched, multifactor authentication (MFA), training, securing remote access software, and maintaining backups, among other techniques. This Advisory is an important read for IT professionals in all industries.
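
For readers who want to operationalize that comparison, the snippet below shows, at a very basic level, how a list of IOC domains could be checked against a log file. It is only a sketch: the file names iocs.txt and proxy.log are placeholders, and real advisories publish IOCs in structured formats (such as CSV or STIX) that warrant proper parsing and matching rather than simple substring checks.

```python
from pathlib import Path

# Minimal sketch: flag log lines that mention any domain from an IOC list.
# "iocs.txt" (one domain per line) and "proxy.log" are placeholder file names.
iocs = {
    line.strip().lower()
    for line in Path("iocs.txt").read_text().splitlines()
    if line.strip()
}

hits = []
for lineno, entry in enumerate(Path("proxy.log").read_text().splitlines(), start=1):
    lowered = entry.lower()
    for domain in iocs:
        if domain in lowered:
            hits.append((lineno, domain, entry))

for lineno, domain, entry in hits:
    print(f"line {lineno}: matched IOC {domain!r}: {entry}")
```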

In a significant setback for Elon Musk’s X Corp (formerly Twitter), a U.S. District Judge has dismissed the company’s lawsuit against an Israeli data-scraping firm, Bright Data Ltd. We previously reported on X’s recent spree of lawsuits against data-scraping companies.

The court held that X Corp failed to demonstrate that Bright Data violated its user agreement by enabling the scraping and sale of content from the platform. The judge reasoned that using scraping tools is not inherently fraudulent and that social media companies should not have free rein to dictate how public data is used, as that could lead to “information monopolies that would disserve the public interest.”

This case is the latest win for free information advocates. In January, another San Francisco judge ruled that Bright Data did not violate Meta Platforms’ (Facebook and Instagram) terms of service by scraping data from those platforms. In March, a judge also dismissed X Corp’s lawsuit against the nonprofit Center for Countering Digital Hate, which had published articles based on scraped data critical of the platform. This latest ruling underscores the growing recognition among the courts that social media platforms cannot unilaterally control access to public data, even if it may be inconvenient or uncomfortable for the platform owners.

Last week, the Vermont legislature passed H. 121, the Vermont Data Privacy Act. Once enacted, the law will make Vermont the 18th state to grant consumers privacy rights similar to those under the California Consumer Privacy Act (CCPA). It is scheduled to go into effect on July 1, 2025.

While the Vermont Data Privacy Act includes provisions similar to those granted under the CCPA (e.g., consumer rights to delete, access, correct, and opt-out), the Act also includes some provisions that are more protective than the CCPA:

  • The Act includes data minimization requirements that prohibit businesses from collecting personal information for ANY purpose outside of providing the product or service.
  • The Act grants consumers a private right of action against businesses not only when the entity causes a breach of personal information (as is the case under the CCPA) but also if the business misuses data about their race, religion, sexual orientation, health, or other categories of sensitive information. 

Note, however, that the law’s private right of action must be reauthorized after two years and only applies to large data brokers. The Vermont legislature advanced this law amid the federal government’s ongoing push to pass a comprehensive privacy law, which has yet to come to fruition over the last decade. We will continue to monitor new consumer privacy rights laws and how these laws may affect your business and its data collection and use practices.

Generative artificial intelligence (AI) has opened a new front in the battle to keep confidential information secure. The National Institute of Standards and Technology (NIST) recently released a draft report highlighting the risk generative AI poses to data security. The report, entitled “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” details generative AI’s potential data security pitfalls and suggests actions for generative AI management.

NIST identifies generative AI’s data security risk as “[l]eakage and unauthorized disclosure or de-anonymization of biometric, health, location, personally identifiable [information], or other sensitive data.” Training generative AI requires an enormous amount of data culled from the internet and other publicly available sources. For example, ChatGPT-4 was reportedly trained on 570 gigabytes of books, web texts, articles, and other writing from the internet, amounting to roughly 300 billion words. Much of generative AI’s training data is personal, confidential, or sensitive information.

Generative AI systems have been known to disclose information from their training data, including confidential information, upon request. During adversarial attacks, large language models have revealed private or sensitive information from their training data, including phone numbers, code, and conversations. The New York Times has sued ChatGPT’s creator, OpenAI, alleging in part that ChatGPT will reproduce articles that sit behind the Times’ paywall. This disclosure risk poses obvious data security issues.

Less obvious are the data security issues that generative AI’s capacity for predictive inference poses. With the vast quantity of data available to generative AI, it can correctly infer personal or sensitive information, including a person’s race, location, gender, or political leanings – even if that information is not within the AI’s training data. NIST warns that these AI models, or individuals using the models, might disclose this inferred information, use it to undermine privacy or apply it in a discriminatory manner. Already, we have seen a company settle an EEOC lawsuit alleging that it used AI to make discriminatory employment decisions.  Generative AI threatens to increase this legal exposure. 

From an AI governance perspective, NIST suggests several broad principles to mitigate the data privacy risk. Among other things, NIST recommends:

  • Aligning generative AI use with applicable laws, including those related to data privacy and the use, publication, or distribution of intellectual property;
  • Categorizing different types of generative AI content with associated data privacy risks;
  • Developing an incident response plan specifically tailored to address breaches, and regularly testing and updating the plan with feedback from external and third-party stakeholders;
  • Establishing incident response plans for third-party generative AI technologies deemed high-risk. As with all incident response plans, these plans should include:
    • Communicating third-party generative AI incident response plans to all relevant AI actors;
    • Defining ownership of the incident response functions;
    • Rehearsing (or “table topping”) the incident response plans regularly;
    • Regularly reviewing incident response plans for alignment with relevant breach reporting, data protection, data privacy, or other laws;
  • Updating and integrating due diligence processes for generative AI acquisition and procurement vendor assessments to include data privacy, security, and other risks; and
  • Conducting periodic audits and monitoring AI-generated content for privacy risks.

These actions will involve more than simply adding a reference to artificial intelligence to existing cybersecurity plans. They will involve carefully analyzing a company’s legal obligations, its contract obligations, and the company culture to design an AI governance plan that keeps confidential information out of the public domain and away from bad actors.

The Cybersecurity and Infrastructure Security Agency (CISA) and its partners recently issued helpful guidance for entities that have limited resources to address cyber threats. The guidance, entitled “Mitigating Cyber Threats with Limited Resources: Guidance for Civil Society,” is designed to help civil society organizations address cybersecurity risks with limited resources. Civil society includes “nonprofit, advocacy, cultural, faith-based, academic, think tanks, journalist, dissident, and diaspora organizations, communities, and individuals involved in defending human rights and advancing democracy,” which are considered “high risk communities” because they “are targeted by state-sponsored threat actors who seek to undermine democratic values and interests.”

According to the guidance, state-sponsored attacks against civil society are primarily launched by “the governments of Russia, China, Iran, and North Korea.” The threat actors are conducting “extensive pre-operational research to learn about potential victims, gather information to support social engineering, or obtain login credentials,” and are using spyware applications to collect data from victims.

The guidance is designed to provide “mitigation measures for civil society organizations to reduce their risk based on common cyber threats” and civil society organizations and affiliated individuals are “strongly encourage[d]…to apply the mitigations provided in this joint guide.”

If you fall into the civil society organization category, you may wish to consider delving into the guidance with your IT professionals to learn more about the threat and how to mitigate the risk of a cyber-attack from state-sponsored actors and other threat actors.

The newest health care entity to be hit by a cyberattack is Ascension Health, which operates 140 hospitals and 40 assisted living facilities in 19 states. Ascension confirmed the cybersecurity attack and said it has disrupted its clinical operations. Ascension detected the attack on May 8, 2024, and is in the process of investigating and responding to it.

The attack has reportedly affected clinical operations in Florida, Indiana, Michigan, Oklahoma, Texas, and Wisconsin. Ascension recommends that its business partners contact its IT professionals to determine whether any connections to Ascension systems are at risk.

Unfortunately, threat actors continue to attack health care entities, and the pace does not appear to be abating. As a result, it is important for health care entities to prepare for an incident by implementing an incident response plan, testing the plan frequently, testing contingency operations and disaster recovery plans, and conducting tabletop exercises.

This week, the Massachusetts Supreme Judicial Court (SJC) reviewed a lower court’s dismissal of gun-related indictments against Richard Dilworth, Jr., related to the state’s refusal to disclose the bitmojis and usernames it used to conduct online surveillance through Snapchat accounts in 2017 and 2018.

Boston police arrested Dilworth for possession of a loaded revolver after officers saw eight Snapchat videos in which Dilworth appeared to be holding a gun. Dilworth was arrested a second time, again in possession of a firearm, after police again saw him holding a gun on Snapchat.

During discovery, Dilworth sought the bitmojis and usernames to determine whether the police used Snapchat to target non-white individuals in violation of the Equal Protection Clause of the U.S. Constitution. The state claimed that the Snapchat surveillance targeted individuals suspected of criminal behavior based on tips received from informants.

The state argued that it could not comply with the discovery order because disclosure of the information would compromise informants and police officers, but the Suffolk Superior Court rejected that argument and granted Dilworth’s motion.

On appeal, during oral arguments, the SJC questioned the defendant’s attorney about how the bitmojis and usernames would strengthen the argument that the state had conducted selective prosecution. As further support, Dilworth’s attorney explained that the defense had conducted an informal survey of public defenders whose clients had been the subject of Snapchat surveillance; of the 20 cases identified, 17 targeted Black individuals and the other three targeted Hispanic individuals.

Dilworth’s attorney argued that the discretionary choices made by the officers in setting up the investigatory scheme (i.e., the use of Black bitmojis) went to the core claim that Dilworth was targeted because of his race. However, Justice Serge Georges, Jr., interjected that Dilworth was targeted because “he can’t stop flashing and posting videos of guns.” Justice Georges continued, “Mr. Dilworth was arrested after whatever was witnessed online, and no sooner does he hit the street, he does it again. So, it’s not this generalized kind of targeting that I see here. It’s a specific targeting.” The defendant’s counsel replied that the information the defense had developed raised questions about the true purpose of the surveillance.

We will await the SJC’s decision on the legality of this Snapchat surveillance and how that decision could affect other types of social media surveillance and profiling.