In the ongoing saga of the 23andMe bankruptcy, Federal Trade Commission Chairman Andrew N. Ferguson recently sent a letter to the Trustee overseeing the proceeding, stating, “As Chairman of the Federal Trade Commission, I write to express the FTC’s interests and concerns relating to the potential sale or transfer of millions of American consumers’ sensitive personal information.”

The letter further outlined the promises 23andMe made to consumers about protecting the sensitive information it collected and maintained, noting that the company had “made direct representations to its users about how it uses, discloses, and protects their personal information, including how personal information will be safeguarded in the event of bankruptcy.” It described additional promises 23andMe made to consumers and emphasized that “these types of promises to consumers must be kept.” Importantly, the letter states:

This means that any bankruptcy-related sale or transfer involving 23andMe users’ personal information and biological samples will be subject to the representations the Company has made to users about both privacy and data security, and which users relied upon in providing their sensitive data to the Company. Moreover, as promised by 23andMe, any purchaser should expressly agree to be bound by and adhere to the terms of 23andMe’s privacy policies and applicable law, including as to any changes it subsequently makes to those policies.

For 23andMe customers, now is the time to request the deletion of their data. Hopefully, the letter from the FTC will also elevate concerns over the potential sale of genetic information.

On May 17, 2024, Colorado Governor Jared Polis signed, “with reservations,” Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act). The first of its kind in the United States, the Act takes effect on February 1, 2026, and requires artificial intelligence (AI) developers and businesses that use high-risk AI systems to adhere to certain transparency and AI governance requirements.

The Governor sent a letter to the Colorado General Assembly explaining his reservations about signing the Act. He noted that the bill “targets ‘high risk’ AI systems involved in making consequential decisions, and imposes a duty on developers and deployers to avoid ‘algorithmic discrimination’ in the use of such systems.” He encouraged the legislature to “reexamine” the concept of algorithmic discrimination resulting from the use of AI systems before the effective date in 2026.

If your company does business in Colorado and either develops or deploys AI systems, it may first need to determine whether the systems it uses qualify as high-risk AI systems. A “High-Risk AI System” means any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. A “Consequential Decision” is one that has a material legal or similarly significant effect on the provision or denial of education enrollment or an education opportunity, an employment opportunity, a financial or lending service, an essential government service, health care services, housing, insurance, or a legal service.

Unlike other state consumer privacy laws, the Act does not have a threshold number of consumers that triggers applicability. Further, both the Act and the Colorado Privacy Act (CPA) (similar to the California Consumer Privacy Act (CCPA)) use the term “consumers,” but under the Act the term simply means Colorado residents. By contrast, the CPA defines consumers as Colorado residents “acting only in an individual or household context,” excluding anyone in a commercial or employment context. Therefore, businesses that are not subject to the CPA may still have obligations under the Act.

The Act aims to prevent algorithmic discrimination in the development and use of AI systems. “Algorithmic discrimination” means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals based on their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other protected classification under state or federal law.

What are the requirements of the Act?

For Developers:

  • To avoid algorithmic discrimination in the development of high-risk artificial intelligence systems, developers must prepare a statement describing the “reasonably foreseeable uses and known harmful or inappropriate uses of the system,” the type of data used to train the system, the risks of algorithmic discrimination, the purpose of the system, and its intended benefits and uses.
  • Additionally, the developer must provide documentation with the AI product stating how the system was evaluated to mitigate algorithmic discrimination before it was made available for use, the data governance measures utilized in development, how the system should be used (and not be used), and how the system should be monitored when used for consequential decision-making. Developers are also required to update the statement no later than 90 days after modifying the system.
  • Developers must also disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the system’s intended uses without unreasonable delay, but no later than 90 days after discovery (through ongoing testing and analysis or a credible report from a business).

For Businesses:

  • Businesses that use high-risk AI systems must implement a risk management policy and program to govern the system’s deployment. The Act sets out specific requirements for that policy and program and instructs businesses to consider the size and complexity of the company itself, the nature and scope of the systems, and the sensitivity and volume of data processed by the system. Businesses must also conduct an impact assessment for the system at least annually in accordance with the Act. However, there are some exemptions from this impact assessment requirement (e.g., fewer than 50 employees, does not use its own data to train the high-risk AI system, etc.).
  • Additionally, businesses must notify consumers that they are using an AI system to make a consequential decision before the decision is made. The Act sets forth the specific content requirements of the notice, such as how the business manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the system’s deployment. As applicable, if the CPA applies to the business (in addition to the Act), the company must also provide consumers the right to opt out of the processing of personal data by such AI systems for profiling purposes.
  • Businesses must also disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of the system no later than 90 days after discovery.

The Act requires developers and businesses who deploy, offer, sell, lease, license, give, or otherwise make available an AI system that is intended to interact with consumers to disclose to each consumer who interacts with the system that the consumer is interacting with an AI system.

Although noting that the Act is “among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale,” Colorado’s Governor stated in his letter to the legislature that “stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine-tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals across our state.” He further noted:

“I want to be clear in my goal of ensuring Colorado remains home to innovative technologies and our consumers are able to fully access important AI-based products. Should the federal government not preempt this with a needed cohesive federal approach, I encourage the General Assembly to work closely with stakeholders to craft future legislation for my signature that will amend this bill to conform with evidence based findings and recommendations for the regulation of this industry.”

As we have seen with state consumer privacy rights laws, this new AI law may be a model that other states will follow but, based upon the Governor’s letter to the Colorado legislature, we anticipate that there will be additional iterations of the law before it becomes effective. Stay tuned.

When discussing the state’s new privacy law, some writers (not from my great state of Rhode Island) act as though Rhode Island has been behind the times when it comes to data privacy and security. I feel a need to explain that this is just not so. Rhode Island is not a laggard when it comes to data privacy.

Rhode Island has had a data privacy law on its books for a long time, though it was not called a privacy law. The Rhode Island Identity Theft Protection Act, enacted in 2015, was designed to protect consumers’ privacy and provide data breach notification. It was amended to include data security requirements, following in the footsteps of the then-novel Massachusetts data security regulations, making it a one-stop shop for data privacy, security, and breach notification. Still, it did not provide individuals the right to access or delete data and was not as robust as the new generation of data privacy laws. Rhode Island was also an early state to include health information in its definition of personal information requiring breach notification in the event of unauthorized access, use, or disclosure. Many states still do not include health information in their breach notification definitions of personal information.

But just so the record is clear, consumer protection has been in the DNA of Rhode Island’s laws for many years, and the new privacy law was an expansion of previous efforts to protect consumers.

The new privacy law in Rhode Island expands the privacy protections for consumers and is the latest in a wave of privacy laws being enacted in the United States. As of this writing, 19 states have new privacy laws, and Rhode Island makes it 20.

All of the privacy laws are fairly similar, except for California, which is the only state to date that provides for a private right of action in the event of a data breach (with requirements prior to the filing of a lawsuit).

That said, for those readers who will fall under the Rhode Island law and are in my home state, here are the details of the law (the Rhode Island Data Transparency and Privacy Protection Act (RIDTPPA)) of which you should be aware:

Continue Reading Rhode Island’s New Data Privacy Law

Wow! It’s hard to believe this blog marks the 400th Privacy Tip since I started writing many years ago. I hope the tips have been helpful over the years and that you have been able to share them with others to spread the word. 

I thought it would be fun to pick 10 (ok—technically, a few more than 10) Privacy Tips and re-publish them (in case you missed them) in honor of our 400th Privacy Tip milestone. We have published tips that are relevant to the hot issues of the time, but some are time-honored. It was really hard to pick, but here they are:

Continue Reading Privacy Tip #400 – Best of First 400 Privacy Tips

On May 9, 2024, Governor Wes Moore signed the Maryland Online Data Privacy Act (MODPA) into law. MODPA applies to any person who conducts business in Maryland or provides products or services targeted to Maryland residents and, during the preceding calendar year:

  1. Controlled or processed the personal data of at least 35,000 consumers (excluding personal data solely for the purpose of completing a payment transaction); or
  2. Controlled or processed the personal data of at least 10,000 consumers and derived more than 20 percent of its gross revenue from the sale of personal data.

MODPA does not apply to financial institutions subject to Gramm-Leach-Bliley or registered national securities associations. It also contains exemptions for entities governed by HIPAA.

Under MODPA, consumers have the right to access their data, correct inaccuracies, request deletion, obtain a list of those who have received their personal data, and opt out of processing for targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions. The data controller must provide this information to the consumer free of charge once during any 12-month period unless the requests are excessive, in which case the controller may charge a reasonable fee or decline to honor the request.

MODPA prohibits a controller – defined as “a person who determines the purpose and means of processing personal data” – from selling “sensitive data.” MODPA defines “sensitive data” to include genetic or biometric data, children’s personal data, and precise geolocation data.  “Sensitive data” also means personal data that includes data revealing a consumer’s:

  • Racial or ethnic origin
  • Religious beliefs
  • Consumer health data
  • Sex life
  • Sexual orientation
  • Status as transgender or nonbinary
  • National origin, or
  • Citizenship or immigration status

A MODPA controller also may not process “personal data” in ways that violate discrimination laws. Under MODPA, “personal data” is any information that is linked or can be reasonably linked to an identified or identifiable consumer but not de-identified data or “publicly available information.” However, MODPA contains an exception if the processing is for (1) self-testing to prevent or mitigate unlawful discrimination; (2) diversifying an applicant, participant, or customer pool; or (3) a private club or group not open to the public.

MODPA has a data minimization requirement as well. Controllers must limit the collection of personal data to that which is reasonably necessary and proportionate to provide or maintain the specific product or service the consumer requested.

A violation of MODPA constitutes an unfair, abusive, or deceptive (UDAP) trade practice, which the Maryland Attorney General can prosecute. Each violation may incur a civil penalty of up to $10,000, and up to $25,000 for each repeated violation. Additionally, a person committing a UDAP violation is guilty of a misdemeanor, punishable by a fine of up to $1,000, imprisonment of up to one year, or both. MODPA does not allow a consumer to pursue a UDAP claim for a MODPA violation, although it also does not prevent a consumer from pursuing any other legal remedies. MODPA will take effect October 1, 2025, with enforcement beginning April 1, 2026.

Congress is once again entertaining federal privacy legislation. The American Privacy Rights Act (APRA) was introduced by Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Chair Cathy McMorris Rodgers (R-WA).

Unlike current laws, the APRA would apply to both commercial enterprises and nonprofit organizations, as well as common carriers regulated by the Federal Communications Commission (FCC). The law would have a broad scope but provide a conditional exemption for small businesses with less than $40 million in revenue and data on fewer than 200,000 consumers. However, this exemption would not apply if the small business transfers data to third parties for value. The APRA would require data minimization, i.e., prohibiting covered entities from collecting more personal information than is strictly necessary for the stated purpose.

The APRA defines sensitive data broadly as data related to government identifiers, health, biometrics, genetics, financial accounts and payments, precise geolocation, log-in credentials, private communications, revealed sexual behavior, calendar or address book data, phone logs, photos and recordings for private use, intimate imagery, video viewing activity, race, ethnicity, national origin, religion or sex, online activities over time and across third-party websites, information about a minor under the age of 17, and other data the FTC defines as sensitive covered data by regulation. Sensitive data would require affirmative express consent before transfer to third parties. Those meeting the definition of “covered entities” would need to give clear disclosures and easy opt-out options.

Notably, the APRA’s treatment of minors under 17 is a departure from the current federal standard set by the Children’s Online Privacy Protection Act (COPPA), which places the cutoff at age 13.

The APRA would require algorithmic bias impact assessments for “covered algorithms” that make consequential decisions. It would also prohibit discriminatory use of data. “Large data holders” and “covered high-impact social media companies” would face additional obligations around reporting, algorithm audits, and designated privacy/security officers.

While privacy professionals across the country will collectively groan at a law other than HIPAA using the term “covered entity,” the simplicity of a single standard rather than the current patchwork of state laws may just be worth the headache of two federal privacy laws using the same term with different definitions. However, it remains to be seen whether the APRA will make it to the floor of Congress. We’ve reported in the past about attempts at a federal standard that ended up stalling in committee.

You can read the full APRA draft here.

Last month, Nebraska passed the Nebraska Data Privacy Act (NDPA), making it the latest state to enact comprehensive privacy legislation. Nebraska joins California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, New Hampshire, Kentucky, and Maryland. The law will take effect on January 1, 2025.

The NDPA applies to entities that conduct business in Nebraska or produce products or services consumed by Nebraska residents and that process or sell personal data of Nebraska residents. Similar to other state consumer privacy laws, the NDPA exempts nonprofits, government entities, financial institutions, and protected health information under the Health Insurance Portability and Accountability Act.

Consumers are granted the following rights under the NDPA: the rights of access, deletion, portability, and correction, and the right to opt out of targeted advertising, the sale of personal data, and automated profiling. Similar to the California Consumer Privacy Act, the NDPA defines the “sale of personal data” as “the exchange of personal data for monetary or other valuable consideration.” The NDPA also requires businesses to obtain consent before processing consumers’ sensitive data. Sensitive data includes personal data that reveals racial or ethnic origin, religious beliefs, a mental or physical health diagnosis, sexual orientation, or citizenship or immigration status, as well as genetic or biometric data processed to uniquely identify individuals, personal data collected from a known minor, and precise geolocation data.

Lastly, the NDPA will require businesses to conduct Data Protection Impact Assessments for all processing activities that involve targeted advertising, the sale of personal data, some types of profiling, the processing of sensitive data, or that otherwise present a heightened risk of harm to the consumer.

If the NDPA applies to your business, it will be subject to enforcement by the Nebraska Attorney General, although the law provides a 30-day right to cure violations that does not sunset.

The Federal Trade Commission (FTC) and the California Attorney General teamed up against California company CRI Genetics, LLC, filing a joint complaint against the company alleging that it engaged in deceptive practices when it “deceived consumers about the accuracy of its test reports compared with those of other DNA testing companies, falsely claimed to have patented an algorithm for its genetic matching process, and used fake reviews and testimonials on its websites.” In addition, the complaint alleged that CRI “used ‘dark patterns’ in its online billing process to trick consumers into paying for products they did not want and did not agree to buy.”

CRI agreed to pay a $700,000 civil penalty to settle the action and to change its marketing practices so that it does not misrepresent that: its DNA testing product or service is more accurate or detailed than others; it can show the geographic location of ancestors “with a 90 percent or higher accuracy rate”; or it will show “exactly where consumers’ ancestors came from or exactly when or where they arrived,” among others.

The proposed order also requires CRI to disclose the costs of products and services to consumers, obtain consent from consumers about how it may share DNA information, and delete the genetic and other information of consumers who received refunds and requested that their data be deleted.

The FTC unanimously authorized the proposed order, which will now be presented to a federal judge for approval.

We previously reported on the unfortunate data breach suffered by 23andMe last month and its implications. We never imagined how horrible it could be.

According to an October 6, 2023, posting by Wired, hackers involved with the 23andMe breach had earlier that week posted “an initial data sample on the platform BreachForums…claiming that it contained 1 million data points exclusively about Ashkenazi Jews…and started selling 23andMe profiles for between $1 and $10 per account.”

Several days later, it was reported that “another user on BreachForums claimed to have the 23andMe data of 100,000 Chinese users.”

The implications of posting account information, including users’ potential genetic information, for political or hateful reasons are real and unfolding in real time. According to news reports, the war in Gaza “is stoking antisemitism in the U.S.” and across the world. Preliminary data from the Anti-Defamation League shows a 388% jump in antisemitic incidents in the U.S. since Hamas’ attack on Israel on October 7, 2023.

If you are a 23andMe user, it is important for your safety and well-being to find out whether your genetic data was compromised and has been posted by extremist threat actors. The Electronic Frontier Foundation published an article, “What to Do if You’re Concerned About the 23andMe Breach,” which provides more information about the background of the breach, the selling of information, and what you can do to protect yourself further, including deleting your data.

We have published blog posts before on sharing genetic information and the risk associated with the disclosure of such sensitive information.

Unfortunately, our concerns have been realized. On Monday, October 9, 2023, 23andMe confirmed in its investigation of a data security incident that customer profile information shared through its DNA Relatives feature “was compiled from individual 23andMe.com accounts without the account users’ authorization” by threat actors.

Although its investigation continues, 23andMe is “requiring that all customers reset their passwords” and is “encouraging the use of multi-factor authentication (MFA).”

The company recommends that its customers take action to keep their account and password secure. It recommends:

  • Confirm you have a strong password, one that is not easy to guess and that is unique to your 23andMe account. If you are not sure whether you have a strong password for your account, reset it.
  • Enable multi-factor authentication (MFA) on your 23andMe account.
  • Review 23andMe’s Privacy and Security Checkup page, which provides additional information on how to keep your account secure.

We will follow this incident closely. In the meantime, if you or a family member has provided genetic information to 23andMe, you may wish to consider changing your password, telling your family members to do the same, and following 23andMe’s recommendations.