The Federal Trade Commission (FTC) has asserted authority under the Federal Trade Commission Act (FTC Act) to take enforcement action against unauthorized data disclosures. Over the past three weeks, the FTC has used this authority to pursue healthcare companies that disclose their customers’ personal data without permission.

On April 11, the FTC sued Monument, an online

Congress is once again entertaining federal privacy legislation. The American Privacy Rights Act (APRA) was introduced by Senate Commerce Committee Chair Maria Cantwell (D-WA) and House Energy and Commerce Chair Cathy McMorris Rodgers (R-WA).

Unlike current laws, the APRA would apply to both commercial enterprises and nonprofit organizations, as well as common carriers regulated by

On May 1, 2024, the Federal Trade Commission (FTC) announced a settlement with InMarket Media (InMarket), a digital marketing and data aggregator, to resolve the FTC’s allegations that InMarket “unlawfully collected and used consumers’ location data for advertising and marketing.”

The complaint filed by the FTC against InMarket alleged that InMarket collects and aggregates location

Increasingly, companies use AI to evaluate job applications and make interviewing or hiring decisions. Government contractors that use artificial intelligence to evaluate job applications, however, should ensure that their use of AI not only complies with anti-discrimination laws but also fulfills their contractual obligations. Federal contractors with contracts of $10,000 or more are subject to Executive Order

The Federal Trade Commission (FTC) has declined to approve a new method for obtaining parental consent under the Children’s Online Privacy Protection Act (COPPA) that would involve analyzing facial geometry to verify an adult’s identity.

In a letter to the Entertainment Software Rating Board (ESRB), Yoti (a digital identity company), and SuperAwesome (a company that

U.S. Senator Maria Cantwell (D-WA) and U.S. Representative Cathy McMorris Rodgers (R-WA) have reached a breakthrough agreement on a bipartisan data privacy legislation proposal. The legislation aims to address concerns about technology companies’ collection of consumer data and to empower individuals to control their personal information.

The proposed legislation aims to restrict

*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs to comply with the myriad, changing laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are currently limited and still in the early stages of development, existing law and pending legislation offer a forecast of standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs in order to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the current key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization’s control over deployed AI tools. Internal control mechanisms, such as evaluations, policies, and review bodies, can help ensure continuous oversight throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will be important to ensure that internal and external stakeholders have equitable opportunities to participate in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles and help organizations proactively mitigate legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.

The Federal Trade Commission (FTC) tracks the scams reported to it and publishes an annual report outlining the most successful scams of the prior year.

Last year’s statistics are disturbing, as many of the same techniques from previous years are still being used successfully by threat actors. Old scams are

In a matter of weeks, the Federal Trade Commission (FTC) has settled another case against a company it alleges tracks consumers and sells their “precise location data” to third parties. This continues the FTC’s aggressive approach toward location-based consumer data.

According to the FTC’s complaint, Texas-based InMarket offered two apps to consumers: shopping rewards app