The Health Sector Cybersecurity Coordination Center (HC3) recently issued an Alert warning of a rise in “threat actors employing advanced social engineering tactics to target IT help desks in the health sector and gain initial access to target organizations.”

The social engineering scheme starts with a telephone call to the IT help desk from a caller using “an area code local to the target organization, claiming to be an employee in a financial role (specifically in revenue cycle or administrator roles). The threat actor is able to provide the required sensitive information for identity verification, including the last four digits of the target employee’s social security number (SSN) and corporate ID number, along with other demographic details. These details were likely obtained from professional networking sites and other publicly available information sources, such as previous data breaches. The threat actor claimed that their phone was broken and, therefore, could not log in or receive MFA tokens. The threat actor then successfully convinced the IT help desk to enroll a new device in multi-factor authentication (MFA) to gain access to corporate resources.”

After gaining access, the threat actor targets login information related to payer websites and submits forms to make ACH changes for payer accounts. “Once access has been gained to employee email accounts, they sent instructions to payment processors to divert legitimate payments to attacker-controlled U.S. bank accounts. The funds were then transferred to overseas accounts. During the malicious campaign, the threat actor also registered a domain with a single letter variation of the target organization and created an account impersonating the target organization’s Chief Financial Officer (CFO).”

The threat actors are leveraging spearphishing voice techniques, also known as “vishing,” to impersonate employees. HC3 noted that “threat actors may also attempt to leverage AI voice impersonation techniques to social engineer targets, making remote identity verification increasingly difficult with these technological advancements. A recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had.”

HC3 provides numerous mitigations in the Alert to assist with the prevention of these vishing schemes.

The Federal Trade Commission (FTC) has declined to approve a new method for obtaining parental consent under the Children’s Online Privacy Protection Act (COPPA) that would involve analyzing facial geometry to verify that the person providing consent is an adult.

In a letter to the Entertainment Software Rating Board (ESRB), Yoti (a digital identity company), and SuperAwesome (a company that provides technology to help companies comply with parental verification requirements), the FTC denied the June 2023 application for the “Privacy-Protective Facial Age Estimation” software as a new means of obtaining parental consent under COPPA. However, the FTC made this determination “without prejudice to the applicants filing in the future” because the FTC anticipates receiving additional information and research about age verification technologies and their applications. The FTC said that “this insight is expected to be provided in a report that the National Institute of Standards and Technology (NIST) is slated to soon release about Yoti’s facial age estimation model.”

COPPA requires websites and online services directed at children under the age of 13 to obtain verifiable parental consent before collecting or using personal information from children. COPPA explicitly outlines several methods of obtaining such parental consent and also includes a provision that allows entities to submit new methods of obtaining verifiable parental consent under the rule to the FTC for approval. In the application to the FTC, ESRB said that the “Privacy-Protective Facial Age Estimation provides an accurate, reliable, accessible, fast, simple and privacy-preserving mechanism for ensuring that the person providing consent is the child’s parent.” The application further states that the technology “can be implemented in a way that is consistent with COPPA’s data minimization, confidentiality, security, and integrity, and retention and deletion provisions, as well as the [FTC’s] concerns about potential bias and discrimination.”

However, the FTC stated in its decision that, in response to its call for public comments on the application, those who opposed the method raised “concerns about privacy protections, accuracy, and deepfakes.”

The application for this technology will have to be resubmitted after the NIST report is released.

The California Privacy Protection Agency (CPPA) recently issued an enforcement advisory encouraging covered businesses to focus on their data minimization obligations related to consumer requests under the California Consumer Privacy Act (CCPA). The advisory describes data minimization as a “foundational principle” of the CCPA and explains how businesses should apply this principle to improve compliance with the CCPA. The advisory states: “[b]usinesses should apply this principle [of data minimization] to every purpose for which they collect, use, retain, and share consumers’ personal information.”

The publication of this advisory stems from the CPPA Enforcement Division’s observation of businesses “asking consumers to provide excessive and unnecessary personal information in response to requests that consumers make under the CCPA.”

However, this advisory and any others issued by the CPPA “do not implement, interpret, or make specific the law enforced or administered by the [CPPA], establish substantive policy or rights, constitute legal advice, or reflect the views of the Agency’s Board.” At the same time, the CPPA was careful to note that adherence to an advisory is not “alternative relief or safe harbor from potential violations.”

The advisory also cites four examples of less obvious areas where data minimization applies under the CCPA: 1) the handling of user opt-out preference signals; 2) requests for data sale and sharing opt-outs; 3) requests around the use and disclosure of sensitive personal information; and 4) identity verification. To see the full advisory, click here.

U.S. Senator Maria Cantwell (D-WA) and U.S. Representative Cathy McMorris Rodgers (R-WA) have made a breakthrough by agreeing on a bipartisan data privacy legislation proposal. The legislation aims to address concerns related to consumer data collection by technology companies and empower individuals to have control over their personal information.

The proposed legislation would:

  • Restrict the amount of data technology companies can gather from consumers, a particularly important limit given the large amount of data these companies already possess.
  • Grant Americans the authority to prevent the sale of their personal information or to request its deletion, giving individuals more control over their personal data.
  • Give the Federal Trade Commission (FTC) and state attorneys general significant authority to monitor and regulate matters related to consumer privacy, ensuring that the government has a say in these matters.
  • Include robust enforcement measures, such as granting individuals the right to take legal action, so that violations of the legislation are dealt with effectively.
  • Allow consumers to opt out of targeted advertising, which would not be prohibited outright, giving consumers more control over the ads they receive.
  • Apply the legislation’s privacy protections to telecommunications companies, ensuring that no company is exempt from consumer privacy laws.
  • Require annual assessments of algorithms to ensure that they do not harm individuals, particularly young people, an important safeguard given the rise of technology and its impact on consumers, especially younger generations.

The bipartisan proposal for data privacy legislation is a positive step forward in terms of consumer privacy in America. While there is still work to be done, it is essential that the government takes proactive steps to ensure that individuals have greater control over their personal data. This is a positive development for the tech industry and consumers alike.

However, as we have reported before, this is not the first time Congress has made strides toward comprehensive data privacy legislation. Hopefully, this new bipartisan bill will enjoy more success than past efforts and bring the United States closer in line with international data privacy standards.

*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.

Health care entities maintain compliance programs in order to comply with the myriad, changing laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited at this time and in the early stages of development, current law and pending legislation offer a forecast of standards that may become applicable to AI. Health care entities may want to begin to monitor the evolving guidance applicable to AI and start to integrate AI standards into their compliance programs in order to manage and minimize this emerging area of legal risk.

Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the current key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force to regulate AI in accordance with the EO’s principles by 2025. Health care entities would be well-served to monitor federal priorities and begin to formally integrate AI standards into their corporate compliance plans.

  • Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that will flag AI-generated content for consumers.
  • Governance: Governance applies to an organization’s control over deployed AI tools. Internal controls, such as evaluations, policies, and institutional oversight mechanisms, can help ensure continuous control throughout the AI’s life cycle. The EO also emphasizes the importance of human oversight. Responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
  • Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to give internal and external stakeholders equitable opportunities to participate in the development and use of AI.

National Institute of Standards and Technology: Risk Management Framework

The National Institute of Standards and Technology (NIST) published a Risk Management Framework for AI (RMF) in 2023. Similar to the EO, the RMF outlines broad goals (i.e., Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations that implement EO principles and can assist organizations in proactively mitigating legal risk under future laws and regulations. For example, a health care organization may uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.


The recent increase in smishing and vishing schemes prompts me to remind readers about these schemes, which are designed to trick users into providing credentials that threat actors then use to perpetrate fraud. We have previously written on phishing, smishing, vishing, and QRishing schemes to increase awareness about these methods of intrusion.

HC3 recently warned the health care sector about vishing schemes designed to impersonate employees in order to access financial systems. See previous blog on this topic here.

The City of New York was recently forced to take its payroll system down for more than a week after a smishing scheme designed to steal employees’ pay. The attack targeted users of the city’s Automated Personnel System Employee Self Service. The threat actor sent employees fake multi-factor authentication text messages containing a link prompting them to enter their self-service credentials, including usernames and passwords, and to provide copies of their driver’s licenses. The scheme was designed to steal this information so the payroll system could be accessed and payroll diverted to the threat actor’s account.

Phishing, vishing, smishing, and QRishing continue to be successful ways for threat actors to perpetrate fraud. Applying a healthy dose of paranoia whenever you receive any request for credentials, whether by email, phone, text, or QR code, is warranted and wise.

Adding to the list of many other municipalities, the city of Pensacola, Florida, was hit with a cyber-attack last weekend that affected services to residents, including emergency telephone assistance. Although Pensacola is recovering, some services are still down, including online bill paying. The city is asking residents to pay using other methods, including by check, and is waiving certain penalties for late payment.

Pensacola also experienced a ransomware attack in 2019, which underscores that municipalities continue to be targeted by threat actors, who just don’t care if you have already been hit.

On March 18, the Office for Civil Rights (OCR) of the U.S. Department of Health and Human Services issued a Bulletin updating its guidance to HIPAA-covered entities and business associates on the use of tracking technology on websites and mobile apps.

The Bulletin supplements the original guidance published by OCR in December 2022.

According to the Bulletin,

Regulated entities are not permitted to use tracking technologies in a manner that would result in impermissible disclosures of protected health information (PHI) to tracking technology vendors or any other violations of the HIPAA Rules. For example, disclosures of PHI to tracking technology vendors for marketing purposes without individuals’ HIPAA-compliant authorizations would constitute impermissible disclosures.

The Bulletin then provides several unhelpful examples of when the use of tracking technologies, in OCR’s view, may capture PHI, whether or not the individual is a patient. Although we appreciate that OCR is struggling with this new issue (brought to the forefront by class action litigation), health care entities are struggling with it as well. Capturing an IP address randomly roaming a health care entity’s website, and suggesting that the entity knows why that IP address is visiting the website, is unrealistic. Unfortunately, health care entities will continue to struggle with this issue despite the issuance of the Bulletin.

Convergent Outsourcing Inc., a debt-collection agency, settled a data breach class action in the U.S. District Court for the Western District of Washington for $2.45 million. The class action suit against Convergent alleged that the business failed to protect the personal information of over 640,000 individuals. The breach occurred in June 2022.

Plaintiffs alleged that Convergent failed to implement appropriate security measures to protect and secure personal information in its possession, failed to monitor its network for security vulnerabilities, and failed to follow appropriate security practices.

Pursuant to the settlement, class members may receive up to $1,500 each for lost time and direct expenses related to the breach (e.g., credit monitoring services), or up to $10,000 for extraordinary expenses. A third option would allow class members to receive a pro-rata share of the remaining settlement fund.

The California Privacy Protection Agency’s (CPPA) highly anticipated regulations for automated decision-making technology and risk assessment requirements are likely far from final. The CPPA met at the beginning of the month but did not come to a consensus on what the final regulations should look like.

The CPPA’s vote was expected to be procedural, but the final review needed to begin formal rulemaking will now not occur until the summer. The CPPA’s General Counsel, Phil Laird, stated that the rulemaking process may not be completed until sometime in 2025.

The CPPA will continue developing the final rules to govern how developers of automated decision-making technology (ADMT) (which includes artificial intelligence (AI)) and businesses using such technology can obtain and use personal information. The rules are also expected to include specific details on how to collect opt-outs and when risk assessments must be conducted. Risk assessments would be required when training ADMT or AI models that will be used for significant decisions, profiling, generating deepfakes, or establishing identity.

Further, personal information of minors would be classified as sensitive personal information under the California Consumer Privacy Act/California Privacy Rights Act, and “systematic observation,” which is the consistent tracking of individuals by use of Bluetooth, Wi-Fi, drones, or livestream technologies that can collect physical or biometric data, would qualify as “extensive profiling” when used in a work or educational setting.

So, where do we stand on these potential requirements? Without a unanimous vote on the proposed regulations, the CPPA will take another two months to rework the rules and get all members in alignment. We’ll continue to monitor the progress.