The Federal Trade Commission (FTC) keeps track of scams that are reported to it and summarizes those scams in a report outlining the most successful scams of the prior year.

Last year’s statistics are disturbing: many of the same techniques from previous years are still being used successfully by threat actors. Old scams continue to be profitable for fraudsters, and the amount of money scammers obtained from victims last year was the highest ever reported to the FTC: $10 billion, a whopping $1 billion more than in 2022.

The Data Book, as the FTC calls it, found that “email was the #1 contact method for scammers this year, especially when scammers pretended to be a business or government agency to steal money.”

According to the report, here are other takeaways for 2023:

Imposter scams. Imposter scams remained the top fraud category, with reported losses of $2.7 billion. These scams include people pretending to be your bank’s fraud department, the government, a relative in distress, a well-known business, or a technical support expert.

Investment scams. While investment-related scams were the fourth most reported fraud category, losses in this category grew: people reported median losses of $7,700, up from $5,000 in 2022.

Social media scams. Scams starting on social media accounted for the highest total losses at $1.4 billion, an increase of $250 million from 2022. But scams that started with a phone call caused the highest per-person loss (an average of $1,480).

Payment methods. How did scammers prefer that people pay? With bank transfers and payments, which accounted for the highest losses ($1.86 billion). Cryptocurrency was second ($1.41 billion in reported losses).

Losses by age. Of people who reported their age, younger adults (20-29) reported losing money more often than older adults (70+). However, when older adults lost money, they lost the most.

The Data Book provides a sobering look at victims’ losses and is a document that everyone can learn from, no matter your age.

This post was co-authored by Yelena Greenberg, a member of Robinson+Cole’s Health Law Group.

On February 8, 2024, the U.S. Department of Health and Human Services (HHS) issued a final rule (Final Rule) updating federal “Part 2” regulations to more closely align the requirements applicable to substance use disorder (SUD) treatment records with the HIPAA privacy rule, and to make certain other changes. The regulations at 42 CFR Part 2 have long set forth strict rules governing the uses and disclosures of medical records of certain SUD treatment facilities and programs. HHS is now scaling back those rules slightly, in accordance with statutory changes to federal law governing the privacy of SUD records in the 2020 “CARES Act” legislation enacted in response to COVID-19.[i] This Final Rule follows a proposed rule issued by HHS on December 2, 2022, which we previously analyzed here.

The Final Rule is anticipated to take effect on April 16, 2024 (60 days from the anticipated publication date of February 16). The compliance date by which individuals and entities must comply with the Final Rule’s requirements is February 16, 2026 (except as specifically tolled in the Final Rule).

Below we provide a high-level summary of the changes included in the Final Rule.  We will supplement this analysis in the coming days with additional detailed reviews of certain of these changes referenced below. 

The key updates in the Final Rule include:

  • Consent: A long-standing tenet of the Part 2 regulations was that SUD records could not be used or disclosed without specific patient consent, except in very narrow circumstances.  The Final Rule updates this regulation to allow a patient to give a single, broad consent that covers all future uses and disclosures of Part 2 records for treatment, payment, and health care operations purposes (as defined under the HIPAA privacy rule), subject to certain exceptions (hereinafter, “TPO Consent”). This alignment with the HIPAA privacy rule is an important development to streamline compliance with the previously incongruent consent regimens under the Part 2 and HIPAA regulations across health systems and Part 2 programs (as defined under the Part 2 regulations).
  • TPO Consent Elements: The Final Rule indicates that a valid TPO Consent must have all of the required elements of a valid HIPAA authorization.
  • Redisclosures: The Final Rule newly allows Part 2 programs, as well as HIPAA-covered entities and business associates, who have received Part 2 records in accordance with TPO Consent, to “redisclose the records as permitted by the HIPAA regulations” except in proceedings against a patient requiring a court order or specific written consent, or until the patient revokes the consent.
  • SUD Counseling Notes: The Final Rule revises the definition of “SUD counseling notes” under the Part 2 regulations “to parallel the HIPAA psychotherapy note provisions,” which are subject to heightened confidentiality restrictions under Part 2 and HIPAA, respectively.
  • Segregation/Segmentation of Part 2 Records: The Final Rule states that a Part 2 program, or HIPAA-covered entity or business associate, which receives Part 2 records based on a single TPO Consent, is “not required to segregate or segment such records.” This may be an important clarification for health systems and other entities that rely on integrated and unified electronic health records.
  • Part 2 Record Breaches: Extends applicability of breach notification requirements consistent with those under HIPAA to breaches of Part 2 records.
  • Civil and Criminal Enforcement: The Final Rule incorporates HIPAA’s criminal and civil enforcement authorities into the Part 2 regulations, allowing for imposition of civil money penalties and other sanctions available under HIPAA for Part 2 violations.
  • Accounting of Disclosures: The Final Rule grants patients a new right to request an accounting of disclosures made by a Part 2 program based on a consent, for up to 3 years prior to the date of the accounting. However, the compliance date for this provision is tolled by HHS in the Final Rule until HHS revises the HIPAA privacy rule’s accounting for disclosures regulation to address disclosures through an electronic health record.

The Final Rule represents the latest in a series of efforts by HHS to more closely align HIPAA and Part 2 requirements and processes, in recognition of industry shifts to more integrated and coordinated medical, behavioral health, and SUD care. Health care organizations will need to assess the various provisions of the Final Rule closely to determine their compliance obligations and any necessary operational changes.

We will continue to monitor and track developments related to the Part 2 requirements and implications of this Final Rule.


[i] Coronavirus Aid, Relief, and Economic Security Act, Pub. L. No 116-136, 134 Stat 281 (27 March 2020) (CARES Act) – https://www.congress.gov/116/bills/hr748/BILLS-116hr748enr.pdf (codified in pertinent part at 42 U.S.C. 290dd–2).

 This post is also being shared on our Health Law Diagnosis blog. If you’re interested in getting updates on developments affecting health information privacy and HIPAA related topics, we invite you to subscribe to the blog. 

Unfortunately, according to Unit 42 of Palo Alto Networks’ recently published “Ransomware and Extortion Report,” ransomware groups had a good year in 2023. Unit 42 found that threat actors are using multi-extortion tactics, including data exfiltration, to get paid by victims. In addition, there was “a 49% increase in victims reported by ransomware leak sites, with a total of 3,998 posts from various ransomware groups.”

Twenty-five new ransomware groups attacked companies in 2023, though the most successful continued to be well-known groups, including BlackCat, CL0P, and LockBit.

According to its analysis of leak site data, the manufacturing sector was the hardest hit in 2023, “signaling significant vulnerabilities in this sector.” Unit 42 surmises that this is because manufacturers often run old software that is difficult to patch. Further, based on leak data, U.S.-based organizations were the most severely affected by ransomware, accounting for a whopping 42 percent of leaks in 2023.

Threat actors are increasing their usage of harassment techniques, including communicating with C-Suite executives to apply pressure to pay.

Unit 42’s experts have predictions for what to expect from extortion groups in 2024 and it is not pretty. The predictions include:

  • “2024 will be the year we see a large cloud ransomware compromise.
  • A rise in extortion related to insider threats.
  • A rise in politically motivated extortion attempts.
  • The use of ransomware and extortion to distract from attacks aimed to infect the supply chain or source code.”

Unfortunately, ransomware still dominates security incidents and will continue to cause chaos for U.S. companies. Conducting a tabletop exercise on a ransomware attack is imperative to prepare for such an attack. Schedule one now before you get hit.

The Connecticut Data Privacy Act (CDPA), which became effective on July 1, 2023, provides Connecticut residents with certain rights over their personal information and establishes responsibilities and privacy protection standards for businesses that process personal information. Notably, the CDPA allows businesses a 60-day cure period to correct violations without penalties through the end of 2024. After the cure period sunsets, civil penalties of up to $5,000 per violation may be imposed. The CDPA applies to businesses that control or process the personal data of at least 100,000 Connecticut residents per year, or of at least 25,000 residents per year if more than 25 percent of their gross revenue comes from selling personal data. A new report was recently released by the office of the Connecticut Attorney General (AG), which outlines how the state has been enforcing this new law.

The report summarizes the enforcement of this law since its effective date, indicating that the AG’s office has issued about a dozen violation notices to businesses related to the collection and use of consumer data. Of course, these violation notices granted the companies the ability to correct the violation within the 60-day cure period. According to the report, most of Connecticut’s enforcement efforts so far have focused on privacy policies that had confusing disclosures or failed to provide consumers with a clear and conspicuous way to exercise their privacy rights under the CDPA. Additionally, there were a few violation notices sent to businesses for violations related to the collection of sensitive data, such as a grocery store collecting biometric data to prevent shoplifting.

The report concludes, “There is much yet to be done in the balancing act of privacy of consumer information and the need to use and maintain that same information in our global economy. We remain ready to do our part, encouraging and guiding compliance, but prepared to undertake enforcement when necessary.”

The Electronic Privacy Information Center gave Connecticut a D grade for the CDPA, citing the fact that the law is overly favorable to the tech industry and “a favored piece of template legislation for lobbyists.”

In addition to the violation notices sent by the AG’s office, the office also received over 30 individual consumer complaints. However, many of those complaints were related to companies or data that are exempt from the CDPA, such as nonprofits and entities covered under the Health Insurance Portability and Accountability Act.

To read the full report, click here.

The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of LMMs for health and in developing policies and practices for appropriate development, provision and use. The guidance includes recommendations for governance within companies, by governments, and through international collaboration, aligned with the guiding principles. The principles and recommendations, which account for the unique ways in which humans can use generative AI for health, are the basis of this guidance.”

The guidance focuses on one type of generative AI, large multi-modal models, “which can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.” According to the report, LMMs have “been adopted faster than any consumer application in history.” The report outlines the benefits and risks of LMMs, particularly the risks of using LMMs in the healthcare sector.

The report proposes solutions to address the risks of using LMMs in health care during their development, provision, and deployment, as well as the ethics and governance of LMMs: “what can be done, and by who.”

In the ever-changing world of AI, this report is timely and provides concrete steps and solutions for tackling the risks of using LMMs.

Artificial Intelligence (AI) has emerged as a major player in the realm of health care, promising to completely transform its delivery. With AI’s remarkable ability to analyze data, learn, solve problems, and make decisions, it has the potential to enhance patient care, improve outcomes, and foster innovation in the health care industry. In this blog post, we will delve into the guidance provided by the U.S. Department of Health and Human Services (HHS) regarding the application and development of AI in the healthcare sector. There is more guidance than one might think.

To address this transformative power of AI and machine learning, the Office of the Chief Artificial Intelligence Officer (OCAIO) has outlined a strategic approach to prioritize the application and development of AI across various HHS mission areas. OCAIO will focus on two major themes in AI adoption:

  1. Pioneering Health and Human Services AI Innovation: HHS will prioritize the application and development of AI and machine learning. This includes regulating and overseeing the use of AI in the healthcare industry and ensuring ethical and responsible implementation. Additionally, HHS aims to fund programs, grants, and research that leverage AI-based solutions to deliver improved outcomes for patients and healthcare providers.
  2. Collaborating and Responding to AI-Driven Approaches within the Health Ecosystem: Recognizing the dynamic nature of the healthcare landscape, HHS will collaborate with external partners, including academia, the private sector, and state, local, tribal, and territorial governments. HHS also aims to identify gaps and unmet needs in health and scientific areas that would benefit from government involvement and AI application.

To ensure effective governance and execution of these initiatives, HHS has established the AI Council and AI Community of Practice. The HHS AI Council plays a pivotal role in supporting AI governance, strategy execution, and the development of strategic AI priorities across the enterprise. Its objectives include effectively communicating and championing HHS’ AI vision and ambition, as well as governing and executing the implementation of the HHS enterprise AI strategy. By aligning efforts and fostering collaboration, the AI Council aims to expand the use of AI throughout the Department.

The AI Council will focus on four key areas to drive the adoption and innovation of AI within the healthcare sector:

  1. Cultivate an AI-ready workforce and foster an AI culture: HHS recognizes the importance of equipping healthcare professionals with the necessary skills to effectively leverage AI. By fostering a robust and responsible AI culture, HHS aims to create an environment that embraces technological advancements and encourages the integration of AI into healthcare practices.
  2. Promote health AI innovation and research and development (R&D): HHS is dedicated to promoting innovation in the healthcare industry through AI. By encouraging R&D, HHS aims to drive advancements in AI technology and its application in healthcare settings.
  3. Democratize foundational AI tools and resources: HHS aims to make foundational AI tools and resources accessible to all stakeholders in the healthcare ecosystem. By democratizing these tools, HHS seeks to empower healthcare providers, researchers, and other stakeholders to leverage AI for improved patient care and outcomes.
  4. Foster trustworthy AI use and development: Trustworthiness is a critical aspect of AI implementation in healthcare. HHS has committed to promoting the responsible and ethical use of AI, ensuring patient privacy, data security, and transparency.

HHS has also published a useful online portal collecting AI Regulations and Executive Orders. Subsequent blog posts will explore the AI Regulations and Executive Orders.

The HHS guidance underscores the significant role of AI in the health care industry and its unwavering commitment to harnessing its potential. By prioritizing the application and development of AI, collaborating with external stakeholders, and establishing effective governance structures, HHS aims to drive innovation, improve patient care, and enhance health outcomes. As AI continues to evolve, its integration into the vast and complex health care ecosystem holds immense promise for the future of health care. Health care organizations, including hospital systems, physician groups, laboratories, and other organizations in the health care industry, should consider following HHS’s guidance to embrace AI in a responsible, ethical, and legal manner.

Click here to learn more about the HHS AI approach. 

Most organizations and online platforms use multifactor authentication (MFA) (also called two-factor authentication) to confirm that the user is an authorized individual and not a scammer or fraudster. We have all been trained to use MFA through our workplaces to gain access to our work emails; tech companies offering free email services are suggesting that users deploy MFA, and online banking and other platforms use MFA to authenticate customers. We are getting used to receiving MFA codes as a push to authenticate us before we can access the application. We click “It’s me” or “Yes” and we are in.

Unfortunately, because we have become so accustomed to MFA pushes, scammers and cyber criminals know that users will often approve a push without looking closely to determine whether it is one they actually generated. It is the perfect scam, and they are using it.

How does MFA fatigue happen? Usually, the threat actor first obtains the user’s credentials through social engineering, a phishing attack, or compromised credentials purchased on the dark web. (Note to readers: never give up your credentials.) The scammer then uses those credentials and triggers a rapid series of MFA pushes to the real user through email or text. The user receives a barrage of pushes, which is annoying, and may click “yes” just to make them stop, or may think the MFA system is stuck. Once the user clicks “yes,” the threat actor is in and can use that access to carry out a scam.
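On the defender’s side, the burst pattern described above is detectable. The following is a minimal Python sketch, not taken from any real MFA product, of how a security team might flag an account receiving an unusual number of push requests in a short window; the threshold and window values are illustrative assumptions.

```python
from collections import deque

# Illustrative thresholds: five pushes within two minutes is treated
# as a possible MFA-fatigue attack. Real systems tune these values.
BURST_THRESHOLD = 5
WINDOW_SECONDS = 120

def is_push_burst(push_timestamps, threshold=BURST_THRESHOLD, window=WINDOW_SECONDS):
    """Return True if any `threshold` pushes fall within `window` seconds."""
    recent = deque()
    for ts in sorted(push_timestamps):
        recent.append(ts)
        # Drop pushes that have aged out of the sliding window.
        while recent and ts - recent[0] > window:
            recent.popleft()
        if len(recent) >= threshold:
            return True
    return False

# A legitimate login generates one or two pushes; an attacker spamming
# approvals generates many in quick succession.
normal = [0, 3600]               # two pushes, an hour apart
attack = [0, 10, 25, 40, 55, 70] # six pushes in about a minute
print(is_push_burst(normal))  # False
print(is_push_burst(attack))  # True
```

A flagged burst could trigger an automatic lockout or a help-desk alert rather than relying on the user to resist clicking “yes.”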

Individuals should remain vigilant, be suspicious of multiple MFA pushes, and not click “yes” unless they have performed some activity that would generate an MFA push. If you receive multiple unexpected pushes, you may wish to call your IT help desk.

Companies may wish to consider increasing employee education about MFA fatigue so they will remain vigilant against an attack.

Here is some background and more tips to combat MFA fatigue.

I hang out with a lot of Chief Information Security Officers (CISOs), so this piece is for them. Of course, it will be of interest to all security professionals struggling with assessing the risk of large language models (LLMs).

According to DarkReading, Berryville Institute of Machine Learning (BIML) recently issued a report entitled “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” which is designed “to provide CISOs and other security practitioners with a way of thinking about the risks posed by machine learning and artificial intelligence (AI) models, especially LLMs and the next-generation large multimodal models so they can identify those risks in their own applications.”

The core issue addressed in the report is that users of LLMs do not know how the developers have collected and validated the data to train the LLM models. BIML found that the “lack of visibility into how artificial intelligence (AI) makes decisions is the root cause of more than a quarter of risks posed by LLMs….”

According to BIML, risk decisions are being made by large LLM developers “on your behalf without you even knowing what the risks are…We think that it would be very helpful to open up the black box and answer some questions.”

The report concludes that “[s]ecuring a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This architectural risk analysis is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.”

CISOs and security professionals may wish to dive into the report by requesting a download from BIML. The 28-pager is full of ideas.

Last week, California Attorney General Rob Bonta announced a new enforcement focus on streaming apps’ failure to comply with the California Consumer Privacy Act (CCPA). This investigation will examine whether streaming services are complying with the opt-out requirements for businesses that sell or share consumers’ personal information as required by the CCPA. Specifically, the agency will examine those services that do not offer an easy mechanism for consumers to exercise this opt-out right.

Attorney General Bonta said that he “urge[s] consumers to learn about and exercise their rights under the [CCPA], especially the right to tell these businesses to stop selling their personal information.” He also warned that the agency will be “taking a close look at how these streaming services are complying with requirements that have been in place since 2020.”

Under the CCPA’s right to opt out, companies that sell or share personal information for targeted advertising purposes are required to provide consumers with the right to opt out of such sales or sharing. Not only must the opt-out be available, but exercising the right must be easy and involve minimal steps. The agency provided an example: on your smart TV, you should be able to enable a “Do Not Sell My Personal Information” setting in a streaming service’s app. Further, once you have submitted an opt-out request while logged into your account, you should not have to opt out again on other devices. Lastly, a streaming service’s privacy policy should be easily accessible to consumers and include details on individual CCPA rights. Letters of non-compliance are forthcoming.
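The account-level behavior the agency describes can be illustrated with a short sketch. The Python below is a hypothetical model, not any streaming service’s actual API: once a logged-in user opts out on one device, the preference is stored with the account, so every device honors it without further steps.

```python
# Hypothetical sketch of account-level "Do Not Sell" handling.
# All class and method names here are illustrative assumptions.
class AccountPreferences:
    def __init__(self):
        # Account IDs that have opted out of sale/sharing.
        self._opted_out = set()

    def record_opt_out(self, account_id: str) -> None:
        """Store the opt-out with the account, not the device."""
        self._opted_out.add(account_id)

    def may_sell_data(self, account_id: str) -> bool:
        """Every device checks the same account-level preference."""
        return account_id not in self._opted_out

prefs = AccountPreferences()
prefs.record_opt_out("user-123")        # submitted once, e.g., from a smart TV
print(prefs.may_sell_data("user-123"))  # False, on every device
print(prefs.may_sell_data("user-456"))  # True, this account never opted out
```

The design point is simply that the opt-out travels with the account rather than being re-collected per device, which is what the agency’s guidance calls for.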

Mercedes-Benz reportedly suffered a security incident that exposed confidential source code on an Enterprise Git server. The incident occurred when a GitHub access token was inadvertently exposed by an employee. Although the exposure occurred on September 29, 2023, it wasn’t discovered until January 11, 2024, when a cybersecurity firm found the token during an internet scan and informed Mercedes-Benz, which quickly revoked it.

The exposure of proprietary source code can be a nightmare. The worst-case scenario is when malicious code is injected into an application and then shipped to consumers, as happened in the SolarWinds data breach.

This incident emphasizes the importance of embedding security in the development of code to prevent leakage of data and intellectual property by developers. Reviewing processes used by developers is essential to minimize the risk of inadvertent disclosure of confidential company information.
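One concrete process control is scanning code for secrets before it is committed or published. The following Python sketch is an illustration rather than a production scanner: it flags strings matching the “ghp_” prefix format commonly used by classic GitHub personal access tokens, and real tools (pre-commit hooks, CI scanners) cover many more credential formats.

```python
import re

# Illustrative pattern for a classic GitHub personal access token:
# the "ghp_" prefix followed by 36 alphanumeric characters. Treat the
# exact length as an assumption; production scanners maintain vetted
# pattern lists for many credential types.
TOKEN_PATTERN = re.compile(r"ghp_[A-Za-z0-9]{36}")

def find_exposed_tokens(text: str):
    """Return any substrings of `text` that look like GitHub tokens."""
    return TOKEN_PATTERN.findall(text)

# Example: a token embedded in a clone URL would be caught before commit.
sample = "git clone https://ghp_" + "a" * 36 + "@github.com/acme/internal.git"
print(len(find_exposed_tokens(sample)))        # 1
print(find_exposed_tokens("no secrets here"))  # []
```

Running a check like this in a pre-commit hook or CI pipeline turns the four-month detection gap in the incident above into a failure at commit time.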