The Health Sector Cybersecurity Coordination Center (HC3) recently warned the health care sector about the Akira ransomware group, which has been hitting health care organizations since May 2023. In an Analyst Note dated February 7, 2024, HC3 stated that although Akira is a relatively new ransomware group, it has attacked at least 81 organizations in its short life, and “U.S. healthcare organizations are advised to follow the steps in this alert to minimize their risk of attack.”

Akira uses double extortion strategies to maximize its profits and operates a leak site to exert additional pressure on its victims. The most recent tactics, techniques, and procedures used by Akira are outlined in the Analyst Note. Based on an analysis of shared financial infrastructure for cryptocurrency payments, HC3 surmises that Akira has some relationship with Conti, another well-known ransomware group.

HC3 also provides defense and mitigation recommendations, which healthcare organizations may wish to review in light of the warning.

In a joint release last week, the Cybersecurity and Infrastructure Security Agency (CISA) and other federal agencies issued a chilling Advisory about the ongoing attacks by Volt Typhoon on U.S. critical infrastructure. Volt Typhoon is a People’s Republic of China (PRC)-sponsored group that uses slow and persistent techniques to gain entry into U.S.-based critical infrastructure. CISA urges “critical infrastructure organizations and technology manufacturers to read the joint advisory and guidance to defend against this threat.”

Soon after the joint Advisory, Dragos released its report “VOLTZITE Espionage Operations Targeting U.S. Critical Systems,” which provides concerning information about the overlap between Volt Typhoon and VOLTZITE and how the group is targeting, and successfully gaining access to, U.S. critical infrastructure.

According to Dragos, “VOLTZITE has been observed performing reconnaissance and enumeration of multiple U.S.-based electric companies since early 2023, and since then has targeted emergency management services, telecommunications, satellite services, and the defense industrial base. Additionally, Dragos has discovered VOLTZITE targeting electric transmission and distribution organizations in African nations.” Dragos also notes that the threat actors are difficult to detect: their “slow and steady reconnaissance enables VOLTZITE to avoid detection for lengthy periods of time.”

Dragos has tracked VOLTZITE’s activity as follows:

  • Early 2023 – US Territory of Guam compromise.
  • June 2023 – VOLTZITE infiltrated a United States emergency management organization.
  • August 2023 – Dragos discovered VOLTZITE targeting African electric transmission and distribution providers.
  • November 2023 – Dragos collaborated with E-ISAC on analysis of VOLTZITE activity against multiple U.S.-based electric sector organizations.
  • December 2023 – Dragos discovered evidence that VOLTZITE has overlaps with UTA0178, a threat activity cluster tracked by Volexity, exploiting Ivanti ICS VPN zero-day vulnerabilities.
  • January 2024 – Extensive reconnaissance of a U.S. telecommunications provider’s external network gateways.
  • January 2024 – Evidence of compromise against a large U.S. city’s emergency services GIS network.

Not only is the PRC conducting slow and steady reconnaissance of critical infrastructure in the U.S., but it is also conducting daily reconnaissance of TikTok users; the PRC is a threat to national security on both fronts. Dragos’s report provides ways critical infrastructure operators can mitigate the threat posed by VOLTZITE and is an important read.

This week we are pleased to have a guest post by Robinson+Cole Artificial Intelligence Team patent agent Daniel J. Lass.

After several high-profile instances of artificial intelligence (AI) hallucination and Chief Justice John Roberts’s year-end report acknowledging the shortcomings of blindly relying on AI in legal writing, Kathi Vidal, the Director of the U.S. Patent and Trademark Office (USPTO), issued a memo concerning the use of AI when practicing before the USPTO. She echoed that while AI can be helpful to practitioners, the existing USPTO Rules of Professional Conduct impose duties related to any submission. 

Practitioners are required to sign most papers filed with the USPTO. By signing, the practitioner indicates a reasonable belief that the statements made in the paper are true and that any legal contentions are warranted. Director Vidal specifically indicated that assuming an AI tool is correct without any verification is not reasonable. Practitioners who ignore this warning risk having the paper given less weight, having the proceeding terminated by the USPTO, or facing discipline.

Despite these warnings, the USPTO is looking into ways to utilize AI in the patent process. President Biden instructed Director Vidal to issue guidance on AI inventorship and patent eligibility. This guidance indicates that AI-assisted inventions are not necessarily unpatentable. To be patentable, the claims should highlight the human contribution. The USPTO listed five non-exhaustive factors to assist in determining whether a human’s contribution is significant enough to qualify the human as an inventor. These generally indicate that a person must contribute to an inventive concept beyond presenting a problem to be solved by an AI model or simply overseeing the AI system.

In noting that an AI model can be a contributor but not an inventor, Director Vidal reinforced that practitioners have a duty to reasonably inquire into the inventorship of an application and to ensure that a human contributed to the invention significantly enough to be properly named an inventor. However, the USPTO does not require any disclosure related to the use of AI in the inventive process beyond pre-existing requirements. The USPTO will host a webinar on March 5, 2024, to further explain its new guidance.

The Federal Trade Commission (FTC) keeps track of scams reported to it and summarizes them in an annual report outlining the most successful scams of the prior year.

Last year’s statistics are disturbing, as many of the same techniques from previous years are still being used successfully by threat actors. Old scams continue to be profitable for fraudsters: the amount of money scammers obtained from victims last year, $10 billion, was the most ever reported to the FTC and a whopping $1 billion more than in 2022.

The Data Book, as the FTC calls it, found that “email was the #1 contact method for scammers this year, especially when scammers pretended to be a business or government agency to steal money.”

According to the report, here are other takeaways for 2023:

Imposter scams. Imposter scams remained the top fraud category, with reported losses of $2.7 billion. These scams include people pretending to be your bank’s fraud department, the government, a relative in distress, a well-known business, or a technical support expert.

Investment scams. While investment-related scams were the fourth most reported fraud category, losses in this category grew. People reported median losses of $7.7K – up from $5K in 2022.

Social media scams. Scams starting on social media accounted for the highest total losses at $1.4 billion – an increase of $250 million from 2022. But scams that started with a phone call caused the highest per-person loss ($1,480 average loss).

Payment methods. How did scammers prefer that people pay? With bank transfers and payments, which accounted for the highest losses ($1.86 billion). Cryptocurrency is a close second ($1.41 billion reported in losses).

Losses by age. Of people who reported their age, younger adults (20-29) reported losing money more often than older adults (70+). However, when older adults lost money, they lost the most.

The Data Book provides a sobering look at victims’ losses and is a document that everyone can learn from, no matter your age.

This post was co-authored by Yelena Greenberg, a member of Robinson+Cole’s Health Law Group.

On February 8, 2024, the U.S. Department of Health and Human Services (HHS) issued a final rule (Final Rule) updating federal “Part 2” regulations to more closely align the requirements applicable to substance use disorder (SUD) treatment records with the HIPAA privacy rule, and to make certain other changes. The regulations at 42 CFR Part 2 have long set forth strict rules governing the uses and disclosures of medical records of certain SUD treatment facilities and programs. HHS is now scaling back those rules slightly, in accordance with statutory changes to federal law governing the privacy of SUD records in the 2020 “CARES Act” legislation enacted in response to COVID-19.[i] The Final Rule follows a proposed rule issued by HHS on December 2, 2022, which we previously analyzed here.

The Final Rule is anticipated to take effect on April 16, 2024 (60 days from the anticipated publication date of February 16). The compliance date by which individuals and entities must comply with the Final Rule’s requirements is February 16, 2026 (except as specifically tolled in the Final Rule).

Below we provide a high-level summary of the changes included in the Final Rule.  We will supplement this analysis in the coming days with additional detailed reviews of certain of these changes referenced below. 

The key updates in the Final Rule include:

  • Consent: A long-standing tenet of the Part 2 regulations was that SUD records could not be used or disclosed without specific patient consent, except in very narrow circumstances. The Final Rule updates this regulation to allow a patient to give a single, broad consent that covers all future uses and disclosures of Part 2 records for treatment, payment, and health care operations purposes (as defined under the HIPAA privacy rule), subject to certain exceptions (hereinafter, “TPO Consent”). This alignment with the HIPAA privacy rule is an important development to streamline compliance with the previously incongruent consent regimes under the Part 2 and HIPAA regulations across health systems and Part 2 programs (as defined under the Part 2 regulations).
  • TPO Consent Elements: The Final Rule indicates that a valid TPO Consent must have all of the required elements of a valid HIPAA authorization.
  • Redisclosures: The Final Rule newly allows Part 2 programs, as well as HIPAA-covered entities and business associates, that have received Part 2 records in accordance with a TPO Consent to “redisclose the records as permitted by the HIPAA regulations,” except in proceedings against a patient (which require a court order or specific written consent) or after the patient revokes the consent.
  • SUD Counseling Notes: The Final Rule revises the definition of “SUD counseling notes” under the Part 2 regulations “to parallel the HIPAA psychotherapy note provisions,” which are subject to heightened confidentiality restrictions under Part 2 and HIPAA, respectively.
  • Segregation/Segmentation of Part 2 Records: The Final Rule states that a Part 2 program, or HIPAA-covered entity or business associate, which receives Part 2 records based on a single TPO Consent, is “not required to segregate or segment such records.” This may be an important clarification for health systems and other entities that rely on integrated and unified electronic health records.
  • Part 2 Record Breaches: The Final Rule extends breach notification requirements consistent with those under HIPAA to breaches of Part 2 records.
  • Civil and Criminal Enforcement: The Final Rule incorporates HIPAA’s criminal and civil enforcement authorities into the Part 2 regulations, allowing for imposition of civil money penalties and other sanctions available under HIPAA for Part 2 violations.
  • Accounting of Disclosures: The Final Rule grants patients a new right to request an accounting of disclosures made by a Part 2 program based on a consent, for up to 3 years prior to the date of the accounting. However, the compliance date for this provision is tolled by HHS in the Final Rule until HHS revises the HIPAA privacy rule’s accounting for disclosures regulation to address disclosures through an electronic health record.

The Final Rule represents the latest in a series of efforts by HHS to more closely align HIPAA and Part 2 requirements and processes, in recognition of industry shifts to more integrated and coordinated medical, behavioral health, and SUD care. Health care organizations will need to assess the various provisions of the Final Rule closely to determine their compliance obligations and any necessary operational changes.

We will continue to monitor and track developments related to the Part 2 requirements and implications of this Final Rule.


[i] Coronavirus Aid, Relief, and Economic Security Act, Pub. L. No. 116-136, 134 Stat. 281 (Mar. 27, 2020) (CARES Act), https://www.congress.gov/116/bills/hr748/BILLS-116hr748enr.pdf (codified in pertinent part at 42 U.S.C. § 290dd-2).

This post is also being shared on our Health Law Diagnosis blog. If you’re interested in getting updates on developments affecting health information privacy and HIPAA-related topics, we invite you to subscribe to the blog.

Unfortunately, according to Palo Alto Networks Unit 42’s recently published “Ransomware and Extortion Report,” ransomware groups had a good year in 2022. Unit 42 found that threat actors are using multi-extortion tactics, including data exfiltration, to get paid by victims. In addition, there was “a 49% increase in victims reported by ransomware leak sites, with a total of 3,998 posts from various ransomware groups.”

Twenty-five new ransomware groups attacked companies in 2022, though the most successful continued to be some well-known groups, including BlackCat, CL0P, and LockBit.

According to Unit 42’s analysis of leak site data, the manufacturing sector was the hardest hit, “signaling significant vulnerabilities in this sector.” Unit 42 surmises this is because the manufacturing sector relies on old software that makes patching difficult. Further, based on leak data, U.S.-based organizations were the most severely affected by ransomware, accounting for a whopping 42 percent of leaks in 2022.

Threat actors are increasing their usage of harassment techniques, including communicating with C-Suite executives to apply pressure to pay.

Unit 42’s experts have predictions for what to expect from extortion groups in 2024 and it is not pretty. The predictions include:

  • “2024 will be the year we see a large cloud ransomware compromise.
  • A rise in extortion related to insider threats.
  • A rise in politically motivated extortion attempts.
  • The use of ransomware and extortion to distract from attacks aimed to infect the supply chain or source code.”

Unfortunately, ransomware still dominates security incidents and will continue to cause chaos for U.S. companies. Conducting a tabletop exercise on a ransomware attack is imperative to prepare for one. Schedule one now, before you get hit.

The Connecticut Data Privacy Act (CDPA), which became effective on July 1, 2023, provides Connecticut residents with certain rights over their personal information and establishes responsibilities and privacy protection standards for businesses that process personal information. Notably, through the end of 2024 the CDPA allows businesses a 60-day cure period to correct violations without penalties; after that cure period sunsets, civil penalties of up to $5,000 per violation may be enforced. The CDPA applies to businesses that control or process the personal data of at least 100,000 Connecticut residents per year, or of at least 25,000 residents per year if more than 25 percent of their gross revenue comes from selling personal data. The office of the Connecticut Attorney General (AG) recently released a report outlining how the state has been enforcing this new law.
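
For illustration only, here is a minimal sketch of the two applicability prongs described above, expressed in Python. The function name and parameters are hypothetical, and actual applicability turns on the statute’s definitions and exemptions, so treat this as a rough screen rather than legal analysis:

```python
def cdpa_applies(ct_residents_processed: int,
                 pct_revenue_from_data_sales: float) -> bool:
    """Rough screen for CDPA applicability (hypothetical helper, not legal advice).

    Actual analysis depends on statutory definitions (e.g., "consumer," "sale")
    and exemptions such as nonprofits and HIPAA-covered entities.
    """
    # Prong 1: controls or processes personal data of at least
    # 100,000 Connecticut residents per year.
    if ct_residents_processed >= 100_000:
        return True
    # Prong 2: at least 25,000 residents per year AND more than
    # 25 percent of gross revenue from selling personal data.
    return (ct_residents_processed >= 25_000
            and pct_revenue_from_data_sales > 25.0)

# Example: 30,000 residents' data processed, 30% of revenue from data sales.
print(cdpa_applies(30_000, 30.0))  # True
```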

The report summarizes the enforcement of this law since its effective date, indicating that the AG’s office has issued about a dozen violation notices to businesses related to the collection and use of consumer data. Of course, these violation notices granted the companies the ability to correct the violation within the 60-day cure period. According to the report, most of Connecticut’s enforcement efforts so far have focused on privacy policies that had confusing disclosures or failed to provide consumers with a clear and conspicuous way to exercise their privacy rights under the CDPA. Additionally, there were a few violation notices sent to businesses for violations related to the collection of sensitive data, such as a grocery store collecting biometric data to prevent shoplifting.

The report concludes, “There is much yet to be done in the balancing act of privacy of consumer information and the need to use and maintain that same information in our global economy. We remain ready to do our part, encouraging and guiding compliance, but prepared to undertake enforcement when necessary.”

The Electronic Privacy Information Center gave Connecticut a D grade for the CDPA, on the grounds that the law is overly favorable to the tech industry and is “a favored piece of template legislation for lobbyists.”

In addition to the violation notices sent by the AG’s office, the office also received over 30 individual consumer complaints. However, many of those complaints were related to companies or data that are exempt from the CDPA, such as nonprofits and entities covered under the Health Insurance Portability and Accountability Act.

To read the full report, click here.

The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of [LMMs] for health and in developing policies and practices for appropriate development, provision and use. The guidance includes recommendations for governance within companies, by governments, and through international collaboration, aligned with the guiding principles. The principles and recommendations, which account for the unique ways in which humans can use generative AI for health, are the basis of this guidance.”

The guidance focuses on one type of generative AI, large multi-modal models (LMMs), “which can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.” According to the report, LMMs have “been adopted faster than any consumer application in history.” The report outlines the benefits and risks of LMMs, particularly the risks of using LMMs in the healthcare sector.

The report proposes solutions to address the risks of using LMMs in health care during their development, provision, and deployment, as well as for the ethics and governance of LMMs: “what can be done, and by who.”

In the ever-changing world of AI, this report is timely and provides concrete steps and solutions for tackling the risks of using LMMs.

Artificial Intelligence (AI) has emerged as a major player in the realm of health care, promising to completely transform its delivery. With AI’s remarkable ability to analyze data, learn, solve problems, and make decisions, it has the potential to enhance patient care, improve outcomes, and foster innovation in the health care industry. In this blog post, we will delve into the guidance provided by the U.S. Department of Health and Human Services (HHS) regarding the application and development of AI in the healthcare sector. There is more guidance than one might think.

To address this transformative power of AI and machine learning, the Office of the Chief Artificial Intelligence Officer (OCAIO) has outlined a strategic approach to prioritize the application and development of AI across various HHS mission areas. OCAIO will focus on two major themes in AI adoption:

  1. Pioneering Health and Human Services AI Innovation: HHS will prioritize the application and development of AI and machine learning. This includes regulating and overseeing the use of AI in the healthcare industry and ensuring ethical and responsible implementation. Additionally, HHS aims to fund programs, grants, and research that leverage AI-based solutions to deliver improved outcomes for patients and healthcare providers.
  2. Collaborating and Responding to AI-Driven Approaches within the Health Ecosystem: Recognizing the dynamic nature of the healthcare landscape, HHS will collaborate with external partners, including academia, the private sector, and state, local, tribal, and territorial governments. HHS also aims to identify gaps and unmet needs in health and scientific areas that would benefit from government involvement and AI application.

To ensure effective governance and execution of these initiatives, HHS has established the AI Council and AI Community of Practice. The HHS AI Council plays a pivotal role in supporting AI governance, strategy execution, and the development of strategic AI priorities across the enterprise. Its objectives include effectively communicating and championing HHS’ AI vision and ambition, as well as governing and executing the implementation of the HHS enterprise AI strategy. By aligning efforts and fostering collaboration, the AI Council aims to expand the use of AI throughout the Department.

The AI Council will focus on four key areas to drive the adoption and innovation of AI within the healthcare sector:

  1. Cultivate an AI-ready workforce and foster an AI culture: HHS recognizes the importance of equipping healthcare professionals with the necessary skills to effectively leverage AI. By fostering a robust and responsible AI culture, HHS aims to create an environment that embraces technological advancements and encourages the integration of AI into healthcare practices.
  2. Promote health AI innovation and research and development (R&D): HHS is dedicated to promoting innovation in the healthcare industry through AI. By encouraging R&D, HHS aims to drive advancements in AI technology and its application in healthcare settings.
  3. Democratize foundational AI tools and resources: HHS aims to make foundational AI tools and resources accessible to all stakeholders in the healthcare ecosystem. By democratizing these tools, HHS seeks to empower healthcare providers, researchers, and other stakeholders to leverage AI for improved patient care and outcomes.
  4. Foster trustworthy AI use and development: Trustworthiness is a critical aspect of AI implementation in healthcare. HHS has committed to promoting the responsible and ethical use of AI, ensuring patient privacy, data security, and transparency.

HHS has also published a useful online portal collecting AI Regulations and Executive Orders. Subsequent blog posts will explore the AI Regulations and Executive Orders.

The HHS guidance underscores the significant role of AI in the health care industry and its unwavering commitment to harnessing its potential. By prioritizing the application and development of AI, collaborating with external stakeholders, and establishing effective governance structures, HHS aims to drive innovation, improve patient care, and enhance health outcomes. As AI continues to evolve, its integration into the vast and complex health care ecosystem holds immense promise for the future of health care. Health care organizations, including hospital systems, physician groups, laboratories, and other organizations in the health care industry, should consider following HHS’s guidance to embrace AI in a responsible, ethical, and legal manner.

Click here to learn more about the HHS AI approach. 

Most organizations and online platforms use multifactor authentication (MFA) (also called two-factor authentication) to confirm that the user is an authorized individual and not a scammer or fraudster. We have all been trained to use MFA through our workplaces to gain access to our work emails; tech companies offering free email services are suggesting that users deploy MFA, and online banking and other platforms use MFA to authenticate customers. We are getting used to receiving MFA codes as a push to authenticate us before we can access the application. We click “It’s me” or “Yes” and we are in.

Unfortunately, because we have become so accustomed to MFA pushes, scammers and cybercriminals know that users will often approve a push without pausing to check whether it is one they actually generated. It is the perfect scam, and they are using it.

How does MFA fatigue happen? Usually, the threat actor first obtains the user’s credentials through social engineering, a phishing attack, or compromised credentials purchased on the dark web. (Note to readers: Don’t ever give up your credentials.) The scammer then uses the credentials and triggers a rapid series of MFA pushes to the real user through email or text. The user gets a barrage of pushes, which is annoying, and may click “yes” just to make them stop, or may assume the MFA system is stuck. Once the user clicks “yes,” the threat actor is in the account and can use that entry to carry out a scam.
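
To make the attack pattern concrete, here is a minimal sketch of how a security team might flag a burst of MFA pushes in authentication logs. The five-push threshold, ten-minute window, and log format are illustrative assumptions, not any particular vendor’s API:

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical threshold: more than 5 pushes to one user within
# 10 minutes is treated as a possible MFA-fatigue attack.
BURST_LIMIT = 5
WINDOW = timedelta(minutes=10)

def find_push_bursts(events):
    """events: iterable of (timestamp, username) MFA push records,
    sorted by timestamp. Yields (username, timestamp) when a burst is seen."""
    recent = {}  # username -> deque of that user's recent push timestamps
    for ts, user in events:
        window = recent.setdefault(user, deque())
        window.append(ts)
        # Drop pushes that have fallen out of the sliding window.
        while window and ts - window[0] > WINDOW:
            window.popleft()
        if len(window) > BURST_LIMIT:
            yield user, ts

# Example: a user hit with eight pushes in two minutes gets flagged.
if __name__ == "__main__":
    base = datetime(2024, 2, 1, 9, 0)
    log = [(base + timedelta(seconds=15 * i), "jdoe") for i in range(8)]
    for user, ts in find_push_bursts(log):
        print(f"Possible MFA fatigue attack against {user} at {ts}")
```

A pattern like this, paired with automatic lockout or help-desk escalation, turns the very behavior that makes MFA fatigue work (the rapid-fire pushes) into a detection signal.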

Individuals should remain vigilant, be suspicious of multiple MFA pushes, and never click “yes” unless they have performed some activity that would generate a push. If you receive multiple pushes, you may wish to call your IT help desk.

Companies may wish to consider increasing employee education about MFA fatigue so they will remain vigilant against an attack.

Here is some background and more tips to combat MFA fatigue.