On April 12, 2023, the U.S. Department of Health & Human Services (HHS) released a Notice of Proposed Rulemaking (Proposed Rule) that seeks to enhance safeguards of reproductive health care information through changes to the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. The proposal is intended to align with President Biden’s Executive Order 14076, which instructed HHS to examine avenues to reinforce protections of HIPAA protected health information (PHI) and patient-provider confidentiality in the wake of the U.S. Supreme Court’s 2022 decision in Dobbs v. Jackson Women’s Health Organization.

Strengthening the HIPAA Privacy Rule

According to HHS, the Dobbs decision “makes it more likely” that PHI may be disclosed in ways that impair the privacy interests HIPAA “seeks to protect.” HHS is concerned that these developments increase the potential for improper uses or disclosures of PHI that may “undermine access to and quality of health care generally,” in part because “medical mistrust” can create “damaging and chilling effects” on access to essential care, particularly in vulnerable communities. A fundamental principle underlying the Privacy Rule has long been the need to appropriately protect the relationship of trust between patients and providers, while also preserving patients’ access to that information. Under the Privacy Rule, this principle is reasonably balanced against the interests of providers and society in allowing appropriate disclosures of PHI, including for treatment or operational purposes. In response to post-Dobbs legislation and policy proposals that in HHS’s view threaten that privacy and trust, and thus threaten that balance, HHS determined that “information about reproductive health care… requires heightened protections” under HIPAA because of its sensitivity.

Accordingly, in the Proposed Rule HHS seeks specifically to restrict the use and disclosure of certain PHI for “non-health care purposes,” and in doing so proposes to establish conditional restrictions on uses and disclosures based on whether the PHI includes reproductive health care information. Similar to the Privacy Rule’s protections of psychotherapy notes, this Proposed Rule seeks to implement safeguards to protect reproductive health care information. However, in recognition that reproductive health care information is embedded within a patient’s medical records and cannot readily be separated (as in the case of psychotherapy notes), HHS proposes a “purpose-based prohibition on certain uses and disclosures” to protect individuals and their PHI.

How would the Proposed Rule change the Privacy Rule regulations?

In the Proposed Rule, HHS proposes a new category of prohibited uses and disclosures, which would bar:

            “using or disclosing an individual’s PHI for the purpose of conducting a criminal, civil, or administrative investigation into or proceeding against the individual, a health care provider, or other person in connection with seeking, obtaining, providing, or facilitating reproductive health care that:

            (1) is provided outside of the state where the investigation or proceeding is authorized and such health care is lawful in the state in which it is provided;

            (2) is protected, required, or authorized by Federal law, regardless of the state in which such health care is provided; or

            (3) is provided in the state in which the investigation or proceeding is authorized and that is permitted by the law of that state.”

The Proposed Rule further would prohibit “using or disclosing an individual’s PHI for the purpose of identifying an individual, health care provider, or other person for the purpose of initiating such an investigation or proceeding against the individual, a health care provider, or other person in connection with seeking, obtaining, providing, or facilitating reproductive health care that is lawful under the circumstances in which it is provided.”

To protect individuals under HIPAA, HHS proposes a new requirement that entities obtain an attestation prior to certain uses and disclosures of PHI made without the individual’s authorization under the Privacy Rule (at 45 C.F.R. § 164.512), by adding a new regulation to the Privacy Rule (which would be found at 45 C.F.R. § 164.509). This would require certain parties seeking PHI from covered entities (or their business associates) to submit an attestation – limited to the specific use or disclosure – stating that the use or disclosure is not for a prohibited purpose related to reproductive health care, as a condition of use or disclosure without an authorization (i) for health oversight purposes, (ii) for judicial and administrative proceedings, (iii) for law enforcement purposes, or (iv) regarding decedents to coroners or medical examiners.

HHS also proposes certain additional definitions and changes to the regulations deemed necessary to operationalize these changes and implement the Proposed Rule. These include, among other things, a proposal to require covered entities to update their Notices of Privacy Practices (NPPs) to ensure the NPPs address the new proposed safeguards for reproductive health care PHI.

Rules of Applicability and Construction

In the Proposed Rule, HHS incorporates a proposed “Rule of Applicability” to guide when the new proposed prohibition related to reproductive health care PHI applies. Specifically, the Rule of Applicability provides that the prohibition applies where one or more of the following exist:

            (1) The relevant criminal, civil, or administrative investigation or proceeding is in connection with any person seeking, obtaining, providing, or facilitating reproductive health care outside of the state where the investigation or proceeding is authorized and where such health care is lawful in the state in which it is provided;

            (2) The relevant criminal, civil, or administrative investigation or proceeding is in connection with any person seeking, obtaining, providing, or facilitating reproductive health care that is protected, required, or authorized by Federal law, regardless of the state in which such health care is provided; or

            (3) The relevant criminal, civil, or administrative investigation or proceeding is in connection with any person seeking, obtaining, providing, or facilitating reproductive health care that is provided in the state in which the investigation or proceeding is authorized and that is permitted by the law of that state.

In addition, in recognition of the potential challenge to covered entities and business associates posed by a new “purpose-based” prohibition on uses and disclosures of reproductive health care PHI, and the reality that such information may be embedded throughout patient medical records, in the Proposed Rule HHS proposes a “Rule of Construction” to guide covered entities. This Rule states that a use or disclosure that otherwise would be permitted under the Privacy Rule would be prohibited only where it is “primarily for the purpose of investigating or imposing liability on any person for the mere act of seeking, obtaining, providing, or facilitating reproductive health care.” As an example, per HHS the Rule of Construction clarifies that the Proposed Rule “does not inhibit the ability of a covered health care provider to use or disclose [PHI] to defend themselves” in an investigation or litigation related to professional practice.

Comments on the Proposed Rule; Fact Sheet

HHS has issued a Fact Sheet (available here) which describes the Proposed Rule and provides additional information regarding public comment submission.

HHS is accepting public comments on the Proposed Rule through June 16, 2023. During the 60-day public comment period, the existing Privacy Rule remains in effect.

*This post was co-authored by Paul Sevigny, legal intern at Robinson+Cole. Paul is not admitted to practice law.

This post is also being shared on our Health Law Diagnosis blog. If you’re interested in getting updates on developments affecting health information privacy and HIPAA related topics, we invite you to subscribe to the blog. 

On April 11, 2023 – one month in advance of the end of the COVID-19 public health emergency (PHE) on May 11, 2023 – the federal Office for Civil Rights (OCR) confirmed that various Notifications of Enforcement Discretion issued under HIPAA during the PHE will expire at the end of the day on May 11, 2023.

OCR’s notice applies to four Notifications of Enforcement Discretion of HIPAA related to the following circumstances:

  1. COVID-19 Community-Based Testing Sites during the PHE (available here);
  2. Telehealth Remote Communications during the PHE (available here);
  3. Uses and Disclosures of PHI by Business Associates for Public Health and Health Oversight Activities (available here); and
  4. Online or Web-Based Scheduling Applications for Scheduling COVID-19 Vaccination Appointments (available here).

In its announcement, OCR noted that it is supporting continued utilization of telehealth services by providing a 90-day transition period for providers to “come into compliance” with HIPAA requirements applicable to the provision of telehealth, starting May 12, 2023, and ending August 9, 2023.  OCR also states in its notice that it “will provide additional guidance on telehealth remote communications to help covered health care providers come into compliance during this transition period.”  Health care providers and organizations should therefore assess their current telehealth services and programs, and also be on the lookout for additional guidance to support continued delivery of telehealth services in a compliant manner once the PHE ends.


Russia-linked ransomware gang Clop has claimed that it has attacked over 130 organizations since late January by using a zero-day vulnerability in the GoAnywhere MFT secure file transfer tool, and successfully stole data from those organizations. The vulnerability is CVE-2023-0669, which allows attackers to execute code remotely.

Fortra, the manufacturer of GoAnywhere MFT, notified customers of the vulnerability on February 1, 2023, and issued a patch for the vulnerability on February 7, 2023.

The Health Sector Cybersecurity Coordination Center (HC3) issued an alert on February 22, 2023, warning the health care sector that Clop is targeting healthcare organizations and recommending that they:

  • Educate and train staff to reduce the risk of social engineering attacks via email and network access.
  • Assess enterprise risk against all potential vulnerabilities and prioritize implementing the security plan with the necessary budget, staff, and tools.
  • Develop a cybersecurity roadmap that everyone in the healthcare organization understands.

Security professionals are recommending that information technology professionals update machines to the latest GoAnywhere version and “stop exposing port 8000 (the internet location of the GoAnywhere MFT admin panel).”
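As a sketch of that second recommendation (assuming a Linux host running the ufw firewall; the 10.0.0.0/24 subnet below is a placeholder for an internal admin network, not a value from any vendor guidance), access to the admin panel port can be restricted so that it is reachable only internally:

```shell
# Permit the GoAnywhere MFT admin panel (port 8000) only from a trusted
# internal subnet (10.0.0.0/24 is a placeholder). ufw evaluates rules in
# order, so the allow rule must be added before the broad deny rule.
ufw allow proto tcp from 10.0.0.0/24 to any port 8000
ufw deny 8000/tcp
```

Blocking the port at the firewall is a stopgap, not a substitute for applying the vendor’s patch.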

New York Attorney General Letitia James announced on March 27, 2023, that she had levied a fine against law firm Heidell, Pittoni, Murphy & Bach LLP for failing to secure clients’ personal and health information, which was then exposed in a data breach.

According to the press release, the law firm agreed to pay $200,000 for “poor data security measures [that] made it vulnerable to a 2021 data breach” and compromised the private information of 61,438 individuals residing in New York. James stated, “The law firm represents New York City area hospitals and maintains sensitive private information from patients,” and its “data security failures violated not only state law, but also HIPAA.”

The incident occurred when a threat actor exploited an unpatched vulnerability in the law firm’s Microsoft Exchange server, resulting in the exposure of the personal and health information of 114,979 individuals. The Attorney General found that the firm “failed to adopt several measures required by HIPAA . . . including conducting regular risk assessments of its systems, encrypting the private information on its servers, and adopting appropriate data minimization practices.” The agreement requires the law firm to pay the fine and strengthen its cybersecurity measures.

As artificial intelligence, also known as “AI,” becomes more of a household word, it is worth pointing out not only how cool it can be, but also how some of its uses raise privacy concerns.

The rapid growth of technological capabilities often outpaces our ability to understand their long-term implications for society. Decades later, we find ourselves looking back and wishing that the development of certain technologies had been more measured and controlled to mitigate risk. The massive explosion of smartphones and social media is a case in point: studies today show clear negative consequences from the proliferation of certain technology.

Although AI has been under development for years, it is still in its early stages. It is not yet widely used by individuals, though it is clear that we are on the cusp.

The privacy risks of AI have been outlined in an article published in The Digital Speaker, Privacy in the Age of AI: Risks, Challenges and Solutions. The author succinctly summarizes the concerns about privacy in the use of AI:

Privacy is crucial for a variety of reasons. For one, it protects individuals from harm, such as identity theft or fraud. It also helps to maintain individual autonomy and control over personal information, which is essential for personal dignity and respect. Furthermore, privacy allows individuals to maintain their personal and professional relationships without fear of surveillance or interference. Last, but not least, it protects our free will; if all our data is publicly available, toxic recommendation engines will be able to analyse our data and use it to manipulate individuals into making certain (buying) decisions.

In the context of AI, privacy is essential to ensure that AI systems are not used to manipulate individuals or discriminate against them based on their personal data. AI systems that rely on personal data to make decisions must be transparent and accountable to ensure that they are not making unfair or biased decisions.

The article lists the privacy concerns raised by the use of AI, including violations of individual privacy, bias and discrimination, job displacement, data abuse, the power of big tech over data, the collection and use of data by AI companies, and the use of AI in surveillance by private companies and law enforcement. The examples used by the author are eye-opening and worth a read. The article sets forth a cogent, broad, and thoughtful path forward for the development and use of AI.

The World Economic Forum published a paper last year (before ChatGPT was in most people’s vocabulary) also outlining some of the privacy concerns raised by the use of AI and why privacy must be included in the design of AI products. The article posits:

Massive databases might encompass a wide range of data, and one of the most pressing problems is that this data could be personally identifiable and sensitive. In reality, teaching algorithms to make decisions does not rely on knowing who the data relates to. Therefore, companies behind such products should focus on making their datasets private, with few, if any, ways to identify users in the source data, as well as creating measures to remove edge cases from their algorithms to avoid reverse-engineering and identification….

We have talked about the issue of reverse engineering, where bad actors discover vulnerabilities in AI models and discern potentially critical information from the model’s outputs. Reverse engineering is why changing and improving databases and learning data is vital for AI use in cases facing this challenge….

As for the overall design of AI products and algorithms, de-coupling data from users via anonymization and aggregation is key for any business using user data to train their AI models….

AI systems need lots of data, and some top-rated online services and products could not work without personal data used to train their AI algorithms. Nevertheless, there are many ways to improve the acquisition, management, and use of data, including the algorithms themselves and the overall data management. Privacy-respecting AI depends on privacy-respecting companies.
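As a minimal, hedged illustration of the de-coupling the paper describes (all field names and records below are invented for this sketch), direct identifiers can be pseudonymized and records aggregated into coarse buckets before any data is used for training:

```python
import hashlib

# Invented example records; in practice these would be raw user events.
records = [
    {"user": "alice@example.com", "age": 34, "purchases": 3},
    {"user": "bob@example.com", "age": 36, "purchases": 5},
    {"user": "alice@example.com", "age": 34, "purchases": 2},
]

def pseudonymize(record, salt="rotate-this-salt"):
    """Replace the direct identifier with a salted hash. This is
    pseudonymization, not full anonymization -- the salt must be
    protected and rotated, or the mapping can be rebuilt."""
    token = hashlib.sha256((salt + record["user"]).encode()).hexdigest()[:12]
    return {**record, "user": token}

def aggregate(recs, bucket=10):
    """Roll purchases up into coarse age buckets so that no single
    user is identifiable in the training data."""
    totals = {}
    for r in recs:
        key = (r["age"] // bucket) * bucket  # e.g. 34 -> 30
        totals[key] = totals.get(key, 0) + r["purchases"]
    return totals

pseudonymized = [pseudonymize(r) for r in records]
print(aggregate(records))  # {30: 10}
```

Neither step alone is sufficient, but together they move a dataset toward the “few, if any, ways to identify users in the source data” that the paper calls for.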

Both articles give a good background on the privacy concerns posed by the use of AI, along with solutions for its development and use that are worth considering as part of a more comprehensive approach to the future of the collection, use, and disclosure of big data. Hopefully, we will learn from past mistakes, encourage the use of AI for good purposes, and minimize its use for nefarious ones. Now is the time to develop a comprehensive strategy and work together to implement it. One way we can help is to stay abreast of the issues and concerns and use our voices to advocate for a comprehensive approach to the problem.

The FBI, CISA and the Multi-State Information Sharing and Analysis Center (MS-ISAC) recently released a joint cybersecurity advisory, warning organizations about indicators of compromise, and tactics, techniques, and procedures that have been associated with LockBit 3.0 ransomware.

The Advisory, #StopRansomware: LockBit 3.0, states that LockBit 3.0 is an affiliate-based ransomware variant that functions under a Ransomware-as-a-Service model and is a continuation of its predecessors, LockBit and LockBit 2.0.

LockBit 3.0, also known as LockBit Black, is more evasive than its predecessors and “shares similarities with Blackmatter and Blackcat ransomware.” Attackers using LockBit 3.0 gain access to networks through remote desktop protocol, drive-by compromise, phishing campaigns, abuse of valid accounts, and exploitation of public-facing applications. Once inside the victim’s network, the attackers escalate privileges and move laterally. They then exfiltrate data using Stealbit or publicly available legitimate file-sharing services, encrypt the files, and finally send a ransom note to the victim.

The Advisory outlines indicators of compromise and suggestions for mitigation. Those suggestions include:

  • Prioritize remediating known exploited vulnerabilities;
  • Train users to recognize and report phishing attempts; and
  • Enable and enforce phishing-resistant multifactor authentication.

The New York City Department of Consumer and Worker Protection will delay enforcement of Local Law 144 until April 15, 2023. The law requires companies operating in the City to audit automated employment decision tools for bias prior to use and to post these audit reports publicly. It also requires companies to notify job candidates (and employees residing in the city) that they will be evaluated by automated decision-making tools and to disclose the qualifications and characteristics that the tool considers. The AI bias law still has an effective date of January 1, 2023, and violations of it are subject to a civil penalty.

The City is delaying enforcement due to a “substantial volume of thoughtful comments” from concerned parties. Most of these comments likely came from NYC-area businesses, many of which use AI tools in hiring. These tools generally rank resumes and filter out low-quality applicants.

Bias in AI is difficult to isolate. These technologies tend to be black boxes, and companies that use third-party AI services may not have access to the ins and outs of a system. Even if a business develops an AI system with the purest of intentions, bias can creep in. AI bias derives from programming, baselines, and inputs established by people, and people are inherently biased.

For example, suppose a company trains its AI hiring system by feeding it past resumes and hiring decisions to teach the AI what a “successful” resume looks like. The AI then categorizes and scores new applicants’ resumes based on how well they compare to the baselines set by the training. The company has been historically white-dominated and has hired fewer qualified candidates from Historically Black Colleges and Universities (HBCUs). The AI picks up on this trend as one of several factors that predict whether a candidate is “hirable” to the company. Even though the company’s leadership is dedicated to increasing diversity, the AI system filters out many qualified Black candidates.
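A minimal sketch of how this mechanism works (all data here is invented for illustration, and a real screening model would be far more complex): a naive screener that scores candidates by the historical hire rate of applicants who share their background reproduces past disparities even when every candidate is equally qualified.

```python
# Invented historical hiring records: (school_type, qualified, hired).
# Every applicant below is equally qualified, but the company's past
# decisions hired "other" applicants far more often than HBCU applicants.
history = [
    ("other", True, True), ("other", True, True),
    ("other", True, True), ("other", True, False),
    ("hbcu", True, True), ("hbcu", True, False),
    ("hbcu", True, False), ("hbcu", True, False),
]

def hire_rate(records, school_type):
    group = [r for r in records if r[0] == school_type]
    return sum(1 for _, _, hired in group if hired) / len(group)

def score(candidate_school_type):
    # The "model" is just a lookup of past outcomes. Any learner fit to
    # these labels would pick up the same signal, because the labels
    # themselves encode the historical disparity.
    return hire_rate(history, candidate_school_type)

print(score("other"))  # 0.75
print(score("hbcu"))   # 0.25 -- same qualifications, lower score
```

The point of the sketch is that the bias lives in the training labels, not in any explicit rule, which is exactly why it is hard to isolate by inspecting the system.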

While New York City’s law is on ice for now, some states are beginning to address AI bias as well. For example, the California Consumer Privacy Act (CCPA, as amended by the California Privacy Rights Act) requires businesses to allow consumers to opt out of automated decision-making technologies, and the California Privacy Protection Agency is expected to propose additional regulations in this area.

Additionally, employees are beginning to challenge allegedly biased AI tools in court. HR technology giant Workday is currently facing a class-action suit alleging that its system is biased against Black and older applicants. (Mobley v. Workday, Inc., Docket No. 3:23-cv-00770 (N.D. Cal. Feb 21, 2023)). The regulation of AI will almost certainly continue to develop as this technology becomes increasingly integrated in everyday life. For the time being, businesses can look to the U.S. Equal Employment Opportunity Commission’s guidance statement on AI hiring tools and the Americans With Disabilities Act.

Hackers are always looking for the next opportunity to launch attacks against unsuspecting victims. According to Cybersecurity Dive, researchers at Proofpoint recently observed “a phishing campaign designed to exploit the banking crisis with messages impersonating several cryptocurrencies.”

According to Cybersecurity Dive, cybersecurity firm Arctic Wolf has observed “an uptick in newly registered domains related to SVB [Silicon Valley Bank] since federal regulators took over the bank’s deposits…” and “expects some of those domains to serve as a hub for phishing attacks.”

This is the modus operandi of hackers. They use times of crisis, when victims are vulnerable, to launch attacks. Phishing campaigns continue to be one of the top risks to organizations, and following the recent bank failures, everyone should be extra vigilant about urgent financial requests and emails spoofing financial institutions, and should take additional measures, such as requiring multiple levels of authorization, when conducting financial transactions.

Following these recent financial failures, we anticipate increased attack activity against individuals and organizations. Communicating the increased risk to employees may be worth considering.

Chinese company ByteDance faces growing concerns from governments and regulators that user data from its popular short video-sharing app TikTok could be handed over to the Chinese government. The concern is based on China’s national security laws, which give its government the power to compel Chinese-based companies to hand over any user data. More than 100 million Americans have reportedly downloaded the app onto their devices.

In its defense, ByteDance maintains that TikTok operates independently of ByteDance, that all TikTok app user data is held on servers outside of China, and that it does not share data with the Chinese government. ByteDance also claims that other social media companies collect far more user data than TikTok does, yet are not being threatened with bans.

Concerns about TikTok have existed for years. Since 2017, the Committee on Foreign Investment in the United States (CFIUS), which investigates foreign investments in U.S. companies that pose a potential national security risk, has been reviewing ByteDance’s practices as a result of ByteDance’s acquisition of U.S. company Musical.ly. CFIUS’ investigation into the ByteDance/Musical.ly transaction remains open because of unresolved concerns about ByteDance’s use of user data, the potential that data could be passed on to the Chinese government, and the inability to monitor or enforce whatever restrictions ByteDance might agree to. However, CFIUS has suggested that ByteDance should divest TikTok’s American operations.

Meanwhile, more than 30 states and now the Biden Administration have banned government employees from using the TikTok app on government-owned devices. In Congress, the House Foreign Affairs Committee voted to advance a bill, known as the Deterring America’s Technology Adversaries Act (DATA Act), that would ban anyone in the United States from accessing or downloading the TikTok app on their phones. If enacted into law, this would mean that Apple and Google would no longer be able to offer the TikTok app in their app stores. ByteDance is reportedly talking with Apple and Google about a data security plan that ByteDance has proposed to CFIUS, to be sure the plan would also be acceptable to Apple and Google. The plan purportedly includes having Oracle host TikTok’s U.S. user data on its servers, as well as vet TikTok’s software and updates before they are sent to the app stores.

The U.S. is not alone in raising security concerns over the TikTok app. Canada, the European Parliament, the European Commission, and the EU Council have banned the TikTok app from government- or organization-owned devices. Some also require employees and staff to remove the TikTok app from personal devices with access to government or organization systems. Most have also recommended that lawmakers and employees remove the TikTok app from their personal devices, even if those devices don’t access government or organization systems. Pakistan and Afghanistan have also imposed bans on TikTok, but because of its content, not because of security concerns.

Some countries have gone even further to impose outright bans on the TikTok app. In 2021, India imposed a permanent ban on the TikTok app and several other Chinese apps. In December 2022, Taiwan imposed a public sector ban on the TikTok app after the FBI warned that the TikTok app posed a national security risk. 

While TikTok is the current focus of legislators and regulators, some say security developments at other social media platforms should also be kept under constant review. The DATA Act bill would also require President Biden to impose a ban on companies transferring sensitive personal data to an entity subject to the influence of China, although the details of this provision are not completely clear from the bill.

It used to be that one sure way to identify a phishing email was to notice grammatical errors or broken English in the text of the communication. Thanks to translation tools like Google Translate, which are available worldwide, threat actors can translate a phishing email into any language so that it sounds authentic to the recipient, and effortlessly pull off a business email compromise (BEC) attack.

Unfortunately, that is exactly what two threat actor groups are doing as we speak. According to Abnormal Intelligence, threat groups Midnight Hedgehog, “which engages in payment fraud,” and Mandarin Capybara, “a group that executes payroll diversion attacks” have “launched BEC campaigns in at least 13 different languages.”

According to Abnormal Intelligence, threat actors are using the same legitimate commercial tools that sales and marketing teams use to launch BEC campaigns, including collecting “leads” in a state or country. Using translation tools, they can launch multiple campaigns in different countries using the same text translated into the native language.

Midnight Hedgehog launches payment fraud attacks by targeting finance personnel and executives involved in financial transactions by spoofing the CEO. Before doing so, they “thoroughly research their target’s responsibilities and relationship to the CEO and then create spoofed email accounts that mimic a real account.”

The Mandarin Capybara group also impersonates executives and targets human resources personnel to carry out payroll diversion schemes to change direct deposit information to divert the executive’s pay to a fraudulent bank account. To combat these attacks, Abnormal Intelligence suggests that companies “put procedures in place to verify outgoing payments and payroll updates and keep your workforce vigilant with security awareness training.” It also suggests beefing up security through behavioral analytics.