OCR Comments on Recent Ciox Case Vacating Certain Omnibus Rule Regulations and Guidance Relating to Fees for Providing Patient Records

The U.S. Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) issued an Important Notice Regarding Individuals’ Right of Access to Health Records through its email listserv on January 29, 2020. In the Notice, OCR addressed the recent Memorandum Opinion issued in Ciox Health v. Azar, et al., No. 18-cv-00040 (D.D.C. January 23, 2020).

In that case, Ciox Health, LLC, a specialized medical records provider, had challenged certain provisions of the 2013 Omnibus Rule, including provisions pertaining to what can be charged for delivering records containing protected health information (PHI). One cited issue was whether the limitations on fees for these services applied only to requests for PHI that are made by the patient, for use by the patient (the Patient Rate), or whether the limitations also applied to PHI to be delivered to third parties.

An OCR guidance document published in 2016 (the 2016 Guidance) stated the Patient Rate would apply to patient requests, even where the requests directed the delivery of PHI to third parties. The 2016 Guidance noted that the Patient Rate would not apply to requests being made by a third party pursuant to a HIPAA authorization signed by the patient, but cautioned against circumventing the fee limit by treating individual requests for access like other HIPAA disclosures, such as by having an individual fill out a HIPAA authorization when the individual requests access to PHI, including directing a copy to a third party.  The 2016 Guidance also described the types of labor costs that are recoverable, and identified methods for calculating the Patient Rate.  The case additionally challenged a regulation in the 2013 Omnibus Rule that required PHI sent to third parties to be provided in the form and format requested by the patient, if readily producible in that form and format.

The Court ruled in favor of OCR on one of the issues — holding that identifying the methods for calculating the Patient Rate was not a reviewable final agency action.

The Court vacated and declared unlawful the “Patient Rate expansion” in the 2016 Guidance and the Omnibus Rule’s “mandate broadening PHI delivery to third parties regardless of format.” The Court held:

(1) HHS’s 2013 rule compelling delivery of PHI to third parties regardless of the records’ format is arbitrary and capricious insofar as it goes beyond the statutory requirements set by Congress; (2) HHS’s broadening of the Patient Rate in 2016 is a legislative rule that the agency failed to subject to notice and comment in violation of the APA; and (3), HHS’s 2016 explanation concerning what labor costs can be recovered under the Patient Rate is an interpretative rule that HHS was not required to subject to notice and comment.

The Court cited the HITECH Act, noting that it is silent on the allowable fees for PHI when an individual requests or directs the information be provided to a third party and, instead, restricts the fee to labor costs for “providing such individual” a copy of the information.

As OCR explained in its recent Notice, as a result of the Court’s ruling, “the fee limitation set forth at 45 C.F.R. § 164.524(c)(4) will apply only to an individual’s request for access to their own records, and does not apply to an individual’s request to transmit records to a third party.” OCR cautioned, however, that the right of individuals to access their own records and the fee limitations that apply in that context “are undisturbed and remain in effect” and that OCR will “continue to enforce the right of access provisions in 45 C.F.R. § 164.524 that are not restricted by the court order.”

This post was authored by Lisa Thompson and is also being shared on our Health Law Diagnosis blog. If you’re interested in getting updates on developments affecting health information privacy and HIPAA related topics, we invite you to subscribe to the blog.

Ransomware Attacks More Frequent and Recovery Efforts Extended in 2020

A new report published by Coveware concludes that companies hit with ransomware attacks spend an average of 16 days recovering. Think about being offline and unable to do business for 16 days: it is extremely disruptive and costly. Larger organizations take longer to recover than smaller ones, and larger organizations are getting hit more frequently with ransomware attacks in 2020 than in 2019. That means that companies must expect these attacks, prepare for them, and be ready to respond to them.

We are finding that companies are not ready when they are hit with ransomware, even when they know the risk of a ransomware attack is real and more likely now than ever. Day-to-day business operations tend to overshadow preparedness for cyber-attacks, so an attack takes these companies by surprise and hits them harder than it would if they had prepared.

Preparing for cyber-attacks, including ransomware attacks, should be a normal business process and part of a risk management strategy. Too many companies are ignoring this risk and attacks are blindsiding them.

Take heed from reports from security experts that ransomware attacks are on the rise, assume you will be hit, and prepare a response strategy.

Super Bowl LIV—No Drone Zone

The Federal Aviation Administration (FAA), along with federal, state and local law enforcement agencies, is informing drone operators of the restrictions on drone flights before, during and after Super Bowl LIV on Sunday, February 2 at Hard Rock Stadium in Miami Gardens, Florida. The FAA expects more than 2,500 additional take-offs and landings and almost 1,300 additional aircraft to be parked at South Florida airports during Super Bowl week. Therefore, Temporary Flight Restrictions (TFRs) and a “No Drone Zone” will limit flights around Hard Rock Stadium.

The Game Day TFR goes into effect at 5:30 p.m. EST and covers a 34.5-mile ring centered over the stadium, extending from the ground up to 18,000 feet in altitude. The TFR expires at 11:59 p.m. EST. Drones cannot fly inside the TFR zone. Additionally, no drones are permitted to fly around the Miami Beach Convention Center or Bayfront Park up to an altitude of 2,000 feet (restrictions that began January 25 and end on February 1, during daytime hours).
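For rough flight planning, the Game Day restriction reduces to a great-circle distance check: is a proposed launch point within 34.5 miles of the stadium? Below is a minimal Python sketch; the stadium coordinates are approximate and assumed for illustration, and no home-grown check is a substitute for the FAA’s official TFR notices or the B4UFLY app.

```python
import math

# Approximate coordinates of Hard Rock Stadium (assumed for illustration).
STADIUM = (25.958, -80.239)
TFR_RADIUS_MI = 34.5
EARTH_RADIUS_MI = 3958.8

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points in statute miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(h))

def inside_game_day_tfr(point):
    """True if a proposed launch point falls inside the 34.5-mile ring."""
    return haversine_miles(STADIUM, point) <= TFR_RADIUS_MI

print(inside_game_day_tfr((25.790, -80.130)))  # Miami Beach: True, inside the ring
```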

Manned aircraft pilots and drone operators alike who enter the TFRs without permission could face civil penalties of up to $30,000 and potential criminal prosecution. The takeaway? Stay away from Super Bowl LIV with your drones and watch the game (and the commercials) from the couch.

USPS Issues Request for Information from the Unmanned Aircraft Industry

The United States Postal Service (USPS) recently issued a Request for Information (RFI) from industry experts on unmanned aircraft systems used for letter or parcel delivery. USPS says that it is merely investigating the feasibility of using drones as an integrated part of its vehicle delivery fleet, as well as to provide image and other data collection services.

Drones could aid USPS with “long driveway delivery,” rural package delivery, and delivery to remote or rugged locations. Additionally, USPS could use drones as part of a “ride sharing model” in which other businesses would use USPS drones to deliver their products, or drone service providers would use USPS drones for non-delivery tasks such as power line inspection.

The RFI stated, “[USPS]’s investigation is focusing on developing solutions for remote piloted aircraft operations for delivery of mail beyond visual line of sight, as well as developing universal standard navigation capabilities for UAS, secure data protocols, and best practices for maintenance and training programs in the UAS arena.”

USPS is currently reviewing the materials and information it has received.

Privacy Tip #224 – Please Prepare for a Ransomware Attack

I am on vacation this week in beautiful Jackson Hole. The skiing is epic, the restaurants are amazing, 1921 silver dollars are inlaid in the tops of two bars, elk and moose abound, and I’ve spotted several coyotes, a badger, and owls. Sounds idyllic. It is, for the most part, except for the ransomware attacks.

Technology intrusions, specifically ransomware attacks, have stormed the pristine, majestic, and peaceful snow-capped mountains.

Threat actors don’t care that you are on vacation or that you have other priorities in your business that day. They know that you know security is important, but that you aren’t putting it as high on your priority list as you should. They know that you have not educated your employees on cyber-risks and that those employees will click on links and attachments that they shouldn’t, links and attachments that may be laced with malware or ransomware. They know that we are all working too hard and too fast. They know that someone in our organization will make a mistake. They know that we are dependent on technology and data, and they know where we are most vulnerable. So they attack and attack until they get in. And they complicate my vacation in a peaceful place.

We continually warn about ransomware attacks; we can’t warn you enough. We write about them every week and caution about them in presentations, trainings, and webinars. Let’s be super clear: we are seeing more ransomware attacks right now than ever before. All we can say is prepare for ransomware attacks now, as they are more rampant, more frequent, and more vicious than we have ever seen. They are disruptive, and preparing for them before you get hit will ease that disruption so you can recover more quickly.

IoT Manufacturers – What You Need to Know About California’s IoT Law

California has a privacy law that took effect on January 1, 2020, and it’s not the California Consumer Privacy Act (CCPA). This new privacy law regulates Internet of Things (IoT)-connected devices. SB 327 was enacted in 2018 and became effective on January 1, 2020. The California IoT law requires manufacturers of connected devices to equip the device with a reasonable security feature or features that are all of the following:

  • appropriate to the nature and function of the device;
  • appropriate to the information the device may collect, contain, or transmit; and
  • designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.

So which manufacturers must comply with this new law and what is considered a connected device?

A manufacturer is defined as the person who manufactures, or contracts with another person to manufacture on the person’s behalf, connected devices that are sold or offered for sale in California. This seems clear enough – if you manufacture a connected device that is sold or offered for sale in California, the California IoT law applies.

What is a connected device?

A connected device is any device or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address. Smart phones, watches, speakers, wearable devices, televisions, thermostats, doorbells (the list is almost endless) are all examples of connected devices.

What is a reasonable security feature?

The law states that a security feature shall be deemed reasonable if either of the following requirements is met (see the sketch after the list):

(1) The preprogrammed password is unique to each device manufactured; or

(2) The device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.
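In practice, both options amount to never shipping a shared default credential. Below is a minimal, hypothetical Python sketch of option (2): a first-boot flow that grants no access at all until the user generates a new credential. The file path, iteration count, and function names are illustrative assumptions, not drawn from the statute or any real device.

```python
import hashlib
import json
import os
import secrets

CRED_FILE = "/var/device/credential.json"  # hypothetical credential store

def first_boot_setup(new_password: str) -> None:
    """Require the user to generate a new means of authentication
    before the device grants access for the first time."""
    if os.path.exists(CRED_FILE):
        raise RuntimeError("Device already provisioned")
    if len(new_password) < 12:
        raise ValueError("Password too short")
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 200_000)
    with open(CRED_FILE, "w") as f:
        json.dump({"salt": salt.hex(), "hash": digest.hex()}, f)

def authenticate(password: str) -> bool:
    """No default credential exists, so access is impossible until setup runs."""
    if not os.path.exists(CRED_FILE):
        return False  # unprovisioned device grants no access
    with open(CRED_FILE) as f:
        record = json.load(f)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(record["salt"]), 200_000
    )
    return secrets.compare_digest(digest.hex(), record["hash"])
```

Option (1) would instead provision a random, per-device password at the factory, so that no two units ever share a credential.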

California joins Oregon in requiring reasonable security features for IoT devices. For more information on the Oregon IoT law, see our previous blog post here. Both of these laws mean that manufacturers must incorporate these security measures into connected devices. As a practical matter, these security features mean that IoT devices will be less vulnerable to attack, since they will no longer work with the “generic” default password set by a manufacturer.

Crime-as-a-Service Targets Popular Platforms

It’s getting difficult to keep up with the jargon of all of the new digital scams. The first “-as-a-Service” offerings became regular business terms, such as Software-as-a-Service (SaaS) and Business Process-as-a-Service (BPaaS). But then criminal enterprises came up with Malware-as-a-Service (MaaS), Ransomware-as-a-Service (RaaS), and now Crime-as-a-Service (CaaS).

A new Crime-as-a-Service offering is targeting PayPal, Apple, and Amazon accounts. The attack vector is a phishing kit dubbed 16Shop, which targets victims through phishing emails that entice them to click on malicious links and attachments. Old tricks are still working, and the tools are being sold quite successfully on underground forums.

The most recent campaign, alleged to originate in Indonesia, is targeting PayPal customers in order to obtain usernames, passwords, credit card information, and other personal information. This phishing-kit-as-a-service (PkaaS) (I can’t even pronounce that acronym) boasts that it has induced over 23 million individuals to click malicious links in emails and provide personal information that can be sold for a profit. One particularly successful scheme tells victims that their email has been compromised and that they need to change their password for security purposes. Unfortunately, in a real cyber-incident, one of the first things we do is ask users to change their passwords. Criminals leverage this fact by duping users into believing that a false notice to change passwords is real, and then stealing the credentials.

Security professionals continue to advocate that multi-factor authentication is critical to combating these types of attacks. Employee education also helps: when employees receive an instruction to change their passwords, they should reach out to confirm it rather than blindly following it. Employees must understand that they cannot rely solely on digital instructions. Any instruction that arrives via email regarding usernames and passwords must be confirmed face-to-face or verbally with a known source. It is sad, but true. Email communication should be just that: email communication. No personal information, sensitive information, critical business information, or information allowing access to systems should ever be provided through email. It just can’t be trusted these days.
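On the multi-factor point, the most common second factor is a time-based one-time password (TOTP, standardized in RFC 6238): even a phished password is useless without the rotating six-digit code. Below is a minimal Python sketch of how a verifier computes that code; the base32 secret shown is a made-up example, and real secrets come from the enrollment step.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up example secret; the server and the user's authenticator app hold
# the same secret, so both compute the same code for the same 30 seconds.
print(totp("JBSWY3DPEHPK3PXP"))
```

A phisher who captures a username and password still cannot reuse the code a minute later, which is why MFA blunts credential-harvesting kits like 16Shop.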

Changing the Conversation About Sharing and Using Health Information

Some app developers know more about our health than our doctors do. Take, for instance, Fitbit, which is attached to our wrists, measuring in real time our temperature, our heart rate, our steps, and whether we have had enough exercise for our age that day.

Some people sleep with their phones on their pillows so they can monitor their sleep habits. Some people have apps, such as Bump, to determine when they are most fertile and should do the thing that you still have to do to get pregnant. Some apps know you are pregnant before your body even knows. 23andMe knows your entire genome, and your family’s as well. None of this highly sensitive data is protected by HIPAA. When you consent to download the app, provide the information to the app developer, or send in a DNA sample, that company has the right to do whatever it says it can do in its privacy policy.

Consumers are providing highly sensitive health information to app developers without a second thought. Millions and millions of health apps are downloaded by individuals for convenience so they can get immediate feedback on a specific data point. For some reason, individuals do not like their neighbors to know about certain things, but they have no problem with sharing intimate details with random app developers.

This health information is not protected by HIPAA, yet it is being shared willingly and freely by consumers (with consent through the Privacy Policy). Wouldn’t it be great if this information could be shared with health care providers to treat individuals and increase the quality of health care delivery for the entire population?

The paradigm must shift so that consumers get the benefit of the newest technology, treatment becomes more convenient for patients, real-time data are used for diagnostic purposes to provide the highest quality patient care possible, and the massive amount of information that consumers freely give to private companies can be used for population health, instead of waiting the years and years it takes for Institutional Review Boards and research studies to move through the system. We need to figure out how to leverage technology and consumer convenience to drive research and outcomes. The medical community is being left behind because consumers want answers in real time, are used to getting what they want in real time, and will bypass the medical community if it can’t provide that convenience and value in real time. Consumers’ behavior with health apps is instructive on how to engage patients both for their own treatment and for research purposes. The paradigm is shifting, and how consumers behave with health apps will shape how medical treatment is, and should be, provided in the future.

Privacy Tip #223 – Navigating Individual Data Privacy in a World with AI

The same week that the National Institute of Standards and Technology came out with its Privacy Framework [view related post], highlighting how privacy is basically a conundrum, news articles also highlighted a new technology, Clearview AI, that allows someone to snap a picture of anyone walking down the street and instantly find out that person’s name, address, and “other details.” I want to know what that means. Does it mean they automatically know my salary, my bank account balance, my prescription medications or health issues, my political affiliation, or what I buy at the drug store or grocery store? All of this information tells a lot about me. Some people don’t care, but I am not sure why. There just does not seem to be any respect for, or interest in, the protection of individual privacy. It’s not that people have things to hide; it’s that this is reminiscent of some darker days of humanity, such as World War II-era Germany.

It is comforting to see that privacy advocates are warning us about Clearview AI. Clearview AI has obtained its information, including individuals’ facial recognition data, by scraping common websites such as LinkedIn, Facebook, YouTube, and Venmo, and it is storing that biometric information in its system and sharing it with others. According to Clearview AI, its database is for use only by law enforcement and security personnel and has assisted law enforcement in solving crimes. That is obviously very positive. However, privacy advocates point out that the app may return false matches, could be used by stalkers and other bad actors, and could enable mass surveillance of the U.S. population. That is obviously very negative.

There has always been a tension between using technology for law enforcement and national security, which, frankly, we all want, and using technology for uses that are less clear and may promote abuse, which we don’t want. Clearview AI is collecting facial images of millions of people without their consent, and those images may be used for good or bad purposes. This is where public policy and data ethics must play a part. The NIST Privacy Framework can help in determining whether the collection, use, and disclosure of facial recognition on the spot protects the privacy and dignity of individuals. Technological capabilities must be used for good purposes, but in today’s world technology is moving fast, and data ethics, privacy considerations, and the potential for abuse are not always being considered, including with facial recognition applications. Perhaps the Privacy Framework can help shape the discussion, which is why its release is so timely and important.

This article was co-authored by guest contributor Victoria Dixon. Victoria is a Business Development & Marketing Coordinator at Robinson+Cole and is not admitted to practice law.

NIST Releases Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management

The National Institute of Standards and Technology (NIST) released its first privacy framework tool (the Privacy Framework) on January 16, 2020. In the Executive Summary, NIST states that with the unprecedented flow of individuals’ data through a complex digital ecosystem, individuals may not be able to understand the potential consequences for their privacy as they interact with systems and services, while at the same time, organizations “may not realize the full extent of these consequences for individuals, for society or for their enterprises…” For more on the background of the draft Privacy Framework, see our previous post.

NIST states in the Executive Summary that the tool will “enable better privacy engineering practices that support privacy by design concepts and help organizations protect individuals’ privacy. The Privacy Framework can support organizations in:

  • Building customers’ trust by supporting ethical decision-making in product and service design or deployment that optimizes beneficial uses of data while minimizing adverse consequences for individuals’ privacy and society as a whole;
  • Fulfilling current compliance obligations, as well as future-proofing products and services to meet these obligations in a changing technological and policy environment; and
  • Facilitating communication about privacy practices with individuals, business partners, assessors, and regulators.”

NIST considers the Privacy Framework to be “widely usable by organizations of all sizes and agnostic to any particular technology, sector, law, or jurisdiction” and able to provide flexibility to organizations by using a risk-and-outcome-based approach. The Privacy Framework’s “purpose is to help organizations manage privacy risks by:

  • “Taking privacy into account as they design and deploy systems, products, and services that affect individuals;
  • Communicating about their privacy practices; and
  • Encouraging cross-organizational workforce collaboration—for example, among executives, legal, and information technology (IT)—through the development of Profiles, selection of Tiers, and achievement of outcomes.”

The Privacy Framework consists of three parts: (1) the Core, a set of privacy protection activities and outcomes that can be communicated and developed throughout the organization; (2) Profiles, the organization’s current privacy activities or desired outcomes to focus on, which can change or be added to depending on the organization’s needs, and which can be used for self-assessments and for communication within the organization; and (3) Implementation Tiers, which reflect the organization’s current and changing posture and help determine whether progress is being made and whether processes and resources are in place to manage privacy risk.
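To make the Core/Profile/Tier structure concrete, here is a minimal Python sketch of a gap analysis between a Current Profile and a Target Profile. The category names and tier numbers are illustrative placeholders, not an official NIST mapping, and the sketch borrows the Tier scale as a per-category maturity score purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    category: str  # a Core category the organization has selected (illustrative)
    tier: int      # 1 = Partial ... 4 = Adaptive (borrowed as a maturity score)

def gap_analysis(current: list[Outcome], target: list[Outcome]) -> list[str]:
    """List the Core categories where the Current Profile falls short of the Target."""
    current_tiers = {o.category: o.tier for o in current}
    gaps = []
    for t in target:
        have = current_tiers.get(t.category, 0)  # 0 = not addressed at all
        if have < t.tier:
            gaps.append(f"{t.category}: tier {have} -> target tier {t.tier}")
    return gaps

current_profile = [Outcome("Inventory and Mapping", 1), Outcome("Data Processing Policies", 2)]
target_profile = [Outcome("Inventory and Mapping", 3), Outcome("Data Processing Policies", 2)]
print(gap_analysis(current_profile, target_profile))
# ['Inventory and Mapping: tier 1 -> target tier 3']
```

The output of such a self-assessment is exactly the communication device the Framework describes: a concrete list of where privacy resources should go next.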

The Privacy Framework is designed to work with the NIST Cybersecurity Framework, recognizing the cybersecurity and privacy risk relationship and overlap.

As with the NIST Cybersecurity Framework, the Privacy Framework is easy to understand and user-friendly. It provides a roadmap for organizations to tackle privacy risk management, and it urges organizations to understand that privacy must be considered when developing new products and services, just as security must be considered. Further, with rapidly changing laws, privacy risk management is important to an organization’s overall risk management and compliance. But most of all, the Privacy Framework challenges organizations to consider the ethics of the collection, maintenance, use, disclosure and monetization of data, and to think about the consequences that the proliferation of data and the collection and use of data might have on individuals.

As stated in the Privacy Framework, “Privacy is challenging because not only is it an all-encompassing concept that helps to safeguard important values such as human autonomy and dignity, but also the means for achieving it can vary…human autonomy and dignity are not fixed, quantifiable constructs; they are filtered through cultural diversity and individual differences. This broad and shifting nature of privacy makes it difficult to communicate clearly about privacy risks within and between organizations and individuals.” The Privacy Framework is designed to provide a common language so that diverse privacy needs can be met and communication around those needs and expectations is clear. We applaud NIST on this needed assistance for organizations to tackle the ever-changing landscape of data privacy.
