On December 15, 2023, the Cybersecurity and Infrastructure Security Agency (CISA) issued a Secure by Design Alert and guidance on “How Manufacturers Can Protect Customers by Eliminating Default Passwords.”

CISA created the guidance to “urge technology manufacturers to proactively eliminate the risk of default password exploitation by implementing principles one and three of the joint guidance, ‘Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software’”:

  • Take ownership of customer security outcomes.
  • Build organizational structure and leadership to achieve these goals.

CISA concludes that if software manufacturers implement these two principles, they “will prevent exploitation of static default passwords in their customers’ systems.” Because threat actors continue to exploit default passwords, CISA is urging manufacturers to eliminate them proactively so that customers cannot use them and they can no longer be exploited. According to CISA, “Years of evidence have demonstrated that relying upon thousands of customers to change their passwords is insufficient, and only concerted action by technology manufacturers will appropriately address severe risks facing critical infrastructure organizations.”
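To make the principle concrete, here is a minimal sketch (in TypeScript, with purely hypothetical names and thresholds that are not drawn from the CISA guidance) of a first-boot setup flow in which a device ships with no usable default credential and refuses to enable its services until the customer sets a sufficiently strong, unique password:

```typescript
// Hypothetical first-boot flow: the device ships with no usable default
// credential (passwordHash is null) and stays unprovisioned until the
// customer sets a unique password that meets a minimum-strength policy.

interface DeviceState {
  passwordHash: string | null; // null = no default credential exists
  provisioned: boolean;        // services stay disabled until true
}

const MIN_LENGTH = 12; // illustrative threshold, not from the CISA guidance

function isStrongEnough(candidate: string): boolean {
  // Minimal illustrative policy: length plus basic character variety.
  return (
    candidate.length >= MIN_LENGTH &&
    /[a-z]/.test(candidate) &&
    /[A-Z]/.test(candidate) &&
    /[0-9]/.test(candidate)
  );
}

async function hashPassword(candidate: string): Promise<string> {
  // Stand-in for a real password hashing scheme (e.g., bcrypt or argon2).
  const data = new TextEncoder().encode(candidate);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

export async function completeFirstBootSetup(
  device: DeviceState,
  candidate: string
): Promise<void> {
  if (device.provisioned) {
    throw new Error("Device is already provisioned.");
  }
  if (!isStrongEnough(candidate)) {
    throw new Error("Password does not meet minimum strength requirements.");
  }
  device.passwordHash = await hashPassword(candidate);
  device.provisioned = true; // only now does the device enable its services
}
```

The point of the sketch is the design choice the guidance calls for: there is simply no shared, static credential for a threat actor to look up.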

May software developers listen and respond to CISA’s urging to help keep their customers safe from known threats.

On December 8, 2023, New York Attorney General Letitia James penned her approval to an Assurance of Discontinuance with third-party dental administrator Healthplex, settling the enforcement action for $400,000 and a litany of data privacy and security compliance requirements.

The AG’s investigation commenced following a November 24, 2021, successful phishing attack against Healthplex. The threat actor gained access to an email account of a Healthplex employee containing over twelve years of email, including enrollment information of insureds.

The AG’s investigation found that the threat actor had access to the employee’s account for less than one day but, during that time, may have had access to member data, including personal information. Healthplex’s O365 account did not have multi-factor authentication enabled at the time, and the available logs could not establish which emails the threat actor accessed. Healthplex notified the individuals whose information was included in the email account.

Following the incident, Healthplex enabled multi-factor authentication, upgraded its O365 license for enhanced logging capabilities, provided additional security training for employees and implemented a 90-day email retention policy.

Although Healthplex implemented these sound measures in response to the incident, the NYAG cites their absence before the incident as a basis for the settlement, along with a finding that Healthplex’s data security assessments failed to identify those very vulnerabilities.

As with other regulatory settlements, the Assurance of Discontinuance is worth a read by those responsible for compliance in an organization. If a security incident occurs and an organization responds with security measures that could have prevented it, or with sound measures that could have been implemented beforehand, regulators will take note. In this case, the relevant security measures (MFA, data retention procedures, employee education, and enhanced logging for O365) are ones that organizations may wish to implement now if they are not already in place.

Last week, the California Privacy Protection Agency (CPPA) voted in favor of a legislative proposal that would require web browsers to include a feature allowing users to exercise their privacy rights under the California Consumer Privacy Act (CCPA) through opt-out preference signals.

Under the CCPA, businesses must honor a user’s opt-out preference signal as a valid request to opt out of the sale and/or sharing of their information. These signals are meant to be “global,” meaning that a single opt-out preference signal lets a user opt out of the sale and sharing of their information on every website they interact with, without having to make separate requests to each one. To exercise this right, however, a user must either use a browser that supports these opt-out preference signals or take the extra step of downloading a browser plugin. Currently, only a few browsers offer built-in support: Mozilla Firefox, DuckDuckGo, and Brave, which together represent only about 10 percent of the global desktop market share. Note, too, that none of these browsers (or any others) come preloaded on devices by default, which means it is not apparent (or easy) for users to take advantage of these built-in protections.
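For context, the most widely adopted opt-out preference signal is the Global Privacy Control (GPC): a supporting browser sends a Sec-GPC: 1 header with each request and exposes navigator.globalPrivacyControl to page scripts. Below is a minimal, hypothetical sketch (in TypeScript, using Node’s built-in HTTP server; the handler and the opt-out handling are illustrative only, not a compliance implementation) of how a business’s site might detect and honor the signal:

```typescript
// Minimal sketch: detecting the Global Privacy Control (GPC) signal server-side.
// The opt-out handling below is illustrative, not a compliance implementation.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// A supporting browser sends "Sec-GPC: 1"; Node lowercases header names.
function hasOptOutSignal(req: IncomingMessage): boolean {
  return req.headers["sec-gpc"] === "1";
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (hasOptOutSignal(req)) {
    // Treat the signal as a valid opt-out of sale/sharing for this visitor,
    // e.g., by setting a flag the site uses to suppress third-party tags.
    res.setHeader("Set-Cookie", "opt_out_of_sale=1; Path=/; SameSite=Lax");
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(
    hasOptOutSignal(req) ? "Opt-out signal honored." : "No opt-out signal received."
  );
});

server.listen(8080);
```

Client-side scripts can perform the equivalent check against navigator.globalPrivacyControl before loading tags that sell or share personal information.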

If the California legislature adopts this proposal from the CPPA, California will be the first state to require browser vendors to support this type of signal.

The California Privacy Protection Agency (CPPA) recently met to discuss automated decision-making technology, privacy risk assessments, and cybersecurity audits under the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA). However, the CPPA also stepped outside the anticipated agenda to discuss additional revisions to the existing regulations. Once again, changes are on the horizon. What kind of changes? Here are the key things that would change under the CCPA for your organization’s online privacy policy:

  • “Meaningful Understanding” of Sources and Sales/Sharing with Third Parties: the draft revisions would add a requirement for privacy policies to provide “meaningful understanding” of the sources that the business uses to collect personal information and the categories of third parties to which the business shares or sells personal information.
  • Clarifying Disclosures to Service Providers and Contractors: the draft revisions would remove an ambiguity related to the definition of a “third party” and require businesses to explicitly identify the categories of personal information disclosed to a service provider or contractor in the last 12 months.
  • Privacy Policy Links for Mobile Applications: the draft revisions would require mobile apps to include a link to their privacy policies in the settings menu of the app. This link would be in addition to the link on the website homepage and the app store download page.

After the CPPA finalizes the draft revisions, the proposed rule changes will be published for a 45-day public comment period. The CPPA has not yet provided an anticipated start date for that comment period.

On Monday, December 18, 2023, the Federal Trade Commission (FTC) released its report on takeaways gleaned from a public event it held in October with creative professionals, including artists and authors. The 43-page report, entitled “Generative Artificial Intelligence and the Creative Economy Staff Report: Perspectives and Takeaways,” provides an insider view of the FTC’s interest in, and thinking about, the risks generative AI poses to creative work. It is “intended as a useful resource for the legal, policy and academic communities who are considering the implications of generative AI.” That, and establishing the FTC’s jurisdiction over AI and the issues that “implicate the FTC’s enforcement and policy authority.”

We previously wrote about how toys, baby monitors, and other smart devices collect, use, and disclose personal information about children, and the resulting risks to children’s privacy. As adults responsible for the safety of children in our care, we should make learning how smart devices collect, use, and disclose children’s personal information a top priority, just as we oversee their physical safety.

Although the Children’s Online Privacy Protection Act (COPPA) was a good first step toward regulating the collection of children’s information online, the law has not kept pace with rapidly developing technology and online tools. COPPA was last updated in 2013, which is ancient history in the world of technology.

The Federal Trade Commission is proposing changes to COPPA that would strengthen restrictions on the use and disclosure of children’s personal information and would “further limit the ability of companies to condition access to services on monetizing children’s data. The proposal aims to shift the burden from parents to providers to ensure that digital services are safe and secure for children.”

For parents and guardians: although the FTC is working to protect children, we should too. Stay educated on how children’s privacy is affected by their online use, how their data is stored, used, and monetized, and educate children about how to protect themselves.

There was a big win for the good guys against the bad guys this week. On December 13, 2023, after obtaining an order from the federal court in the Southern District of New York to seize U.S.-based infrastructure and take offline websites used by a group Microsoft identifies as Storm-1152, Microsoft’s Digital Crimes Unit disrupted:

  • Hotmailbox.me, a website selling fraudulent Microsoft Outlook accounts
  • 1stCAPTCHA, AnyCAPTCHA, and NoneCAPTCHA, websites that provided the tooling and infrastructure for, and sold, a CAPTCHA-solving service used to bypass the confirmation that an account is being set up and used by a real person. These sites also sold identity verification bypass tools for other technology platforms
  • The social media sites actively used to market these services

The takedown is impressive and outlined in detail by Amy Hogan-Burney, General Manager, Associate General Counsel, Cybersecurity Policy & Protection at Microsoft. According to Hogan-Burney, “Fraudulent online accounts act as the gateway to a host of cybercrime, including mass phishing, identity theft and fraud, and distributed denial of service (DDoS) attacks.” She also issues a warning to cybercriminals: “We are sending a strong message to those who seek to create, sell or distribute fraudulent Microsoft products for cybercrime: We are watching, taking notice and will act to protect our customers.” You go, Microsoft!

According to new reporting from Reuters, cybercriminals are exploiting Wyoming’s limited liability company (LLC) law to set up legitimate-seeming endpoints for illicit traffic. Filtering traffic through the United States allows criminals to evade detection by their targets and law enforcement. Wyoming’s LLC regime, often promoted as business-friendly and user-friendly, enables criminals to create legitimate-looking fronts for server registration.

According to Reuters, Wyoming also allows registered agents to serve as the public point of contact for LLCs, effectively shrouding the actual ownership behind a veil of anonymity. Such practices have drawn criticism from experts and victims who highlight the lax regulations in Wyoming and advocate for more stringent oversight to curb the misuse of LLCs for nefarious purposes.

The use of Wyoming limited liability companies as a veil for illicit cyber activities has raised alarms within the cybersecurity community. By exploiting the anonymity afforded by these entities, cybercriminals have orchestrated a series of distributed denial of service (DDoS) attacks, targeting journalists, nonprofits, and news organizations.

The findings have implications beyond cybersecurity and into the broader landscape of regulatory governance. As the digital world continues to evolve, regulatory bodies may wish to consider measures to address the misuse of state business laws for illicit activities. The intersection of cybersecurity and corporate governance requires a cohesive approach to addressing vulnerabilities within the digital sphere.

On December 13, 2023, the Office of the National Coordinator for Health Information Technology (ONC) issued its final rule entitled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” and known as “HTI-1” (Final Rule). Among other issues addressed in the Final Rule, ONC revised the information blocking rules to add clarity and to create a new information blocking exception. We outline these changes in further detail below. The information blocking provisions of the Final Rule will be effective 30 days after it is published in the Federal Register.

1. Definition of “Offer Health IT”

Under the information blocking regulatory framework, health IT developers of certified health IT are held to a higher standard with respect to information blocking and are subject to higher potential penalties for information blocking than health care providers. As set forth in current regulations, the term “health IT developer of certified health IT” is defined to include individuals or entities that offer certified health IT, which ONC previously explained includes making such certified health IT available for purchase or license. As a result of this definition and interpretation, many health care providers (e.g., large hospitals) that make their ONC-certified electronic health record (EHR) system available to community physicians may be health IT developers of certified health IT for purposes of the information blocking regulations, and thus subject to higher potential penalties for information blocking violations.

In the Final Rule, ONC adds a regulatory definition of the terms “offers health information technology” and “offers health IT,” and further describes scenarios in which an individual or entity will not be deemed to offer health IT. The new definition provides that to offer health IT means:

“to hold out for sale, resale, license, or relicense; or to sell, resell, license, relicense, or otherwise provide or supply [ONC-certified health IT] for deployment by or for other individual(s) or entity(ies) under any arrangement [except as provided below].”

The definition goes on to specifically describe three categories of activities that are excluded from the definition. First are arrangements where an individual or entity donates or subsidizes the funding for a health care provider’s acquisition, augmentation, or upkeep of health IT. In its commentary, ONC draws a distinction between these funding arrangements (which will not meet the definition of offer health IT), and arrangements where an individual or entity re-licenses or otherwise makes available the health IT itself to a health care provider, which would meet the definition. This distinction means that many EHR donation arrangements between hospitals and physicians will continue to result in the donating hospital being deemed a health IT developer of certified health IT for purposes of the information blocking regulations.

The second category of excluded activities includes issuing log-in credentials to an entity’s employees, to health care providers performing services at the entity, and to public health authorities; implementing production instances of APIs to support access, exchange, and use of electronic health information (EHI); and implementing online portals for patients, clinicians, and certain other individuals to access, exchange, and use EHI.

Third and finally, certain consulting and legal services will be excluded from the definition of offer health IT.

2. New TEFCA Manner Exception

In general, the information blocking exceptions provide conditions that must be met for certain conduct by an actor (as defined under the information blocking regulations) not to be considered information blocking. Under the Final Rule, a new exception is added that applies when an actor will fulfill requests for access, use, or exchange of EHI only by way of the Trusted Exchange Framework and Common Agreement (TEFCA). To satisfy this TEFCA Manner Exception, both the actor and the requestor must be part of TEFCA, and the requestor must be able to access, use, or exchange EHI via TEFCA. In addition, the request for access, exchange, or use must not be via ONC-certified API standards, and, to the extent fees are charged or interoperability elements are licensed, the actor must comply with the Fees Exception and Licensing Exception, as applicable.

3. Modifications to the Infeasibility Exception

The Final Rule adds two new conditions and modifies an existing condition to the Infeasibility Exception. The first new condition (entitled “Third Party Seeking Modification Use”) allows an actor to refuse a request by a third party to use EHI in order to modify it unless the request for modification is from a health care provider to its business associate.

The second condition (entitled “Manner Exception Exhausted”) permits an actor to decline a request to access, exchange, or use EHI where each of the following is met: (a) the actor could not reach agreement with the requestor as provided in the Content and Manner Exception (renamed under this Final Rule as the “Manner Exception”) or was technically unable to fulfill the request; (b) the actor offered two alternative manners of access, use, or exchange as provided in the Manner Exception, one of which must be either the use of ONC-certified technology or content and transport standards published by the federal government or an ANSI-accredited organization; and (c) the actor does not provide the requested access, use, or exchange to a “substantial number” of third parties that are similarly situated to the requestor. Further, the actor must not discriminate based on health care provider type or size, whether the requestor is an individual, or whether the requestor is a competitor.

Currently, an actor may decline to fulfill a request to access, use, or exchange EHI if it cannot do so due to certain events beyond its control, such as a natural disaster or public health emergency (and the other applicable conditions of the exception are met). ONC modifies this condition to clarify that the uncontrollable event must in fact negatively impact the actor’s ability to fulfill the request for access, use, or exchange of EHI. ONC explains that the mere occurrence of, for example, a public health emergency would not allow an actor to satisfy this exception; rather, the particular uncontrollable event must be the cause of the actor’s inability to fulfill the request.

4. Modifications to the Content and Manner Exception

The current Content and Manner Exception includes a provision permitting actors to respond to requests for access, use, or exchange of EHI with the USCDI v1 data elements; however, this provision applied only to requests prior to October 6, 2022. Accordingly, ONC revises this exception to remove this obsolete provision and renames the exception the “Manner Exception.”

Conclusion

The Final Rule makes several other conforming changes to the information blocking rules, including removal of other references to the USCDI v1 data elements. Beyond information blocking, the Final Rule implements a number of other provisions of the 21st Century Cures Act, addresses issues such as algorithm transparency for predictive artificial intelligence and updates to ONC’s certification standards for health information technology, and creates an “Insights Condition” as a condition of certification for developers of certified health information technology.

Members of the health care community would be well-served by reviewing the updated information blocking rules, as well as other relevant portions of the Final Rule to determine whether they need to undertake any actions to comply with the Final Rule.

This post was authored by Nathaniel Arden and is also being shared on our Health Law Diagnosis blog. If you’re interested in getting updates on developments affecting health information privacy and HIPAA-related topics, we invite you to subscribe to the blog.

The Office of the Comptroller of the Currency (OCC) issues a semiannual risk perspective report that “addresses key issues facing banks, focusing on those that pose threats to the safety and soundness of banks and their compliance with applicable laws and regulations.” The most recent report “presents data in five main areas: the operating environment, bank performance, special topics in emerging risks, trends in key risks, and supervisory actions.”

One of the special topics in emerging risks is artificial intelligence (AI). Although the OCC acknowledges the potential benefit of using AI in the banking industry, it also acknowledges the risks associated with its use, particularly generative AI tools.

The OCC states: “Consistent with existing supervisory guidance, it is important that banks manage AI use in a safe, sound, and fair manner, commensurate with the materiality and complexity of the particular risk of the activity or business process(es) supported by AI usage. It is important for banks to identify, measure, monitor, and control risks arising from AI use as they would for the use of any other technology.” Although this general statement is a no-brainer, banks need better guidance on how to deal with the risks associated with AI. Telling the banking industry that the OCC is “monitoring” the use of AI is not particularly helpful.

As a former regulator, my sense is that it would be helpful if regulators would provide solid guidance to regulated industries on how the use of AI will be regulated. The risks associated with AI have been documented and are known. We are already behind in mitigating those risks. Regulators must take an active role to shape appropriate uses and mitigate risks posed by the use of AI and not wait until bad things happen to consumers. Regulations are always behind reality, and this is no exception.

One risk that is obvious and concerning to me is the use of voice recognition technology by banks and financial institutions to authenticate customers. With the astonishingly accurate voice imitations that AI-generated tools can produce, threat actors are using, and will continue to use, deepfakes to perpetrate fraud against financial institutions. Why don’t we just start there? Let’s figure out how financial institutions can identify customers without using Social Security numbers or voice recognition.