On November 13, 2023, Governor Kathy Hochul released proposed cybersecurity regulations applicable to all hospitals located within the state of New York. The Governor has included $500 million in grant funding in her FY24 budget to assist health care facilities with upgrading their systems to comply with the new requirements.

According to the Governor’s press release, the proposed regulations aim to strengthen the protections on hospital networks and systems that are critical to providing patient care, as a complement to the Health Insurance Portability and Accountability Act (HIPAA) Security Rule that focuses on protecting patient data and health records. Under the proposed provisions, hospitals will be required to establish a cybersecurity program and take proven steps to assess internal and external cybersecurity risks, use defensive techniques and infrastructure, implement measures to protect their information systems from unauthorized access or other malicious acts, and take actions to prevent cybersecurity events before they happen.

If the proposed regulations are accepted by the Public Health and Health Planning Council, they will be published in the State Register and undergo a 60-day public comment period through February 5, 2024. Once the regulations are final, hospitals will have one year to come into compliance.

The Federal Communications Commission (FCC) has announced its proposal to create a Schools and Libraries Cybersecurity Pilot Program that would help K-12 schools and libraries protect their broadband networks and data from cyber threats. The pilot program is part of the FCC’s Learn Without Limits initiative, which aims to ensure connectivity and digital equity for online learning.

The pilot program would allocate up to $200 million over three years from the Universal Service Fund (USF) to fund the cybersecurity and advanced firewall services needed by eligible schools and libraries. The pilot program would also collect valuable data on the types and effectiveness of these services, as well as the best practices and challenges in implementing them.

The FCC’s proposal comes in response to the growing number of cyberattacks and ransomware incidents targeting schools and libraries, which disrupt the learning process and compromise sensitive information. Schools and libraries make tempting targets for attackers because they collect large amounts of personal information that is often safeguarded by under-resourced technology departments. The FCC hopes that this pilot program will help harden the defenses of these institutions and provide useful insights for future policymaking.

The FCC’s proposal is open for public comment until December 13, 2023.

Data privacy and cybersecurity risks are critical components of M&A transactions. Non-compliance can expose an entity to legal liability, and lax or failed data privacy compliance and cybersecurity safeguards can cause financial and reputational harm and materially impair an entity’s ability to conduct its operations.

Therefore, part of the due diligence process of any M&A deal must include an assessment of the applicability of the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, CCPA). The CCPA is a consumer privacy law that applies to for-profit entities that collect personal information from California residents. The CCPA is enforced by very active regulators (the California Attorney General and the California Privacy Protection Agency), and provides state residents a private right of action in the event of certain security incidents that expose their personal information.

Beyond California, 13 other states have passed consumer privacy rights laws (the laws in Virginia, Colorado, Utah, and Connecticut took effect just this year), and many more states have such laws pending. Assessing the applicability of, and compliance with, these state privacy laws is critical to identifying the legal risks for businesses operating and providing products or services to customers in the U.S. As such, in an M&A transaction, the acquirer should first review the state-specific threshold requirements for applicability, which may include the target company’s gross annual revenue and the number of state residents whose information the target company processes. The CCPA, for example, reaches any business that has over $25 million in gross annual revenue and that processes the personal information of California residents (note that processing has a very specific—and broad—definition under the CCPA). And, unlike past privacy statutes that applied only to individual consumers, the CCPA also applies to information collected from B2B partners and employees. A rough sketch of this threshold analysis appears below.
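
To make the threshold analysis concrete, here is a minimal TypeScript sketch of the CCPA’s applicability prongs. This is an illustration only, not legal advice: the interface and field names are hypothetical, the thresholds are simplified, and the statute’s volume and revenue-source prongs are included for completeness. Real diligence must also account for exemptions and definitional nuances.

```typescript
// Hypothetical due-diligence helper encoding the CCPA's applicability
// thresholds in simplified form. Field names are illustrative; this is
// a sketch for discussion, not legal advice.

interface TargetCompany {
  grossAnnualRevenueUSD: number;              // worldwide gross annual revenue
  processesCaliforniaPersonalInfo: boolean;   // "processing" is defined broadly
  caConsumersPIBoughtSoldOrShared: number;    // CA consumers or households
  revenueShareFromSellingOrSharingPI: number; // fraction between 0.0 and 1.0
}

function ccpaLikelyApplies(t: TargetCompany): boolean {
  // Prong 1: over $25 million in gross annual revenue, combined with
  // processing California residents' personal information.
  const revenueProng =
    t.grossAnnualRevenueUSD > 25_000_000 && t.processesCaliforniaPersonalInfo;

  // Prong 2: buys, sells, or shares the personal information of 100,000
  // or more California consumers or households.
  const volumeProng = t.caConsumersPIBoughtSoldOrShared >= 100_000;

  // Prong 3: derives 50% or more of annual revenue from selling or
  // sharing California residents' personal information.
  const sellingProng = t.revenueShareFromSellingOrSharingPI >= 0.5;

  return revenueProng || volumeProng || sellingProng;
}

// Example: a target with $30M in revenue that processes California
// residents' data trips the first prong.
console.log(
  ccpaLikelyApplies({
    grossAnnualRevenueUSD: 30_000_000,
    processesCaliforniaPersonalInfo: true,
    caConsumersPIBoughtSoldOrShared: 0,
    revenueShareFromSellingOrSharingPI: 0,
  })
); // -> true
```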

Confirming compliance (or non-compliance for that matter) with the CCPA and other similar state consumer privacy laws is essential to the deal. One way in which the acquirer can begin due diligence in this space is to review the entity’s online privacy policy to see if it outlines consumers’ rights related to their personal information under these state laws (note that there are very specific requirements). Of course, this is only one piece of privacy due diligence during a deal. There are sector-specific privacy and security laws, international privacy laws, and other applicable state privacy and security laws. Remember to do your homework.

Following the White House’s Executive Order on AI, the Cybersecurity & Infrastructure Security Agency (CISA) issued its Roadmap for Artificial Intelligence this week, describing it as “a whole-of-agency plan aligned with national AI strategy to address our efforts to: promote the beneficial uses of AI to enhance cybersecurity capabilities, ensure AI systems are protected from cyber-based threats, and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

CISA will implement the Roadmap through five lines of effort:

  • Effort 1: Responsibly use AI to support our mission.
  • Effort 2: Assure AI systems.
  • Effort 3: Protect critical infrastructure from malicious use of AI.
  • Effort 4: Collaborate and communicate on key AI efforts with the interagency, international partners, and the public.
  • Effort 5: Expand AI expertise in our workforce.

Although all of the lines of effort are important, Objective 3.1 is crucial for securing critical infrastructure against new threats posed by AI: “CISA will build on existing structures to advance industry collaboration and coordination around AI security.” At this point, we know threat actors are using, and will continue to use, AI tools to launch cyber-attacks, so getting a handle on those threats, and on how to mitigate them, should be a high priority, particularly when they are directed at critical infrastructure.

CISA continues to evaluate and provide valuable information to public agencies and private industry. Take a look at the Roadmap to understand how CISA is tackling AI and how its work may apply to your industry.

During the last Privacy Law class of the semester, we discuss Privacy and Emerging Technology. My students continue to learn about the collection, use, disclosure, and monetization of consumers’ data, and continue to be amazed at how their data is used without their knowledge. They often ask for tips on how to protect their data and make personal choices about when to allow its collection and use.

A helpful resource that I often peruse is the Electronic Frontier Foundation’s website. One tool that is particularly relevant to protecting one’s online privacy is the EFF’s Surveillance Self-Defense (SSD) guide, which includes background on how online surveillance works, tools for picking secure applications, and guidance for common security scenarios.

For my students reading this post this week, get ready to discuss the SSD tips and tools during class next week! For the rest of you, take a few minutes to remind yourself of how online surveillance works and how to best protect yourself online.

Boeing has confirmed that its parts and distribution site was attacked with LockBit ransomware, which is operated by a group believed to be based in Russia. Boeing says that the attack has not affected flight safety and that it is investigating.

LockBit publicly claimed responsibility for the attack and boasted that it had stolen “sensitive data” from Boeing that it would publish. The public listing has subsequently been removed from LockBit’s shame site.

Boeing is notifying customers that have been impacted by the attack. Reports have indicated that the attack stemmed from a zero-day vulnerability.

On October 31, 2023, the Office for Civil Rights (OCR) issued a press release announcing that it has settled with Doctors’ Management Services for $100,000 following a ransomware attack that compromised the protected health information of 206,695 individuals.

According to the press release, “this marks the first ransomware agreement OCR has reached.”  The facts underlying the settlement include that Doctors’ Management Services was infected with GandCrab ransomware in April of 2017, but the intrusion was not detected until December of 2018. Doctors’ Management Services filed a breach report in April of 2019.

The OCR says that it found evidence that Doctors’ Management Services failed to conduct a risk analysis to identify risks and vulnerabilities to protected health information, insufficiently monitored its systems to protect against cyberattacks, and failed to implement HIPAA requirements to protect the data.

In addition to the $100,000 settlement, Doctors’ Management Services is required to implement a corrective action plan.

YouTube’s ad blocker detection technology is facing legal challenges from privacy advocates who claim it violates their privacy rights under the General Data Protection Regulation (GDPR). According to the complaint, YouTube violates users’ privacy by using JavaScript-based detection scripts to look for specific HTML page elements rendered by a user’s browser. YouTube began rolling out ad blocker detection to European markets earlier this year, and the site is now preventing some European users from viewing its content if they have an ad blocker enabled.

The anti-ad-blocking JavaScript allegedly runs on the user’s local device to identify whether it is also running specific software – namely, whether the user is using an ad blocker. Because this processing happens on the user’s computer, rather than on YouTube’s servers, activists claim that it violates the EU-law requirement (rooted in the ePrivacy Directive, which complements the GDPR) that service providers get explicit permission to “gain access to information stored in the terminal equipment of a subscriber or user.”
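
The complaint does not describe YouTube’s actual script, so the TypeScript sketch below illustrates a common, generic detection technique instead: the page injects a “bait” element with class names that ad-blocking filter lists typically target, then checks whether something hid or removed it. Everything here, from the function name to the timing, is a hypothetical illustration rather than YouTube’s implementation.

```typescript
// Hypothetical sketch of a common ad blocker detection technique.
// This is NOT YouTube's code; it illustrates the kind of client-side
// check the complaint describes: JavaScript inspecting HTML elements
// rendered (or suppressed) in the user's browser.

function detectAdBlocker(): Promise<boolean> {
  return new Promise((resolve) => {
    // Create a "bait" element with class names that ad-blocking
    // filter lists commonly hide or remove.
    const bait = document.createElement("div");
    bait.className = "ad ad-banner adsbox";
    bait.style.cssText =
      "position:absolute;left:-9999px;height:1px;width:1px;";
    document.body.appendChild(bait);

    // Give any content-blocking extension a moment to act on the bait.
    window.setTimeout(() => {
      // If a blocker hid or detached the bait, these properties change.
      const blocked =
        bait.offsetParent === null ||
        bait.offsetHeight === 0 ||
        getComputedStyle(bait).display === "none";
      bait.remove();
      resolve(blocked);
    }, 100);
  });
}

// Usage: gate playback on the result, as YouTube reportedly does for
// some European users.
detectAdBlocker().then((blocked) => {
  if (blocked) {
    // e.g., replace the player with an interstitial asking the user
    // to disable the blocker or subscribe.
    console.log("Ad blocker detected");
  }
});
```

Because a check like this runs entirely in the visitor’s browser and reads the rendered state of the page, activists argue it is precisely the kind of access to “terminal equipment” that requires the user’s prior consent.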

Activists have petitioned the European Commission to determine whether this implementation of ad blocker detection is “absolutely necessary to provide a service such as YouTube.” The Irish Data Protection Commission is also investigating YouTube’s use of ad blocker detection.

YouTube’s current terms do not explicitly mention the use of ad-blocking tools or any detection measures. Activists argue that EU courts would hold such provisions invalid under the GDPR as a violation of privacy rights.

On October 30, 2023, President Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI EO), which has specific impacts on the healthcare industry. We detailed general aspects of the AI EO in a previous blog post.

Some impacts on the healthcare industry have been outlined in a Forbes article written by David Chou. Chou synthesizes the AI EO into four areas of impact for the healthcare industry:

  • HHS AI Task Force—once the task force is created, it will include representatives from the Department of Health and Human Services and will “develop a strategic plan with appropriate guidance” including policies, frameworks, and regulatory requirements on “responsibly deploying and using AI and AI-enabled technologies in the health and human services sector, spanning research and discovery, drug and device safety, healthcare delivery and financing, and public health.”
  • AI Equity—AI-enabled technologies will be required to incorporate equity principles, including “an active monitoring of the performance of algorithms to check for discrimination and bias in existing models” and efforts to “identify and mitigate any discrimination and bias in current systems.”
  • AI Security and Privacy—The AI EO requires “integrating safety, privacy, and security standards throughout the software development lifecycle, with a specific aim to protect personally identifiable information.”
  • AI Oversight—The AI EO “directs the development, maintenance, and utilization of predictive and generative AI-enabled technologies in healthcare delivery and financing. This encompasses quality measurement, performance improvement, program integrity, benefits administration, and patient experience.” These are obvious use cases where AI-enabled technology can increase efficiencies and decrease costs. That said, the AI EO requires that these activities include human oversight of any output.

Although these four considerations are but a start, I would add that healthcare organizations (including companies supporting healthcare organizations) should look beyond these basic principles when developing an AI Governance Program and strategy. Numerous entities regulating different parts of the healthcare industry provide insight into the use of AI tools, including the World Health Organization, the American Medical Association, the Food & Drug Administration, the Office of the National Coordinator for Health Information Technology, the White House, and the National Institute of Standards and Technology. All of these entities have issued guidance or proposed regulations addressing the risks of AI tools in the healthcare space, including bias, unauthorized disclosure of personal information or protected health information, unauthorized disclosure of intellectual property, unreliable or inaccurate output (also known as hallucinations), unauthorized practice of medicine, and medical malpractice.

Assessing and mitigating the risks to your organization starts with developing an AI Governance Program. The Program should encompass both the risks of your employees using AI tools and the ways your organization uses or develops AI tools in its environment, and it should provide guidance to anyone in the organization who is using or developing AI-enabled tools. Centralizing governance of AI will enhance your ability to follow the rapidly changing regulations and guidance issued by both state and federal regulators and to implement a compliance program that responds to the changing landscape.

The healthcare industry is heavily regulated; compliance is no stranger to it. Healthcare organizations must be prepared to include AI development and use in their enterprise-wide compliance programs.

On October 30, 2023, President Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI EO). At 63 pages, the AI EO outlines a comprehensive framework for federal agencies to regulate all aspects of AI markets. Our team will continue to update you on portions of the AI EO that may be applicable to our clients. We covered general aspects of the AI EO in a previous blog post. This post examines how the AI EO strives to promote competition in AI markets and the implications for future antitrust enforcement.

Key Takeaways

  • Through the AI EO, President Biden established the White House Artificial Intelligence (AI) Council with representatives from over a dozen federal agencies and departments, but curiously omitted the U.S. Department of Justice, Antitrust Division (the Division) and the FTC from the roster. However, the Chair of the White House AI Council has the power to add representatives from other agencies and executive offices; the Division and the FTC are therefore expected to make meaningful contributions to AI policy.
  • The AI Executive Order raises several questions about the future of antitrust enforcement:
    • President Biden urges the FTC to use its rulemaking authority and other enforcement powers to ensure that AI markets have “fair competition.” This mandate may encourage FTC Chair Lina Khan, who is on record about her willingness to push the envelope,[1] to test new theories of enforcement in a rapidly evolving market.
    • The controversial “essential facilities” doctrine may gain traction as President Biden suggests that “competition-increasing measures”—including the sharing of datasets and intellectual property—for small businesses and entrepreneurs may be “appropriate . . . to the extent permitted by law.”

Section 2 of the AI EO sets forth eight principles and priorities that federal agencies should use as guideposts as they develop regulations pertaining to AI. Several of them draw upon settled tenets of antitrust law such as:

  1. The federal government will promote a “fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation.”
  2. As new industries and jobs materialize, the government “will seek to adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates.” [2]

President Biden grapples with competitive concerns throughout the AI EO. He stresses that, in order for AI markets to be competitive, government agencies must “stop unlawful collusion[3] and address risks from dominant firms’ use of key assets including data, to provide opportunities to small businesses . . . and entrepreneurs.”[4] However, the core of President Biden’s efforts to bolster competition is Section 5.5, aptly titled “Promoting Competition.”

In Section 5.5, President Biden directs not just the Division and FTC, but the heads of all federal agencies “to promote competition in AI and related technolog[y] . . . markets.”[5] Recognizing that semiconductors and other inputs needed to train AI models are costly and present barriers to entering and competing in AI markets, President Biden orders the Secretary of Commerce to implement flexible membership structures for the National Semiconductor Technology Center and develop mentoring programs within the industry.[6] Significantly, given the U.S. Supreme Court’s retreat in Verizon v. Trinko from the “essential facilities” doctrine (which requires a company to give competitors access to an input that is necessary for operations),[7] President Biden suggests that startups and small businesses should have more access to datasets, design and process technology, and “technical and intellectual property assistance” that could accelerate commercialization of new technologies[8]—potentially from the dominant firms that own or control these “key assets.”

The AI EO is an ambitious attempt to regulate AI markets.  Ensuring that these markets remain competitive—and affording smaller firms an opportunity to compete—is critical to innovation and maintenance of a leadership position in the digital economy.  To that end, President Biden has directed federal agencies to facilitate access to funding and resources so that small businesses and entrepreneurs can compete with dominant firms in AI markets.  However, the AI EO creates uncertainty with respect to antitrust enforcement. One question is how the Division and FTC will participate in formulating AI-related antitrust policy with the newly-minted White House AI Council. Another issue is how broadly the FTC will interpret its enforcement powers under the FTC Act to effectuate competition policy in AI markets pursuant to President Biden’s mandate. Finally, any attempt to revive the essential facilities doctrine in the semiconductor and other input markets may conflict with principles articulated in Trinko and force companies to furnish competitors with technology and other resources to their own detriment.


[1] Cite Brookings Institute client alert. 

[2] AI Executive Order at 2-4.

[3] For a more detailed discussion of tacit collusion and AI, see our earlier client alert.

[4] AI Executive Order, supra note 2, at 3.

[5] Id. at 32.

[6] Id. at 32-33 (citing Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022, Pub. L. 117-167). 

[7] See, e.g., Verizon Communications v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398 (2004).

[8] See AI Executive Order, supra note 2, at 33.