This blog post is the second in a three-part series exploring the intersection between AI and antitrust. (Part I can be found here.)

Part II: Tacit Collusion and AI

This is the second part of a three-part series on artificial intelligence and antitrust.

In Topkins, discussed in Part I, the defendants had already entered into their unlawful agreement; the algorithm was merely a tool to implement it. The more vexing question facing antitrust enforcers is tacit collusion and AI: situations in which companies may act in response to AI without any agreement to fix prices.
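To see why that scenario worries enforcers, consider the stylized simulation below. It is purely illustrative and is not drawn from any enforcement action: two sellers each run a simple, independently adopted "follow the rival" repricing rule, and all prices, steps, and caps are hypothetical assumptions.

```python
# Hypothetical simulation: two sellers independently adopt the same simple
# "follow the rival" repricing rule. No agreement exists between them.
COMPETITIVE_PRICE = 10.00   # assumed cost-based benchmark (illustrative)
STEP = 0.50                 # assumed upward probe per period (illustrative)
CEILING = 15.00             # assumed demand-limited price cap (illustrative)

def follow_rival(own: float, rival: float) -> float:
    """Match a rival that undercuts; otherwise probe slightly upward."""
    if rival < own:
        return max(rival, COMPETITIVE_PRICE)  # never chase prices below cost
    return min(own + STEP, CEILING)           # creep upward when not undercut

prices = [12.00, 12.00]  # each seller's starting price
for _ in range(20):
    prices = [follow_rival(prices[0], prices[1]),
              follow_rival(prices[1], prices[0])]

print(prices)  # both sellers end up holding the ceiling, well above the benchmark
```

Because each seller's rule punishes undercutting and rewards parallel increases, prices drift upward and stay there even though no one ever agreed to anything. Whether that outcome, absent an agreement, can violate Section One of the Sherman Act is exactly the question enforcers are now confronting.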

When Assistant Attorney General Jonathan Kanter, who leads the Division, spoke at the South by Southwest festival earlier this year, he called the Division’s AI effort “Project Gretzky”—a reference to Wayne Gretzky’s famous line about skating not after the puck, but to where the puck is going. To that end, the Division is hiring data scientists and other experts to better understand AI so that antitrust enforcers can keep pace—or at least try to—with a technology that is quickly shaping how markets function. See “DOJ Has Eyes on AI, Antitrust Chief Tells SXSW Crowd,” AXIOS, Mar. 13, 2023.

FTC Chairperson Lina Khan’s recent opinion piece cautioned that firms should not be complacent.

[T]he A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices . . . . The FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion . . . and unfair methods of competition. . . . Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.

Lina M. Khan, “We Must Regulate A.I. Here’s How,” N.Y. TIMES, May 3, 2023.

To summarize, unlike the Division, which is limited to enforcing Sections One and Two of the Sherman Act, the FTC has broad jurisdiction to combat abuse of AI under Section Five of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” The FTC therefore casts a wider net for pursuing and curtailing AI abuse. With both agencies promising aggressive enforcement, Part III in this series will describe how companies can keep pace with government regulation.

EyeMed Vision Care, LLC has agreed to pay $2.5 million to settle allegations lodged against it by four state Attorneys General stemming from a 2020 data breach that affected 2.1 million people.

The settlement is with the AGs of Florida, New Jersey, Oregon, and Pennsylvania. The breach occurred when threat actors infiltrated EyeMed’s systems through a shared email account. The information compromised included individuals’ names, addresses, dates of birth, account information, Social Security numbers, government insurance numbers, and medical information. The threat actors used the email account to send phishing emails to approximately 2,000 EyeMed customers.

In addition to the monetary settlement, EyeMed agreed to implement a cybersecurity program overseen by an independent third party. The settlement follows EyeMed’s October 2022 settlement with the New York Department of Financial Services for $4.5 million.

On May 16, 2023, the Cybersecurity & Infrastructure Security Agency (CISA) released three advisories applicable to Industrial Control Systems (ICS). The advisories cover vulnerabilities in Snap One OvrC Cloud, Rockwell ArmorStart, and Rockwell Automation FactoryTalk Vantagepoint.

The Snap One vulnerabilities, if exploited, “could allow an attacker to impersonate and claim devices, execute arbitrary code, and disclose information about the affected device.” CISA recommends that organizations mitigate the risk by following Snap One’s release notes on patching the vulnerabilities.

The Rockwell ArmorStart vulnerabilities, if exploited, “could allow a malicious user to view and modify sensitive data or make the web page unavailable.” CISA recommends that users follow the measures outlined by Rockwell and take the following steps:

  • Locate control system networks and remote devices behind firewalls and isolate them from business networks.
  • When remote access is required, use secure methods, such as virtual private networks (VPNs), recognizing that VPNs may have vulnerabilities and should be updated to the most current version available. Also recognize that a VPN is only as secure as its connected devices.

According to CISA, the Rockwell Automation FactoryTalk Vantagepoint vulnerabilities, if exploited, “could allow an attacker to impersonate an existing user or execute a cross-site request forgery attack.” According to the CISA Alert, Rockwell “recommends users update to V8.40 or later…and are encouraged to implement Rockwell Automation’s suggested Security Best Practices to minimize risk associated with the vulnerability and provide training about social engineering attacks, such as phishing.” CISA also recommends alerting users so that they can protect themselves from social engineering attacks.

Part I of this two-part post focused on new challenges in IP procurement for businesses using AI for innovation. This second and final post identifies potential risks of IP infringement and offers some additional considerations.

AI-generated content may create risks of infringing IP owned by third parties, particularly trademarks and copyrights. Generative AI systems are often trained on publicly available data, portions of which may surface in the output without the user knowing that the generated content is proprietary to someone else. Accordingly, a company prompting generative AI to create a new logo may inadvertently risk infringing third parties’ trademarks or copyrights.

Patent infringement is also a possibility, especially as generative AI improves its ability to generate software on demand. These can be significant risks for any company delegating creative processes to AI engines.

Another potential risk associated with using AI in the work environment is the inadvertent disclosure of trade secrets. In the United States and other jurisdictions, such disclosure may also prevent later patenting. Companies should consider developing proper employee guidance and training to reduce such risk.

More generally, businesses routinely using AI to innovate may be at risk of losing creative value. AI cannot replace human ingenuity and creativity. Companies may wish to identify any overreliance on AI and continue to encourage human innovation. The risks associated with using AI in a company extend beyond the technology itself and into the realm of IP. As laws and regulations surrounding AI and IP continue to evolve, businesses should remain vigilant and seek to adapt their strategies accordingly.

This blog post is the first in a three-part series exploring the intersection between AI and antitrust. 

The first blog post in this series discusses the U.S. Department of Justice, Antitrust Division’s (Division) first criminal antitrust case involving the use of AI.  The second part, which will be published next week, summarizes the FTC’s and Division’s positions on AI collusion and unlawful agreements among competitors, and offers proactive measures that companies can take to avoid government inquiries and/or liability.

Part I:  Antitrust and Algorithms

The Division, which is the agency with authority to prosecute criminal antitrust misconduct, brought its first case involving AI in United States v. Topkins, Case No. 3:15-cr-00201-WHO (N.D. Cal. 2015). In Topkins, defendant David Topkins sold posters through Amazon Marketplace, Amazon.com, Inc.’s website for third-party sellers. Topkins conspired with competitors to fix and maintain the price of certain posters. Between September 2013 and January 2014, the companies not only engaged in direct discussions about the price of posters but also agreed to use an algorithm to coordinate their activity. The algorithm collected data to identify the lowest price in the market, and the conspiring sellers set their selling prices slightly below that level, which inflated prices and impeded real competition in the market.
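To make the mechanics concrete, the short Python sketch below shows how a repricing rule of the kind described above might operate. It is a stylized illustration only; the function, the seller names, the prices, and the undercut margin are hypothetical and are not drawn from the case record.

```python
# Stylized "price just below the lowest competitor" rule of the kind described
# above. All seller names, prices, and margins are hypothetical.

def reprice(own_seller: str, market_prices: dict, undercut: float = 0.01) -> float:
    """Return a price set just below the lowest rival price observed in the market."""
    lowest_rival = min(p for seller, p in market_prices.items() if seller != own_seller)
    return round(lowest_rival * (1 - undercut), 2)

# Hypothetical observed poster prices scraped from a marketplace
observed = {"seller_a": 24.99, "seller_b": 25.49, "seller_c": 26.00}
print(reprice("seller_a", observed))  # 25.24 -- just under the lowest rival price
```

Used by a genuinely independent seller, a rule like this pushes prices down; the conduct in Topkins was unlawful because the sellers first agreed not to compete and then used algorithmic pricing to keep their prices aligned.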

In economic terms, the Topkins case is relatively small. The affected volume of commerce was a mere $175,000, which limited the range of penalties under the U.S. Sentencing Guidelines. In 2015, Topkins paid a fine of $20,000 and received no jail time. In 2016, Trod Limited, one of the companies participating in the conspiracy, paid a $50,000 fine and agreed to retain KPMG to serve as a compliance monitor. In 2019, defendant Daniel William Aston, part owner of Trod when it was doing business as “Buy For Less,” was sentenced to six months in prison but received five months’ credit for time served in custody in Spain while awaiting extradition to the U.S.

Despite the relatively low penalties, Topkins is considered a watershed case because it was the first time the Division prosecuted defendants who used AI as a tool to further antitrust misconduct. In the years since Topkins, the Division and the FTC have closely scrutinized the intersection between AI and antitrust, as we will see in next week’s Part II.

On May 17, 2023, the U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) announced a settlement with MedEvolve, Inc. for $350,000. MedEvolve provides practice management, revenue cycle management, and practice analytics software services to health care entities. The settlement resulted from MedEvolve’s alleged violation of the Health Insurance Portability and Accountability Act (HIPAA) related to a 2018 data breach of the protected health information (PHI) of 230,572 individuals. The OCR alleged that MedEvolve failed to analyze and assess risks and vulnerabilities to electronic PHI and failed to enter into a business associate agreement with its subcontractor.

In July 2018, MedEvolve notified the OCR of a data breach resulting from PHI being made openly accessible via the internet through an FTP server. The PHI affected by this incident included patient names, addresses, telephone numbers, primary health insurer and doctor’s office account numbers, and some Social Security numbers.

In addition to the $350,000 penalty, MedEvolve has agreed to:

  • Conduct a risk analysis to determine risks and vulnerabilities to electronic patient data and its patient data systems;
  • Develop and implement a risk management plan to address and mitigate the security risks and vulnerabilities identified in the risk analysis;
  • As necessary, develop, maintain, and revise its HIPAA policies and procedures; and,
  • Enhance and/or supplement its existing HIPAA training.

To read the complete resolution agreement, click here.

Tennessee, Montana, Iowa, and Indiana have each passed a consumer privacy statute in recent weeks. These laws follow the trend started by the California Consumer Privacy Act by granting consumers the right to know whether a company is processing their data; the right to access that data, obtain a copy, and have it deleted; and the right to opt out of the sale of personal data. Similar to Connecticut’s Data Privacy Act, which appears to be emerging as a new standard, these laws grant special protections to children’s data up to age 16. The new statutes additionally impose data security, use and collection limitations, and consumer disclosure requirements.

However, each law comes with its own quirks. For example, Iowa’s Consumer Data Privacy Act includes a 90-day cure period for businesses before state Attorney General enforcement. None of these new laws provides consumers with a private right of action; California remains the only state to include a private right of action for consumer privacy violations. In addition, each of the new state privacy laws excludes individuals acting in a commercial or employment context, a departure from California’s law, which applies in those contexts.

Tennessee’s Information Protection Act was signed into law by Governor Bill Lee on May 11, 2023, and takes effect on July 1, 2024, the earliest effective date among the four new laws. It applies to persons that conduct business in Tennessee or produce products or services targeted to Tennessee residents and that either control or process personal information of at least 100,000 consumers during a calendar year, or control or process personal information of at least 25,000 consumers and derive more than 50 percent of their gross revenue from the sale of personal information. The Tennessee Attorney General has jurisdiction over violations and may seek civil penalties of up to $15,000 per violation.

Montana’s Consumer Data Privacy Act goes into effect October 1, 2024. It applies to Montana companies that process or control the personal data of 50,000 or more Montana residents or control or process the personal data of 25,000 or more consumers and derive more than 25 percent of gross revenue from the sale of personal data.

Iowa’s Consumer Data Privacy Act becomes effective January 1, 2025. It applies to Iowa companies that control or process the personal data of at least 100,000 Iowa residents or derive more than 25 percent of gross revenue from the sale of personal data.

Indiana’s Consumer Data Privacy Act goes into effect January 1, 2026. It applies to Indiana companies that control or process the personal data of at least 100,000 Indiana residents or derive more than 25 percent of gross revenue from the sale of personal data. We will continue to monitor these consumer privacy laws as more states surely follow suit.
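As a rough planning aid only (not legal advice), the sketch below shows how a compliance team might encode two of the applicability thresholds summarized above, using Tennessee and Montana as examples. The structure and field names are hypothetical, each statute carries additional conditions (such as conducting business in or targeting the state), and the statutory text controls.

```python
# Illustrative applicability check based on the thresholds summarized above.
# Field names and structure are hypothetical; consult each statute's actual text.
from dataclasses import dataclass

@dataclass
class StateFootprint:
    consumers_processed: int          # consumers whose personal data is controlled or processed
    revenue_share_from_sales: float   # share of gross revenue from selling personal data

def tennessee_applies(f: StateFootprint) -> bool:
    # 100,000 consumers, or 25,000 consumers plus more than 50% of revenue from data sales
    return f.consumers_processed >= 100_000 or (
        f.consumers_processed >= 25_000 and f.revenue_share_from_sales > 0.50)

def montana_applies(f: StateFootprint) -> bool:
    # 50,000 residents, or 25,000 consumers plus more than 25% of revenue from data sales
    return f.consumers_processed >= 50_000 or (
        f.consumers_processed >= 25_000 and f.revenue_share_from_sales > 0.25)

# Example: 30,000 consumers processed, 40 percent of revenue from data sales
footprint = StateFootprint(30_000, 0.40)
print(tennessee_applies(footprint))  # False -- below both Tennessee prongs
print(montana_applies(footprint))    # True  -- meets Montana's second prong
```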

Globhe Drones, based in Sweden, provides a subscription-based platform that gives businesses access to data from about 8,000 drone operators in 134 countries. Globhe’s drone data marketplace gathers aerial imagery and generates digital terrain models to assist in flood modeling. Users of the platform can order specific drone data missions from the marketplace almost instantly. Drones capture high-resolution data while leaving little to no carbon footprint during operations.

Globhe CEO Helena Samsioe said, “Due to climate change, we see more destructive climate disasters more frequently worldwide. Malawi has suffered from three major floods in the past four years. Data from drones help respond to floods more efficiently but also help limit the destruction in the first place. However, it’s been hard to get ahold of data from drones, and we’re happy to have changed that by unleashing the power of drones through the Globhe marketplace – now making data from drones easily accessible when and where it’s needed.” Globhe’s most recent engagement was with UNICEF Malawi to provide drone services in emergency preparedness and response.

Overall, the platform is effective for flood-risk preparedness and disaster response, and the drone data marketplace is an example of how drone technology can cut costs, increase safety, and improve infrastructure. Models produced by the software are tied to known ground control points (GCPs), yielding vertical and horizontal accuracy of roughly 10 centimeters. The spatial resolution is also much higher than that of satellite imagery, which is generally 5 to 30 meters, compared with roughly 5 centimeters for certain types of high-resolution maps and 50 centimeters for Digital Surface and Digital Terrain models.

The Globhe platform has found a way to use data collected from drones to protect flood-prone communities from devastating effects. As this example shows, drones can serve as useful tools for data collection and provide us with information that might otherwise be unavailable.

This week, the CEO of OpenAI, the company behind ChatGPT, and the Chief Privacy Officer of IBM testified before the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law. During that hearing, both reportedly “called on U.S. senators…to more heavily regulate artificial intelligence technologies that are raising ethical, legal and national security concerns.”

According to the Los Angeles Times, OpenAI’s CEO, Sam Altman, agreed that the use of AI could solve big problems but cautioned that “If this technology goes wrong, it can go quite wrong.” He posited that companies should have to obtain a license to operate and conduct a series of tests before releasing new models.

IBM’s Chief Privacy and Trust Officer Christina Montgomery stated that it’s not too late to address how the AI tools are used through “precision regulation.”

For its part, the White House issued a “Blueprint for an AI Bill of Rights” for consumers, which is a helpful guide to some of AI’s harmful effects on consumers, including algorithmic discrimination and abusive data practices that affect consumers’ privacy.

While Congress grapples with how to regulate AI, consumers should continue to research and stay abreast of the good and evil of AI.

Threat actors never cease to find innovative ways to extort their victims. If only threat actors would use their creativity for good causes.

This week, Bluefield University warned its students to be careful of texts sent through the University’s communication system after a ransomware group used that system to message the campus about a ransomware attack in progress.

According to reports, the ransomware group used the University’s communication system to “send threatening messages out to all of Bluefield University’s students and employees.” The message stated “We’re the Avoslocker ransomware. We hacked the university network to exfiltrate 1.2 TB of files. We have admissions data from thousands of students. Your personal information is at risk to be leaked on the dark web blog. Do not allow the university to lie about the severity of the attack.”

The students received a one-day reprieve from exams because of the ransomware attack.

The FBI identifies AvosLocker as a ransomware-as-a-service group that targets critical infrastructure, including financial services, critical manufacturing and government facilities.