The United States joined 39 other countries this week in the International Counter Ransomware Initiative, an effort to stem the flow of ransom payments to cybercriminals. The initiative aims to cut off criminals’ funding through better information sharing about ransom payment accounts. Member states will develop two information-sharing platforms, one created by Lithuania and another jointly by Israel and the United Arab Emirates. Members of the initiative will share a “black list” through the U.S. Department of the Treasury, including information on digital wallets being used to move ransomware payments. Finally, in an interesting convergence of two of technology’s biggest topics, the initiative will use artificial intelligence (AI) to analyze cryptocurrency blockchains and identify criminal transactions.

While government officials near-unanimously counsel against paying ransoms, organizations caught in a ransomware attack often pay to avoid embarrassment and to lower the cost of incident response and mitigation. In the aggregate, however, paying ransoms leads to ballooning ransom demands and escalating ransomware activity. This initiative may help address these long-term trends.
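To make the “black list” mechanism concrete, here is a minimal sketch of how shared wallet intelligence could be screened against on-chain transactions. Everything here is hypothetical for illustration: the wallet addresses, the transaction records, and the `flag_suspicious` helper are invented, and real blockchain analytics involve far more sophisticated clustering and AI-driven heuristics than a simple set lookup.

```python
# Illustrative sketch: screening transactions against a shared "black list"
# of wallet addresses tied to ransom payments. All addresses and
# transactions below are made up for demonstration purposes.

# Hypothetical shared blacklist distributed among initiative members
BLACKLIST = {"bc1q-ransom-wallet-1", "bc1q-ransom-wallet-2"}

def flag_suspicious(transactions, blacklist=BLACKLIST):
    """Return transactions whose sender or receiver is on the blacklist."""
    return [
        tx for tx in transactions
        if tx["from"] in blacklist or tx["to"] in blacklist
    ]

txs = [
    {"from": "bc1q-exchange-hot", "to": "bc1q-user-a", "amount": 0.5},
    {"from": "bc1q-victim-org", "to": "bc1q-ransom-wallet-1", "amount": 2.1},
]
print(flag_suspicious(txs))  # flags only the payment to the blacklisted wallet
```

The value of a shared list is exactly this kind of multiplier effect: a wallet identified by one member becomes a tripwire for every participating jurisdiction.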

On October 30, 2023, the Biden Administration issued its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The EO notes that Artificial Intelligence (AI) “holds extraordinary potential for both promise and peril.” Because the Administration “places the highest urgency on governing the development and use of AI safely and responsibly,” the EO is designed to advance “a coordinated, Federal Government-wide approach” to doing so.

The EO outlines eight guiding principles and priorities:

  1. Artificial Intelligence must be safe and secure, including understanding and mitigating risks of AI systems before use.
  2. Promoting responsible innovation, competition, and collaboration around AI’s use, including investments in AI-related education, training, development, research, and capacity to promote a fair, open, and competitive ecosystem and marketplace.
  3. A commitment to support American workers for the responsible development and use of AI.
  4. AI policies that are consistent with advancing equity and civil rights.
  5. Protecting Americans using AI and AI-enabled products in daily activities from harm.
  6. Protecting Americans’ privacy and civil liberties as AI advancements continue.
  7. Managing the risks from the Federal Government’s own use of AI and increasing its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
  8. Enabling the Federal Government to lead the way to global societal, economic, and technological progress.

The EO tasks the Secretary of Commerce with leading the effort on a number of fronts. For instance, the EO requires that the Department of Commerce’s National Institute of Standards and Technology, in coordination with the Departments of Energy and Homeland Security “and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate,” within 270 days of the Order:

  • Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.
  • Establish appropriate guidelines, procedures, and processes to enable developers of AI to conduct red-teaming tests.

The EO authorizes the Secretary of Commerce to “take such actions, including the promulgation of rules and regulations, and to employ all powers granted to the president by the International Emergency Economic Powers Act…, as may be necessary to carry out the purposes” of certain sections of the EO.

The EO also requires the head of each agency with regulatory authority over critical infrastructure, in coordination with the Cybersecurity and Infrastructure Security Agency, to consider cross-sector risks and evaluate potential risks related to the use of AI in critical infrastructure sectors within 90 days of the EO and at least annually thereafter.

The EO establishes the White House Artificial Intelligence Council (the White House AI Council) with representatives from 28 federal agencies and departments tasked with “coordinating the activities of agencies across the Federal Government to ensure the effective formation, development, communication, industry engagement related to, and timely implementation of AI-related policies.” As we continue to wade through the 63-page EO and consider its implications for different industries, we will update you on the portions most relevant to each.

Next week, the House Select Committee on the Chinese Communist Party plans to introduce a bill that would ban the purchase of Chinese-made drones by the U.S. government. The bill is an effort to revive a prior push for this ban that was derailed by lobbying efforts.

The bill, titled the American Security Drone Act, would not only ban U.S. government agencies from using Chinese-made drones but would also bar state and local governments from purchasing Chinese drones with federal grants.

While the bill does not name particular drone manufacturers, DJI, a Chinese company and the world’s largest manufacturer of commercial drones, would be directly affected by this law. Governmental agencies as well as numerous industries across the U.S. currently use DJI drones in their operations. Representative Mike Gallagher said, “The Chinese Communist party consistently weaponizes its near monopoly on the drone market against the good guys; restricting drone exports to Ukraine while Hamas uses them to perpetrate brutal terrorist attacks.” DJI opposes this bill, as it has opposed similar bans in the past; those past efforts were supported by U.S. police agencies, which argued that no comparably cost-effective drones are available from U.S. manufacturers.

Eric Sayers of Beacon Global Strategies, a security expert on Chinese threats, said, “This bill would prohibit the federal government from using American taxpayer dollars to purchase this equipment from countries like China, supporting the PRC’s malign behavior and posing a serious national security threat to the U.S. and our allies. It is imperative that Congress pass this bipartisan bill to protect U.S. interests and our national security supply chain.” This bill is an example of lawmakers’ efforts to prevent Chinese technology from being used in ways that diminish U.S. national security and, perhaps more importantly, to halt reliance on Chinese-made technology. As Sayers also said, “The lesson here is we must identify and prevent critical dependencies on the PRC before they emerge, burrow in our economy, and become politically and financially expensive to reverse.”

We previously reported on the unfortunate data breach suffered by 23andMe last month and its implications. We never imagined how horrible it could be.

According to an October 6, 2023, report by Wired, hackers involved with the 23andMe breach posted “an initial data sample on the platform BreachForums…claiming that it contained 1 million data points exclusively about Ashkenazi Jews…and start[ed] selling 23andMe profiles for between $1 and $10 per account.”

Several days later, it was reported that “another user on BreachForums claimed to have the 23andMe data of 100,000 Chinese users.”

The implications of posting account information, including users’ potential genetic information, for political or hateful reasons are real and unfolding in real time. According to news reports, the war in Gaza “is stoking antisemitism in the U.S.” and across the world. Preliminary data from the Anti-Defamation League shows a 388% jump in antisemitic incidents in the U.S. since Hamas’ attack on Israel on October 7, 2023.

If you are a 23andMe user, for your safety and well-being it is important to find out whether your genetic data was compromised and posted by extremist threat actors. The Electronic Frontier Foundation published an article, “What to Do if You’re Concerned About the 23andMe Breach,” providing more information about the background of the breach, the selling of the information, and what you can do to protect yourself further, including deleting your data.

Resilience issued its Midyear 2023 Claims Report, which is well worth the read.

In addition to commentary on the impact of the MOVEit incident, the report’s key findings include:

  • Ransomware remains a leading cause of loss for insureds
  • Ransom costs continue to increase, which may mean that threat actors are pursuing larger targets
  • Third-party vendors are being targeted and have become the most common point of failure in claims
  • Extortion by threatening to release exfiltrated data continues to be a go-to strategy for threat actors
  • Cybercrime is “indiscriminate,” and all industries are targets

It is important to stay on top of data that is shared by insurers and others in the cybersecurity industry to assess risk and new threats. This midyear report is current and timely and provides valuable information to consider when assessing your cybersecurity strategy.

According to a press release, Personal Touch, a home health company located on Long Island, has reached a $350,000 settlement with New York Attorney General Letitia James over a data breach that occurred in January 2021, when a Personal Touch employee “opened a malware-infected file attached to a phishing email that allowed a hacker to gain access to Personal Touch’s network and collect patient and employee records from an unencrypted server.”

The incident compromised the personal information, including names, Social Security numbers, and health information, of 316,845 New Yorkers. In addition to the monetary settlement, Personal Touch must offer affected consumers free identity theft protection and recovery services and enhance its information security program.

This week we are pleased to have a guest post by Robinson+Cole Artificial Intelligence Team patent agent Daniel J. Lass.

After previously finding that the Biden White House and the FBI likely violated First Amendment free speech protections for some users of online social media platforms, the Fifth Circuit expanded its ruling to find that the Cybersecurity and Infrastructure Security Agency (CISA) also likely violated the First Amendment.

To stop the spread of misinformation about the 2020 election and COVID-19 on social media platforms, the government attempted to work with the social media companies to remove or restrict false or misleading posts. The Fifth Circuit, however, found that these interactions were coercive and pushed social media companies to adopt a more restrictive content removal policy. Although CISA was not included in the earlier ruling, the Fifth Circuit has now determined that CISA worked jointly with the FBI to push these policies onto the social media companies. Specifically, the new decision found that CISA told companies whether individual posts were true, and that CISA pressured companies to remove posts that it determined were false. As a result of this ruling, CISA and its director Jen Easterly are barred from coercing or significantly encouraging social media companies to censor posts. The Biden Administration had previously asked the Supreme Court to stay the injunction following the Fifth Circuit’s initial ruling and renewed that request after the updated ruling. In response to the renewed request, the Supreme Court agreed to stay the Fifth Circuit’s injunction until it has an opportunity to fully consider the case. A ruling later this term will likely influence how the government combats potential misinformation and interference in the 2024 election.

Pixels, small pieces of tracking code that businesses use to assess the success of their advertising campaigns, are creating headaches for in-house counsel as decades-old laws are revived by litigants. Unlike cookies, pixels cannot be easily blocked with privacy software. The potential consequences of improper use have increased due to the Federal Trade Commission’s increasingly close scrutiny of technology that collects and shares consumer data.

The common litigation strategy in pixel consumer class actions has focused on plaintiffs’ lack of express consent to sharing their personal data with the third parties that made the tracker. Litigants are most active against healthcare providers and businesses that host videos online. On the regulatory side, the FTC announced two enforcement actions against telehealth companies allegedly transmitting sensitive medical data to adtech giants like Google and Meta. In response to this litigation trend, many privacy policies now include explicit language describing the tracking technologies used. This raises an interesting tension for professionals drafting privacy policies: the policies must be detailed enough to ensure informed consent while remaining short, engaging, and accessible to consumers. Navigating the emerging privacy landscape is increasingly complex, and prudent businesses should proactively engage with these issues.
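The mechanics behind these claims are simple: a pixel is typically a tiny, invisible image embedded in a page, and the identifying details travel to the third party in the image URL’s query string when the browser loads it. The sketch below illustrates the idea; the endpoint, parameter names, and URLs are hypothetical and do not reflect any particular vendor’s actual pixel.

```python
# Illustrative sketch of how a tracking pixel carries data: the browser
# requests a tiny image, and the details ride along in the query string.
# The endpoint and parameter names here are invented for demonstration.
from urllib.parse import urlencode

def build_pixel_url(endpoint, event, user_id, page):
    """Build the URL a tracking pixel <img> tag would request."""
    params = {"ev": event, "uid": user_id, "url": page}
    return f"{endpoint}?{urlencode(params)}"

pixel = build_pixel_url(
    "https://tracker.example.com/px.gif",
    event="PageView",
    user_id="abc123",
    page="https://clinic.example.com/schedule-appointment",
)
# Embedded in a page roughly as:
#   <img src="{pixel}" width="1" height="1" style="display:none">
print(pixel)
```

Because the browser makes this request directly to the third party, the full page URL, here a potentially sensitive appointment-scheduling page, is disclosed the moment the page loads, which is precisely why plaintiffs focus on the absence of express consent.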

Axios is reporting that the White House will unveil an executive order on artificial intelligence (AI) at an event scheduled for Monday, October 30, 2023, at 2:30 p.m. EST.

The subject matter of the executive order will concentrate on “safe, secure and trustworthy artificial intelligence” usage. We will summarize the AI executive order next week after it is issued.

Federal Communications Commission Chair Jessica Rosenworcel recently announced a proposed directive to focus the FCC on AI’s burgeoning role in scam calls, particularly those targeting the elderly. The proposal will be presented at the November 15 commission meeting and would direct research into the strengths and weaknesses of generative AI, the role that AI plays in scamming consumers, and the role that AI may play in enforcing the Telephone Consumer Protection Act (TCPA).

This proposal is very timely. We’ve written about the proliferation of AI voice scamming, including the importance of training and awareness around this new threat. Elder financial scams are a particular area of concern for cybersecurity professionals and consumer protection advocates, given that elder fraud losses increased by 391.9 percent ($343 million to $1.685 billion) from 2017 to 2021, and financial loss due to confidence schemes targeting elderly consumers is notoriously underreported. As generative AI blurs the lines between real life and simulation, effective consumer protection will depend on robust regulation, education, and law enforcement efforts.