Security research firm Halcyon recently reported that it “encountered” a new ransomware organization dubbed Volcano Demon several times in the past few weeks.

According to its report, Volcano Demon uses the encryptor LukaLocker with a .nba file extension. Halcyon provided an encryptor sample in its post.

Although Volcano Demon uses traditional methods of extortion, including encryption, exfiltration, and double extortion techniques, Halcyon noted that “logs were cleared prior to exploitation and…a full forensic evaluation was not possible due to their success in covering their tracks and limited victim logging and monitoring solutions installed prior to the event.”

Further, and very concerning to this writer, Volcano Demon doesn’t establish a leak site or negotiate through what we sickeningly call “normal” communication methods. No, Volcano Demon doesn’t email or use Tor-based platforms; Volcano Demon calls the victim. This means the attackers are calling random people in the organization (people who are probably not part of the incident response team) and threatening and frightening them with angry phone calls. During an incident, it is crucial to control communication between the threat actor and the organization, and professionals are hired to assist with exactly that. That control goes out the window when the threat actor starts calling random people in the organization who are unprepared and vulnerable. I don’t need to detail the risks and concerns this new technique raises.

Once one threat actor finds a successful technique, others copy it, so I predict this will not be the last time we see it used. It is important to highlight this technique when you conduct tabletop exercises, when you determine the steps you will take to respond and mitigate, and when you roll out wider cybersecurity training to the organization. Your people need to know what to do if a threat actor calls them: who to contact and exactly what steps to take. They can’t be left to figure it out on their own. I am now incorporating this scenario into all training sessions to at least give employees a heads-up and provide tips for keeping a cool head in a stressful situation.

Some writers (not from my great state of Rhode Island) act like Rhode Island has been behind the times when it comes to data privacy and security when discussing the state’s new privacy law. I feel a need to explain that this is just not so. Rhode Island is not a laggard when it comes to data privacy.

Rhode Island has had a data privacy law on its books for a long time, though it was not called a privacy law. The Rhode Island Identity Theft Protection Act, enacted in 2015, was designed to protect consumers’ privacy and provide for data breach notification. It was later amended to add data security requirements, following in the footsteps of the then-novel Massachusetts data security regulations, making it a one-stop shop for data privacy, security, and breach notification. Still, it did not give individuals the right to access or delete data and was not as robust as the new generation of data privacy laws. Rhode Island was also an early state to include health information in its definition of personal information requiring breach notification in the event of unauthorized access, use, or disclosure. Many states still do not include health information in that definition.

But just so the record is clear, consumer protection has been in the DNA of Rhode Island’s laws for many years, and the new privacy law was an expansion of previous efforts to protect consumers.

The new privacy law in Rhode Island expands the privacy protections for consumers and is the latest in a wave of privacy laws being enacted in the United States. As of this writing, 19 states have new privacy laws, and Rhode Island makes it 20.

All of the privacy laws are fairly similar, except for California, which is the only state to date that provides for a private right of action in the event of a data breach (with requirements prior to the filing of a lawsuit).

That said, for those readers who will fall under the Rhode Island law and are in my home state, here are the details of the law (the Rhode Island Data Transparency and Privacy Protection Act (RIDTPPA)) of which you should be aware:


Artificial Intelligence (AI) can offer manufacturers and other companies necessary assistance during the current workforce shortage. It can help workers answer questions from customers and other workers, fill skill gaps, and even help get your new employees up to speed faster. However, using AI comes with challenges and risks that companies must recognize and address.

For example, AI can produce a compelling and utterly wrong statement – a phenomenon called “hallucination.” If your car’s GPS has ever led you to the wrong location, you have experienced this. Sometimes, this happens because the AI was given bad information, but even AI supplied with good information can hallucinate, to your company’s detriment. And your employees cannot produce good work with bad information any more than an apple tree can produce pears.

Also, many real-world situations can confuse AI. AI can only recognize a pattern it has seen before, and if it encounters something it has not seen before, it can react unpredictably. For example, putting a sticker on a stop sign can flummox an AI system, and it can confidently misidentify images. Misidentifying images in real-world situations can cause problems if organizations employ facial or image recognition technology.

These problems can be managed, however. Through AI governance, companies can mitigate these issues to use AI safely, productively, and effectively. 

For example, AI can only supplement human thought, not replace it. So, appropriate AI usage requires humans to monitor what AI is doing. Your company should no more have AI running without human monitoring than you would follow your GPS’s instructions into a lake. Without appropriate monitoring, your AI can easily start hallucinating and promulgating incorrect information across your organization, or it can perpetuate biases that your company is legally obligated to avoid.

This monitoring will have to take place in the context of written policies and procedures. Just like you would tell your teenager how to drive a car before letting them behind the wheel, you should have written policies in place to inform your employees on the safest, most effective use of AI. These procedures will need buy-in from your organization’s relevant stakeholders and will need to be reviewed by legal counsel knowledgeable about AI. Your organization will have to leverage its culture to ensure that the key personnel know about the plan and can implement it properly.

Also, your company will need an AI incident response plan. We tell teenagers what to do if they have an accident, and the same proactive, preventative strategy applies to AI. An incident response plan tells your company how to address problems before they arise rather than forcing you to scramble in real time to scrape together a suboptimal solution to a foreseeable problem. Should litigation or a government enforcement proceeding follow an AI incident, a written incident response plan can offer welcome guidance and protection.

Like a car, AI can make you more productive and get you to where you’re going faster. Also, like a car, AI can land you in a wreck if you’re not careful. Your company can enjoy the benefits and manage AI’s risks with thoughtful AI governance.

Verizon’s 2024 Data Breach Investigations Report, a must-read publication, was published on May 1, 2024. The report indicates that “Over the past 10 years, the use of stolen credentials has appeared in almost one-third (31%) of all breaches…”

Stolen credentials mean a threat actor has obtained a user’s username and password. When that happens, the threat actor has authenticated, unfettered access to all of the data the user can access in the system, and can reach that data without being detected by tools put in place to flag malicious intrusions. This is a nightmare for organizations. Compromised passwords are also an issue because threat actors gather them and use them in brute-force and credential stuffing attacks: if the user has reused a compromised password on any other platform, the threat actor has an easy way into every account protected by that password. That is why we always tell users not to use the same password across platforms.
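For the technically inclined, it is possible to check whether a password has appeared in a known breach without ever transmitting the password itself. The sketch below (a minimal illustration, not a recommendation of any particular service) shows the k-anonymity approach popularized by Have I Been Pwned’s Pwned Passwords range API: only the first five characters of the password’s SHA-1 hash are sent, and the match against the returned suffixes happens locally.

```python
import hashlib

def k_anonymity_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix that is
    sent to the range API and the 35-character suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# Only the 5-character prefix ever leaves your machine. The service
# returns every breached-hash suffix sharing that prefix, e.g.:
#   GET https://api.pwnedpasswords.com/range/<prefix>
# and you check for your suffix in the response yourself.
prefix, suffix = k_anonymity_parts("password")
print(prefix)  # 5BAA6 -- "password" is, unsurprisingly, in every breach corpus
```

The point of the design is that the server never learns which password (or even which full hash) you were checking.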

It is important to change passwords frequently and to follow your organization’s procedure for changing passwords. It is also crucial not to use the same password across different platforms.

A recent article by Cybernews shows how vital this mantra is. According to the article, “Cybernews researchers discovered what appears to be the largest password compilation with a staggering 9,948,575,739 unique plaintext passwords. The file with the data, titled rockyou2024.txt, was posted on July 4th by forum user ObamaCare.” The passwords came from a mix of old and new data breaches.

Apparently, the threat actors compiled “real-world passwords used by individuals all over the world. Revealing that many passwords for threat actors substantially heightens the risk of credential stuffing attacks.”

Cybernews further states that it believes “that attackers can utilize the ten-billion-strong RockYou2024 compilation to target any system that isn’t protected against brute-force attacks. This includes everything from online and offline services to internet-facing cameras and industrial hardware.”

Here are the recommendations from the Cybernews research team:

  • Immediately reset the passwords for all accounts associated with the leaked passwords. Select strong, unique passwords that are not reused across multiple platforms.
  • Enable multi-factor authentication (MFA) wherever possible. This enhances security by requiring additional verification beyond a password.
  • Utilize password manager software to generate and store complex passwords securely. Password managers mitigate the risk of password reuse across different accounts.
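To make the third recommendation concrete, here is a minimal sketch of the kind of generation a password manager performs, using only Python’s standard library. The 20-character length and the character set are illustrative choices of mine, not part of the Cybernews guidance.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation
    using the OS's cryptographic random source (never the `random` module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password on every call
```

The key design point is `secrets` rather than `random`: the former draws from the operating system’s cryptographically secure source, so the output is not predictable to an attacker.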

TeamViewer, which provides remote connectivity products and services, announced that it detected a cybersecurity event on its internal IT system on June 26, 2024. TeamViewer stated that the event did not affect the TeamViewer product environment, connectivity platform, or any customer data.

A recent update by TeamViewer states: “According to current findings, the threat actor leveraged a compromised employee account to copy employee directory data, i.e., names, corporate contact information, and encrypted employee passwords for our internal corporate IT environment. We have informed our employees and the relevant authorities.”

TeamViewer is rebuilding its internal corporate IT environment.

In a July 1, 2024 post, SecurityWeek reported that TeamViewer has confirmed the attack was launched by the Russia-based cybercriminal group known as APT29, as NCC Group and Health-ISAC had previously reported. The attack came on the heels of an attack by the same group against Microsoft, which has been alerting customers that APT29 stole customer emails during that earlier intrusion.

APT29 is a notorious group that has attacked many US-based companies over many years, and it does not look like this threat will abate any time soon.

The Health Sector Cybersecurity Coordination Center (HC3) provides timely updates to the health care sector on cybersecurity threats and mitigation. In the last several weeks, HC3 has issued two alerts worth paying close attention to if you are in the health care sector.

The first, issued on June 18, 2024, warns of Qilin, aka Agenda Ransomware. According to the HC3 threat profile:

Qilin is a ransomware-as-a-service (RaaS) offering in operation since 2022, and which continues to target healthcare organizations and other industries worldwide. The group likely originates from Russia and was recently observed recruiting affiliates in late 2023. The ransomware has variants written in Golang and Rust and is known to gain initial access through spear phishing, as well as leverage Remote Monitoring and Management (RMM) and other common tools in its attacks. The group is also known to practice double extortion, demanding ransom payments from victims to prevent data from being leaked.

The threat actors using Qilin have claimed responsibility for more than 60 ransomware attacks already in 2024.

The second alert, issued on June 27, 2024, relates to a new critical vulnerability discovered in the MOVEit file transfer platform, which is used by many health care organizations. According to HC3, “exploit code is also available to the public, and this vulnerability is being actively targeted by cyber threat actors. All healthcare organizations are strongly urged to identify any vulnerable instances of MOVEit that exist in their infrastructure and patch them as a high priority.”

The vulnerabilities relate to improper authentication processes. Progress, the owner of MOVEit, identified the vulnerabilities in early June and has issued two patches to address them. Security firms have provided additional research on the vulnerabilities, which is referenced in the Alert. The vulnerabilities are being actively exploited where they have not been patched; a successful exploit could give a threat actor access to the environment and cause data loss and compromise. This is considered a critical vulnerability, so ensuring that your organization has applied the patches is crucial.

On May 8, 2024, Chief Judge Miranda Du of the U.S. District Court for the District of Nevada granted defendants’ motion to dismiss with prejudice the complaint in Gibson v. Cendyn Group, LLC, Docket No. 2:23-cv-00140-MMD-DJA, an antitrust case alleging that hotel operators on the Las Vegas Strip used algorithms to inflate room prices in violation of Section One of the Sherman Act. The court’s reasoning provides litigants on both sides with a framework for future cases.    

Plaintiffs claimed that Caesars Entertainment, Inc., Treasure Island, LLC, and Wynn Resorts Holdings, LLC (hereinafter, the “Hotel Operators”) charged supercompetitive prices for rooms through GuestRev (individual rooms) and GroupRev (rooms for groups), which are shared-revenue management systems licensed by the Cendyn Group. Cendyn allegedly spearheaded a hub-and-spoke conspiracy[1] through an algorithm that used price and occupancy data to recommend room rates. The algorithm’s “optimal” rate was visible to individual hotel operators, who were discouraged by system prompts from overriding the recommendation. To establish anticompetitive effects in the relevant market, the plaintiffs relied on third-party economic analyses of revenue and price trends as well as circumstantial evidence known as “plus factors”—e.g., the motive and opportunity to conspire, market structure, the interchangeability of hotel rooms, and inelastic demand.

Before the court entered judgment in favor of defendants, Judge Du closely scrutinized plaintiffs’ claims. In an October 23, 2023 order dismissing plaintiffs’ original complaint with leave to amend, the court asked plaintiffs to address: (i) when the conspiracy began and who participated; (ii) whether the Hotel Operators colluded to adopt a shared set of pricing algorithms; (iii) whether the Hotel Operators must accept the price recommendations; and (iv) whether the algorithm facilitated the exchange of non-public information.[2] 

In its 2024 decision, the court ruled that plaintiffs’ amended complaint failed to meet these threshold requirements. First, the court disagreed with plaintiffs’ contention that the initial timing of the conspiracy was irrelevant because the Hotel Operators renewed their licensing agreements every year. Because defendants started using Cendyn’s technology at various points in time over a 10-year period, there was “no existing agreement to fix prices that a late-arriving spoke could join” and “a tacit agreement among [the Hotel Operators] was implausible.”[3] 

Nor did plaintiffs allege that the Hotel Operators “agreed to be bound by [Cendyn’s] recommendations, much less that they all agreed to charge the same prices.”[4] To the contrary, plaintiffs maintained that Cendyn had difficulty getting customers to accept the recommendations. Even drawing all inferences in plaintiffs’ favor, the court determined that the Hotel Operators were independently reacting to similar pressures within an interdependent market, consistent with lawful conscious parallelism.   

Finally, the court rejected plaintiffs’ contention that the Hotel Operators used Cendyn to exchange confidential information or, in the alternative, that Cendyn used machine learning and algorithms to facilitate the exchange of confidential information. The court reasoned that without more evidence, “using data across all your customers for research does not plausibly suggest that one customer has access to the confidential information of another customer—it instead plausibly suggests that Cendyn uses data from various customers to improve its products.”[5] The Cendyn dismissal will not be the last word on the “relatively novel antitrust theory premised on algorithmic pricing.”[6] Pricing algorithms are the focus of three class action lawsuits pending in different jurisdictions.[7] As algorithms become a mainstream tool for pricing, more are certain to follow. 


[1] A hub-and-spoke antitrust conspiracy consists of (i) a leading party (“the hub”); (ii) co-conspirators (“the spokes”); and (iii) connecting agreements (“the rim”). 

[2] See generally Order, Gibson v. Cendyn Group, Inc., 2:23-cv-00140-MMD-DJA (D. Nev. Oct. 23, 2023).

[3] Order, Gibson v. Cendyn Group, Inc., 2:23-cv-00140-MMD-DJA at 4 (D. Nev. May 8, 2024).

[4] Id. at 6. 

[5] Id. at 10. 

[6] Id. at 5. 

[7] See Cornish-Adebiyi v. Caesars Entertainment, Inc., 1:23-cv-02536-KMW-EAP (D. N.J. filed Mar. 28, 2024); Duffy v. Yardi Sys. Inc., 2:23-cv-01391-RSL (W.D. Wash. filed on Mar. 1, 2024); In re: RealPage, Rental Software Antitrust Litig., 3:23-md-03071 (M.D. Tenn. filed on Nov. 15, 2023).

Manufacturers and other companies face a critical shortage of skilled workers in manufacturing, technology, healthcare, construction, hospitality, and other industries, a shortage that is outpacing educational institutions’ ability to train replacements. As baby boomers retire without sufficient younger workers to replace them, the problem will only worsen. Many companies are investing in artificial intelligence (AI) to compensate for these labor shortages.

AI refers to computers that can perform actions that typically require human intelligence. For example, finding your way from Point A to Point B used to require you to use your intelligence to read a map and navigate your path. Now, however, you just tell your car’s GPS where to go, and the AI figures out how to get there, taking into account traffic patterns, speed traps, and tolls. 

Just like AI can direct your driving, it can direct your employees to optimize their productivity.  AI tools can help workers answer questions from customers and other workers. AI can also assume basic tasks that would typically involve employees, such as the use of customer service chatbots to answer basic questions without involving call center employees. In this way, AI can free up employees to tackle more complicated tasks that may require human creativity.

AI can also fill skill gaps. Organizations are using AI to automate detection and response to ransomware and other cyber-attacks. In the healthcare field, AI can help doctors analyze patient data and trajectories. More broadly, AI might be able to notice transferable skills better than humans can; for example, an AI algorithm might notice that your receptionist has developed skills that would make her an exceptional salesperson.

Many manufacturers use AI to scan resumes. AI can review more resumes more quickly than any HR department can. Trained properly, AI can select the best resumes and enable your team to interview higher-quality candidates.

And when your company hires someone, AI can help get your new employees up to speed faster.  AI chatbots can guide new hires through the onboarding process and provide answers to questions in real time. The United Kingdom’s National Health Service is exploring the use of AI to help train new workers.

Of course, all of the foregoing uses have legal and logistical pitfalls. Using AI in a way that complies with the law and fulfills your requirements requires a robust AI governance program, which I will describe in my next post.

July is Military Consumer Month. On the eve of the Fourth of July, we celebrate democracy and thank our veterans and those presently serving for protecting us and our democracy. Thank you so very much.

It is therefore fitting that July has been deemed Military Consumer Month. Special attention should be given to protecting our service men and women from fraud, identity theft, and imposter scams. The Federal Trade Commission has set up a website, MilitaryConsumer.gov, specifically for those in the military to access helpful tips and education about scams affecting consumers. If you are in the military, avail yourself of these resources to protect yourself and your family from fraud.

The website is a one-stop shop for military personnel seeking information on scams and schemes: how to detect them, how to avoid becoming a victim, what to do if you are a victim, and resources to better equip you going forward.

Kudos to the FTC for providing a comprehensive place for those in the military to get helpful tips and tools on schemes, scams, and frauds.

Happy Fourth of July to all and stay safe this holiday weekend.

In the Biden Administration’s continuing effort to reduce cybersecurity risks posed by software from foreign adversaries, including Russia, the United States Department of Commerce (Commerce) issued a final rule (Rule) on June 16, 2023, entitled “Protecting Americans’ Sensitive Data from Foreign Adversaries,” and also amended a previously issued rule (“Securing the Information and Communications Technology Supply Chain”) that had been published under a Biden Executive Order. The Rule gives Commerce authority to prohibit or regulate communications technology or services, including software, connected to foreign adversaries that pose a risk to national security.

For the first time using the authority provided by the Rule, on June 19, 2024, Commerce issued a final determination prohibiting Kaspersky Lab, Inc., its affiliates, subsidiaries, and parent companies from “directly or indirectly” providing anti-virus software and cybersecurity products or services in the U.S. According to Commerce, “Kaspersky will generally no longer be able to, among other activities, sell its software within the United States or provide updates to software already in use. The full list of prohibited transactions can be found here.” Kaspersky has until September 29, 2024, to cease doing business in the U.S.; it may provide existing customers with anti-virus and codebase updates until that date.

Kaspersky has been selling software and services in the U.S. for years, so it is no doubt embedded in company cybersecurity programs throughout the U.S. According to Commerce:

            “Individuals and businesses that utilize Kaspersky software are strongly encouraged to expeditiously transition to new vendors to limit exposure of personal or other sensitive data to malign actors due to a potential lack of cybersecurity coverage. Individuals and businesses that continue to use existing Kaspersky products and services will not face legal penalties under the Final Determination. However, any individual or business that continues to use Kaspersky products and services assumes all the cybersecurity and associated risks of doing so.”

Commerce determined that Kaspersky poses an undue or unacceptable risk to national security, citing “the ability to gather valuable U.S. business information, including intellectual property, and to gather U.S. persons’ sensitive data for malicious use by the Russian Government,” and therefore prohibited continued transactions involving Kaspersky’s products and services.

On June 20, 2024, in coordination with Commerce, the Department of Treasury’s Office of Foreign Assets Control (OFAC) designated twelve executives and senior leaders of Kaspersky to the OFAC sanctions list. If you are using Kaspersky products or services, the final determination has a meaningful impact on your organization: as of June 19, 2024, Kaspersky can no longer provide support for any of its products or services in the U.S., and its executives are on the OFAC sanctions list. You may wish to heed Commerce’s recommendations if you are in this position.