Unfortunately, I’ve had unpleasant dealings with the Phobos ransomware group. Those interactions have become fodder for a good story when I educate client employees on recent cyber-attacks to keep them from becoming victims. The story highlights how these ransomware groups, including Phobos, are sophisticated criminal organizations with a managerial hierarchy. They use common slang in their communications and must obtain “authority” before negotiating a ransom. It’s a strange world.

Because of my unpleasant dealings with Phobos, I was particularly pleased to see that the Department of Justice (DOJ) recently announced the arrest and extradition of Russian national Evgenii Ptitsyn on charges that he administered the Phobos ransomware variant.

This week, the DOJ unsealed charges against two more Russian nationals, Roman Berezhnoy and Egor Nikolaevich Glebov, who “operated a cybercrime group using the Phobos ransomware that victimized more than 1,000 public and private entities in the United States and around the world and received over $16 million in ransom payments.” They were arrested “as part of a coordinated international disruption of their organization, which includes additional arrests and the technical disruption of the group’s computer infrastructure.” I’m thrilled about this win. People always ask me whether these cyber criminals get caught. Yes, they do. This is proof of how important the Federal Bureau of Investigation (FBI) is in combating international cybercrime, and how effective its partnership with international law enforcement is in catching these pernicious criminals. This is why I firmly believe that we must continue to share information with the FBI to assist with investigations, and why the FBI must be allowed to continue its important work to protect U.S. businesses from cybercrime.

New York, Texas, and Virginia are the first states to ban DeepSeek, the Chinese-owned generative artificial intelligence (AI) application, on state-owned devices and networks.

Texas was first to tackle the problem when it banned state employees from using both DeepSeek and RedNote on January 31, 2025. The Texas ban includes other apps affiliated with the People’s Republic of China, including “Webull, Tiger Brokers, Moomoo[,] and Lemon8.”

According to the Texas Governor’s press release:

“Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. To achieve that mission, I ordered Texas state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.” 

New York soon followed on February 10, 2025, banning DeepSeek from being downloaded on any devices managed by the New York State Office of Information Technology. According to the New York Governor’s release, “DeepSeek is an AI start-up founded and owned by High-Flyer, a stock trading firm based in the People’s Republic of China. Serious concerns have been raised concerning DeepSeek AI’s connection to foreign government surveillance and censorship, including how DeepSeek can be used to harvest user data and steal technology secrets.” The release further states: “The decision by Governor Hochul to prevent downloads of DeepSeek is consistent with the State’s Acceptable Use of Artificial Intelligence Technologies policy that was established at her direction over a year ago to responsibly evaluate AI systems, better serve New Yorkers, and ensure agencies remain vigilant about protecting against unwanted outcomes.”

The Virginia Governor signed Executive Order 26 on February 11, 2025, “banning the use of China’s DeepSeek AI on state devices and state-run networks.” According to the Governor’s press release, “China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia…We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.”

The ban “directs that no employee of any agency of the Commonwealth of Virginia shall download or use the DeepSeek AI application on any government-issued devices, including state-issued cell phones, laptops, or other devices capable of connecting to the internet. The Order further prohibits downloading or accessing the DeepSeek AI app on Commonwealth networks.”

These three states determined that the Chinese-owned applications DeepSeek and RedNote pose a threat by granting a foreign adversary access to critical infrastructure data. Their proactive bans will no doubt be followed by other states, much as state-level TikTok bans preceded the nationwide ban that the federal government enacted with bipartisan support. President Trump has paused that ban, despite the well-documented national security threats posed by the social media platform. Hopefully, more states will follow suit in banning DeepSeek and RedNote. Consumers and employers can take matters into their own hands by declining to download either app and banning them from the workplace. Get ahead of the curve, learn from the TikTok experience, and avoid DeepSeek and RedNote now.

Thomson Reuters scored a major victory in one of the first cases dealing with the legality of using copyrighted data to train artificial intelligence (AI) models. In 2020, Thomson Reuters sued the now-defunct AI start-up Ross Intelligence for alleged improper use of Thomson Reuters materials, including case headnotes in its Westlaw search engine, to train its new AI model.

A key issue before the court was whether Ross Intelligence’s use of the headnotes constituted fair use, which permits a person to use portions of another’s work in limited circumstances without infringing the copyright. Courts weigh four factors to determine whether a defendant can successfully assert the fair use defense: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion copied relative to the work as a whole; and (4) the effect of the use on the work’s value or potential market.

In this case, federal judge Stephanos Bibas determined that each side had two factors in its favor. But the fourth factor, which supported Thomson Reuters, weighed most heavily in his finding that the fair use defense was inapplicable because Ross Intelligence sought to develop a competing product. Lawsuits against other companies, such as OpenAI and Microsoft, are currently pending in courts throughout the country, and decisions in those cases may turn on similar questions about the fair use defense. However, Judge Bibas noted that Ross Intelligence’s AI model was not generative and that his decision was limited to that non-generative model. The distinction between the training data and resulting outputs of generative versus non-generative AI will likely be key to deciding future cases.

According to a highly critical article recently published by TechCrunch, the Department of Government Efficiency (DOGE), President Trump’s advisory board headed by Elon Musk, has “taken control of top federal departments and datasets” and has access to “sensitive data of millions of Americans and the nation’s closest allies.” The author calls this “the biggest breach of US government data.” He continues, “[w]hether a feat or a coup (which depends entirely on your point of view), a small group of mostly young, private-sector employees from Musk’s businesses and associates — many with no prior government experience — can now view and, in some cases, control the federal government’s most sensitive data on millions of Americans and our closest allies.”

According to USA Today, “The amount of sensitive data that Musk and his team could access is so vast it has historically been off limits to all but a handful of career civil servants.” The article points out that:

If you received a tax refund, Elon Musk could get your Social Security number and even your bank account and routing numbers. Paying off a student loan or a government-backed mortgage? Musk and his aides could dig through that data, too.

If you get a monthly Social Security check, receive Medicaid or other government benefits like SNAP (formerly known as food stamps), or work for the federal government, all of your personal information would be at the Musk team’s fingertips. The same holds true if you’ve been awarded a federal contract or grant.

Private medical history could potentially fall under the scrutiny of Musk and his assistants if your doctor or dentist provides that level of detail to the government when requesting Medicaid reimbursement for the cost of your care.

A federal judge in New York recently issued a preliminary injunction stopping Musk and his software engineers from accessing the data, despite Musk calling the judge “corrupt” on X. USA Today reports that the White House says Musk and his engineers have only “read-only” access to the data, but that is not very comforting from a security standpoint. The Treasury Department has reportedly admitted that one DOGE staffer, a 25-year-old software engineer, was mistakenly granted “read/write” permission on February 5, 2025. That is frightening to me as someone who works hard to protect my personal information.

TechCrunch reported that data security is not a priority for DOGE.

“For example, a DOGE staffer reportedly used a personal Gmail account to access a government call, and a newly filed lawsuit by federal whistleblowers claims DOGE ordered an unauthorized email server to be connected to the government network, which violates federal privacy law. DOGE staffers are also said to be feeding sensitive data from at least one government department into AI software.”

We all know that Musk loves AI. We are also well aware of the risks of using AI with highly sensitive data, including unauthorized disclosure and the possibility that the data will resurface in model outputs.
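One basic safeguard is scrubbing obvious identifiers from a record before it ever reaches an AI tool. Below is a minimal, hypothetical sketch; the regex patterns and the sample record are invented for illustration and fall far short of a complete PII-detection solution.

```python
import re

# Hypothetical illustration: scrub obvious identifiers before a record
# is sent to any external AI service. The patterns here are examples only.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Replace Social Security numbers and email addresses with placeholders."""
    text = SSN_PATTERN.sub("[SSN REDACTED]", text)
    return EMAIL_PATTERN.sub("[EMAIL REDACTED]", text)

record = "Jane Doe, SSN 123-45-6789, jane.doe@example.com, SNAP recipient"
print(redact(record))
# Jane Doe, SSN [SSN REDACTED], [EMAIL REDACTED], SNAP recipient
```

Regex-based redaction only catches well-formatted identifiers; it is a floor, not a ceiling, when handling data of this sensitivity.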

All of this has prompted questions about whether this advisory board has proper security clearance to access our data.

Should you be concerned? Absolutely. I understand the goal of cutting costs. But why do these employees have access to our most private information, including our full Social Security numbers and health information? Do they really need that specific data to determine fraud or overspending?

I argue no. A tenet of data security is proper access control: granting users access only to the data they need for a legitimate business purpose. DOGE’s unfettered access to our highly sensitive information is not limited to data needed for a specific purpose, and the security procedures for accessing that data are in question; proper security protocols must be followed. According to Senator Ron Wyden of Oregon and Senator Jon Ossoff of Georgia, a member of the U.S. Senate Intelligence Committee, this is “a national security risk.” As a privacy and cybersecurity lawyer, I am very concerned. A hearing on an early lawsuit filed to prohibit this unrestricted access is scheduled for tomorrow. We will keep you apprised of developments as they progress.
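To make the least-privilege tenet concrete, here is a minimal sketch of a deny-by-default access check. The roles and field names are hypothetical; a real system would layer this with authentication, audit logging, and clearance review.

```python
# A minimal sketch of least-privilege access control. Roles and fields
# are hypothetical; access is denied unless explicitly granted.
ROLE_PERMISSIONS = {
    # A fraud analyst may need payment metadata, but not full SSNs
    # or medical detail.
    "fraud_analyst": {"payment_amount", "payment_date", "agency"},
    "benefits_caseworker": {"ssn_last4", "benefit_type", "agency"},
}

def can_access(role: str, field: str) -> bool:
    """Deny by default: allow only fields explicitly granted to the role."""
    return field in ROLE_PERMISSIONS.get(role, set())

assert can_access("fraud_analyst", "payment_amount")
assert not can_access("fraud_analyst", "ssn_full")  # least privilege
assert not can_access("unknown_role", "agency")     # default deny
```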

Soon after the Chinese generative artificial intelligence (AI) company DeepSeek emerged to compete with ChatGPT and Gemini, it was forced offline when “large-scale malicious attacks” targeted its servers. Speculation points to a distributed denial-of-service (DDoS) attack.

Security researchers reported that DeepSeek “left one of its databases exposed on the internet, which could have allowed malicious actors to gain access to sensitive data… [t]he exposure also includes more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API Secrets and operational metadata.”
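The exposed database was reportedly a ClickHouse instance reachable on its default ports. As a rough illustration, an external exposure check of your own systems can be as simple as the sketch below; the hostname is a placeholder, and you should probe only infrastructure you own.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 8123 and 9000 are ClickHouse's default HTTP and native ports.
# Replace db.example.com with your own host; never scan systems
# you do not control.
for port in (8123, 9000):
    status = "OPEN" if is_port_open("db.example.com", port) else "closed"
    print(f"port {port}: {status}")
```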

On top of that, security researchers identified two malicious packages using the DeepSeek name posted to the Python Package Index (PyPI) starting on January 29, 2025. The packages are named deepseeek and deepseekai, which are “ostensibly client libraries for access to and interacting with the DeepSeek AI API, but they contained functions designed to collect user and computer data, as well as environment variables, which may contain API keys for cloud storage services, database credentials, etc.” Although PyPI quarantined the packages, developers worldwide downloaded them without knowing they were malicious. Researchers are warning developers to be careful with newly released packages “that pose as wrappers for popular services.”
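One practical precaution is checking how long a package has existed before installing it, since typosquats like deepseeek tend to be brand new. The sketch below queries PyPI’s public JSON API; the 30-day threshold is an arbitrary heuristic, not a guarantee of safety.

```python
import json
import sys
from datetime import datetime, timezone
from urllib.request import urlopen

def first_upload_age_days(package: str) -> int:
    """Days since the package's first file was uploaded to PyPI."""
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        data = json.load(resp)
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return (datetime.now(timezone.utc) - min(uploads)).days

package = sys.argv[1] if len(sys.argv) > 1 else "requests"
age = first_upload_age_days(package)
if age < 30:
    print(f"WARNING: {package} first appeared on PyPI only {age} days ago.")
else:
    print(f"{package} has been on PyPI for {age} days.")
```

Pinning dependencies and verifying hashes (for example, with pip’s --require-hashes mode) adds a further layer of protection.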

Additionally, given DeepSeek’s popularity, the company is warning users on X about fake social media accounts impersonating it.

But wait, there’s more! Cybersecurity firms are looking closely at DeepSeek and are finding security flaws. One firm, KELA, was able to “jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices.” In one instance, DeepSeek’s chatbot provided completely fabricated information in response to a query. The firm stated, “This response underscores that some outputs generated by DeepSeek are not trustworthy, highlighting the model’s lack of reliability and accuracy. Users cannot depend on DeepSeek for accurate or credible information in such cases.”

We remind our readers that TikTok and DeepSeek are based in China, and the same national security concerns apply to both companies. DeepSeek is unavailable in Italy due to information requests from the Italian DPA, Garante. The Irish Data Protection Commissioner is also requesting information from DeepSeek. In addition, there are reports that U.S.-based AI companies are investigating whether DeepSeek used OpenAI’s API to train its models without permission. Beware of DeepSeek’s risks and limitations, and consider refraining from using it at the present time. “As generative AI platforms from foreign adversaries enter the market, users should question the origin of the data used to train these technologies. They should also question the ownership of this data and ensure it was used ethically to generate responses,” said Jennifer Mahoney, Advisory Practice Manager, Data Governance, Privacy and Protection at Optiv. “Since privacy laws vary across countries, it’s important to be mindful of who’s accessing the information you input into these platforms and what’s being done with it.”

The Google Threat Intelligence Group (GTIG) recently published a new report “Adversarial Misuse of Generative AI,” which is well worth the read. The report shares findings on how government-backed threat actors use and misuse the Gemini web application. Although the GTIG is committed to countering threats across Google’s platforms, it is also committed to sharing findings “to raise awareness and enable stronger protections across the wider ecosystem.” This is an excellent mission.

GTIG found that government-backed adversaries, including those tied to the People’s Republic of China (PRC), Russia, Iran, and North Korea, are attempting to misuse Gemini through jailbreaking, “coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities and enabling post-compromise activities, such as defense evasion in a target environment.”

According to the report, Iranian threat actors used Gemini the most, for “crafting phishing campaigns, conducting reconnaissance on defense experts and organizations, and generating content with cybersecurity themes.” Over ten Iran-backed groups were using Gemini for these purposes.

PRC threat actors were the second-heaviest users of Gemini, relying on it to “conduct reconnaissance, for scripting and development, to troubleshoot code, and to research how to obtain deeper access to target networks. They focused on topics such as lateral movement, privilege escalation, data exfiltration, and detection evasion.” GTIG found over 20 China-backed groups were using and misusing Gemini.

Nine North Korean-backed groups “used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, payload development, and assistance with malicious scripting and evasion techniques. They also used Gemini to research topics of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency. Of note, North Korean actors also used Gemini to draft cover letters and research jobs—activities that would likely support North Korea’s efforts to place clandestine IT workers at Western companies.”

Russian threat actors used Gemini the least; the three Russia-backed groups GTIG observed focused on coding tasks, including converting publicly available malware into another coding language and adding encryption functions to existing code.

This research confirms our previous suspicions. Google has “shared best practices for implementing safeguards, evaluating model safety and red teaming to test and secure AI systems.” It is also actively sharing threat intelligence that will help all users of AI tools understand and mitigate the risk of threat actors misusing AI.

Stemming from Colorado’s Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act (the Act), which will impose obligations on developers and deployers of artificial intelligence (AI), the Colorado Artificial Intelligence Impact Task Force recently issued a report outlining potential areas where the Act can be “clarified, refined[,] and otherwise improved.”

The Task Force’s mission is to review issues related to AI and automated decision systems (ADS) affecting consumers and employees. The Task Force met on several occasions and prepared a report summarizing its recommendations:

  • Revise the Act’s definition of the types of decisions that qualify as “consequential decisions,” as well as the definition of “algorithmic discrimination,” “substantial factor,” and “intentional and substantial modification;”
  • Revamp the list of exemptions to what qualifies as a “covered decision system;”
  • Change the scope of the information and documentation that developers must provide to deployers;
  • Update the triggering events and timing for impact assessments as well as changes to the requirements for deployer risk management programs;
  • Possibly replace the duty of care standard for developers and deployers (i.e., consider whether the standard should be more or less stringent);
  • Consider whether to narrow or expand the small business exemption (the current exemption under the Act is for businesses with fewer than 50 employees);
  • Consider whether businesses should be provided a cure period for certain types of non-compliance before Attorney General enforcement under the Act; and,
  • Revise the trade secret exemptions and provisions related to a consumer’s right to appeal.

As of today, the requirements for AI developers and deployers under the Act go into effect on February 1, 2026. However, the Task Force recommends reconsidering the law’s implementation timing. We will continue to track this first-of-its-kind AI law. 

After several months of delays, the U.S. Copyright Office has published part two of its three-part report on the copyright issues raised by artificial intelligence (AI). This part, entitled “Copyrightability,” focuses on whether AI-generated content is eligible for copyright protection in the U.S.

An output generated with the assistance of AI is eligible for copyright protection if there is sufficient human contribution. The report notes that copyright law does not need to be updated to support this conclusion. The Supreme Court has explained that individuals can receive copyright protection when they translate an idea into a fixed, tangible medium of expression. When an AI model supplies all of the creative effort, no human can be considered an author, and thus there is no copyrightable work. However, when an AI model merely assists a human’s creative expression, the human is considered an author. The Copyright Office analogizes this to the principle of joint authorship: a work is copyright-eligible even if a single person is not responsible for creating the entire work.

The level of contribution is determined by what a person provides to the AI model. The Copyright Office reasoned that inputting a prompt, by itself, is not a sufficient contribution to make the user an author. The report analogizes this to hiring an artist: the person may have a general artistic vision, but the artist produces the creative work. Additionally, because AI models generally operate as a black box, a user cannot exert the level of control necessary to be considered an author.

However, when a user inputs a prompt in combination with their own original work, the resulting AI-generated output is copyrightable to the extent the user’s original expression remains perceptible in it. The author’s own work gives the AI model a starting point and limits the range of outputs.

Finally, AI-generated content can be copyrightable when arranged or modified with human creativity. For example, while an AI-generated image is not copyrightable, a compilation of the images and a human-authored story can be protected by copyright. The Copyright Office is currently working on the third part of its report, which should be published later this year and will focus on the implications of using protected works to train AI models.

If you are a GrubHub customer, read carefully. The company has confirmed a security incident involving a third-party vendor that allowed an unauthorized threat actor to access user contact information (including some customer names, email addresses, and telephone numbers) as well as partial payment card information for a subset of campus diners.

GrubHub’s response states, “The unauthorized party also accessed hashed passwords for certain legacy systems, and we proactively rotated any passwords that we believed might have been at risk. While the threat actor did not access any passwords associated with Grubhub Marketplace accounts, as always, we encourage customers to use unique passwords to minimize risk.”
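For context, a “hashed” password is a one-way transformation of the password rather than the password itself, which is why hashing, combined with a unique password per site, limits the blast radius of an incident like this one. Below is a minimal sketch of salted password hashing using only Python’s standard library; the iteration count and parameters are illustrative, not a production recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt; store both values."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
```

Even well-hashed passwords can eventually be cracked offline, especially weak ones, which is why rotation after an incident and uniqueness across platforms both matter.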

Either way, consider changing your GrubHub password now and make sure it is unique to that platform.

On January 22, 2025, the Federal Bureau of Investigation (FBI) and the Cybersecurity & Infrastructure Security Agency (CISA) issued a joint advisory related to previously disclosed vulnerabilities in the Ivanti Cloud Service Appliance (CSA), including an administrative bypass, a SQL injection, and remote code execution vulnerabilities, listed as CVE-2024-8963, CVE-2024-9379, CVE-2024-8190, and CVE-2024-9380.

The alert advises that “threat actors chained the listed vulnerabilities to gain initial access, conduct remote code execution (RCE), obtain credentials, and implant webshells on victim networks. The actors’ primary exploit paths were two vulnerability chains… In one confirmed compromise, the actors moved laterally to two servers.”

According to CISA:   

“CISA and FBI strongly encourage network administrators to upgrade to the latest supported version of Ivanti CSA. Network defenders are encouraged to hunt for malicious activity on their networks using the detection methods and indicators of compromise (IOCs) within this advisory. Credentials and sensitive data stored within the affected Ivanti appliances should be considered compromised. Organizations should collect and analyze logs and artifacts for malicious activity and apply the incident response recommendations within this advisory.”
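For defenders wondering what that hunt looks like in practice, the sketch below sweeps a directory of logs for indicator strings. The indicator values are placeholders only; substitute the actual IP addresses, file hashes, and webshell paths published in the advisory.

```python
from pathlib import Path

# Placeholder indicators: replace with the real IOCs from the
# CISA/FBI advisory (IP addresses, hashes, webshell filenames).
IOCS = {
    "203.0.113.10",   # placeholder IP from the TEST-NET documentation range
    "help.php",       # placeholder webshell filename
}

def sweep(log_dir: str) -> None:
    """Print every log line that contains a known indicator."""
    for log_file in Path(log_dir).rglob("*.log"):
        lines = log_file.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            for ioc in IOCS:
                if ioc in line:
                    print(f"{log_file}:{lineno}: matched {ioc!r}")

sweep("/var/log")  # point this at your Ivanti CSA and web server logs
```

String matching over logs is only a first pass; the advisory’s detection methods also cover artifacts that simple log sweeps will miss.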