A class action complaint was filed against the International Brotherhood of Electrical Workers (IBEW) labor union for a data breach that occurred between March 31 and April 5, 2024. IBEW represents individuals who work in a wide variety of fields, including utilities, construction, telecommunications, broadcasting, manufacturing, railroads, and government. The security incident resulted in unauthorized access to the names and social security numbers of current and former members of IBEW.

This incident resulted from a ransomware attack carried out by the cybercriminal group BlackSuit, part of a larger wave of attacks on many other businesses in March 2024. Most recently, this group attacked CDK Global, a major automobile dealership software vendor, which affected car dealerships nationwide.

IBEW initially discovered the incident on July 3, 2024, but did not notify affected individuals until on or about August 5, 2024. The class action complaint, filed in the U.S. District Court for the Eastern District of Missouri, alleges that this “delay” caused harm to the affected individuals.

The lead plaintiff, a retired electrician from Illinois, claims that IBEW’s delay resulted in the loss by the class “of the opportunity to try and mitigate injuries in a timely matter.” Further, in the notification to the affected individuals provided by IBEW, the union disclosed that the incident “created a present, continuing and significant risk of suffering identity theft.”

The complaint further alleges that IBEW failed to follow the Federal Trade Commission’s 2016 guidelines for businesses regarding fundamental data security principles, and that the incident constituted a violation of the federal prohibition on unfair trade practices. The complaint also alleges that IBEW acted negligently, breached its implied contract with its current and former members, and violated the Illinois Consumer Fraud Act.

Last year, the American Hospital Association (AHA) sued the U.S. Department of Health and Human Services (HHS) in the U.S. District Court of the Northern District of Texas, requesting that HHS be barred from enforcing a new rule adopted by the Office for Civil Rights entitled “Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates.” The guidance prevented health care entities from deploying third-party web technologies that capture IP addresses.

The federal district court ruled in favor of the AHA, holding that the new rule was “promulgated in clear excess of HHS’s authority under HIPAA.” HHS filed a notice of appeal on August 19, 2024, but withdrew it ten days later. The effect of this withdrawal is that the district court order remains in place, and the Office for Civil Rights is prohibited from enforcing the rule. Despite this development, hospitals and health care systems continue to get mired in litigation surrounding the use of pixel technology and continue to grapple with the use of online tracking tools.

Recently, the National Institute of Standards and Technology (NIST) released its second public draft of Digital Identity Guidelines (Draft Guidelines). The Draft Guidelines focus on online identity verification, but several provisions have implications for government contractors’ cybersecurity programs, as well as contractors’ use of artificial intelligence (AI) and machine learning (ML). 

Government Contractor Cybersecurity Requirements

Many government contractors have become familiar with personal identity verification standards through NIST’s 2022 FIPS PUB 201-3, “Standard for Personal Identity Verification (PIV) of Federal Employees and Contractors,” which established standards for contractors’ PIV systems used to access federally controlled facilities and information systems. Among other things, FIPS PUB 201-3 incorporated biometrics, cryptography, and public key infrastructure (PKI) to authenticate users, and it outlined the protection of identity data, infrastructure, and credentials.

Whereas FIPS PUB 201-3 set the foundational standard for PIV credentialing of government contractors, the Draft Guidelines expand upon these requirements by introducing provisions regarding identity proofing, authentication, and management.  These additional requirements include:

Expanded Identity Proofing Models. The Draft Guidelines offer a new taxonomy and structure for the requirements at each assurance level based on how the proofing is performed: remote unattended proofing, remote attended proofing (e.g., videoconferencing), onsite unattended proofing (e.g., kiosks), or onsite attended proofing.

Continuous Evaluation and Monitoring. NIST’s December 2022 Initial Public Draft (IPD) of the guidelines required “continuous improvement” of contractors’ security systems. Building upon this requirement, the Draft Guidelines introduce requirements for continuous evaluation metrics for the identity management systems contractors use. The Draft Guidelines direct organizations to implement a continuous evaluation and improvement program that leverages input from end users interacting with the identity management system and performance metrics for the online service. Under the Draft Guidelines, organizations must document this program, including the metrics collected, the data sources, and the processes in place for taking timely actions based on the continuous improvement process.
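To make the documentation requirement concrete, the sketch below shows one way an organization might record its metrics, data sources, and flagged follow-up items. This is a minimal illustration only; the metric names, thresholds, and review logic are assumptions for demonstration, not requirements quoted from the Draft Guidelines.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvaluationMetric:
    name: str          # what is measured (e.g., proofing pass rate)
    data_source: str   # where the measurement comes from
    value: float       # most recent observed value
    threshold: float   # value below which corrective action is triggered

    def needs_action(self) -> bool:
        # Flag the metric when performance falls below the agreed threshold
        return self.value < self.threshold

@dataclass
class ContinuousEvaluationProgram:
    """Records the metrics, their data sources, and the items flagged for
    timely corrective action -- mirroring the documentation the Draft
    Guidelines direct organizations to keep."""
    review_date: date
    metrics: list[EvaluationMetric] = field(default_factory=list)

    def items_requiring_action(self) -> list[str]:
        return [m.name for m in self.metrics if m.needs_action()]

# Hypothetical review cycle with two illustrative metrics
program = ContinuousEvaluationProgram(
    review_date=date(2024, 9, 1),
    metrics=[
        EvaluationMetric("identity_proofing_pass_rate", "CSP logs", 0.91, 0.95),
        EvaluationMetric("end_user_satisfaction", "user surveys", 0.88, 0.80),
    ],
)
print(program.items_requiring_action())  # → ['identity_proofing_pass_rate']
```

The point of the structure is auditability: each metric carries its data source and threshold with it, so the documented program can show both what was measured and why a given item triggered (or did not trigger) timely action.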

Fraud Detection and Mitigation Requirements. The Draft Guidelines add programmatic fraud requirements for credential service providers (CSPs) and government agencies. Additionally, organizations must monitor the evolving threat landscape to stay informed of the latest threats and fraud tactics. Organizations must also regularly assess the effectiveness of current security measures and fraud detection capabilities against the latest threats and fraud tactics.

Syncable Authenticators and Digital Wallets. In April 2024, NIST published interim guidance for syncable authenticators. The Draft Guidelines integrate this guidance and thus allow the use of syncable authenticators and digital wallets (previously described as attribute bundles) as valid mechanisms to store and manage digital credentials. Relatedly, the Draft Guidelines provide for user-controlled wallets and attribute bundles, allowing contractors to manage their identity attributes (e.g., digital certificates or credentials) and present them securely to different federal systems.

Risk-Based Authentication. The Draft Guidelines outline risk-based authentication mechanisms, whereby the required authentication level can vary based on the risk of the transaction or system being accessed. This allows government agencies to assign appropriate authentication methods for contractors based on the sensitivity of the information or systems they are accessing.
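The risk-based approach can be sketched as a simple mapping from transaction risk to a minimum authentication assurance level. The tier names below are loosely modeled on NIST’s AAL1–AAL3 levels, but the sensitivity labels and mapping rules are assumptions for illustration, not mappings prescribed by the Draft Guidelines.

```python
from enum import IntEnum

class AAL(IntEnum):
    """Authentication assurance levels, loosely modeled on NIST AAL1-AAL3."""
    AAL1 = 1  # single-factor (e.g., password)
    AAL2 = 2  # multi-factor (e.g., password + OTP)
    AAL3 = 3  # hardware-based multi-factor (e.g., PIV card + PIN)

# Hypothetical policy for illustration: the labels and rules below are
# assumptions, not requirements from the Draft Guidelines.
def required_aal(data_sensitivity: str, privileged_access: bool) -> AAL:
    """Return the minimum assurance level for a contractor session based
    on the risk of the transaction or system being accessed."""
    if privileged_access:
        return AAL.AAL3   # administrative access gets the strongest factors
    if data_sensitivity == "cui":
        return AAL.AAL2   # Controlled Unclassified Information requires MFA
    return AAL.AAL1       # low-risk, public-facing systems

print(required_aal("cui", privileged_access=False).name)  # → AAL2
```

Because the levels are ordered (`IntEnum`), an agency policy engine could also compare a contractor’s current session level against the required level and step up authentication only when needed.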

Privacy, Equity, and Usability Considerations. The Draft Guidelines emphasize privacy, equity, and usability as core requirements for digital identity systems. Under the Guidelines, “[o]nline services must be designed with equity, usability, and flexibility to ensure broad and enduring participation and access to digital devices and services.” This includes ensuring that contractors with disabilities or special needs are provided with identity solutions. The Draft Guidelines’ emphasis on equity complements NIST’s previous statements on bias in AI.

Authentication via Biometrics and Multi-Factor Authentication (MFA). The Draft Guidelines emphasize the use of MFA, including biometrics, as an authentication mechanism for contractors. This complements FIPS PUB 201-3, which already requires biometrics for physical and logical access, and enhances that implementation with updated authentication guidance.


A new report by Graphika, as reported by Cyberscoop, has identified a Chinese-linked group that is “creating American personas online and spreading content designed to denigrate both parties and candidates.”

The disinformation group, known as Dragonbridge, Taizi Flood, and Empire Dragon, “produces high-volumes of spammy, inauthentic content online in an effort to influence political and public opinion.”

Graphika has identified accounts on X, TikTok, Instagram, and YouTube that “use AI-generated profile pictures, patriotic imagery and American identities to pose as disaffected U.S. voters. … The accounts highlighted divisive topics like the war in Gaza, homelessness, gun control and racial inequality as examples of how the U.S. political system had failed, intending to discourage voter turnout.”

This is consistent with warnings from U.S. intelligence officials, who have noted that the People’s Republic of China has collaborated with a China-based technology company to create fake content or repurpose controversial content into viral posts.

It is anticipated that this effort by adverse foreign nations will continue and escalate during the election season. It is important to be able to distinguish true and accurate information from content created by a foreign adversary to sow discontent or voter apathy.

Dragos issued its Industrial Ransomware Analysis for Q2 on August 14, 2024. The analysis shows that ransomware attacks significantly increased in Q2, with many ransomware groups that had been disrupted by law enforcement rebranding themselves as new groups. For instance, BlackCat became inactive in March 2024 after being targeted by law enforcement in late 2023 but “recalibrated their strategies, substantially increasing incidents.” In addition, the Knight ransomware group rebranded itself as RansomHub, and Royal ransomware rebranded as BlackSuit.

Critical industrial operations were the prime target of the ransomware groups. According to Dragos, “[T]his quarter saw a significant rise in the frequency and severity of attacks, reflecting the evolving threat landscape and the persistent risk posed by ransomware groups.” The report notes that these attacks have caused significant operational disruptions to this important sector.

Within the manufacturing sector, the construction industry was the most affected, representing 67% of all ransomware incidents in Q2. The most prominent culprits were: BlackBasta; 8Base; Akira; BlackSuit; MedusaLocker; Hunters International; Cactus; RansomHub; and Qilin. New threat actors that attacked victims in Q2 but were not observed in Q1 include: RA Group; Dragonforce; Ransomhouse; Team Underground; Brain Cipher; Red Ransomware; MetaEncryptor; Cloak; D_Nut_Leaks; BlackByte; Everest; and Monti.

The bad news from the report is that ransomware continues to be a significant threat to the industrial sector, and “ransomware groups demonstrated a significant capacity for adaptation, with some groups rebranding and others emerging with new tactics and techniques.” This will lead to “the introduction of new ransomware variants and increasing coordinated campaigns targeting industrial sectors” despite law enforcement disruptions. The battle against ransomware groups and their ever-evolving tactics is far from over. The relentless effort of staying ahead of these groups is akin to a game of Whac-A-Mole.

Last week, the U.S. Department of Defense (DoD) released a proposed amendment to the Defense Federal Acquisition Regulation Supplement (DFARS) that would make the Cybersecurity Maturity Model Certification (CMMC) program a required part of the DoD’s contracting process. The CMMC program is a DoD program that helps businesses meet security requirements for their work with the DoD. The program aims to protect sensitive information shared with contractors and subcontractors and to ensure that industries meet cybersecurity requirements for systems that process Controlled Unclassified Information (CUI).

The proposed DFARS amendment would create a provision in all DoD solicitations that notifies contractors of CMMC requirements. The amendment would require contractors to either self-assess their compliance with cybersecurity requirements or obtain a third-party certification, depending on the sensitivity of the data involved in the contract. The self-assessment or certification would be submitted to the DoD upon the awarding of a contract.

The DoD had previously considered requiring certification after the contract award, but the DoD determined that such a timeline would cause “increased risk to DoD with respect to the schedule and uncertainty due to the possibility that the contractor may be unable to achieve the required CMMC level in an amount of time given their current cybersecurity posture.”

The proposed rule also includes a three-year phased rollout of the CMMC requirements in order to minimize the financial impact on businesses and disruptions to DoD supply chains. The rollout could begin as early as the summer of 2025.

Of note, DoD program managers will have discretion during the phase-in period as to the CMMC requirements in contracts with contractors.

At the end of the rollout period, the DoD estimates the following:

  • 35% of contractors that handle CUI will need to obtain a Level 2 CMMC third-party certification.
  • 65% of contractors will require a Level 1 CMMC self-assessment.

While most DoD contractors hold only federal contract information, some do receive and maintain CUI. However, contractors that only sell commercial off-the-shelf items won’t be implicated by this amended rule, nor will contractors that perform routine tasks for the DoD, such as landscaping or other work on DoD premises. The comment period on the proposed DFARS rule will close on October 14, 2024.

We have previously suggested that conducting cybersecurity tabletop exercises is an important part of testing your incident response program and your response to different scenarios.

A scenario that we strongly recommend including in your next scenario toolbox is one that focuses on the use of AI in your organization. If you have not yet developed and implemented an AI Governance Program, or if you have been facing resistance from executives to address the risk, an AI tabletop will underscore the urgency and significance of this need within your organization.

Everyone thinks they can spot a phishing email. If that were true, we would not see so many security incidents, data breaches, and ransomware attacks. The statistics are overwhelming: phishing emails are a significant cause of data breaches.

If everyone were able to spot a phishing email, threat actors would stop using them; it wouldn’t be worth their time, and they would turn to other methods of attack. Instead, because of their effectiveness, phishing attacks actually surged 40% in 2023, according to research by Egress.

One theory for this surge is the use of artificial intelligence (AI). Threat actors are using generative AI to draft phishing emails that look and sound like they were written in the victim’s native language. There are no grammatical errors or misspellings in the message, which used to make detection easier. In addition, threat actors use AI-generated deepfake videos and voiceovers in phishing attacks to lure victims into believing that the threat actor is someone they know, trust, or love. Further, AI can assist threat actors with actually writing the malware code for the attack.

Threat actors are also hiring other attackers to carry out phishing campaigns, a model known as Phishing-as-a-Service (PhaaS). This allows threat actors to conduct more campaigns against a wider pool of potential victims.

According to The Hacker News, “While AI and PHaaS have made phishing easier, businesses and individuals can still defend against these threats. By understanding the tactics used by threat actors and implementing effective security measures, the risk of falling victim to phishing attacks can be reduced.”

Recognize that phishing (and smishing, vishing, and qrishing) campaigns are increasing. Stay abreast of the new tactics used, and stay vigilant in identifying and protecting yourself against them.

We have previously outlined the risks of using TikTok, the federal and state governments’ ban on it, and the national security risks it presents.

In doing so, we primarily focused on data privacy and security threats to TikTok users. Recently, Nebraska and the U.S. Department of Justice each sued TikTok directly over its use by children. The allegations made in the complaints are heartbreaking and are detailed below.

State of Nebraska Complaint

The complaint against TikTok in Nebraska, led by the state Attorney General’s consumer protection division, details how TikTok is marketed to young children and is, by its own description, “addicting.” Internal documents obtained show that the owners of TikTok purposefully market the app to children under the age of 13 because they lack the executive decision-making ability to limit its use. This excessive use of TikTok is precisely what the app’s owners are striving to achieve. Young TikTok users admit they are addicted, and statistics show that they use TikTok into the wee hours of the morning. Even more disturbing is the content that TikTok appears to be purposefully pushing to young children—harmful content, including “mature and inappropriate content, content related to eating disorders, sadness and suicide, and pornography.” On top of that, the complaint alleges that it is “incredibly difficult” to delete an account.

The complaint alleges that TikTok is harmful and dangerous to children and youth, including “increased rates of depression, anxiety, loneliness, low self-esteem, and suicide, interfering with sleep and education, fueling body dysmorphia and eating disorders, and contributing to youth addiction.” The complaint alleges that TikTok: is misrepresenting that it is safe for use; is engaging in deceptive and unfair acts and practices in violation of the Nebraska Consumer Protection Act and Deceptive Trade Practice Act; and has false or misleading statements in its privacy policy. The complaint seeks injunctive relief, civil fines and penalties, and disgorgement of all profits made in Nebraska.

We will be watching this litigation closely, as well as whether other states follow suit.

Department of Justice Complaint

The Department of Justice (DOJ) has also recently filed suit against TikTok. The DOJ’s suit concentrates on TikTok’s alleged violations of the federal statute known as the Children’s Online Privacy Protection Act (COPPA). The allegations in the complaint detail how TikTok allows children under the age of 13 to register for a TikTok account that is not in “Kids Mode” and to easily evade the platform’s processes for determining a user’s age. Further, a user under the age of 13 can open an account without parental consent, including by signing in through Instagram or Google, which the DOJ alleges is a violation of COPPA.

Similar to allegations outlined in Nebraska’s complaint, the DOJ complaint outlines how TikTok makes it very difficult to delete accounts and has “obstructed and failed to honor” parents’ requests for deletion and return of their children’s accounts, personal information, and data collected by the app. The complaint alleges that TikTok failed to delete, and continued to collect, data from children despite deletion requests from parents. According to the complaint, TikTok retains users’ data “long after purportedly deleting their accounts.”

The complaint alleges that TikTok has insufficient internal policies to flag underage users and that TikTok does not follow its own policies to monitor the platform for underage users, thereby allowing “millions of children” under the age of 13 to be able to use the platform without parental consent in violation of COPPA.

The DOJ complaint further alleges that TikTok has violated the 2019 consent order entered into by TikTok and the Federal Trade Commission by not maintaining records evidencing compliance with its terms. It further alleges that TikTok misrepresented its remedial conduct, claiming in May 2020 that it had routed all U.S.-based accounts through an age gate and deleted children’s data; TikTok later admitted that the representation was false.

The DOJ seeks injunctive relief, fines, and penalties for TikTok’s violation of COPPA. If you are a parent whose child is using TikTok, take a look at the complaints. They will give you details of how harmful using TikTok is for your child, and TikTok’s lack of adherence to its own processes to minimize its use by young children.

Candid Color Systems Inc., based in Oklahoma, faces a class action lawsuit for its alleged violations of the Illinois Biometric Information Privacy Act (BIPA). Candid Color offers marketing services to photographers, including photo-matching technology that allows consumers to identify all of the photos taken of a particular student at a graduation ceremony.

The complaint, filed in the U.S. District Court for the Western District of Oklahoma, alleges that Candid Color collected and used the biometric information of individuals at high school and college graduations without consent in violation of BIPA. The complaint states that Candid Color used students’ biometric identifiers to identify students without first informing the individuals and obtaining their consent before collection, as required by BIPA.

The complaint further alleges that Candid Color profited from the biometric data collected from the students in violation of BIPA and did not make available its biometric data collection and destruction policies.

This is an interesting lawsuit: it was filed a few days after a similar lawsuit against Candid Color was dismissed by the U.S. District Court for the Southern District of Illinois, which found that Candid Color did not have enough contacts with Illinois to support jurisdiction. The plaintiffs seek to represent a class of Illinois residents whose biometric data was collected by Candid Color. The plaintiffs seek statutory damages of $5,000 per reckless or intentional BIPA violation and $1,000 per negligent violation. We’ll see if this suit proceeds and how the court applies the recent amendments to BIPA signed into law by the Illinois Governor.