The FBI, CISA, and the Multi-State Information Sharing and Analysis Center (MS-ISAC) recently released a joint cybersecurity advisory warning organizations about indicators of compromise and tactics, techniques, and procedures associated with LockBit 3.0 ransomware.

The advisory, #StopRansomware: LockBit 3.0, states that LockBit 3.0 is an affiliate-based ransomware variant that operates under a Ransomware-as-a-Service model and is a continuation of its predecessors, LockBit and LockBit 2.0.

LockBit 3.0, also known as LockBit Black, is more evasive than its predecessors and “shares similarities with Blackmatter and Blackcat ransomware.” Attackers deploying LockBit 3.0 gain access to networks through remote desktop protocol, drive-by compromise, phishing campaigns, abuse of valid accounts, and exploitation of public-facing applications. Once inside the victim’s network, the attackers escalate privileges and move laterally. They then exfiltrate data using Stealbit or publicly available, legitimate file-sharing services, encrypt the files, and finally deliver a ransom note to the victim.
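The exfiltrate-then-encrypt sequence gives defenders a brief window before files are locked: a sudden surge of outbound data to file-sharing services can be a tell. The Python sketch below is a minimal illustration of that idea only; the log format, domain list, and byte threshold are hypothetical assumptions, not indicators from the advisory.

```python
# Illustrative only: a coarse heuristic for the staged exfiltration described
# above. The log format, domain list, and byte threshold are all hypothetical.
from collections import defaultdict

SUSPECT_DOMAINS = {"filesharing.example", "paste.example"}  # hypothetical list
BYTES_THRESHOLD = 500 * 1024 * 1024  # flag > 500 MB per host; tune locally

def flag_possible_exfiltration(egress_log):
    """egress_log: iterable of (host, dest_domain, bytes_sent) tuples."""
    totals = defaultdict(int)
    for host, dest, sent in egress_log:
        if dest in SUSPECT_DOMAINS:
            totals[host] += sent
    return [host for host, total in totals.items() if total > BYTES_THRESHOLD]

# One workstation pushing 600 MB to a file-sharing site gets flagged.
log = [
    ("ws-12", "filesharing.example", 600 * 1024 * 1024),
    ("ws-07", "intranet.example", 10 * 1024),
]
print(flag_possible_exfiltration(log))  # ['ws-12']
```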

The advisory outlines indicators of compromise and suggestions for mitigation. Those suggestions include:

  • Prioritize remediating known exploited vulnerabilities
  • Train users to recognize and report phishing attempts
  • Enable and enforce phishing-resistant multifactor authentication

The New York City Department of Consumer and Worker Protection will delay enforcement of Local Law 144 until April 15, 2023. The law requires companies operating in the City to audit automated employment decision tools for bias prior to use and to post these audit reports publicly. The law also requires that companies notify job candidates (and employees residing in the city) that they will be evaluated by automated decision-making tools and disclose the qualifications and characteristics that the tool considers. The AI bias law still has an effective date of January 1, 2023, and violations are subject to a civil penalty.

The City is delaying enforcement due to a “substantial volume of thoughtful comments” from concerned parties. Most of these comments likely came from NYC-area businesses, many of which use AI tools in hiring. These tools generally rank resumes and filter out low-quality applicants.

Bias in AI is difficult to isolate. These technologies tend to be black boxes, and companies that use third-party AI services may not have visibility into a system’s inner workings. Even if a business develops an AI with the purest of intentions, bias can creep in. AI bias derives from programming, baselines, and inputs established by people, and people are inherently biased.

For example, suppose a company trains its AI hiring system by feeding it past resumes and hiring decisions to teach the AI what a “successful” resume looks like. The AI then categorizes and scores new applicants’ resumes based on how well they compare to the baselines set by the training. Suppose also that the company has historically been white-dominated and has hired fewer qualified candidates from Historically Black Colleges and Universities (HBCUs). The AI picks up on this trend as one of several factors that predict whether a candidate is “hirable” to the company. Even though the company’s leadership is dedicated to increasing diversity, the AI system filters out many qualified Black candidates.
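To make that mechanism concrete, here is a deliberately tiny, hypothetical Python sketch; the data, features, and scoring rule are invented for illustration. “Training” is reduced to estimating hire rates from the biased history, which is enough to show the HBCU signal becoming “predictive.”

```python
# Hypothetical illustration of the scenario above. Each historical record is
# (skill_score, hbcu_grad, hired), drawn from a white-dominated history in
# which qualified HBCU graduates were hired less often.
history = [
    (0.9, 0, 1), (0.8, 0, 1), (0.7, 0, 1), (0.4, 0, 0),
    (0.9, 1, 0), (0.8, 1, 0), (0.7, 1, 1), (0.4, 1, 0),
]

def hire_rate(records):
    return sum(hired for *_, hired in records) / len(records)

hbcu = [r for r in history if r[1] == 1]
non_hbcu = [r for r in history if r[1] == 0]
print(f"historical hire rate, non-HBCU: {hire_rate(non_hbcu):.0%}")  # 75%
print(f"historical hire rate, HBCU:     {hire_rate(hbcu):.0%}")      # 25%

def score(skill, hbcu_grad):
    # A naive model blends qualifications with the learned group rate, so two
    # equally skilled candidates receive different scores.
    base = hire_rate(hbcu if hbcu_grad else non_hbcu)
    return 0.5 * skill + 0.5 * base

print(score(0.8, 0))  # 0.775 -> advances
print(score(0.8, 1))  # 0.525 -> filtered out, despite identical skill
```

Two candidates with identical skill scores come out differently solely because of the historical pattern, which is precisely the kind of disparate impact that Local Law 144’s bias audits are meant to surface.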

While New York City’s law is on ice for now, some states are beginning to address AI bias as well. For example, the California Consumer Privacy Act (CCPA, as amended by the California Privacy Rights Act) requires businesses to allow consumers to opt out of automated decision-making technologies, and the California Privacy Protection Agency is expected to propose additional regulations in this area.

Additionally, employees are beginning to challenge allegedly biased AI tools in court. HR technology giant Workday is currently facing a class-action suit alleging that its system is biased against Black and older applicants. (Mobley v. Workday, Inc., Docket No. 3:23-cv-00770 (N.D. Cal. Feb. 21, 2023)). The regulation of AI will almost certainly continue to develop as this technology becomes increasingly integrated into everyday life. For the time being, businesses can look to the U.S. Equal Employment Opportunity Commission’s guidance statement on AI hiring tools and the Americans with Disabilities Act.

Hackers are always looking for the next opportunity to launch attacks against unsuspecting victims. According to Cybersecurity Dive, researchers at Proofpoint recently observed “a phishing campaign designed to exploit the banking crisis with messages impersonating several cryptocurrencies.”

According to Cybersecurity Dive, cybersecurity firm Arctic Wolf has observed “an uptick in newly registered domains related to SVB since federal regulators took over the bank’s deposits…” and “expects some of those domains to serve as a hub for phishing attacks.”

This is the modus operandi of hackers: they exploit times of crisis, when victims are vulnerable, to launch attacks. Phishing campaigns continue to be one of the top risks to organizations. Following the recent bank failures, everyone should be extra vigilant about urgent financial requests and emails spoofing financial institutions, and should add safeguards, such as multiple levels of authorization, when conducting financial transactions.

We anticipate increased attack activity against individuals and organizations following these recent bank failures. Communicating the increased risk to employees is worth considering.

Chinese company ByteDance faces growing concerns from governments and regulators that user data from its popular short video-sharing app TikTok could be handed over to the Chinese government. The concern is based on China’s national security laws, which give its government the power to compel Chinese-based companies to hand over any user data. More than 100 million Americans have reportedly downloaded the app onto their devices.

In its defense, ByteDance maintains that TikTok is operated independently, that all TikTok user data is held on servers outside of China, and that it doesn’t share data with the Chinese government. ByteDance also claims that other social media companies collect far more user data than TikTok does, yet aren’t being threatened with bans.

Concerns about TikTok have existed for years. Since 2017, the Committee on Foreign Investment in the United States (CFIUS), which investigates foreign investments in U.S. companies that pose a potential national security risk, has been reviewing ByteDance’s practices as a result of ByteDance’s acquisition of U.S. company Musical.ly. CFIUS’ investigation into the ByteDance/Musical.ly transaction remains open because of unresolved concerns about ByteDance’s use of user data, the potential that data could be passed on to the Chinese government, and the inability to monitor or enforce whatever restrictions ByteDance might agree to. However, CFIUS has suggested that ByteDance should divest TikTok’s American operations.

Meanwhile, more than 30 states and now the Biden Administration have banned government employees from using the TikTok app on government-owned devices. In Congress, the House Foreign Affairs Committee voted to advance a bill, known as the Deterring America’s Technology Adversaries Act (DATA Act), that would ban anyone in the United States from accessing or downloading the TikTok app on their phones. If enacted into law, this would mean that Apple and Google would no longer be able to offer the TikTok app in their app stores. ByteDance is reportedly talking with Apple and Google about a data security plan that ByteDance has proposed to CFIUS, to be sure the plan would also be acceptable to Apple and Google. The plan purportedly includes having Oracle host TikTok’s U.S. user data on its servers, as well as vet TikTok’s software and updates before they are sent to the app stores.

The U.S. is not alone in raising security concerns over the TikTok app. Canada, the European Parliament, the European Commission, and the EU Council have banned the TikTok app from government- or organization-owned devices. Some also require employees and staff to remove the TikTok app from personal devices that have access to government or organization systems. Most have also recommended that lawmakers and employees remove the TikTok app from their personal devices, even if those devices don’t access government or organization systems. Pakistan and Afghanistan have also imposed bans on TikTok, but because of its content, not because of security concerns.

Some countries have gone even further to impose outright bans on the TikTok app. In 2021, India imposed a permanent ban on the TikTok app and several other Chinese apps. In December 2022, Taiwan imposed a public sector ban on the TikTok app after the FBI warned that the TikTok app posed a national security risk. 

While TikTok is the current focus of legislators and regulators, some say security developments at other social media platforms should also be kept under constant review. The DATA Act would also require President Biden to impose a ban on companies transferring sensitive personal data to an entity subject to the influence of China, although the details of this provision are not completely clear from the bill.

It used to be that one of the sure ways to identify a phishing email was to notice grammatical errors or broken English in the text of the communication. Thanks to widely available translation tools like Google Translate, threat actors can now translate a phishing email into any language so that it sounds authentic to the recipient, and pull off a business email compromise (BEC) attack effortlessly.

Unfortunately, that is exactly what two threat actor groups are doing as we speak. According to Abnormal Intelligence, threat groups Midnight Hedgehog, “which engages in payment fraud,” and Mandarin Capybara, “a group that executes payroll diversion attacks” have “launched BEC campaigns in at least 13 different languages.”

According to Abnormal Intelligence, threat actors are using the same legitimate commercial tools that sales and marketing teams use to launch BEC campaigns, including collecting “leads” in a state or country. Using translation tools, they can launch multiple campaigns in different countries using the same text translated into the native language.

Midnight Hedgehog launches payment fraud attacks against finance personnel and executives involved in financial transactions by spoofing the CEO. Before doing so, they “thoroughly research their target’s responsibilities and relationship to the CEO and then create spoofed email accounts that mimic a real account.”

The Mandarin Capybara group also impersonates executives, targeting human resources personnel with payroll diversion schemes that change direct deposit information to divert an executive’s pay to a fraudulent bank account. To combat these attacks, Abnormal Intelligence suggests that companies “put procedures in place to verify outgoing payments and payroll updates and keep your workforce vigilant with security awareness training.” It also suggests beefing up security through behavioral analytics.
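As a concrete illustration of the verification procedure Abnormal Intelligence recommends, the hypothetical Python sketch below holds any direct-deposit change until it is confirmed through a second channel, such as a phone call to the number already on file; the function and field names are invented.

```python
# Hypothetical sketch of the "verify payroll updates" control described above:
# never apply a direct-deposit change straight from an email request.
from dataclasses import dataclass

@dataclass
class DepositChangeRequest:
    employee: str
    new_account: str
    requested_via: str             # e.g., "email" -- the risky channel
    verified_out_of_band: bool = False

PENDING: list[DepositChangeRequest] = []

def submit(request: DepositChangeRequest) -> str:
    PENDING.append(request)        # hold the change instead of applying it
    return "held for out-of-band verification"

def apply_if_verified(request: DepositChangeRequest) -> str:
    if not request.verified_out_of_band:
        return "rejected: confirm with the employee by phone or in person first"
    return f"direct deposit updated for {request.employee}"

req = DepositChangeRequest("ceo@example.com", "acct-999", "email")
print(submit(req))                 # held for out-of-band verification
print(apply_if_verified(req))      # rejected: confirm with the employee ...
req.verified_out_of_band = True    # HR called the employee's number on file
print(apply_if_verified(req))      # direct deposit updated for ceo@example.com
```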

A recent study found that some data brokers are selling highly sensitive data relating to consumers’ mental health conditions on the open market with minimal vetting of their customers and few controls on how these purchasers use the data. The study, conducted by a researcher at Duke University’s Technology Policy Lab, found that 11 out of 37 data brokerage firms contacted by a potential purchaser of the data were willing to sell the requested mental health data with little to no knowledge of the potential use of that data. The study also found that ten brokers advertised sensitive mental health data for sale, including data on consumers with depression, insomnia, anxiety, ADHD, and bipolar disorder. The brokers additionally sold data targeted by ethnicity, age, gender, zip code, religion, children in the home, marital status, net worth, credit score, date of birth, and single-parent status.

One of the significant concerns addressed in this study was the lack of clarity on whether the data is de-identified or aggregated; unfortunately, many data brokers imply that they can provide identifiable data, even on this sensitive subject. The pricing for mental health information varied, with some data brokers charging by the record while others offered subscriptions.

Some brokers asked their potential customers about the purpose of the purchase and the intended uses for the data. However, after receiving the requested information, those brokers did not appear to impose additional controls on data or customer management. In email and telephone exchanges requesting information from these brokers, the brokers gave no indication that they had conducted separate background checks to confirm the purchaser’s credentials. This study corroborates the findings of researchers at the Mozilla Foundation, which also raised red flags around mental health mobile app privacy policies. Regulation of the data brokering industry is in its infancy, and studies like these highlight the growing need for effective oversight and consumer protection.

HIPAA requires that covered entities notify the Office for Civil Rights (OCR) of any breaches of unsecured protected health information affecting fewer than 500 individuals within 60 days following the end of the calendar year in which the breaches were discovered.

Therefore, all breaches affecting fewer than 500 individuals that occurred in 2022 and have not already been reported to the OCR must be reported no later than March 1, 2023.

These breaches can be reported to the OCR through its online portal:  https://ocrportal.hhs.gov/ocr/breach/wizard_breach.jsf

ChatGPT is amazing! It is all the rage. Its capabilities are awe-inspiring (except to educators who are concerned their students will never write a term paper again). It has reportedly passed a bar exam and a physician board exam and has written sermons, research papers, and more.

But all amazing technology has its ups and downs. Without taking anything away from ChatGPT, it is important to understand that precisely because it is so capable, not only do we want to use it, but so do the bad guys.

Without getting into a much longer discussion on the ethical considerations of using AI (do a little research yourself on that topic), there are some concerns being raised about the use of AI products, including ChatGPT, that are worth keeping an eye on.

According to Axios, researchers at Check Point Research recently discovered that hackers were using ChatGPT to “write malware, create data encryption tools and write code creating new dark web marketplaces.” It is also being used to generate phishing emails.

Similarly concerning is that some software code developers are using AI to write code and are “creating more vulnerable code.” Those using AI were “also more likely to believe they wrote secure code than those without access.”

ChatGPT and other AI assistants are extremely helpful when used for everyday purposes but can also be used maliciously by threat actors. It is just another tool in their toolbox for attacking victims. Being aware of how new technology can be used maliciously is an important way to stay vigilant and avoid becoming a victim.

Sorry to be the bearer of bad news, but remember that I am only the messenger. According to the World Economic Forum’s Global Cybersecurity Outlook 2023 Insight Report (published in collaboration with Accenture), although business leaders are more aware of the risk that cyber issues pose to their organizations, challenges remain in how organizations address and mitigate that risk.

According to the Report, “business and cyber leaders believe global geopolitical instability is moderately or very likely to lead to a catastrophic cyber event in the next two years.” Respondents understand the changing landscape of cyberattacks and “now believe that cyberattackers are more likely to focus on business disruption and reputational damage. These are the top two concerns among respondents.”

In addition, 43 percent of respondents believe that it is “likely that in the next two years, a cyberattack will materially affect their own organization.” They also recognize that their organization’s cybersecurity risk is related to their supply chain partners’ security posture. Executives “see data privacy laws and cybersecurity regulations as an effective tool for reducing cyber risks across a sector.” Not that they want to see more regulations, but they recognize that such rules can incentivize organizations to have basic cybersecurity measures in place.

Although the news is bleak that sophisticated cybersecurity attacks will increase and become more disruptive, it appears that organizations are becoming more aware of the risk, are trying to build a more robust cybersecurity posture, and are seeking ways to communicate more clearly across the organization. All of these measures are positive, but challenging, particularly in the face of a dearth of cybersecurity talent worldwide.

The Office of the California Attorney General recently announced that it will initiate an investigative sweep and start sending letters to businesses whose mobile apps fail to comply with the California Consumer Privacy Act (CCPA). There is also a new online tool that allows consumers to directly notify a business of an alleged CCPA violation, so we may see an influx of direct-from-consumer complaints.

The Attorney General’s office will focus its investigation on popular apps in the retail, travel, and food services industries. The goal is to determine whether these apps comply with consumers’ opt-out and do-not-sell-or-share requests under the CCPA. The investigation will also examine whether apps process consumer requests submitted through an authorized agent, as the CCPA requires. For example, Consumer Reports’ app, Permission Slip, acts as an authorized agent for consumers to submit requests under the CCPA, such as opt-outs and deletion requests.

Attorney General Rob Bonta said in the office’s recent press release, “[B]usinesses must honor Californians’ right to opt out and delete personal information, including when those requests are made through an authorized agent. [The] sweep also focuses on mobile app compliance with the CCPA, particularly given the wide array of sensitive information that these apps can access from our phones and other mobile devices. I urge the tech industry to innovate for good — including developing and adopting user-enabled global privacy controls for mobile operating systems that allow consumers to stop apps from selling their data.”

Businesses that are subject to the CCPA – and the newly effective amendments under the California Privacy Rights Act (CPRA) – should continue to update and implement their policies, procedures, and processes to ensure compliance with these laws and, hopefully, to avoid being caught up in this investigative sweep.
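For businesses wondering what honoring a “user-enabled global privacy control” might look like in practice, here is a minimal, hypothetical Python sketch that treats the Sec-GPC request header defined by the Global Privacy Control proposal as an opt-out-of-sale/sharing signal. The handler shape is invented for illustration, and whether GPC satisfies a particular legal requirement is a question for counsel.

```python
# Hypothetical sketch: treat the Global Privacy Control signal as an opt-out.
# Browsers that support GPC send the "Sec-GPC: 1" request header.

def gpc_opt_out_requested(headers: dict[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_request(headers: dict[str, str], user_id: str) -> str:
    if gpc_opt_out_requested(headers):
        # Record the opt-out and suppress sale/sharing of this user's data.
        return f"opt-out of sale/sharing recorded for {user_id}"
    return "no GPC signal; default processing"

print(handle_request({"Sec-GPC": "1"}, "user-42"))  # opt-out recorded
print(handle_request({}, "user-43"))                # default processing
```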