Not only is the People’s Republic of China (PRC) a threat through its use of TikTok, but it also supports threat actors that have for years attacked U.S.-based companies as well as the governments of the U.S. and Japan. According to a Joint Advisory published on September 27, 2023, by the National Security Agency, the FBI, CISA, the Japan National Police Agency, and the Japan National Center of Incident Readiness and Strategy for Cybersecurity, “BlackTech has demonstrated capabilities in modifying router firmware without detection and exploiting routers’ domain-trust relationships for pivoting from international subsidiaries to headquarters in Japan and the U.S.—the primary targets.”

In addition to targeting entities that support the U.S. and Japanese governments and militaries, BlackTech has targeted “industrial, technology, media, electronics, and communications sectors.” According to the Advisory, BlackTech uses “custom malware, dual-use tools, and living off the land tactics, such as disabling logging on routers, to conceal their operations.”

The Advisory provides detailed detection and mitigation techniques for organizations and recommends “monitor[ing] network devices for unauthorized downloads of bootloaders and firmware images and reboots. Network defenders should also monitor for unusual traffic destined to the router, including SSH.”
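
To make that guidance concrete, here is a minimal sketch, in Python, of the two checks the Advisory describes: scanning syslog output for firmware-image downloads and unexpected reboots, and flagging SSH connections to router management addresses from hosts outside an allowlist. The IP addresses, log formats, and keywords are assumptions for illustration, not tooling prescribed by the Advisory.

```python
import re

# Hypothetical router management IPs and the hosts allowed to SSH to them;
# both sets are illustrative and must be adapted to your environment.
ROUTER_IPS = {"10.0.0.1", "10.0.0.2"}
AUTHORIZED_ADMIN_IPS = {"10.0.5.10"}

# Keywords reflecting the Advisory's guidance: watch network devices for
# unauthorized bootloader/firmware downloads and reboots.
SUSPICIOUS_KEYWORDS = re.compile(r"reboot|firmware|bootloader|tftp", re.IGNORECASE)


def review_syslog(lines):
    """Flag syslog lines suggesting firmware changes or reboots on routers."""
    return [line for line in lines if SUSPICIOUS_KEYWORDS.search(line)]


def review_ssh_flows(flows):
    """Flag SSH connections to routers from hosts outside the admin allowlist.

    Each flow is a (src_ip, dst_ip, dst_port) tuple, e.g., from NetFlow export.
    """
    return [
        (src, dst, port)
        for src, dst, port in flows
        if dst in ROUTER_IPS and port == 22 and src not in AUTHORIZED_ADMIN_IPS
    ]


if __name__ == "__main__":
    sample_logs = [
        "Oct 2 03:14:07 edge-rtr-1 system: firmware image downloaded via TFTP",
        "Oct 2 03:15:22 edge-rtr-1 system: unexpected reboot",
        "Oct 2 08:00:01 edge-rtr-1 cron: nightly config backup completed",
    ]
    sample_flows = [("192.0.2.44", "10.0.0.1", 22), ("10.0.5.10", "10.0.0.1", 22)]
    for hit in review_syslog(sample_logs):
        print("LOG ALERT:", hit)
    for hit in review_ssh_flows(sample_flows):
        print("SSH ALERT: unexpected SSH to router:", hit)
```

In practice this logic would more likely live in a SIEM rule than a standalone script, but the conditions to alert on are the ones the Advisory calls out.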

There is a lot of chatter about the uses of artificial intelligence (AI) in cybersecurity. For example, Applied Sciences published a paper on how AI can be used for mobile malware detection, and Gartner has published on AI Security Management.

According to a Forbes article titled “A Primer on Artificial Intelligence and Cybersecurity,” AI “acts as a powerful catalyst and enabler for cybersecurity in our connected ecosystem.” The article includes an infographic by Chuck Brooks that clearly and concisely outlines the uses of AI in cybersecurity.

AI can “facilitate more effective decision-making, particularly in bigger networks with numerous users and factors,” can “be used to keep an eye on network anomalies, spot emerging dangers … and detect them,” and “may be able to assist identity management,” among other uses.
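
The anomaly-monitoring use case is the easiest of these to picture concretely. Below is a minimal sketch using scikit-learn’s IsolationForest to flag unusual network traffic; the feature choices, sample data, and contamination rate are illustrative assumptions, not a production design.

```python
# Minimal sketch of AI-based network anomaly detection with an IsolationForest.
# Features (bytes sent in KB, connections per minute) and data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" baseline traffic: [bytes_sent_kb, connections_per_minute]
normal_traffic = rng.normal(loc=[500, 30], scale=[50, 5], size=(1000, 2))

# Train on the baseline; contamination is the assumed anomaly rate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new observations: one ordinary flow, one exfiltration-like spike.
new_traffic = np.array([[510, 29], [9000, 400]])
for features, label in zip(new_traffic, model.predict(new_traffic)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"traffic {features} -> {verdict}")
```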

On the other hand, “while AI and machine learning might be useful tools for cyber defense, they can also be double-edged swords that criminal hackers can utilize for bad intentions.” Some of these uses are outlined in Forrester’s research paper “Using AI for Evil: A Guide to How Cybercriminals Will Weaponize and Exploit AI to Attack Your Business.”

Not only can AI be used to gain access to networks and systems, but it can also be used to create deepfakes and to “conceal malware in commonly downloaded programs.”

On the outlook for AI in cybersecurity, Forbes writes: “For the near future, AI will have a disruptive effect on operational cybersecurity models. Risk management approaches and technology implementation will have to be continually adapted at the speed of smart algorithms. In the coming years, addressing novel and increasingly complex threats will be essential to maintaining business continuity and cyber-resilience. A thorough understanding of AI’s potential uses, benefits, and drawbacks is necessary for the future of cybersecurity.”

Retool, a software development firm offering modular code for customizable enterprise software, recently notified 27 customers that a threat actor had accessed their accounts. The attacker navigated through multiple layers of security controls after compromising an employee through an SMS-based phishing attack, then used this access to target customers in the crypto industry. This was no ordinary phishing attempt, though: the attacker used real-time AI voice modulation, along with deep background intelligence on the office and its floor plan, to successfully impersonate a member of the IT team. The employee thought they were talking to someone they knew; they recognized the IT staff member’s voice from having met in person. Authentication ran through a virtual private network, single sign-on, and a final one-time passcode.

This attack illustrates the raw effectiveness of social engineering, but attackers do not need AI to con many employees and management. Social engineering is the oldest and most persistent attack vector, and anyone can be targeted. Layered and multifactor security is essential, but it’s not enough to secure your company. Anything a person can unlock, they can be duped into unlocking. Organizations are often drawn to technological solutions and the next big thing, but studies show that security awareness training is one of the most effective things an organization can do to reduce its risk of a cyberattack or to mitigate its impact. Realistic tabletop simulations are the most effective training method for learner retention.

Tabletop exercises engage senior leadership and incident response teams in rehearsing cybersecurity resilience so the organization is well prepared for a security incident. These exercises, which may be led by counsel, provide a unique opportunity to be open and transparent about preparedness, and they are a valuable way to prepare for these critical, and sometimes difficult and chaotic, situations before an incident happens.

On September 26, 2023, Microsoft released a configuration update for Windows 11, version 22H2 (all editions) that is worth reading and applying, particularly if you use Windows Copilot.

According to Microsoft, it has identified the following issues when using Copilot in preview:

  • Narrator does not work as you expect with challenge–response tests, such as Captcha.
  • Narrator fails to correctly state the name of the “remove an image” button. It also fails to say the name of the dialog or buttons for a skill.
  • When you are in the chat input box, pressing Tab does not change the keyboard focus. If you add an image to the chat input box, Narrator does not announce the addition.

Although Microsoft issued the update, it continues to work on a resolution.

The FBI and CISA issued a Joint Cybersecurity Advisory “#StopRansomware: Snatch Ransomware” on September 20, 2023. The Advisory outlines the indicators of compromise and observed tactics, techniques, and procedures of Snatch so organizations can identify, mitigate, and respond to an attack using the Snatch ransomware variant.

Snatch has been hitting the Defense Industrial Base (DIB), Food and Agriculture, and Information Technology sectors. “Snatch threat actors conduct ransomware operations involving data exfiltration and double extortion. After data exfiltration often involving direct communications with victims demanding ransom, Snatch threat actors may threaten victims with double extortion, where the victims’ data will be posted on Snatch’s extortion blog if the ransom goes unpaid.”

The malicious email domains used by Snatch are sezname[.]cz, cock[.]li, and airmail[.]cc. The legitimate email domains used by Snatch are tutanota[.]com / tutamail[.]com / tuta[.]io, mail[.]fr, keemail[.]me, protonmail[.]com / proton[.]me, and swisscows[.]email.
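
For defenders who want to put these indicators to work, below is a minimal sketch of screening inbound sender addresses against the listed domains. The defanged “[.]” notation is normalized before comparison, and a hit warrants review rather than proof of compromise, since the legitimate providers on the list have many lawful users.

```python
# Minimal sketch: screen inbound email sender domains against the Snatch
# domains listed in the Advisory. A match is a trigger for review, not an
# automatic finding of compromise.
SNATCH_ASSOCIATED_DOMAINS = [
    "sezname[.]cz", "cock[.]li", "airmail[.]cc",
    "tutanota[.]com", "tutamail[.]com", "tuta[.]io",
    "mail[.]fr", "keemail[.]me",
    "protonmail[.]com", "proton[.]me", "swisscows[.]email",
]


def refang(domain: str) -> str:
    """Convert defanged IOC notation back to a plain domain."""
    return domain.replace("[.]", ".")


WATCHLIST = {refang(d) for d in SNATCH_ASSOCIATED_DOMAINS}


def flag_sender(address: str) -> bool:
    """Return True if the sender's domain appears on the watchlist."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in WATCHLIST


if __name__ == "__main__":
    for sender in ["ops@example.com", "ransom-note@proton.me"]:
        print(sender, "->", "FLAGGED" if flag_sender(sender) else "ok")
```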

FBI and CISA provide recommendations to mitigate a Snatch attack, including:

  1. Secure and closely monitor Remote Desktop Protocol (RDP); a minimal monitoring sketch follows this list.
  2. Maintain offline backups of data.
  3. Enable and enforce phishing-resistant multifactor authentication (MFA).
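
On the first recommendation, here is one way to watch RDP activity: counting failed RemoteInteractive logons per source address from an exported Windows Security log. Event ID 4625 with logon type 10 corresponds to a failed RDP logon; the CSV column names and the alert threshold below are assumptions for illustration.

```python
# Minimal sketch of the "closely monitor RDP" recommendation: tally failed
# RDP logons per source IP from a CSV export of the Windows Security log.
# Assumed columns: EventID, LogonType, SourceIP.
import csv
from collections import Counter

FAILED_LOGON_EVENT_ID = "4625"  # failed logon
RDP_LOGON_TYPE = "10"           # RemoteInteractive (RDP)
ALERT_THRESHOLD = 5             # failures per source before alerting; tune it


def count_failed_rdp(csv_path: str) -> Counter:
    failures = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["EventID"] == FAILED_LOGON_EVENT_ID
                    and row["LogonType"] == RDP_LOGON_TYPE):
                failures[row["SourceIP"]] += 1
    return failures


if __name__ == "__main__":
    for source, count in count_failed_rdp("security_events.csv").items():
        if count >= ALERT_THRESHOLD:
            print(f"ALERT: {count} failed RDP logons from {source}")
```

The same pattern extends naturally to alerting on successful RDP logons from unfamiliar networks.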

Google Workspace for Education will require school admins to independently approve all integrated third-party applications students use. Users under 18 cannot use their Google accounts to access third-party applications without consent configured in user settings, and access will terminate automatically on October 1, 2023. Google Workspace for Education’s Terms of Service does not cover third-party applications, which may collect user information according to their own privacy policies. Enabling a third-party application is as easy as having the account admin click “Confirm.” However, the legal requirements may not be as simple: K-12 schools and extracurricular, daycare, and other childcare organizations may need parental consent before allowing children to access these services.

Privacy laws, including the Children’s Online Privacy Protection Act (COPPA), California’s Online Privacy Protection Act (CalOPPA), and the Family Educational Rights and Privacy Act (FERPA), restrict how organizations collect, process, and sell children’s data without parental consent. Additionally, third-party applications have historically been risky in terms of privacy practices. For instance, the Federal Trade Commission (FTC) famously fined a coloring book application for mining children’s data in violation of COPPA, and the Mozilla Foundation routinely calls out applications with substandard privacy practices in its Privacy Not Included series (which we’ve enjoyed before).

K-12 schools and any other organization with users under the age of 18 may consider taking this opportunity to audit the third-party applications their students use. Often, organizations add these applications on an as-needed basis without a regular review process. While handy in the moment, this ad hoc approach may allow apps with substandard privacy practices to creep in. With children’s data, and hefty fines, at stake, wise institutions may take the time to be proactive rather than reactive.

We have been keeping a keen eye on the explosion in the use of artificial intelligence (AI) tools and generative AI. We are assisting clients with AI Governance Programs: formulating a process to evaluate the use of AI in their organizations, encouraging safe and reliable use of AI tools by employees, evaluating appropriate uses of AI tools, and developing a process to mitigate the legal issues that arise from the use of AI tools, including educating employees on the risks those tools pose and on how the organization is mitigating them. We find that many employees have no idea that their use of generative AI or other AI tools may carry legal risk.

We are also dedicated to educating our readers on emerging issues we think are worth considering and to pointing them to articles we think are a worthwhile read.

A new eWeek article, “AI and Privacy Issues: What You Need to Know,” is one such example. It outlines some of the privacy issues to consider “as AI becomes increasingly pervasive in our lives.”

Although the article is targeted at consumers, it is instructive for businesses using AI tools to be aware of what consumer-facing publications are saying about business use of AI and how consumers are urged to respond. This is an important part of an AI Governance Program: considering how your employees and customers will react to your use of AI tools, including what data you feed into the AI tool, whose personal or sensitive information may be used in the model for learning, and whether the business is disclosing personal or sensitive information to third-party AI developers, and if so, what those developers are doing with the data.

This article outlines some of the concerns your employees and customers may have with your use of AI tools. You may wish to consider them when developing an AI Governance Program.

It is scary to think of cyber warfare and how it may affect us, but the reality is there, and we should be prepared. I was chatting with a colleague this morning who asked for the top two things to do to prepare for a massive cyber-attack. I started thinking about this while having lunch at a small restaurant during a nor’easter with no snow but high winds. We ordered, our food arrived, and the electricity in the restaurant and surrounding area went out. The restaurant didn’t have a generator, and its credit card machine didn’t work. Of the six tables of people dining in the restaurant, I was the only one with enough cash (a mere $40) to pay for lunch. The owner went table to table, wrote people’s credit card numbers, including CVVs, down on a piece of paper, and said he would charge them when he was back up and running. I was cringing the whole time; it illustrated for me that most people carry no cash and rely on plastic methods of payment.

I liken preparing for a massive cyber-attack to preparing for a massive electrical outage, like a natural disaster, but adding a disruption to everything else in the mix, like internet service and payment processing. What do you do to prepare for a hurricane or other natural disaster that disrupts both electricity and internet service? You want to have items that will assist you with activities of daily living while certain services and amenities may be closed or unavailable. A generator helps, but some food, water, energy backups, batteries, candles, and blankets come to most people’s minds when contemplating a natural disaster. To these items, for a cyber-attack, I add cash. Everyone needs some cash if there is a widespread attack on the energy grid, internet service providers, or the financial services industry. Think about what you would do if you couldn’t go to an ATM, couldn’t use a debit or credit card, and couldn’t go online or use payment apps. How would you purchase the items you need to weather the cyber storm? Think about everything that is connected to the internet and what you would need if the internet were down for a week.

Cash allows you to buy items such as food, water, and gasoline for your car in the event you can’t use a credit or debit card or payment apps during an attack. I find that most people do not carry any cash on a daily basis, nor do they keep any emergency cash on hand. To prepare, think about what you might need for a week or two and get some emergency cash. Just don’t stash it under your mattress!

This week, Delaware Governor John Carney signed the Delaware Personal Data Privacy Act into law. The law goes into effect on January 1, 2025, and a public outreach effort will begin by July 1, 2024. The outreach effort will inform Delaware consumers of their rights under the new law and describe businesses’ obligations. Delaware is yet another state to join the wave of consumer privacy legislation that followed the passage of the California Consumer Privacy Act in 2018.

The law applies to companies doing business in Delaware that control or process the personal data of 35,000 or more Delaware consumers, or, if the business derives more than 20 percent of its gross annual revenue from the sale of personal data, the threshold drops to 10,000 Delaware consumers. The Delaware Department of Justice intends to hire more staff to assist in educating the public about this new law and enforcing the legislation. To read the full text of the bill, click here.
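
Because the two thresholds interact, the applicability test can be easier to see as a simple conditional. The function below merely restates, for illustration, the thresholds described above; it is not legal advice, and the statute’s definitions control.

```python
# Illustrative restatement of the Delaware Personal Data Privacy Act's
# applicability thresholds as described above (not legal advice): the law
# reaches businesses processing personal data of 35,000+ Delaware consumers,
# or 10,000+ if more than 20 percent of gross annual revenue comes from the
# sale of personal data.
def delaware_law_applies(consumers_processed: int,
                         pct_revenue_from_data_sales: float) -> bool:
    if consumers_processed >= 35_000:
        return True
    return pct_revenue_from_data_sales > 20.0 and consumers_processed >= 10_000


print(delaware_law_applies(40_000, 0.0))    # True: volume threshold alone
print(delaware_law_applies(12_000, 25.0))   # True: data-sale revenue prong
print(delaware_law_applies(12_000, 5.0))    # False: below both thresholds
```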

This week the Federal Aviation Administration (FAA) announced that drone pilots who are unable to comply with the Remote ID Rule broadcast requirement will have until March 16, 2024, to equip their drones appropriately. If a drone pilot fails to comply with this requirement after the extended deadline, the pilot could be subject to fines, or to suspension or revocation of their pilot certificate. The original compliance deadline was set for September 16, 2023.

The FAA’s decision to extend the deadline stems from the unanticipated issues that some drone operators are facing with some types of remote identification broadcast modules. To meet the Remote ID Rule requirements, pilots can purchase a standard Remote ID-equipped drone directly from a manufacturer or purchase a Remote ID broadcast module, which must be affixed to existing drones that did not have Remote ID equipment installed at production. The goal of the Remote ID Rule is to create a digital license plate for drones to assist the FAA, law enforcement, and other federal agencies in monitoring drones for unsafe operation or operation in restricted areas. To view the full regulation, click here.