On June 2, 2024, cloud service provider Snowflake reported increased cyber threat activity targeting some of its customers’ accounts. Snowflake recommended that customers review their accounts for unusual activity to detect and prevent unauthorized access.

The Cybersecurity and Infrastructure Security Agency (CISA) followed with an alert on June 3, 2024, recommending that Snowflake customers “hunt for malicious activity, report positive findings to CISA, and review the Snowflake notice” on steps to take.

On June 10, 2024, Mandiant provided additional information about the incident. If you are a Snowflake user, the Mandiant alert is a must-read. According to Mandiant, it identified a campaign by threat actor UNC5537 targeting “Snowflake database instances with the intent of data theft and extortion.” The threat actor is suspected of using stolen customer credentials to exfiltrate records from Snowflake customers and of subsequently advertising the data for sale in an attempt to extort those customers. Mandiant has found no evidence of a breach of Snowflake’s own environment; instead, the incidents stemmed from customer credentials, harvested by infostealer malware, that were used to access customers’ Snowflake instances. The credentials used by the threat actor were “available from historical infostealer infections, some of which dated as far back as 2020.”

Mandiant identified three factors that allowed the compromises to succeed:

1. The impacted accounts were not configured with multi-factor authentication enabled, meaning successful authentication only required a valid username and password.

2. Credentials identified in infostealer malware output were still valid, in some cases years after they were stolen, and had not been rotated or updated.

3. The impacted Snowflake customer instances did not have network allow lists in place to only allow access from trusted locations.

Snowflake users may wish to confirm that none of these three factors applies to them and, if any do, take measures to address them.
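As a starting point for the first factor, here is a minimal sketch, not an official Snowflake or Mandiant tool, of how an administrator might flag users who can authenticate with a password alone. It assumes the snowflake-connector-python package, read access to the SNOWFLAKE.ACCOUNT_USAGE schema, and hypothetical connection details; EXT_AUTHN_DUO is the column in Snowflake's documented ACCOUNT_USAGE.USERS view that reflects Duo-based MFA enrollment.

```python
# Sketch only: list Snowflake users who have a password but no MFA enrollment.
import snowflake.connector

conn = snowflake.connector.connect(
    user="SECURITY_REVIEWER",    # hypothetical reviewer account
    password="...",              # use a secrets manager in practice
    account="myorg-myaccount",   # hypothetical account identifier
)
try:
    cur = conn.cursor()
    # EXT_AUTHN_DUO reflects Duo MFA enrollment in the ACCOUNT_USAGE.USERS view.
    cur.execute("""
        SELECT name
        FROM snowflake.account_usage.users
        WHERE deleted_on IS NULL
          AND has_password = TRUE
          AND ext_authn_duo = FALSE
    """)
    for (name,) in cur.fetchall():
        print(f"Password-only user (no MFA): {name}")
finally:
    conn.close()
```

Pairing a check like this with credential rotation and a Snowflake network policy limiting access to trusted IP ranges addresses the other two factors.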

According to Mandiant, it and Snowflake have notified 165 “potentially exposed organizations,” and Snowflake is working with customers to mitigate a potential compromise.

Google/Mandiant provided a helpful threat intelligence collection of indicators of compromise, which is worth a scan.

On October 30, 2023, President Biden issued Executive Order 14110, aiming to ensure the safe, secure, and trustworthy development and use of artificial intelligence (AI). In response, the US Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) has released guidance for federal contractors on preventing discrimination in AI-driven hiring practices.

Although the OFCCP focuses on government contractors’ duty to use AI in compliance with the law, this guidance is useful not only for federal contractors but for any employer that uses AI for employment-related decisions. In April 2023, the Consumer Financial Protection Bureau, the Department of Justice Civil Rights Division, the EEOC, and the FTC issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.” The Joint Statement explained, literally in bold letters: “Automated Systems May Contribute to Unlawful Discrimination and Otherwise Violate Federal Law.” Since then, several more agencies have joined the pledge. The agencies “pledge[d] to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.” So, whether you have a federal contract or not, the OFCCP’s advice will help you avoid unwanted government attention.

Starting with the basics, AI is a machine-based system that can perform actions that typically require human intelligence. Simply put, AI can take in an enormous amount of data, detect patterns in that data, and offer suggestions that follow those patterns. Every time your phone suggests how to complete a sentence, you ask your GPS for directions, or Netflix suggests something for you to watch, you are using AI.

For example, your phone might suggest how you want to respond to a text. Your phone is not “thinking” as we understand the word. Rather, it has recognized how most people respond to texts like the one you received, and it suggests a response based on that pattern. So, if you receive a text that says, “Thank you,” your phone will probably suggest “You’re welcome.”

Or, to take it a step further, if you start typing, “I look forward,” your phone may suggest “to seeing you.” That is the pattern the phone’s AI has detected, and it will perpetuate that pattern unless it is told not to.
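For the technically curious, here is a toy sketch, with invented data, of the pattern detection just described: a model that counts which word most often follows another in past text and suggests it.

```python
# Toy next-word suggester: detect the dominant pattern in past text and repeat it.
from collections import Counter, defaultdict

history = [
    "thank you", "you are welcome", "i look forward to seeing you",
    "we look forward to hearing from you", "i look forward to it",
]

# "Training": count which word follows each word in the corpus.
following = defaultdict(Counter)
for sentence in history:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def suggest(word: str) -> str | None:
    """Return the most common follower of `word`, if any was observed."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("forward"))  # -> "to", the dominant observed pattern
```

Real systems are vastly more sophisticated, but the core move, detect a pattern and repeat it, is the same.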

Since the advent of ChatGPT in late 2022, employers have seen the possibilities of AI in employment decisions. Employers widely use AI to streamline workflows and assist in decision-making processes. Just as AI can respond to a “thank you” text or complete a sentence, it can detect a pattern in employment decisions and perpetuate it. Using this pattern detection and perpetuation ability, AI can automate various HR tasks, from resume screening to performance evaluations. AI can also help HR professionals sort through resumes or determine criteria for employment decisions, such as hiring or promotion.

Here is the rub: AI’s eagerness to perpetuate the patterns it recognizes can lure well-meaning but careless employers into actions that violate federal discrimination laws. AI can pick up discriminatory patterns even when humans do not notice them. Having recognized a discriminatory pattern, AI will apply that pattern in its output, thus embedding and perpetuating the discrimination it noticed. In other words, just as children are much better at imitating their parents than following their parents’ instructions, AI is much better at noticing and applying discriminatory patterns than complying with efforts to eliminate discriminatory outcomes. So, if an AI discriminates, it is generally because it was trained on existing data or modeled on behavior or goals set in the human world, and that data or behavior turns out to be discriminatory.
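To make that concrete, here is a deliberately naive, entirely hypothetical sketch of a resume scorer that learns what past hires “looked like.” Because an irrelevant trait happened to be common among past hires, the scorer rewards it going forward; all data is invented.

```python
# Naive sketch: score applicants by overlap with keywords common among past hires.
from collections import Counter

past_hires = [                    # hypothetical resume keyword sets
    {"python", "lacrosse"},
    {"sql", "lacrosse"},
    {"python", "sql", "lacrosse"},
    {"java"},
]

# "Training": how often each keyword appeared among past hires.
weights = Counter(kw for resume in past_hires for kw in resume)

def score(resume: set[str]) -> int:
    """Higher score = looks more like past hires, relevant or not."""
    return sum(weights[kw] for kw in resume)

# An irrelevant hobby outweighs a relevant skill, because it was the
# strongest pattern in the (biased) historical data.
print(score({"lacrosse"}))  # 3
print(score({"java"}))      # 1
```

No one told this scorer to prefer a hobby; it simply perpetuated the pattern it noticed.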

None of that will serve as an excuse for companies using AI for hiring or promotion purposes. They must ensure their AI systems do not perpetuate unlawful bias or discrimination.

To this end, the OFCCP outlines several compliance obligations:

  • Contractors must maintain records of AI system usage and ensure their confidentiality.
  • Contractors must provide necessary information about their AI systems during compliance evaluations.
  • Contractors must accommodate applicants or employees with disabilities in their AI-driven processes.

The OFCCP investigates the use of AI in employment decisions during compliance evaluations and complaint investigations. Contractors must validate AI systems that have an adverse impact on protected groups, ensuring these systems meet the Uniform Guidelines on Employee Selection Procedures (UGESP). Under the UGESP’s “four-fifths rule,” a selection rate for any group that is less than 80 percent of the rate for the highest-rated group is generally regarded as evidence of adverse impact.
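Here is a minimal sketch, with hypothetical numbers, of that four-fifths computation.

```python
# Hypothetical numbers only: the UGESP four-fifths adverse impact check.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={impact_ratio:.2f} -> {flag}")
```

A flagged ratio is a starting point for validation under the UGESP, not a legal conclusion in itself.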

To protect your client against lawsuits or enforcement actions based on AI bias, you should advise your client on AI governance – that is, the ability to direct, manage, and monitor an organization’s AI activities. The OFCCP suggests several AI governance guidelines, including:

  • Inform applicants and employees about the use of AI, including how the employer will capture, use, and protect their information.
  • Ensure human oversight in hiring and promotion decisions. AI is not a “set it and forget it” device. A human must ensure the AI is acting in compliance with the law, and the human should get involved in the process sooner rather than later. Imagine how much better off Amazon would have been if someone had noticed earlier the sudden influx of lacrosse-playing Jareds.
  • Regularly monitor AI system outputs for disparate impacts and take steps to mitigate any identified biases. This includes assessing whether the AI system’s training data may reproduce existing workplace inequalities.
  • Vet vendor-created systems. Most companies buy an off-the-shelf AI system; if you do, verify that the vendor maintains records consistent with OFCCP requirements and ensure the system is fair, reliable, and transparent. Contractors remain responsible for compliance even when using third-party tools.

By implementing these guidelines, federal contractors and other employers can leverage AI’s benefits while safeguarding against its potential pitfalls. This proactive approach aligns with the broader goal of fostering an inclusive and equitable workplace in the age of AI.

TikTok has reported that it is responding to a cyber attack targeting a limited number of known brands and celebrity accounts. The BBC identified Paris Hilton’s account as one of the targets, but TikTok says it was not compromised.

The BBC identified CNN as a victim whose account was successfully attacked. TikTok is working with CNN and other affected users to restore access to their accounts.

Since I hang out with a lot of CISOs and understand their pain points, I urge readers to send a “thank you” and “you are the best” message to their CISO. You can’t imagine the pressure and stress they are under trying to protect the company’s data. To get a glimpse of why you need to appreciate your CISO, take a look at Proofpoint’s recently released “2024 Voice of the CISO: Global Insights into CISO Challenges, Expectations and Priorities.”

Spoiler alert: the first sentence says, “CISOs are struggling with a jarring mix of challenges…” The statistics are grim but real:

  • Seventy percent of the 1,600 CISOs surveyed “feel at risk of a material cyber attack over the next 12 months.”
  • Forty-three percent believe that their organization is “unprepared to cope with a targeted cyber attack in 2024.”
  • Forty-one percent believe that ransomware is the leading threat over the next 12 months, including double and triple extortion threats.
  • Seventy-four percent “consider human error to be their organization’s biggest cyber vulnerability.”
  • A substantial number of organizations in the education, health care, media, leisure, entertainment, financial services, and transport sectors have lost data through employee theft.
  • Fifty-four percent of CISOs “believe generative AI poses a risk to their organization.”
  • Eighty-four percent of CISOs believe that “cybersecurity experts should be required at the board level.”
  • Sixty-six percent “believe there are excessive expectations on the CISO/CIO.”

The survey shows that CISOs will face unmatched challenges in 2024 and need support from executives, the board, and employees. If you are in any of those categories, be cognizant of the work your CISO is doing, support that work, do your part as a team player, and say “thanks.” Give your CISO a little love after you read this. I am going to right now.

The issue of bias in artificial intelligence is assuming increased urgency in courtrooms around the country. Organizations that use AI to scan resumes can be sued for employment discrimination. Companies using facial recognition on their property might face premises liability. And numerous government agencies have announced their focus on companies that use AI in ways that violate federal antidiscrimination laws. Avoiding the inadvertent use of AI to implement or perpetuate unlawful biases requires thoughtful AI governance practices.

Basically, AI governance describes the ability to direct, manage, and monitor an organization’s AI activities. Put simply, your clients should no more uncritically accept mass-produced AI output than they would uncritically believe a salesperson they had just met.

The U.S. National Institute of Standards and Technology (NIST) has recently offered AI governance protocols to minimize bias. Those protocols include the following:

1. Monitoring. AI is not “set it and forget it.” Organizations will want to monitor their AI systems for potential bias issues and have a procedure for alerting the proper personnel when the monitoring reveals a potential problem. Through appropriate monitoring, organizations can learn about a potential liability before a lawsuit or a government enforcement action tells them about it. (A bare-bones monitoring check is sketched after this list.)

2. Written Policies and Procedures. Robust written policies and procedures for all important aspects of the business are important, and AI is no exception. Absent effective written policies, managing AI bias can easily become subjective and inconsistent across business sub-units, which can exacerbate risks over time rather than minimize them. Among other things, such policies should include an audit and review process, outline requirements for change management, and provide details of any plans related to incident response for AI systems.

3. Accountability. Having a person or team in place who is responsible for protecting against AI bias will maximize your AI governance efforts. Ideally, the accountable person or team will have enough authority to command compliance with proper AI protocols implicitly – or explicitly if need be. Accountability mandates can also be embedded within and across the teams involved in the use of AI systems. Implementing effective AI governance to minimize bias requires careful thought, but it is crucial to protect against AI bias lawsuits or enforcement actions.
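As a rough illustration of the monitoring protocol described in item 1, here is a hypothetical sketch that recomputes impact ratios over each batch of AI-assisted decisions and raises an alert for the accountable team when a ratio falls below the four-fifths threshold discussed earlier. All data and thresholds are invented.

```python
# Hypothetical monitoring tripwire for AI-assisted selection decisions.
THRESHOLD = 0.8  # the four-fifths rule reused as a simple alert threshold

def check_batch(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """outcomes maps group -> (selected, applicants); returns alert messages."""
    rates = {group: sel / apps for group, (sel, apps) in outcomes.items()}
    highest = max(rates.values())
    return [
        f"ALERT: {group} impact ratio {rate / highest:.2f} is below {THRESHOLD}"
        for group, rate in rates.items()
        if rate / highest < THRESHOLD
    ]

# Invented weekly data; in practice, route alerts to the accountable person or team.
weekly = {"group_a": (52, 110), "group_b": (18, 90)}
for alert in check_batch(weekly):
    print(alert)
```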

The UK’s data privacy regulator, the Information Commissioner’s Office (ICO), is investigating Microsoft over potential privacy concerns with its recently announced AI-powered “Recall” feature for Windows PCs. Microsoft Recall is designed to continuously capture screenshots of a user’s PC activity and use AI to create a searchable computer usage history. While these screenshots would be stored locally, Microsoft has stated that sensitive information like passwords, addresses, and health data would not be filtered out.

The ICO has initiated discussions with Microsoft to gain a comprehensive understanding of the measures in place to safeguard user privacy. The regulator has underscored the necessity for organizations to be forthright with users about the utilization of their data and to process personal data only to the extent that it is essential.

Cybersecurity experts have sounded the alarm about the potential for hackers and malicious entities to exploit this vast collection of personal data if devices are compromised. The continuous operation of Recall in the background also raises concerns about users’ lack of awareness regarding the exact nature of the data being collected and stored, further exacerbating the potential privacy risks.

Microsoft has not yet provided full details on how it will protect Recall data or what user controls will be available. As the powerful new feature comes under the microscope from regulators and critics alike, the tech giant may need to adjust its approach to allay privacy fears. The scrutiny is a critical reminder of the profound ethical responsibilities accompanying technological advancements. While the promise of enhanced productivity through AI-generated usage histories represents a notable stride in personal computing, user privacy and data security must remain at the forefront of such developments. Users and watchdogs will closely observe how Microsoft balances innovation with privacy.

Wow! It’s hard to believe this blog marks the 400th Privacy Tip since I started writing many years ago. I hope the tips have been helpful over the years and that you have been able to share them with others to spread the word. 

I thought it would be fun to pick 10 (ok—technically, a few more than 10) Privacy Tips and re-publish them (in case you missed them) in honor of our 400th Privacy Tip milestone. We have published tips that are relevant to the hot issues of the time, but some are time-honored. It was really hard to pick, but here they are:

Continue Reading Privacy Tip #400 – Best of First 400 Privacy Tips

Tennessee Governor Bill Lee signed legislation on May 22, 2024, that will shield private entities from class action lawsuits stemming from a cybersecurity event unless the event was caused by willful, wanton, or gross negligence.

The bill, as introduced, “declares a private entity to be not civilly liable in a class action resulting from a cybersecurity event unless the cybersecurity event was caused by willful, wanton, or gross negligence on the part of the private entity. The bill amends TCA Title 29 and Title 47.”

This bill will be a blow to class action plaintiffs’ law firms that have routinely filed suit against companies that are victims of criminal cybersecurity attacks, alleging that the companies were negligent in protecting consumer data. The bill sets a high bar for plaintiffs to overcome to pursue class action litigation in Tennessee.

It will be very interesting to see whether other states follow suit. We will be following this closely.

This week, Marriott Hotel Services was hit with a class action lawsuit for alleged violations of the Illinois Biometric Information Privacy Act (BIPA). The lawsuit alleges that the hotel violated BIPA by requiring workers to scan their fingerprints to clock in at work without proper notice or consent.

BIPA prohibits businesses from:

  • Collecting biometric data without written consent;
  • Collecting biometric data without informing the person in writing of the purpose and length of time the data will be used; and
  • Selling or profiting from consumers’ biometric information.

The complaint states that the fingerprint scanner is connected to the timekeeping and payroll system and that the fingerprint data is then stored on a third-party platform (Kronos, Inc.). The plaintiff alleges that Marriott did not inform employees of the system or how long the data would be retained. The proposed class includes all employees who worked for Marriott in Illinois since 2019.

BIPA permits plaintiffs to seek statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation; for a class of even 1,000 employees, potential exposure can quickly reach into the millions.

Illinois is not the only state with this type of biometric privacy law: Texas and Washington also have laws that address the collection and use of biometric data. Other states have narrower biometric regulations, such as industry-specific laws and certain provisions under state consumer privacy statutes (e.g., California, Colorado, Connecticut, Utah, and Virginia). Additionally, other states, such as Massachusetts and Missouri, have introduced biometric privacy bills. Companies should be on the lookout for new laws and regulations in this space and confirm that their practices related to biometric data collection and use comply with applicable laws.

Intercontinental Exchange, Inc. (ICE), the owner of the New York Stock Exchange, has agreed to settle with the Securities and Exchange Commission (SEC) for $10 million over allegations that it failed to timely notify the SEC of the cybersecurity incident it experienced in 2021 involving its virtual private network.

The SEC alleged that ICE should have notified it immediately of the incident, but ICE contends that “[t]his settlement involves an unsuccessful attempt to access our network more than three years ago…The failed incursion had zero impact on market operations. At issue was the time frame for reporting this type of event under Regulation SCI.”

In short, the SEC alleges that it should have been notified immediately, while ICE contends that the incident was not material and did not rise to the level of significance that would have obligated it to notify the SEC “immediately.”

A settlement does not indicate fault. The lesson here is that the SEC takes a conservative approach to reporting obligations and will use its muscle if reporting is not provided in what it deems a timely manner.