TikTok has reported that it is responding to a cyber attack targeting a limited number of known brand and celebrity accounts. The BBC has identified Paris Hilton’s account as one of the targets, but TikTok says it was not compromised.

The BBC identified CNN as a victim whose account was successfully attacked. TikTok is working with CNN and other affected users to restore access to their accounts.

Since I hang out with a lot of CISOs and understand their pain points, I urge readers to send a “thank you” and “you are the best” message to their CISO. You can’t imagine the pressure and stress they are under trying to protect the company’s data. To get a glimpse of why you need to appreciate your CISO, take a look at Proofpoint’s recently released report, “2024 Voice of the CISO: Global Insights into CISO Challenges, Expectations, and Priorities.”

Spoiler alert: the first sentence says, “CISOs are struggling with a jarring mix of challenges…” The statistics are grim but real:

  • Seventy percent of the 1600 CISOs surveyed “feel at risk of a material cyber attack over the next 12 months.”
  • Forty-three percent believe that their organization is “unprepared to cope with a targeted cyber attack in 2024.”
  • Forty-one percent believe that ransomware is the leading threat over the next 12 months, including double and triple extortion threats.
  • Seventy-four percent “consider human error to be their organization’s biggest cyber vulnerability.”
  • A substantial number of organizations in the education, health care, media, leisure, entertainment, financial services, and transport sectors have lost data through employee theft.
  • Fifty-four percent of CISOs “believe generative AI poses a risk to their organization.”
  • Eighty-four percent of CISOs believe that “cybersecurity experts should be required at the board level.”
  • Sixty-six percent “believe there are excessive expectations on the CISO/CIO.”

The survey shows that CISOs will face unmatched challenges in 2024, and need support from executives, the board, and employees. If you are in any of those categories, be cognizant of the work your CISO is doing, support that work, do your part as a team player, and say “thanks.” Give your CISO a little love after you read this. I am going to right now.

The issue of bias in artificial intelligence is assuming increased urgency in courtrooms around the country. Organizations that use AI to scan resumes can be sued for employment discrimination. Companies using facial recognition on their property might face premises liability. And numerous government agencies have announced their focus on companies that use AI in ways that violate federal antidiscrimination laws. Avoiding the inadvertent use of AI to implement or perpetuate unlawful biases requires thoughtful AI governance practices.

At its most basic, AI governance describes an organization’s ability to direct, manage, and monitor its AI activities. Put simply, your clients should no more uncritically accept mass-produced AI output than they would believe a salesperson they had just met.

The U.S. National Institute for Standards and Technology (NIST) has recently offered AI governance protocols to minimize bias. Those protocols include the following:

1. Monitoring. AI is not “set it and forget it.” Organizations will want to monitor their AI systems for potential bias issues and have a procedure for alerting the proper personnel when monitoring reveals a potential problem. With appropriate monitoring, organizations can learn about a potential liability before a lawsuit or a government enforcement action tells them about it.

2. Written Policies and Procedures. Robust written policies and procedures for all important aspects of the business are important, and AI is no exception. Absent effective written policies, managing AI bias can easily become subjective and inconsistent across business sub-units, which can exacerbate risks over time rather than minimize them. Among other things, such policies should include an audit and review process, outline requirements for change management, and provide details of any plans related to incident response for AI systems.

3. Accountability. Having a person or team responsible for protecting against AI bias will maximize your AI governance efforts. Ideally, the accountable person or team will have enough authority to command compliance with proper AI protocols implicitly – or explicitly if need be. Accountability mandates can also be embedded within and across the teams involved in the use of AI systems. Implementing effective AI governance to minimize biases requires careful thought, but it is crucial to protecting against AI bias lawsuits and enforcement actions.
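
The monitoring step (item 1) can be made concrete with a simple statistical check. The sketch below is a hypothetical example with made-up data and invented function names: it flags demographic groups whose selection rate falls below four-fifths of the highest group’s rate, a common heuristic (the EEOC “four-fifths rule”) for spotting potential disparate impact in automated screening tools.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_alerts(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Hypothetical monitoring batch: (demographic group, was the candidate selected?)
batch = ([("A", True)] * 40 + [("A", False)] * 60 +
         [("B", True)] * 20 + [("B", False)] * 80)
alerts = disparate_impact_alerts(batch)  # group B selected at half A's rate
```

A check like this is only a tripwire, not a legal conclusion; its value is in routing the alert to the accountable person or team described in item 3.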

The UK’s data privacy regulator, the Information Commissioner’s Office (ICO), is investigating Microsoft over potential privacy concerns with its recently announced AI-powered “Recall” feature for Windows PCs. Microsoft Recall is designed to continuously capture screenshots of a user’s PC activity and use AI to create a searchable computer usage history. While these screenshots would be stored locally, Microsoft has stated that sensitive information like passwords, addresses, and health data would not be filtered out.

The ICO has initiated discussions with Microsoft to gain a comprehensive understanding of the measures in place to safeguard user privacy. The regulator has underscored the necessity for organizations to be forthright with users about the utilization of their data and to process personal data only to the extent that it is essential.

Cybersecurity experts have sounded the alarm about the potential for hackers and malicious entities to exploit this vast collection of personal data if devices are compromised. The continuous operation of Recall in the background also raises concerns about users’ lack of awareness regarding the exact nature of the data being collected and stored, further exacerbating the potential privacy risks.

Microsoft has not yet provided full details on how it will protect Recall data or what user controls will be available. As the powerful new feature draws scrutiny from regulators and critics alike, the tech giant may need to adjust its plans to allay privacy fears. Recall is a critical reminder of the profound ethical responsibilities accompanying technological advancements: while AI-generated usage histories promise a notable stride in personal computing productivity, user privacy and data security must remain at the forefront of such developments. Users and watchdogs will closely observe how Microsoft balances innovation with privacy.

Wow! It’s hard to believe this blog marks the 400th Privacy Tip since I started writing many years ago. I hope the tips have been helpful over the years and that you have been able to share them with others to spread the word. 

I thought it would be fun to pick 10 (ok—technically, a few more than 10) Privacy Tips and re-publish them (in case you missed them) in honor of our 400th Privacy Tip milestone. We have published tips that are relevant to the hot issues of the time, but some are time-honored. It was really hard to pick, but here they are:

Tennessee Governor Bill Lee signed legislation on May 22, 2024, that will shield private entities from class action lawsuits stemming from a cybersecurity event unless the event was caused by willful, wanton, or gross negligence.

The bill, as introduced, “declares a private entity to be not civilly liable in a class action resulting from a cybersecurity event unless the cybersecurity event was caused by willful, wanton, or gross negligence on the part of the private entity. The bill amends TCA Title 29 and Title 47.”

This bill will be a blow to class action plaintiffs’ law firms that have routinely filed suit against companies that are victims of criminal cybersecurity attacks, alleging that the companies were negligent in protecting consumer data. The bill provides a high bar for plaintiffs to overcome to pursue class action litigation in Tennessee.

It will be very interesting to see whether other states follow suit. We will be following this closely.

This week, Marriott Hotel Services was hit with a class action lawsuit for alleged violations of Illinois’ Biometric Information Privacy Act (BIPA). The lawsuit alleges that the hotel violated BIPA by requiring workers to scan their fingerprints to clock in at work without proper notice or consent.

BIPA prohibits businesses from:

  • Collecting biometric data without written consent;
  • Collecting biometric data without informing the person in writing of the purpose and length of time the data will be used; and
  • Selling or profiting from consumers’ biometric information.

The complaint states that the fingerprint scanner is connected to the timekeeping and payroll system and that the fingerprint data is then stored on a third-party platform (Kronos, Inc.). The plaintiff alleges that Marriott did not inform employees of the system or how long the data would be retained. The proposed class includes all employees who worked for Marriott in Illinois since 2019.

BIPA permits plaintiffs to seek statutory damages between $1,000 and $5,000 per violation.
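
Those per-violation figures compound quickly across a class. A back-of-the-envelope sketch (hypothetical numbers and an invented helper, not a damages model):

```python
def bipa_exposure_range(class_size: int, violations_per_person: int = 1):
    """Rough statutory-damages range under BIPA: $1,000 (negligent)
    to $5,000 (intentional or reckless) per violation."""
    n = class_size * violations_per_person
    return (1_000 * n, 5_000 * n)

# A hypothetical 500-employee class with one violation each would face
# exposure between $500,000 and $2.5 million.
low, high = bipa_exposure_range(500)
```

Courts continue to wrestle with how violations accrue (per scan versus per person), so the multiplier is itself a litigated question.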

Illinois is not the only state with this type of biometric privacy law: Texas and Washington also have laws that address the collection and use of biometric data. Other states have narrower biometric regulations, such as industry-specific laws and certain provisions under state consumer privacy statutes (e.g., California, Colorado, Connecticut, Utah, and Virginia). Still others, such as Massachusetts and Missouri, have introduced biometric privacy bills. Companies should be on the lookout for new laws and regulations in this space and confirm that their biometric data collection and use practices comply with applicable law.

Intercontinental Exchange, Inc. (ICE), the owner of the New York Stock Exchange, has agreed to settle with the Securities and Exchange Commission (SEC) for $10 million over allegations that it failed to timely notify the SEC of the cybersecurity incident it experienced in 2021 involving its virtual private network.

The SEC alleged that ICE should have notified it immediately of the incident, but ICE contends that “[t]his settlement involves an unsuccessful attempt to access our network more than three years ago…The failed incursion had zero impact on market operations. At issue was the time frame for reporting this type of event under Regulation SCI.”

In short, the SEC maintains that it should have been notified immediately, while ICE contends that the incident was not material and did not rise to the level of significance that, in ICE’s view, obligated it to notify the SEC “immediately.”

A settlement does not indicate fault. The lesson here is that the SEC takes a conservative approach to reporting obligations and will use its muscle if reporting is not provided in what it deems is a timely manner.

On May 9, 2024, Governor Wes Moore signed the Maryland Online Data Privacy Act (MODPA) into law. MODPA applies to any person who conducts business in Maryland or provides products or services targeted to Maryland residents and, during the preceding calendar year:

  1. Controlled or processed the personal data of at least 35,000 consumers (excluding personal data solely for the purpose of completing a payment transaction); or
  2. Controlled or processed the personal data of at least 10,000 consumers and derived more than 20 percent of its gross revenue from the sale of personal data.
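
The two applicability prongs above can be read as a simple boolean test. The sketch below is a hypothetical illustration of those thresholds as described, with invented names, and is not legal advice:

```python
def modpa_applies(consumers_processed: int,
                  payment_only_consumers: int,
                  pct_revenue_from_data_sales: float) -> bool:
    """Hypothetical check of MODPA's two applicability prongs."""
    # Prong 1: at least 35,000 consumers, excluding those whose data is
    # processed solely to complete a payment transaction.
    prong_1 = (consumers_processed - payment_only_consumers) >= 35_000
    # Prong 2: at least 10,000 consumers AND more than 20 percent of
    # gross revenue derived from the sale of personal data.
    prong_2 = (consumers_processed >= 10_000
               and pct_revenue_from_data_sales > 20.0)
    return prong_1 or prong_2
```

Note that the payment-transaction exclusion attaches to the volume prong as the bill is worded; a business processing 40,000 consumers’ data, 10,000 of them for payments only, would fall short of prong 1.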

MODPA does not apply to financial institutions subject to the Gramm-Leach-Bliley Act or to registered national securities associations. It also contains exemptions for entities governed by HIPAA.

Under MODPA, consumers have the right to access their data, correct inaccuracies, request deletion, obtain a list of those who have received their personal data, and opt out of processing for targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions. The data controller must provide this information to the consumer free of charge once during any 12-month period unless the requests are excessive, in which case the controller may charge a reasonable fee or decline to honor the request.

MODPA prohibits a controller – defined as “a person who determines the purpose and means of processing personal data” – from selling “sensitive data.” MODPA defines “sensitive data” to include genetic or biometric data, children’s personal data, and precise geolocation data. “Sensitive data” also means personal data that includes data revealing a consumer’s:

  • Racial or ethnic origin
  • Religious beliefs
  • Consumer health data
  • Sex life
  • Sexual orientation
  • Status as transgender or nonbinary
  • National origin, or
  • Citizenship or immigration status

A MODPA controller also may not process “personal data” in ways that violate discrimination laws. Under MODPA, “personal data” is any information that is linked or can be reasonably linked to an identified or identifiable consumer, but not de-identified data or “publicly available information.” However, MODPA contains an exception if the processing is for (1) self-testing to prevent or mitigate unlawful discrimination; (2) diversifying an applicant, participant, or customer pool; or (3) a private club or group not open to the public.

MODPA has a data minimization requirement as well. Controllers must limit the collection of personal data to that which is reasonably necessary and proportionate to provide or maintain the specific product or service the consumer requested.

A violation of MODPA constitutes an unfair, abusive, or deceptive (UDAP) trade practice, which the Maryland Attorney General can prosecute. Violations carry a civil penalty of up to $10,000 each, rising to $25,000 for each repeated violation. Additionally, a person committing a UDAP violation is guilty of a misdemeanor, punishable by a fine of up to $1,000, imprisonment of up to one year, or both. MODPA does not allow a consumer to pursue a UDAP claim for a MODPA violation, although it also does not prevent a consumer from pursuing any other legal remedies. MODPA takes effect October 1, 2025, with enforcement beginning April 1, 2026.

Anthropic has achieved a major milestone by identifying how millions of concepts are represented within their large language model Claude Sonnet, using a process somewhat akin to a CAT scan. This is the first time researchers have gained a detailed look inside a modern, production-grade AI system.

Previous attempts to understand model representations were limited to finding patterns of neuron activations corresponding to basic concepts like text formats or programming syntax. However, Anthropic has now uncovered high-level abstract features in Claude spanning a vast range of concepts – from cities and people to scientific fields, programming elements, and even abstract ideas like gender bias, secrets, and inner ethical conflicts.

Remarkably, they can even manipulate these features to change how the model behaves and force certain types of hallucinations. Amplifying the “Golden Gate Bridge” feature caused Claude to believe it was the Golden Gate Bridge when asked about its physical form (Claude normally responds with a variation on, “I have no form, I am an AI model.”) Intensifying the “scam email” feature overcame Claude’s training to avoid harmful outputs, making it suggest formats for scam emails.

Other features corresponded to malicious behavior or content with the potential for misuse, including code backdoors and bioweapons, as well as problematic behaviors like bias, manipulation, and deception. Normally, these features activate when the user asks Claude to “think” about one of these concepts, and Claude’s ethical guardrails keep it from drawing on them when generating content. The amplification experiments confirm that these features don’t just register concepts in user input; they directly shape the model’s responses. That also points to exactly the kind of malicious capability that hackers and other unauthorized users will undoubtedly try to exploit in pirated models.
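
Anthropic’s actual method relies on sparse autoencoders trained on the model’s internal activations, and those internals are not public. But the core steering move described above – projecting an activation onto a learned feature direction and rescaling that component – can be illustrated with a toy dictionary of random unit vectors. Everything below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature dictionary": each row is a unit direction in activation
# space standing in for one learned feature (e.g. "Golden Gate Bridge").
n_features, dim = 8, 16
dictionary = rng.normal(size=(n_features, dim))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def decompose(activation):
    """Project an activation onto every feature direction (a crude
    stand-in for the sparse-autoencoder encoding step)."""
    return dictionary @ activation

def steer(activation, feature_idx, scale):
    """Amplify one feature: rescale the activation's component along
    that feature's direction, leaving the rest of the vector alone."""
    direction = dictionary[feature_idx]
    coeff = direction @ activation
    return activation + (scale - 1.0) * coeff * direction

act = rng.normal(size=dim)          # a made-up "residual stream" vector
boosted = steer(act, feature_idx=3, scale=10.0)
```

Because each dictionary row is unit-length, the steered vector’s coefficient on feature 3 is exactly 10x the original, which is the toy analogue of “clamping the Golden Gate Bridge feature high.”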

While much work remains to fully map these large models, Anthropic’s breakthrough seems like an extremely promising step forward in the burgeoning field of AI auditing. And, given that researchers were able to directly tweak the features to influence Claude’s output, this research may also open the door to the sort of under-the-hood tinkering that has eluded generative AI developers for years. Of course, it may also open the door to direct, feature-level regulation as well as creative plaintiffs’ arguments as the standard of care for AI developers takes shape.

Read the full blog post from Anthropic here.