I hang out with a lot of Chief Information Security Officers (CISOs), so this piece is for them. Of course, it will also be of interest to any security professional struggling to assess the risk of large language models (LLMs).

According to DarkReading, Berryville Institute of Machine Learning (BIML) recently issued a report entitled “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” which is designed “to provide CISOs and other security practitioners with a way of thinking about the risks posed by machine learning and artificial intelligence (AI) models, especially LLMs and the next-generation large multimodal models so they can identify those risks in their own applications.”

The core issue addressed in the report is that users of LLMs do not know how developers collected and validated the data used to train the models. BIML found that the “lack of visibility into how artificial intelligence (AI) makes decisions is the root cause of more than a quarter of risks posed by LLMs….”

According to BIML, risk decisions are being made by large LLM developers “on your behalf without you even knowing what the risks are…We think that it would be very helpful to open up the black box and answer some questions.”

The report concludes that “[s]ecuring a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This architectural risk analysis is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.”

CISOs and security professionals may wish to dive into the report by requesting a download from BIML. The 28-pager is full of ideas.

Last week, California Attorney General Rob Bonta announced a new enforcement focus on streaming apps’ failure to comply with the California Consumer Privacy Act (CCPA). The investigation will examine whether streaming services that sell or share consumers’ personal information are honoring the CCPA’s opt-out requirements, and specifically whether those services offer consumers an easy mechanism to exercise that opt-out right.

Attorney General Bonta said that he “urge[s] consumers to learn about and exercise their rights under the [CCPA], especially the right to tell these businesses to stop selling their personal information.” He also warned that the agency will be “taking a close look at how these streaming services are complying with requirements that have been in place since 2020.”

Under the CCPA’s right to opt out, companies that sell or share personal information for targeted advertising purposes must give consumers the ability to opt out of such sales or sharing. Not only must the opt-out be available, but exercising the right must be easy and involve minimal steps. The agency provided an example: on your smart TV, you should be able to enable a “Do Not Sell My Personal Information” setting in a streaming service’s app. Further, once the opt-out request has been submitted, you should not have to opt out again on other devices where you are logged into your account. Lastly, a streaming service’s privacy policy should be easily accessible to the consumer and include details on individual CCPA rights. Letters of non-compliance are forthcoming.
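For engineering teams on the receiving end of those requirements, a minimal sketch may help. The Python below (all names hypothetical, with an in-memory stand-in for a real account database) illustrates the account-level pattern the agency describes: record the opt-out once against the account, and have every ad or data-sharing pipeline consult that flag rather than per-device state.

```python
# Hypothetical sketch of an account-level CCPA opt-out. PreferenceStore is an
# in-memory stand-in for the service's real account database.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    do_not_sell_or_share: bool = False  # the CCPA opt-out flag

class PreferenceStore:
    def __init__(self) -> None:
        self._accounts: dict[str, Account] = {}

    def get(self, user_id: str) -> Account:
        return self._accounts.setdefault(user_id, Account(user_id))

    def opt_out(self, user_id: str) -> None:
        # Recorded once against the account, not against a single device.
        self.get(user_id).do_not_sell_or_share = True

def may_share_for_ads(store: PreferenceStore, user_id: str) -> bool:
    # Every ad/data-sharing pipeline checks the account flag.
    return not store.get(user_id).do_not_sell_or_share

store = PreferenceStore()
store.opt_out("viewer-123")  # submitted once, e.g., from the smart TV app
# The same flag is consulted for the phone app, the web app, and so on.
assert may_share_for_ads(store, "viewer-123") is False
```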

Mercedes-Benz reportedly suffered a security incident that exposed confidential source code on an Enterprise Git server. The incident occurred when a GitHub authentication token was inadvertently exposed by an employee. Although the exposure occurred on September 29, 2023, it wasn’t discovered until January 11, 2024. A cybersecurity firm discovered the token during an internet scan and informed Mercedes-Benz, which quickly revoked it.

The exposure of proprietary source code can be a nightmare. The worst-case scenario is when malicious code is injected into an application and then shipped to customers, as happened in the SolarWinds supply chain attack.

This incident underscores the importance of embedding security into the development process to prevent leakage of data and intellectual property. Reviewing the processes developers use is essential to minimizing the risk of inadvertent disclosure of confidential company information.
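One common safeguard, offered here only as an illustrative sketch rather than a description of Mercedes-Benz’s tooling, is a pre-commit hook that scans staged files for token-like strings before they ever leave a developer’s machine. The patterns below are examples; purpose-built scanners such as gitleaks or git-secrets are far more thorough.

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook (save as .git/hooks/pre-commit and make it
# executable): block commits whose staged files contain token-like strings.
# The patterns are examples, not an exhaustive secret-detection ruleset.
import re
import subprocess
import sys

TOKEN_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                 # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # private key material
]

def staged_files() -> list[str]:
    # Only files added, copied, or modified in this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue  # unreadable path; skip rather than crash the hook
        for pattern in TOKEN_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern}")
    if findings:
        print("Possible secrets staged for commit:")
        print("\n".join(findings))
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```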

On January 29, 2024, the Italian Data Protection Authority (Garante) notified OpenAI of breaches of data protection laws involving its ChatGPT platform.

In March 2023, Garante temporarily banned OpenAI from processing data. Following its investigation, Garante “concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR.”

According to Garante’s website, “OpenAI may submit its counterclaims concerning the alleged breaches within 30 days.” The case may have widespread implications for the popular generative AI tool, so we will be watching this one closely.

On the morning of the New Hampshire primary, a robocall was launched spoofing a New Hampshire Democrat’s cell phone number, with a deepfake of President Joe Biden telling voters not to vote in the primary but instead to vote in November.

The New Hampshire Attorney General is investigating what it is calling an “unlawful attempt at voter suppression” and is warning consumers that the message was “artificially generated” and should be disregarded. The New Hampshire Secretary of State said the calls “reinforce a national concern about the effect of artificial intelligence on campaigns.”

Fake recordings in Slovakia’s elections last year and the fake robocall impersonating President Biden show that AI-generated deepfakes will be deployed during elections, a proliferation that worries misinformation researchers. According to panelists from the University of Washington’s Center for an Informed Public, which studies the spread of strategic misinformation, “When multiple pieces of fake content related to the same subject are pushed out, it can create a more believable narrative.”

The panelists noted that deepfakes and AI-generated content quality will improve and get harder to detect, and “educating the general public about how to decipher authentic information from fake content will be a challenge.”

Spotting deepfakes is like spotting a phishing email: most people think they can, but a study published in iScience, “Fooled Twice: People Cannot Detect Deepfakes but Think They Can,” shows that the majority cannot. The highlights of the study:

  • “People cannot reliably detect deepfakes;”
  • “Raising awareness and financial incentives do not improve people’s detection accuracy;”
  • “People tend to mistake deepfakes as authentic videos (rather than vice versa);”
  • “People overestimate their own deepfake detection abilities.”

It’s not great news in an election year.

Science, in its “How to Spot a Deepfake – and Prevent It from Causing Political Chaos,” spoke with researchers and experts about the dangers of deepfakes. Science noted that “deepfakes are cheaper and easier to produce than ever, and we’re likely to see many more during the election season.”

The key is to educate people about the existence of deepfakes and to teach them how to spot one. Science offers some tips for doing just that.

We all need to be ready to receive, spot, and stop deepfakes.

Mozilla recently released security updates to address known vulnerabilities in its Thunderbird and Firefox products. The Cybersecurity & Infrastructure Security Agency (CISA) is recommending that the patches be applied because “a cyber threat actor could exploit one of these vulnerabilities to take control of an affected system.”

The updates to the Thunderbird product are designed to fix three high-impact and seven medium-impact vulnerabilities that would allow an attacker to “corrupt memory leading to a potentially exploitable crash…a bug in popup notifications delay calculation could have made it possible for an attacker to trick a user into granting permissions…a malicious devtools extension could have been used to escalate privileges,” and memory corruption “could have been exploited to run arbitrary code.”

The updates to the Firefox ESR product fix three high-impact and seven medium-impact vulnerabilities similar to those outlined above, and the updates to the Firefox 122 product fix six high-impact and ten medium-impact vulnerabilities.

All of these vulnerabilities, if exploited, could cause disruption to business units, so following the recommendations of Mozilla and CISA would be prudent.
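For administrators who want a quick audit, here is a small Python sketch that compares a locally installed Firefox against a minimum patched version. The threshold below (122.0, per the Firefox 122 release discussed above) is an assumption to verify against Mozilla’s current advisory, and the sketch assumes the firefox binary is on the PATH.

```python
# Quick audit sketch: is the locally installed Firefox at or above the
# patched release? Adjust MINIMUM_PATCHED per Mozilla's current advisory.
import re
import subprocess

MINIMUM_PATCHED = (122, 0)  # assumed threshold; verify against the advisory

def installed_firefox_version() -> tuple[int, ...]:
    # Typical output: "Mozilla Firefox 122.0.1"
    out = subprocess.run(
        ["firefox", "--version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)+)", out)
    if not match:
        raise RuntimeError(f"Could not parse a version from: {out!r}")
    return tuple(int(part) for part in match.group(1).split("."))

version = installed_firefox_version()
label = ".".join(map(str, version))
if version < MINIMUM_PATCHED:
    print(f"Firefox {label} is below the patched release -- update now.")
else:
    print(f"Firefox {label} is at or above the patched release.")
```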

In a matter of weeks, the Federal Trade Commission (FTC) has settled another case against a company it alleges tracks consumers and sells their “precise location data” to third parties. This continues the FTC’s aggressive approach toward location-based consumer data.

According to the FTC’s complaint, Texas-based InMarket offered two apps to consumers: the shopping rewards app CheckPoints and the shopping list app ListEase. Per the FTC’s press release, the complaint alleges that when InMarket requested consent to use a consumer’s location data, it told the consumer that it was only using the data “for the app’s function, such as to provide shopping reward points or to remind consumers about items on their shopping list.” The FTC alleges that InMarket “fail[ed] to inform users that the location data will also be combined with other data obtained about those users and used for targeted advertising.”

Frankly, I don’t understand why my location would need to be shared to provide me with points or remind me what’s on my list. If I received that popup, I would think twice about its transparency and accuracy. At any rate, other consumers allowed access to precise location data for this alleged purpose, and the FTC intervened on behalf of consumers to stop the practice. According to the FTC, InMarket was combining precise location data with other data to profile consumers and then categorize them as “parents of preschoolers,” “Christian church goers,” and “wealthy and not healthy.” Ouch.

The settlement prohibits InMarket from selling or licensing any precise location data and from “selling, licensing, transferring or sharing any product or service that categorizes or targets consumers based on sensitive location data.” If this settlement doesn’t tell you that the FTC has location-based services on its radar, nothing will. The clear messages from this settlement are: 1) if you are a business that collects and uses consumers’ precise location data, transparency about why you are collecting that data and how you are using it is critical; 2) be mindful of the FTC’s message that “firms do not have free license to monetize data tracking people’s precise location”; and 3) read the popups and consider how your data will be used before clicking “I agree.” If the collection and use don’t make sense, consider not downloading the app and finding a better alternative.

Last week, the California Privacy Protection Agency (CPPA) launched a new website dedicated to providing resources to California residents about their privacy rights under the California Consumer Privacy Act (CCPA). The purpose of this new website is to serve as a central resource for residents to understand their rights and the actions that they can take related to a variety of privacy issues.

The website includes information on residents’ rights under the CCPA and how to submit a complaint against a business. On the flip side, it also offers resources to help businesses understand their obligations under the CCPA.

Other resources include guidance on what to do if you are a victim of a data breach or identity theft, as well as resources on financial privacy, children’s privacy, and civil rights violations.

The CPPA intends to update the website frequently with additional resources for residents and make it as helpful as possible to users. While the website is geared towards educating the consumer population, businesses should also check out its content so that they can understand what may be most important to the agency that enforces violations of the CCPA.

Last week, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) released Cybersecurity Guidance: Chinese-Manufactured Unmanned Aircraft Systems (UAS), which outlines the risks and threats posed by Chinese-manufactured unmanned aerial systems (UAS or drones) and provides cybersecurity safeguards to reduce these risks to networks and sensitive data.

The biggest issue: the People’s Republic of China has enacted laws that give its government a variety of legal grounds to access data collected by Chinese businesses. Chinese-manufactured drones used in critical infrastructure operations therefore risk exposing sensitive operational information to the Chinese government. The CISA/FBI guidance provides the following mitigation recommendations:

  • PLAN/DESIGN: Ensure secure, organization-wide development of the goals, policies, and procedures for the UAS program.
  • PROCURE: Identify and select the UAS platforms that best meet the operational and security requirements of the organization.
  • MAINTAIN: Perform regular updates, analysis, and training in accordance with the organization’s plans and procedures.
  • OPERATE: Ensure proper operational and security policies are followed during operational usage.

While the guidance offers cyber safeguards and recommendations, critical infrastructure organizations are encouraged to use drones that are secure by design and manufactured by U.S. companies.

Last week, a criminal complaint was filed in federal district court in California charging Andrew Hernandez with the unsafe operation of an unmanned aircraft (or drone). The allegations include hindering a police investigation of a burglary at a pharmacy. While a Los Angeles Police Department (LAPD) helicopter was heading to the scene, it tried to avoid Hernandez’s drone but could not get clear of its flight path, and the drone struck the bottom of the helicopter. The helicopter had to make an emergency landing and suffered damage to its nose, antenna, and bottom cowlings.

A LAPD officer interviewed a nearby witness who indicated that Hernandez frequently flew his drone near the pharmacy. Portions of the drone were found around the pharmacy, including a piece containing the serial number. A warrant was issued to search the drone’s camera and SD card; among the photos was a picture of Hernandez holding a drone controller.

When police interviewed Hernandez, he told them that he was curious about the helicopter noise, so he flew his drone to see what was happening. He also stated that it was hard to see in the darkness, but that he suddenly saw his drone being ‘smacked’ by the helicopter.

18 U.S.C. § 39B(a)(2) makes it a crime when a person operating a drone “recklessly interferes with, or disrupts the operation of, an aircraft carrying one or more occupants operating in the special aircraft jurisdiction of the United States, in a manner that poses an imminent safety hazard to such occupants[.]” A violation is punishable by a fine and/or imprisonment for not more than one year; however, if the person causes serious bodily injury or death during the commission of the offense, the penalty rises to a fine and/or imprisonment for up to 10 years.

This case should serve as a warning to drone pilots everywhere. And remember: the images and video stored on your drone can be used against you in a case like this. Operate and record wisely.