The U.S. Transportation Security Administration (TSA) issued its second Security Directive to the pipeline industry on July 20, 2021, following the Colonial Pipeline cybersecurity incident. The first Directive, issued on May 27, 2021, required pipeline owners and operators to notify the Cybersecurity and Infrastructure Security Agency (CISA) of cyber incidents, designate a cyber coordinator for the company, and review their cybersecurity programs.

According to TSA, the second Directive “requires owners and operators of TSA-designated critical pipelines that transport hazardous liquids and natural gas to implement a number of urgently needed protections against cyber intrusions.”

TSA further stated, “[T]his Security Directive requires owners and operators of TSA-designated critical pipelines to implement specific mitigation measures to protect against ransomware attacks and other known threats to information technology and operational technology systems, develop and implement a cybersecurity contingency and recovery plan, and conduct a cybersecurity architecture design review.”

On July 19, 2021, the Federal Bureau of Investigation (FBI) issued a Private Industry Notification warning service providers and entities associated with the Tokyo 2020 Summer Olympics that “cyber actors who wish to disrupt the event could use distributed denial of service (DDoS) attacks, ransomware, social engineering, phishing campaigns, or insider threats to block or disrupt live broadcasts of the event, steal and possibly hack and leak or hold hostage sensitive data, or impact public or private digital infrastructure supporting the Olympics.”

According to the Notification, “Malicious activity could disrupt multiple functions, including media broadcasting environments, hospitality, transit, ticketing, or security.”

The Notification points out that large events attract extra attention from cybercriminals and nation-state actors, citing the attacks during the 2018 PyeongChang Winter Olympics. Russian-based actors were indicted for intrusions during those Games, including one that disrupted the Opening Ceremony.

The FBI encourages “service providers and other relevant partners to maintain business continuity plans to minimize essential service interruptions, as well as preemptively evaluate potential continuity and capability gaps…the FBI encourages regularly monitoring networks and employing best practices.” The Notification then details those best practices.

Frankly, the best practices listed by the FBI are sound practices for all companies, not just those supporting the Tokyo Olympics.

This week, the Department of Homeland Security’s Inspector General said in an oversight report that U.S. Customs and Border Protection (CBP) officials have failed to use adequate cybersecurity measures and safeguards to protect travelers’ data. The report says that from July 2017 to December 2019, personal data was left vulnerable to hackers in the Mobile Passport Control (MPC) app used by over 10 million U.S. and Canadian citizens. Specifically, the agency did not conduct security and privacy reviews and assessments or implement protective hardware/software settings.

The report warns, “Unless CBP addresses these cybersecurity vulnerabilities, MPC apps and servers will remain vulnerable, placing travelers’ [personal information] at risk of exploitation.”

The Office of the Inspector General made the following eight recommendations, which the CBP agreed to implement:

1: Update policies and procedures to ensure CBP scans all app update versions prior to their release by developers.

2: Update policies and procedures to codify scan processes, define the roles and responsibilities necessary to ensure scans are completed as required, and review scan results for vulnerabilities.

3: Update the policies and procedures to include processes to conduct required security and privacy compliance reviews on a specific schedule and timeframe, track reviews completed, and centrally store review documentation.

4: Receive all necessary information from developers to complete an adequate privacy and security assessment.

5: Develop a capability to review access logs, define the periodic review time frame, and perform the required reviews according to the defined time frame.

6: Complete the required privacy evaluation review.

7: Update the policies and procedures to include a process to conduct internal audits and perform the required audits.

8: Adhere to DHS policy and fully implement the Defense Information Systems Agency Security Technical Implementation Guide control categories for the servers supporting the MPC program, request waivers as appropriate, or fully document any exception obtained when deviating from policy requirements.

View the full report here.

This week, a North Carolina federal judge denied Filters Fast LLC’s motion to dismiss a proposed data breach class action, ruling that the plaintiffs alleged sufficient harm to satisfy Article III standing.

The class action stems from a data breach that occurred between July 2019 and July 2020 through Filters Fast’s shopping website. Plaintiffs claim that the breach occurred as a result of Filters Fast’s negligence.

Filters Fast moved to dismiss the proposed class action, arguing that the customers could not establish standing to sue in federal court because they did not assert any concrete harm. However, the court sided with the plaintiffs, who had alleged misuse of their payment cards as a result of the breach. While the plaintiffs did not allege an economic injury, the court said in its decision, they did sufficiently allege misuse of their personal data.

The court wrote, “These allegations of actual misuse bring the ‘actual and threatened harm’ alleged by Plaintiffs ‘out of the realm of speculation and into the realm of sufficiently imminent and particularized harm.’” This is a lower standard than that applied in some other data breach class actions currently being litigated, so we will watch to see how the case proceeds.

In addition to facing this proposed class action, Filters Fast entered into a $200,000 settlement agreement with the New York Attorney General to resolve the state’s investigation of the same breach.

An Urban Air Mobility (UAM) company, Wisk, announced its new partnership with NASA to assist with safely integrating autonomous aircraft systems at a national level. Wisk joins NASA’s Advanced Air Mobility National Campaign to assist with the preparation and development of guidance for UAM operations. Wisk aims to help NASA address some of the biggest challenges, such as certification and standards development. Without industry stakeholders as active participants in this process, national expansion and implementation of this automated aviation technology could stall.

The first goal of this partnership is to address critical National Campaign safety scenarios. This will include autonomous flight and contingency management, collision avoidance and flight path management. Additionally, NASA and Wisk aim to evaluate architectures, perform simulation studies, and develop a validation framework that others can use for assessments of autonomous flight. In order to build a safe, effective, and efficient system, NASA and Wisk will work with industry standards organizations for guidance on airspace structure, flight procedures, minimum performance requirements, and other standards that may influence the future of autonomous systems. Be on the lookout for these guidelines and standards.

This is not the first post discussing location-based services on mobile phones [see posts here]. And it won’t be the last. After reading my colleague’s post on the priest who resigned from his high-profile position after his location was tied to Grindr, I thought it would be useful to remind readers to think about that privacy setting a bit more.

In short, when you download an app, its Privacy Policy will tell you what types of data the app collects from your phone. When you click “I agree” after downloading the app, you have agreed to everything the app developer said it would collect in the Privacy Policy. This could include access to your microphone, camera, movement, contacts, photos, and location. The app could literally be tracking everything you do.

Unfortunately, many people don’t understand how location services can be used and disclosed. If the app’s Privacy Policy says it will collect your location when your location services are on, and also says it will sell and disclose that information to others, and you agree, that is exactly what the app developer will do. The information is no longer private, and the developer can use and disclose it to others freely (and legally) because you consented to the collection and use of your location-based data.

Tips for the week with location-based services:

  • understand which apps are tracking your location and how they are tracking it (read the description under “Location Alerts” in Privacy Settings under Location Services);
  • consider allowing your location to be tracked only when you are using a specific app;
  • turn location services off when you are not using specific apps or after using an app;
  • check Privacy Settings frequently to see which apps have access to location (and other) services, and reset them as needed;
  • read the Privacy Policies of apps you have already downloaded or are about to download to see what data they collect from you and how they use and disclose it to others;
  • read the disclaimers when they pop up to ask for specific consent, and make an educated decision on whether to allow access to and collection of your data;
  • make an educated decision on whether to allow others to have access to your location by reading and understanding the “Share My Location” section of Location Services under Privacy Settings; and
  • delete any apps whose Privacy Policies you are not comfortable with.

As the unfortunate situation with the priest who resigned after reportedly being associated with Grindr based on location services shows, people are often surprised to find out how their location is tracked and used. Now is the time to re-check your privacy settings and reset them as necessary.

Location data marks the longitude/latitude position of a smartphone or other device at a particular time, or over a period of time. It works like this: each day our device, which has a unique identifier or ID, uses or connects to multiple external location signals, like GPS, Wi-Fi, Bluetooth, and cell towers. Each signal, combined with the identifier, permits you to plot the location of the device at a particular time, and the movement of the device over time. Carriers, private companies, and apps collect users’ location data, usually automatically and often even when you aren’t using the app. By monitoring these external signals, you can literally track a device’s physical location over the course of a day, from a home to an office, to the grocery store, to the gym, to the beach. As you use your device to look up information, data is collected that flags your interests, such as vacation spots, new mattress models, restaurants, etc.
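To make the mechanics concrete, here is a minimal sketch of how a day’s movement can be reconstructed from location pings. The record layout (device ID, timestamp, latitude, longitude) and the sample data are illustrative assumptions, not any carrier’s or broker’s actual schema:

```python
# A minimal sketch of reconstructing a day's movement from location pings.
# The data and record layout here are hypothetical, for illustration only.
from datetime import datetime

# Each ping ties a device's unique ID to a timestamped coordinate, whether
# the signal came from GPS, Wi-Fi, Bluetooth, or a cell tower.
pings = [
    ("device-7f3a", "2021-07-20T18:15:00", 41.8199, -71.4203),  # gym
    ("device-7f3a", "2021-07-20T07:05:00", 41.8240, -71.4128),  # home
    ("device-7f3a", "2021-07-20T12:31:00", 41.8301, -71.3912),  # grocery store
    ("device-7f3a", "2021-07-20T09:02:00", 41.8268, -71.4029),  # office
]

def movement_track(records, device_id):
    """Filter one device's pings and sort them in time order: its path for the day."""
    track = [r for r in records if r[0] == device_id]
    return sorted(track, key=lambda r: datetime.fromisoformat(r[1]))

for _, timestamp, lat, lon in movement_track(pings, "device-7f3a"):
    print(f"{timestamp}: ({lat}, {lon})")
```

Run against a real feed of pings, those few lines of logic trace home, office, errands, and gym visits in order; no name is needed, only the stable device ID.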

Your location data is then sold to aggregators, advertisers, and marketers, sometimes in real time and usually without your express consent. Advertisers then use the location data to target relevant ads to your device. Ever wonder why a special offer for airline fare pops up in your social media app while you are looking up hotels in Hawaii? Law enforcement and government agencies are also interested in location data because it can be used to place a suspect near a crime scene. Using location data, they can determine whether a particular device owned by the suspect was used to make a phone call near a particular cell tower at a particular time. Given this value and interest, it is no surprise that the location data market continues to grow. Many data brokers, aggregators, and marketing companies are profiting from these currently legal transactions, which are based on our tracked movements and activities as we go about our day. A 2019 New York Times piece offers an interesting visual view of location data.

The purveyors of this widely available location data claim it is anonymized. By that they mean that, while ads are delivered to your device and your apps based on your location data, the advertisers don’t know your name. While the data usually doesn’t include your name or phone number, it can contain other information, such as your gender, your age, and your unique device ID. It is also very easy to combine location data with other purchased or acquired data, such as real estate records or office location, which can permit the identification of individuals by name. There are many examples where location data has been used against specific individuals.
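Here is a minimal sketch of how that combination works, using hypothetical datasets and a deliberately simplified matching rule (real re-identification operates at far larger scale and with fuzzier matching):

```python
# A simplified illustration of re-identifying "anonymized" location data by
# joining it with public records. All data here is hypothetical.

# "Anonymized" broker data: no name, but a stable device ID and the
# coordinate where the device sits overnight (very likely the owner's home).
overnight_locations = {
    "device-7f3a": (41.8240, -71.4128),
}

# Publicly available real estate records mapping coordinates to owners.
property_records = {
    (41.8240, -71.4128): "J. Smith, 12 Elm St.",
}

# A simple join on coordinates attaches a name to the "anonymous" device.
for device_id, coords in overnight_locations.items():
    owner = property_records.get(coords)
    if owner:
        print(f"{device_id} likely belongs to {owner}")
```

Once the device ID is tied to a name, every other ping attributed to that ID, including visits to a doctor, a church, or a bar, is tied to that person as well.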

The most recent example involves a Catholic priest who was confronted with location data showing use of the gay social “hook-up” app Grindr almost daily over multiple years from locations near his office and his work-owned home, as well as trips to gay bars in other cities during timeframes when he was known to have been there for work events. After being confronted, the priest resigned his high-profile position. Some of the details are still murky as to how the data was acquired and tied to a specific person. Nonetheless, this story is likely to fuel further concerns about the collection, sharing, and sale of location data.

Ransomware attacks are frequent and escalating as we speak. Double-extortion scams are hitting at a dizzying pace, catching companies large and small off guard. U.S. President Joseph Biden warned Russian President Vladimir Putin to knock it off during their first summit [view related post]. Nonetheless, and not surprisingly, the attacks continue, particularly out of Russia.

The White House announced today that it is intent on combating ransomware and has appointed a task force specifically to address ransomware threats to government agencies and private businesses. One of the goals of the task force will be to determine how to choke off ransomware threat actors’ access to their cryptocurrency. According to a senior White House official, “The exploitation of virtual currency to launder ransomware proceeds is without question, facilitating ransomware…[T]here’s inadequate international regulation of virtual currency activity which is a key factor in how cybercriminals are able to launder their funds, demand ransomware payments, and fuel sophisticated cybercrime as a service business model.”

The Treasury Department will take the lead on developing money laundering requirements for virtual currency exchanges and will develop a public-private partnership that will share information to combat the use of cryptocurrency for money laundering purposes.

The White House’s ransomware task force will also focus on information sharing between public and private enterprises to assist with resilience against ransomware and require mandatory reporting of ransomware incidents and payments.

Ransomware continues to cripple our government, national security, and private businesses. We look forward to following the task force’s efforts and, hopefully, seeing some positive results.

While smart toys can be useful educational tools for children, they also present some potential privacy risks and could invade what is traditionally considered a private space. Think about it—the thought of your child’s toy listening in on your family 24/7/365 is disturbing. So how do we balance these risks with the benefits?

Smart toys made with artificial intelligence (AI) capabilities can collect different forms of data from children. For example, an AI-enabled toy may collect data on how quickly your child constructs a shape on the device so that it can personalize lessons, or a doll may learn your child’s favorite color or song so that it can “converse” with your child during playtime.

Concerns about AI toys vary based on the type of toy and its data-collection capabilities. Generally, most of these AI-enabled toys learn from children and provide adaptive, responsive play. Within this category there are two subcategories: smart companions (i.e., toys that “learn” from their interaction with the child) and programmable toys (i.e., toys designed with machine learning to assist children in educational learning by moving and performing tasks). While the Children’s Online Privacy Protection Act (COPPA) protects children’s privacy and the data collected from minors on the internet and through mobile applications by requiring prior express written consent from a parent or guardian, new smart devices hitting the market are not necessarily complying with COPPA, according to the Federal Trade Commission (FTC).

Alan Butler of the Electronic Privacy Information Center (EPIC) said, “For any new device coming onto the market, if it’s not complying with COPPA, then it’s breaking the law. There’s a lot of toys on the market [using AI] and there’s a need to ensure that they’re all complying with COPPA.” One of the problems is that there is no pre-clearance review of toys before they are sold to consumers. While the FTC continues to enforce COPPA, as it historically has, it is difficult for the agency to stay ahead of privacy issues when toys are manufactured outside of the U.S. With a pre-clearance process in place, issues like invasion of privacy and collection of data from children without consent could be addressed before a toy ends up in a child’s playroom.

Whether we like it or not, smart toys and AI capabilities will only continue to grow. AI can in fact be helpful and effective in aiding children’s learning and experiences. However, we may need to examine this trend now (and the legislation related to these smart toys) to stay ahead of some of the big issues that could arise if this space is not adequately regulated and monitored.

With the signature of Governor Jared Polis last week on the Colorado Privacy Act, Colorado became the third state (following California and Virginia) to adopt a comprehensive consumer privacy law.

We will provide a more comprehensive summary of the new Virginia and Colorado laws in the coming weeks, but for now, the highlights of the Colorado law, similar to provisions in the California Consumer Privacy Act, include the rights for consumers to access their data, delete the information, and opt out of the sale of their data. The law also requires some companies to disclose why they are collecting data and how they use it, and to minimize their use of personal data.

The law gives the Attorney General regulatory enforcement jurisdiction. It does not provide a private right of action for consumers to assert in the event of a violation.

The Colorado consumer privacy law goes into effect on July 1, 2023. Stay tuned and we will provide more details.