This is not the first post discussing location-based services on mobile phones [see posts here]. And it won’t be the last. After reading my colleague’s post on the priest who resigned from his high-profile position after his location was tied to Grindr, I thought it would be useful to remind readers to take a closer look at their location privacy settings.

In short, when you download an app, its Privacy Policy tells you what type of data the app collects from your phone. When you click “I agree” after downloading the app, you have just agreed to everything the app developer said it would collect in the Privacy Policy. This could include access to your microphone, camera, movement, contacts, photos and location. The app could be tracking virtually everything you do.

Unfortunately, many people don’t understand how location data can be used and disclosed. If an app’s Privacy Policy says it will collect your location when location services are on, and that it will sell and disclose that information to others, and you agree, that is exactly what will happen. The information is no longer private, and the app developer can use and disclose it to others freely (and legally) because you consented to the collection and use of the location-based data.

Tips for the week with location-based services:

  • understand which apps are tracking your location and how they are tracking it (read the description under “Location Alerts” in Privacy Settings under Location Services);
  • consider allowing your location to be tracked only while you are using a specific app;
  • turn location services off when you are not using specific apps or after using an app;
  • check Privacy Settings frequently to see which apps have access to location (and other) services, and reset them as needed;
  • read the Privacy Policies of apps you have already downloaded or are about to download to see what data they collect from you and how they use and disclose it to others;
  • read the disclaimers when they pop up to ask for specific consent, and make an educated decision on whether to allow access to and collection of your data;
  • make an educated decision on whether you will allow others to have access to your location by reading and understanding the “Share My Location” section of Location Services under Privacy Settings; and
  • delete any apps whose Privacy Policies you are not comfortable with.

As the unfortunate situation of the priest who resigned from his position after reportedly being associated with Grindr through location data shows, people are often surprised to find out how their location is tracked and used. Now is the time to re-check your privacy settings and reset them as necessary.

Location data is data that marks the longitude and latitude of a smartphone or other device at a particular time, or over a period of time. It works like this: each day our device, which has a unique identifier or ID, uses or connects to multiple location signals, like GPS, Wi-Fi, Bluetooth, cell towers or other external location signals. Each location signal, combined with the identifier, permits plotting the location of the device at a particular time and the movement of the device over time. Carriers, private companies and apps collect users’ location data, usually automatically and often even when you aren’t using the app. By monitoring these external location signals, a device’s physical location can be tracked over the course of a day, from a home to an office, to the grocery store, to the gym, to the beach. As you use your device to look up information, data is collected that flags your interests, such as vacation spots, new mattress models, restaurants, etc.
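To make the mechanics concrete, here is a minimal sketch in Python of how location “pings” tied to a device ID can be assembled into a movement trail. The device ID, timestamps and coordinates below are hypothetical, and commercial systems operate at vastly larger scale, but the basic join of identifier, time and place is the same.

```python
from collections import defaultdict

# Hypothetical location "pings": (device_id, unix_timestamp, latitude, longitude).
# In practice these would come from GPS fixes, Wi-Fi/Bluetooth beacons or cell towers.
pings = [
    ("device-123", 1626454800, 41.8270, -71.4030),  # midday, near an office
    ("device-123", 1626440400, 41.8240, -71.4128),  # morning, near a home
    ("device-123", 1626469200, 41.8190, -71.4100),  # evening, near a gym
]

# Group pings by device ID, then sort by timestamp to reconstruct a movement trail.
trails = defaultdict(list)
for device_id, ts, lat, lon in pings:
    trails[device_id].append((ts, lat, lon))

for device_id, trail in trails.items():
    trail.sort()  # chronological order
    print(device_id, "->", [(lat, lon) for _, lat, lon in trail])
```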

Your location data is then sold to aggregators, advertisers and marketers, sometimes in real time and usually without your express consent. Advertisers then use the location data to target relevant ads to your device. Ever wonder why a special offer on airline fares pops up in your social media app while you are looking up hotels in Hawaii? Law enforcement and government agencies are also interested in location data, as it can be used to put a suspect near a crime scene. Using location data, they can determine whether a particular device owned by the suspect was used to make a phone call near a particular cell tower at a particular time. Given this value and interest, it is no surprise that the location data market continues to grow. Many data brokers, aggregators and marketing companies are profiting from these currently legal transactions, which are based on our tracked movements and activities as we go about our day. The New York Times’ 2019 piece offers an interesting visual look at location data.

The purveyors of this widely available location data claim it is anonymized. By that they mean that while the ads are delivered to your device and your apps based on your location data, the advertisers don’t know your name. While the data usually doesn’t include your name or phone number, it can contain other information, such as your gender, your age and your unique device ID. It is also very easy to combine location data with other purchased or acquired data, such as real estate records or office location, which can permit the identification of individuals by name. There are many examples where location data has been used against specific individuals.
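As a simplified illustration of that re-identification step, consider joining “anonymized” overnight device locations against public property records. The data below is entirely hypothetical, and real-world matching relies on proximity rather than exact coordinates, but the principle holds:

```python
# Entirely hypothetical data: "anonymized" records keyed only by a device ID,
# joined to public property records by the location where the device sits overnight.
overnight_locations = {
    "device-123": (41.8240, -71.4128),
}
property_records = {
    (41.8240, -71.4128): "Jane Doe, 10 Elm St.",  # e.g., public real estate data
}

# Real-world matching would use distance thresholds, not exact equality,
# but the join itself is this simple once both datasets are in hand.
for device_id, spot in overnight_locations.items():
    owner = property_records.get(spot)
    if owner:
        print(f"{device_id} likely belongs to: {owner}")
```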

The most recent example involves a Catholic priest who was confronted with location data showing that the gay social “hook-up” app Grindr had been used almost daily over multiple years from locations near his office and his employer-owned home, as well as on trips to gay bars in other cities during timeframes when he was known to have been there for work events. After being confronted, the priest resigned his high-profile position. Some of the details remain murky as to how the data was acquired and tied to a specific person. Nonetheless, this story is likely to heighten concerns about the collection, sharing and sale of location data.

Ransomware attacks are frequent and escalating as we speak. Double extortion scams are hitting companies large and small at a dizzying pace, catching many off-guard. U.S. President Joseph Biden warned Russian President Vladimir Putin to knock it off during their first summit [view related post]. Nonetheless, and not surprisingly, the attacks continue, particularly out of Russia.

The White House announced today that it is intent on combating ransomware and has appointed a task force to specifically address ransomware threats to government agencies and private businesses. One of the goals of the task force will be to determine how to choke off ransomware threat actors’ access to their cryptocurrency. According to a senior official in the White House, “The exploitation of virtual currency to launder ransomware proceeds is without question, facilitating ransomware…[T]here’s inadequate international regulation of virtual currency activity which is a key factor in how cybercriminals are able to launder their funds, demand ransomware payments, and fuel sophisticated cybercrime as a service business model.”

The Treasury Department will take the lead on developing money laundering requirements for virtual currency exchanges and will develop a public-private partnership that will share information to combat the use of cryptocurrency for money laundering purposes.

The White House’s ransomware task force will also focus on information sharing between public and private enterprises to build resilience against ransomware, and on mandating the reporting of ransomware incidents and payments.

Ransomware continues to cripple our government, national security, and private businesses. We look forward to following the task force’s efforts and, hopefully, seeing some positive results.

While smart toys can be useful educational tools for children, they also present some potential privacy risks and could invade what is traditionally considered a private space. Think about it: the thought of your child’s toy listening in on your family 24/7/365 is disturbing. So how do we balance these risks with the benefits?

Smart toys made with artificial intelligence (AI) capabilities can collect different forms of data from children. For example, an AI-enabled toy may collect data on how fast your child constructs a shape on the device so that it can personalize lessons, or a doll may learn your child’s favorite color or song so that it can “converse” with your child during playtime.

Concerns about AI toys vary based on the type of toy and its data-collection capabilities. Generally, most of these AI-enabled toys learn from children and provide adaptive, responsive play. Within this category there are two subcategories: smart companions (i.e., toys that “learn” from their interaction with the child) and programmable toys (i.e., toys designed with machine learning to assist children in educational learning by moving and performing tasks). While the Children’s Online Privacy Protection Act (COPPA) protects children’s privacy and data collected from minors on the internet and through mobile applications by requiring prior express written consent from a parent or guardian, new smart devices hitting the market do not necessarily comply with COPPA, according to the Federal Trade Commission (FTC).

Alan Butler of the Electronic Privacy Information Center (EPIC) said, “For any new device coming onto the market, if it’s not complying with COPPA, then it’s breaking the law. There’s a lot of toys on the market [using AI] and there’s a need to ensure that they’re all complying with COPPA.” One of the problems is that there is no pre-clearance review of toys before they are sold to consumers. While the FTC has historically enforced COPPA and continues to do so, it is difficult for the agency to stay ahead of privacy issues when the toys are manufactured outside of the U.S. With a pre-clearance process in place, issues like invasion of privacy and collection of data from children without consent could be addressed before a toy ends up in a child’s playroom.

Whether we like it or not, smart toys and AI capabilities will only continue to grow. AI can in fact be helpful and effective in aiding children’s learning and experiences. However, we may need to examine this trend now (and the legislation related to these smart toys) to stay ahead of some of the big issues that could arise if this space is not adequately regulated and monitored.

With the signature of Governor Jared Polis last week on the Colorado Privacy Act, Colorado became the third state (following California and Virginia) to adopt a comprehensive consumer privacy law.

We will provide you with a more comprehensive summary of the new Virginia and Colorado laws in the coming weeks, but for now, the highlights of the Colorado law, similar to provisions in the California Consumer Privacy Act, include the right for consumers to access their data, delete it, and opt out of the sale of their data. The law also requires some companies to disclose why they are collecting data and how they use it, and to minimize their use of personal data.

The law gives the Attorney General regulatory enforcement jurisdiction. It does not provide a private right of action for consumers to assert in the event of a violation.

The Colorado consumer privacy law goes into effect on July 1, 2023. Stay tuned and we will provide more details.

On June 16, and then on July 6, 2021, Connecticut Governor Ned Lamont signed into law a pair of bills that together address privacy and cybersecurity in the state. Cybersecurity risks continue to pose a significant threat to businesses and the integrity of private information. Connecticut joins other states in revisiting its data breach reporting laws to strengthen reporting requirements and to protect businesses that implemented cybersecurity safeguards, but were nonetheless breached, from certain damages in resulting litigation.

Public Act 21-59, “An Act Concerning Data Privacy Breaches” (PA 21-59), modifies Connecticut law addressing data privacy breaches to expand the types of information protected in the event of a breach, shorten the timeframe for reporting a breach, clarify that the law applies to anyone who owns, licenses, or maintains computerized data that includes “personal information,” and create an exception for entities that report breaches in accordance with HIPAA. Public Act 21-119, “An Act Incentivizing the Adoption of Cybersecurity Standards for Businesses” (PA 21-119), correspondingly establishes statutory protection from punitive damages for an entity covered by the law in a tort action alleging that inadequate cybersecurity controls resulted in a data breach, provided the entity maintained a written cybersecurity program conforming to industry standards (as set forth in PA 21-119).

Both laws take effect October 1, 2021.

This week, Governor Andrew Cuomo signed legislation adding text messaging to New York’s definition of telemarketing communications for purposes of its no-call registry. The legislation, S.3941/A.6040, closes the loophole that previously exempted businesses from the no-call registry restrictions when the communication was sent via text. The goal is to increase protections for consumers against unwanted text solicitations, which have increased over the last year. Gov. Cuomo said in a media release, “Our consumer protections need to keep pace with technology and New Yorkers who have long been plagued by the nuisance of annoying calls from telemarketers now have to contend with unwanted texts attempting to sell them things they don’t want.” The new restriction takes effect on August 12, 2021. This is a reminder for businesses to keep up (or revamp) their compliance with state do-not-call registry requirements, as well as federal Telephone Consumer Protection Act (TCPA) restrictions.

We previously wrote about the proposed class action lawsuit against Canon USA Inc. that resulted from a data breach of former and current employees’ personal information. This week, Canon argued in New York federal court that the plaintiffs lacked standing and that the case should be dismissed. Canon stated in its memorandum of law that the lost or diminished value of personal information resulting from a ransomware attack is not a cognizable injury that confers Article III standing. Canon further argued that the plaintiffs’ allegations merely suggested a future risk of harm; again, not enough to meet the Article III requirements for standing.

In addition to arguing that the plaintiffs lacked standing, Canon argued that the plaintiffs failed to state claims upon which relief can be granted. In the complaint, the plaintiffs alleged that Canon acted negligently; however, Canon argued that the complaint did not offer any facts to support that claim.

Further, Canon argued that the invasion of privacy claim also fails since the “intrusion upon seclusion” theory requires intent, evidence of which plaintiffs also failed to provide.

We will watch the plaintiffs’ response and the court’s decision on this issue.

A new report from Beyond Identity focuses on old but very important issues: terminated employees’ continued access to network systems and the rampant sharing of passwords.

According to the report, it is estimated that almost 25 percent of former workers still have access to their previous employers’ networks through work accounts. This is concerning on many levels, including the possibility that former employees (especially disgruntled ones) could access current company data and use it, disclose it, or turn it against the company.

The report also highlights that many employees continue to share their passwords with their co-workers. A whopping 41.7 percent of the 1,000 companies surveyed stated that passwords are shared with colleagues, contractors, family, and friends. This statistic blew my mind.

Takeaways from this report: 1) don’t share your corporate password with anyone, and educate your employees to keep their passwords secure; and 2) tune up your processes around access controls, including promptly revoking access for terminated employees; a minimal sketch of such an audit appears below.
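Here is a minimal sketch, with hypothetical data sources, of the kind of offboarding audit that catches lingering accounts: compare the HR system’s list of terminated employees against accounts still active in the company directory.

```python
# Minimal offboarding-audit sketch with hypothetical data sources: compare HR's
# list of terminated employees against accounts still active in the directory.
terminated_employees = {"jsmith", "mjones"}     # e.g., exported from an HR system
active_accounts = {"jsmith", "akim", "bpatel"}  # e.g., pulled from a directory service

# Any overlap is an account that should have been disabled at termination.
orphaned = terminated_employees & active_accounts
for account in sorted(orphaned):
    print(f"ALERT: terminated employee '{account}' still has an active account")
```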

Section 2209 of the Federal Aviation Administration Extension, Safety, and Security Act (the Act) requires the Federal Aviation Administration (FAA) to establish defined boundaries protecting “critical infrastructure” from unauthorized drones. More specifically, the FAA is tasked with defining the precise sites where drones are prohibited from operating. The FAA would likely have to work with state and local governments to make these determinations (e.g., which sites are considered “fixed site facilities”). However, the Act includes many types of “sites,” from oil refineries to amusement parks, as well as “other locations that warrant such restrictions.” This language allows for very broad interpretation.

Specifically, Section 2209 states:

DOT shall establish procedures for applicants to petition the FAA to prohibit or restrict the operation of drones in close proximity to a fixed site facility (an affirmative designation).

A “fixed site facility” is considered to be:

  • critical infrastructure, such as energy production, transmission, and distribution facilities and equipment;
  • oil refineries and chemical facilities;
  • amusement parks; and
  • other locations that warrant such restrictions.

The FAA shall publish designations on a public website.

Deadlines for the FAA’s implementation of Section 2209, as set by the FAA Reauthorization Act of 2018, are as follows:

  • Publish a Notice of Proposed Rulemaking (NPRM) by March 31, 2019
  • Final Rule Due by March 31, 2020

To date, no NPRM on Section 2209 has been issued.

Recently, the Association for Unmanned Vehicle Systems International (AUVSI), the Commercial Drone Alliance, the Consumer Technology Association, and the Small UAV Coalition sent a letter to FAA Administrator Steve Dickson pushing him to act as soon as possible on Section 2209. This group of industry stakeholders urged the FAA to “publish a proposed rule to establish a process to designate airspace above and around fixed-site critical infrastructure facilities.” The U.S. Chamber of Commerce also presented a letter to the FAA, signed by a significant list of drone and critical infrastructure stakeholders, urging the same action. The concern of these and other industry leaders is not simply that the failure to implement Section 2209 leaves ambiguity as to which infrastructure and facilities are considered “fixed site,” but also that the FAA has failed to firmly establish that it holds sole authority to regulate the national airspace. Without implementation of Section 2209, states have been enacting their own legislation to protect (and define) critical infrastructure sites, which has led to a patchwork of unwieldy and inconsistent laws. The commercial drone industry seeks federal guidance on “fixed sites”; otherwise, drone operators have no central source of information defining these types of sites, which leads to unknowing violations of state and local laws and inhibits the ability to integrate drones into the national airspace.