Russians Continue to Attack U.S. Energy and Power Sectors

Late last week, a joint statement by the Department of Homeland Security and the Federal Bureau of Investigation confirmed that the Russian government has been behind an ongoing targeted campaign to penetrate U.S. power plants and the electric grid.

Of course, this fact has been well known and has been reported on repeatedly in the past, including on this blog. But in this alert, the government indicated that the Russian hackers have targeted key energy companies and have succeeded in penetrating their systems and in accessing and copying data that security experts say could be used to turn off power to customers. This is an unusual admission and warning by the government to the private sector.

Revelations over the past year indicate that the federal government has evidence that foreign hackers have infiltrated U.S. power companies, including a nuclear plant in Kansas. According to the alert, “Since at least March 2016, Russian government cyber actors…targeted government entities and multiple U.S. critical infrastructure sectors, including the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.” The Trump administration subsequently issued new sanctions against Russia, including for its threatening cyber activities.

The Russians are using phishing and spear-phishing campaigns directly against energy and power grid employees and inserting malware into the systems of critical infrastructure to gather information, then using vendors and other company partners to gain access to more critical systems. Many security experts believe this information gathering is a precursor to Russia’s ability to cause a power outage.

Facebook and the English Data Firm Cambridge Analytica (CA) Face Intense Scrutiny for Possible Misuse of Facebook User Data

Facebook and the English data analytics firm Cambridge Analytica (CA) are facing intense scrutiny in response to numerous reports about the possible misuse of data from 50 million Facebook accounts. The data was originally collected through a third-party personality test app, later reportedly improperly transferred to CA and/or its parent company Strategic Communications Laboratories (SCL), and used to target voters as part of CA’s political campaign consulting business.

The App: The “thisisyourdigitallife” app was created as a “research tool” by the English company Global Science Research (GSR), whose co-founder is Russian-American Aleksandr Kogan, a social psychologist and University of Cambridge professor. In 2013, GSR used Facebook’s application program interface (API) to offer the app to Facebook users. Some 270,000 users clicked a button to connect to the app and in doing so reportedly gave GSR permission to use their Facebook data for research purposes. The data collected included location information and users’ “likes” on stories and posts on the social media network.

Harvesting Friends Data: However, the app also “harvested” certain data from the “friends” of each app user, reportedly when the friend’s account profile was set to Facebook’s default privacy settings. “Harvested” data is said to have been pulled from some 50 million “friends.” Kogan, GSR and Facebook reportedly maintain that the app accessed the “friends” data legitimately, because at the time, Facebook’s default privacy settings allowed this data to be collected from the “friends” when the “friends” agreed to Facebook’s terms and conditions.

Sharing of Data with Third Parties: Sometime in 2015, Facebook learned GSR had reportedly shared, perhaps sold, both the app user data and the harvested “friends” data with CA and/or SCL, for whom Kogan was by then allegedly working. Facebook claims it demanded the data be immediately destroyed by CA, GSR, Kogan and others who had access to it. CA, GSR and Professor Kogan claim they deleted the data, and CA claims it provided Facebook with a written certification of deletion. It appears Facebook did not audit or otherwise verify the deletion.

On March 16, Facebook abruptly suspended CA from its advertising platform, as multiple newspapers reported that CA not only hadn’t deleted GSR-sourced data, but that it also used the data to build voter profiles to predict American voters’ behavior during the Donald Trump 2016 presidential campaign. CA denies knowing GSR-sourced data could not be used other than for research purposes, denies using any GSR-sourced data in the Trump campaign, and reaffirms GSR-sourced data was deleted in 2015. However, recent reporting claims CA paid $800,000 for GSR to develop the “thisisyourdigitallife” app, which calls into question whether CA knew the GSR data was subject to the research use restrictions. One newspaper reported that Kogan also received Russian government funding for his research into the psychology of Facebook users.

Multiple Investigations: As a result of the allegations of misuse of Facebook user data, U.S. and E.U. authorities are investigating Facebook and CA. British and E.U. data protection authorities are investigating whether Facebook’s privacy settings adequately explained how personal data could be “harvested,” as well as the company’s response to learning that a third party received data from GSR through its API platform. The Irish data protection commissioner has also questioned Facebook and CA’s actions.

Whether Facebook’s default privacy settings provided enough information for users to consent has not yet been litigated in the United States or in Britain, although one German regional court recently held in the vzbv case that the company’s default privacy settings were inadequate.

In the U.S., the Federal Trade Commission is reviewing whether Facebook violated its 2011 consent decree. The FTC’s complaint claimed Facebook misrepresented the fact that installed third-party apps could access almost all of a user’s data, by saying the apps would only have access to user information needed to operate the app. Facebook settled the complaint without admitting any misrepresentation, but agreed to provide clear notices of privacy practices and to obtain express consent before a user’s data was shared beyond the user’s privacy settings. Attorneys General from Massachusetts and Connecticut are in the early stages of investigating questions about the sharing and use of this data.

Additionally, Facebook is investigating GSR’s harvesting and reported improper use of user and “friends” data that was shared with, perhaps sold to, CA. In that effort, Facebook seeks to conduct a digital forensics audit of CA’s and SCL’s data and to determine whether the GSR-sourced data was deleted as promised.

Several additional interesting aspects of this developing Facebook and CA news are worth mentioning.

  • Some commentators question whether the Facebook and CA developments might lead to a broader discussion about data-collecting technology platforms and data privacy rights. A former Facebook employee claims she repeatedly expressed concerns to the company about its reportedly lax oversight of data harvested by third parties using the API platform.
  • GSR’s other co-founder, Joseph Chancellor, works for Facebook as an in-house psychologist. It is unclear when Chancellor left GSR or started working for Facebook. To date, it is unclear if Facebook warned or took action against Chancellor for the reported improper GSR data sharing with CA and/or SCL.
  • Much of the recent news about FB, CA and the GSR sourced data comes from whistleblowers who were former employees of these companies, as well as a U.K. news station’s undercover investigation against CA. One of the undercover news stories shows CA’s CEO on video reportedly suggesting its political consulting work includes entrapping politicians with bribes and women. In response to the video, English authorities obtained a warrant to conduct an on-site investigation of CA’s offices, which may help determine what happened with GSR sourced data. CA also suspended its CEO indefinitely.
  • A U.S. professor sued CA in the U.K. courts demanding a copy of what personal data CA had collected on him. See more details about this lawsuit here.
  • The “thisisyourdigitallife” app reportedly collected enough data for CA to create psychographic and political profiles on American voters to assist in targeting messages to them. This news has rattled politicians and governments world-wide, given that CA has been involved in several close and pivotal political campaigns.
  • Facebook users are deactivating and deleting their accounts in response to this news. Additionally, on March 20, a Facebook user sued Facebook and CA on behalf of 50 million Facebook users in federal court in California, alleging that her privacy was violated when her personal data was improperly disclosed to CA. Her lawsuit seeks damages for all U.S. Facebook users whose information was harvested without their consent, and it asserts various state law claims. A judge will decide whether the lawsuit will be certified as a class action. The complaint can be found here.
  • Facebook has suffered a stock value loss of about $50 billion from this news. Facebook shareholders sued the social media network in San Francisco in a class action, claiming they suffered losses after the disclosure about CA. The case is Yuan v. Facebook Inc., 3:18-cv-01725, U.S. District Court, Northern District of California (San Francisco).

U.S. Citizen Sues Cambridge Analytica in U.K. Courts for Violations Under U.K. Data Protection Act

On March 16, David Carroll, a New York-based American professor, sued Cambridge Analytica (CA) in the U.K. courts after the data analytics firm allegedly failed to respond to his request, made pursuant to the U.K. Data Protection Act, for his file of personal data held by CA, CA’s purpose for processing his data, and the persons and countries outside the E.U. with whom his data was shared.

Carroll had heard that CA’s CEO reportedly bragged about the company having about 5,000 data points on each of the 230 million U.S. voters. He was therefore surprised when the response to him included only about 200 data points. Carroll reports that the response also inadequately described where or how data about him was sourced and with whom it was shared. The response led Carroll to believe his personal data was being processed in the U.K., which is why he brought suit under U.K. data protection laws, which are typically more protective of personal data than U.S. data protection laws. Carroll’s request and the response to it are posted to his Twitter account @profcarroll.

This lawsuit was filed on the same day Facebook cut CA off from its advertising platform, on grounds that CA violated Facebook’s terms and conditions. For more details on the brewing Facebook and CA scandal, see [view related post].

Northwell Health Seeks to Use Drones and Telehealth for Emergency Care

Northwell Health, a New York-based health system, is seeking to use a fleet of emergency drones, in combination with telehealth technology, to respond to accidents more quickly, treat opioid overdoses and even provide medical attention needed due to terrorist attacks. However, there are still many barriers to overcome before Northwell Health can carry out these plans.

Purna Prasad, Ph.D., Chief Technology Officer at Northwell Health, said, “This is actually our next foray into telehealth. There may be places where there is no network connection, or there may be places where people just don’t have the wherewithal to have any type of mobile phone for two-way video conferencing. Can the drone provide that last mile of connectivity not only for audio, video and data, but also in delivering emergency care?” Prasad envisions a drone equipped with two-way audio and video capabilities and a compartment containing a dose of pain medication for an individual who fell in a remote location and needs pain relief prior to the EMTs arriving, or a defibrillator for someone experiencing cardiac arrest, or even a dose of Narcan for an opioid overdose. All of this could be done at a fraction of the cost of a helicopter and in a more efficient timeframe, with more lifesaving potential.

Of course, using drones to support emergency care is still a relatively new, untested idea, especially here in the United States. While many health care systems and first responders believe that using drones to transport biological samples, deliver medical supplies or respond to large-scale disasters is on the horizon, drones for telemedicine in emergency situations are less widely discussed. Last year, researchers at William Carey University of Osteopathic Medicine in Mississippi built three drone prototypes exactly for this purpose, but no widespread testing of these prototypes or this drone use has been completed at this point. But, following the memorandum signed by President Trump this year, the Federal Aviation Administration (FAA) plans to enter into agreements with state and local agencies by May 7th to pilot test the use of drones for many different purposes, one of which is public health.
Prasad said that while this is certainly the direction that Northwell Health hopes to go, the “FAA is as nervous as we are” because “this is something that is very new, especially in a place like New York City where you have two international airports.” We will continue to track new use cases like this and how the FAA works with state and local governments to integrate drones into the National Airspace for emergency health care and public health matters.

FAA Releases FY 2018-2038 Aerospace Forecast; Drones in Our Future

The Federal Aviation Administration (FAA) released its Fiscal Years 2018-2038 Aerospace Forecast last week, indicating, among other things, that the FAA expects small model hobbyist unmanned aerial systems (UAS or drones) to more than double from 1.1 million in 2017 to 2.4 million by 2022, while the commercial drone fleet will grow from 110,604 in 2017 to 451,800 by 2022. This is an average annual growth rate of over 16 percent over that five-year period for hobbyist drones and over 32 percent for commercial drones. Additionally, it is important to note that this forecast is based on the assumption that the operating limitations for drones remain the same as they are now; if the regulatory environment evolves to allow more flights beyond visual line of sight, over people and at night, the FAA predicts that the number of commercial drones in the National Airspace could be closer to 717,895 by 2022.

The report also indicates that the FAA expects the number of remote pilots to skyrocket from 73,673 in 2017 to 301,000 in 2022. That is an average annual growth rate of about 32 percent.
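As a quick sanity check (this calculation is ours, not the FAA's), the percentages above correspond to compound annual growth rates derived from the forecast's start and end figures:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

# FAA FY 2018-2038 Aerospace Forecast figures, 2017 -> 2022 (five years)
print(f"Hobbyist drones:  {cagr(1_100_000, 2_400_000, 5):.1%}")  # just under 17%
print(f"Commercial fleet: {cagr(110_604, 451_800, 5):.1%}")      # about 32.5%
print(f"Remote pilots:    {cagr(73_673, 301_000, 5):.1%}")       # about 32.5%
```

These results match the "over 16 percent" and "over 32 percent" annual growth rates described above.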

First U.S. Test Flights Completed by Drone Delivery Canada

Drone Delivery Canada (DDC) of Toronto completed a series of successful drone delivery test flights at the beginning of this month in Rome, New York at the Griffiss International Airport. These drone delivery test flights were the first conducted by DDC in the United States. DDC used its Transport Canada-compliant Sparrow Drone (with a lift capacity of about 11 lbs.), its proprietary FLYTE management system and its proprietary DroneSpot technology to conduct the flights. Chief Technology Officer of DDC, Paul Di Benedetto, said, “Testing at Griffiss was a natural extension for continued progress with our platform in [beyond visual line of sight], non-segregated airspace environment. An active runway with large aircraft, helicopters and general aviation aircraft is the latest advancement to our operations team airspace integration efforts and a progression from the knowledge learned during DDC’s [prior] operations.” These test flights had a 100 percent success rate.

Trump Policy on Unmanned Military Aircraft Expected to Allow Export of Lethal Drones

President Donald Trump is expected to ease up on the rules related to foreign sales under a new policy on unmanned military aircraft, part of a broader overhaul of arms export regulations under the “Buy American” initiative. The new policy could make it easier to export some types of lethal U.S.-manufactured drones to U.S. allies. This is good news to many U.S. drone manufacturers who are facing surging competition overseas from Chinese and Israeli manufacturers who often sell their drones under lighter restrictions. On the other side, human rights and arms control advocates worry that this policy change will only fuel violence and instability in regions such as the Middle East and South Asia.

A key aspect of this policy would be to lower barriers to sales of smaller hunter-killer drones that carry fewer missiles and travel shorter distances. This policy would also ease export regulations for surveillance drones of all sizes. Even though this policy stops short of completely opening up the ability to sell top-of-the-line lethal drones, it could mark a significant break from the U.S. tradition of selling armed drones only to the most trusted allies.

U.S. drone manufacturers are vying for a bigger share of the global military drone market, which is forecast to grow from $2.8 billion in sales in 2016 to $9.4 billion by 2025.

This new policy is expected in the coming weeks, but no exact timeframe has yet been released.

Privacy Tip #131 – Bryant University Women’s Summit Follow-Up

I was so honored to be a presenter at the Bryant Women’s Summit last week. It is always an incredible event and I enjoy attending every year. But the bonus for me this year is that I also got to interact with a lively group of executive and professional women who were eager to learn about the topic: “Take Control of Your Personal Information: Understanding the Risks and Rewards of Using Smartphones, Mobile Applications, and Social Media.”

These powerhouse women kept me on my toes the entire time! They were eager to learn about the minefields presented by the camera, microphone and location-based services settings on their smartphones; online banking risks; privacy settings for social media accounts; how digital personal assistants are collecting biometric data of children; the mining of data by internet service providers and email platform providers; and how companies aggregate, use and monetize personal information.

There were a couple of questions presented that I promised to address during the Privacy Tip this week, including providing sites to access related to specific questions asked during the session.

1. “How do I find out if my information has been compromised?”

I wrote about this during a previous Privacy Tip [view here]. Check out the post and the site to see if your email address and/or other information has been compromised.

2. “How do I know if someone has stolen my identity?”

One of the first things to do to see if anyone has used your personal information to open up an account under your name is to obtain a copy of your credit report. Every individual in the U.S. is entitled by law to obtain a copy of his or her credit report for free from each of the credit reporting agencies (Experian, TransUnion, and Equifax) annually. That means you can get three free credit reports a year—one from each company.

To obtain your credit report, go to the Federal Trade Commission website, which outlines information for consumers on how to get their free credit report, or call 1-877-322-8228. Yes, you have to give your personal information, including your Social Security number, so they can authenticate you. And yes, they already have it.

Note that there are scam websites out there that spoof the official credit report site, so don’t be fooled; go through the FTC website to be sure you reach the correct site. Here is a previous blog post about the importance of obtaining your credit report annually to keep tabs on all accounts that are in your name.

3. “How do I find out about scams before I become a victim?”

A great resource is the Federal Trade Commission. One of its missions is to protect consumers. The FTC issues scam alerts that you can subscribe to, which provide notice directly to you of the newest scams it is concerned about. Subscribe to the scam alerts by going to the FTC website. I also frequently check the FBI and IRS websites. And of course, subscribe to Robinson + Cole’s Privacy + Cybersecurity blog. We frequently alert consumers to the latest attacks.

Orbitz Confirms Breach of Travel Records and Credit Card Information of 880,000 Individuals

Orbitz, the travel booking entity that is owned by Expedia, has confirmed that it has “identified and remediated a data security incident affecting a legacy travel booking platform.” This means that one of its older websites that was used by customers to book their travel plans was hacked.

The statement says that Orbitz uncovered evidence earlier this month that an attacker had access to the legacy system between October and December of 2017, and the information of travel booking customers was compromised if they made any purchases through the legacy website between January of 2016 and December of 2017. The compromised information included these customers’ names, addresses, dates of birth, email addresses, genders, telephone numbers, and payment card information.

Orbitz confirmed that 880,000 consumers were affected by the attack, although it says there is no evidence that the personal information of these customers was “downloaded” from the platform. Nonetheless, Orbitz is offering affected individuals free credit monitoring and identity protection services.

Self-Driving Uber Vehicle Kills Pedestrian in Arizona

This week, a self-driving SUV operated by Uber—and with an emergency backup driver behind the wheel—struck and killed a 49-year-old pedestrian as she walked her bicycle across a street in Tempe, Arizona. It is believed to be the first pedestrian death associated with self-driving technology.

In addition to the Tempe Police Department, the National Transportation Safety Board said it was sending a team of four investigators to determine “the vehicle’s interaction with the environment, other vehicles and vulnerable road users such as pedestrians and bicyclists.” Data from the vehicle’s many cameras and sensors will no doubt prove useful to the investigations.

Uber and other tech companies and automakers have recently begun to expand testing of their self-driving vehicles in cities across the country. The companies believe that the cars will be safer than regular cars because they take easily distracted human drivers out of the equation. But self-driving technology is still only about a decade old, and we are just now starting to see the unpredictable situations that such vehicles can face.

There’s an ongoing debate about legal liability when it comes to accidents where a self-driving vehicle harms someone else through no fault of that person. Would the blame lie with the self-driving car’s manufacturer, owner, a combination of the two, or someone else? In their quest to become the epicenter of self-driving cars, Arizona regulators have largely left those questions unanswered.

Currently, product liability law offers the best guidance for determining legal fault with an emerging technology like self-driving cars.