Crime-as-a-Service Targets Popular Platforms

It’s getting difficult to keep up with the jargon of all of the new digital scams. The “-as-a-Service” labels started out as ordinary business terms, such as Software-as-a-Service (SaaS) and Business Process-as-a-Service (BPaaS). But then criminal enterprises came up with Malware-as-a-Service (MaaS), Ransomware-as-a-Service (RaaS) and now Crime-as-a-Service (CaaS).

A new Crime-as-a-Service offering is targeting PayPal, Apple, and Amazon accounts. The attack vector is a phishing kit dubbed 16Shop, which targets victims with phishing emails designed to entice them to click on malicious links and attachments. Old tricks are still working, and the tools are being sold quite successfully on underground forums.

The most recent campaign, alleged to originate in Indonesia, targets PayPal customers in order to obtain usernames, passwords, credit card information, and other personal information. This phishing-kit-as-a-service (PkaaS) (I can’t even pronounce that acronym) boasts that it has induced more than 23 million individuals to click malicious links in emails and provide personal information that can be sold for a profit. One scheme that is particularly successful is telling victims that their email has been compromised and that they need to change their password for security purposes. Unfortunately, in a real cyber-incident, one of the first things we do is ask users to change their passwords. Criminals are leveraging this fact, duping users into believing that a fake password-change notice is real and then stealing the credentials.

Security professionals continue to advocate that multi-factor authentication is critical to combating these types of attacks. Employee education also helps: when employees receive an instruction to change their passwords, they should reach out to confirm it rather than blindly following it. Employees must understand that they cannot rely solely on digital instructions. Any and all instructions that come via email regarding usernames and passwords must be confirmed face-to-face or verbally with a known source. It is sad, but true. Email communication should be just that—email communication. No personal information, sensitive information, critical business information or information allowing access to systems should ever be provided through email. It just can’t be trusted these days.

Changing the Conversation About Sharing and Using Health Information

Some app developers know more about our health than our doctors do. Take, for instance, Fitbit, which sits on our wrists measuring in real time our temperature, our heart rate and our steps, and tracking whether we have had enough exercise for our age in a day.

Some people sleep with their phones on their pillows so they can monitor their sleep habits. Some people have apps, such as Bump, to determine when they are most fertile and should do the thing you still have to do to get pregnant. Some apps know when you are pregnant before your body even knows. 23andMe knows your entire genome, and your family’s as well. None of this highly sensitive data is protected by law. When you consent to download the app, provide the information to the app developer, or send in a DNA sample, that company has the right to do whatever it says it can do in its privacy policy.

Consumers are providing highly sensitive health information to app developers without a second thought. Millions and millions of health apps are downloaded by individuals for convenience so they can get immediate feedback on a specific data point. For some reason, individuals do not want their neighbors to know about certain things, but they have no problem sharing intimate details with random app developers.

This health information is not protected by HIPAA, yet it is being shared willingly and freely by consumers (with consent through the Privacy Policy). Wouldn’t it be great if this information could be shared with health care providers to treat individuals and increase the quality of health care delivery for the entire population?

The paradigm must shift so that consumers get the benefit of the newest technology, treatment becomes more convenient for patients, real-time data are used for diagnostic purposes to provide the highest quality patient care possible, and the massive amount of information that consumers are freely giving to private companies can be used for population health, rather than waiting the years it takes for Institutional Review Boards and research studies to work through the system. We need to figure out how to leverage technology and consumer convenience to drive research and outcomes. The medical community is getting left behind because consumers want answers in real time, are used to getting what they want in real time, and will bypass the medical community if it can’t provide that convenience and value in real time. Consumers’ behavior with health apps is instructive on how to engage patients both for their own treatment and for research purposes, and it will shape how medical treatment is, and should be, provided in the future.

Privacy Tip #223 – Navigating Individual Data Privacy in a World with AI

The same week that the National Institute of Standards and Technology came out with its Privacy Framework [view related post], highlighting how privacy is basically a conundrum, news articles also highlighted a new technology, Clearview AI, that allows someone to snap a picture of anyone walking down the street and instantly find out that person’s name, address and “other details.” I want to know what that means. Does it mean they automatically know my salary, the balance in my bank account, my prescription medications or health issues, my political affiliation, or what I buy at the drug store or grocery store? All of this information tells a lot about me. Some people don’t care, but I am not sure why. There just does not seem to be any respect for, or interest in, the protection of individual privacy. It’s not that people have things to hide—it’s just that this is reminiscent of some darker days of humanity, such as World War II-era Germany.

It is comforting to see that privacy advocates are warning us about Clearview AI. Clearview AI has obtained the information, including individuals’ facial recognition data, by scraping common websites such as LinkedIn, Facebook, YouTube, and Venmo, and it is storing that biometric information in its system and sharing it with others. According to Clearview AI, its database is for use only by law enforcement and security personnel, and it has assisted law enforcement in solving crimes. That is obviously very positive. However, privacy advocates point out that the app may return false matches and could be used by stalkers and other bad actors, as well as for mass surveillance of the U.S. population. That is obviously very negative.

There has always been a tension between using technology for law enforcement and national security, which, frankly, we all want, and using it for purposes that are less clear and may invite abuse, which we don’t want. Clearview AI is collecting facial images of millions of people without their consent, and those images may be used for good or bad purposes. This is where public policy and data ethics must play a part. The NIST Privacy Framework can help in determining whether the on-the-spot collection, use and disclosure of facial recognition data protects the privacy and dignity of individuals. Technological capabilities must be used for good purposes, but in today’s world technology is moving fast, and data ethics, privacy considerations and the potential for abuse are not always being considered, including with facial recognition applications. Perhaps the Privacy Framework can help shape the discussion, which is why its release is so timely and important.

This article was co-authored by guest contributor Victoria Dixon. Victoria is a Business Development & Marketing Coordinator at Robinson+Cole and is not admitted to practice law.

NIST Releases Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management

The National Institute of Standards and Technology (NIST) released its first privacy framework tool (the Privacy Framework) on January 16, 2020. In the Executive Summary, NIST states that with the unprecedented flow of individuals’ data through a complex digital ecosystem, individuals may not be able to understand the potential consequences for their privacy as they interact with systems and services, while at the same time, organizations “may not realize the full extent of these consequences for individuals, for society or for their enterprises…” For more on the background of the draft Privacy Framework, see our previous post.

NIST states in the Executive Summary that the tool will “enable better privacy engineering practices that support privacy by design concepts and help organizations protect individuals’ privacy. The Privacy Framework can support organizations in:

  • Building customers’ trust by supporting ethical decision-making in product and service design or deployment that optimizes beneficial uses of data while minimizing adverse consequences for individuals’ privacy and society as a whole;
  • Fulfilling current compliance obligations, as well as future-proofing products and services to meet these obligations in a changing technological and policy environment; and
  • Facilitating communication about privacy practices with individuals, business partners, assessors, and regulators.”

NIST considers the Privacy Framework to be “widely usable by organizations of all sizes and agnostic to any particular technology, sector, law, or jurisdiction” and able to provide flexibility to organizations by using a risk-and-outcome-based approach. The Privacy Framework’s “purpose is to help organizations manage privacy risks by:

  • “Taking privacy into account as they design and deploy systems, products, and services that affect individuals;
  • Communicating about their privacy practices; and
  • Encouraging cross-organizational workforce collaboration—for example, among executives, legal and information technology (IT)—through the development of Profiles, selection of Tiers, and achievement of outcomes.”

The Privacy Framework comprises three parts:

  1. The Core – a set of privacy protection activities and outcomes that can be communicated and developed throughout the organization;
  2. A Profile – the organization’s current privacy activities or desired outcomes to focus on, which can change or be added to depending on the organization’s needs; Profiles can be used to conduct self-assessments and for communication purposes within the organization; and
  3. Implementation Tiers – Tiers reflect the organization’s current and changing posture and help determine whether progress is being made and whether processes and resources are in place to manage privacy risk.

The Privacy Framework is designed to work with the NIST Cybersecurity Framework, recognizing the cybersecurity and privacy risk relationship and overlap.

As with the NIST Cybersecurity Framework, the Privacy Framework is easy to understand and user-friendly. It provides a roadmap for organizations to tackle privacy risk management, and it urges organizations to understand that privacy must be considered when developing new products and services, just as security must be considered. Further, with rapidly changing laws, privacy risk management is important to an organization’s overall risk management and compliance. But most of all, the Privacy Framework challenges organizations to consider the ethics of the collection, maintenance, use, disclosure and monetization of data, and to think about the consequences that the proliferation of data and the collection and use of data might have on individuals.

As stated in the Privacy Framework, “Privacy is challenging because not only is it an all-encompassing concept that helps to safeguard important values such as human autonomy and dignity, but also the means for achieving it can vary…human autonomy and dignity are not fixed, quantifiable constructs; they are filtered through cultural diversity and individual differences. This broad and shifting nature of privacy makes it difficult to communicate clearly about privacy risks within and between organizations and individuals.” The Privacy Framework is designed to provide a common language so that diverse privacy needs can be met and communication around those needs and expectations is clear. We applaud NIST for providing this much-needed assistance to organizations tackling the ever-changing landscape of data privacy.

FBI Warns of Retaliatory Cyber-Attack from Iran

The Federal Bureau of Investigation (FBI) is warning of a heightened likelihood of Iranian cyber-attacks following the escalation of tension between the U.S. and Iran. This follows the warning last week by the Department of Homeland Security (DHS). The FBI and DHS issued a bulletin to law enforcement groups warning of potential physical and cyber-attacks on the U.S.

The FBI stated that “the FBI is aware of the continued possibility that retaliatory actions could be taken against the United States and its interests abroad” based upon increased scanning and reconnaissance.

According to Chris Krebs, head of DHS’ cybersecurity division, U.S. companies should be looking for Iranian cyber-activity, particularly related to industrial control systems and software that supports the systems. Krebs tweeted “[P]ay close attention to your critical systems, particularly ICS. Make sure you’re also watching third party accesses!”

Other security professionals have noted that companies should be proactively looking for threats to their systems that could have originated from Iran. According to Secretary of State Mike Pompeo, Iran has “a deep and complex cyber capability.”

Help with Yelp: Posting Personal Information in Response to a Negative Review Can Land You in Hot Water

Virtually every company that provides goods or services to the public will, at some point, have a negative review posted online by a dissatisfied consumer. While such reviews are understandably upsetting, a company should not respond in kind with negative comments about the reviewer and certainly should not reveal personal or sensitive information about them.

One California business owner learned this the hard way. According to allegations in a complaint filed on behalf of the Federal Trade Commission (FTC), a mortgage company (through its sole owner) allegedly responded to consumers who posted negative reviews on Yelp by revealing their credit histories, debt-to-income ratios, taxes, health information, sources of income, family relationships, and other personal data. Further, several of the responses revealed the first and last names of the reviewers. According to the FTC, this conduct violated the Fair Credit Reporting Act (FCRA), which places a legal obligation on users of credit reports to keep that information confidential and to disclose it to third parties only when there is a legitimate need to do so.

The FTC further alleged that the company and its owner violated the FTC Act and other federal law, including by failing to implement an information security program until September 2017 and by failing to test the program thereafter.

To resolve the litigation, the broker and his company agreed to pay a $120,000 penalty to settle the alleged FCRA violation. In addition, the broker and company are prohibited from misrepresenting their privacy and data security practices, misusing credit reports, and improperly disclosing nonpublic personal information to third parties. The company also was ordered to implement a comprehensive data security program to protect personal information it collects. It must obtain third-party assessments of this program every two years (for a period of 10 years). Furthermore, the company must designate a senior corporate manager responsible for overseeing the data security program to certify compliance with the order every year.

As for those negative online reviews, rather than privately seething or engaging in a personal attack on the person who posted one, a better approach is to acknowledge the customer’s concerns and apologize for their experience (even if you believe they are wrong), say something positive about your company and your willingness to try to resolve the issue, and take the conversation offline by providing contact information should the reviewer wish to continue the discussion.

GAO Says FAA Could Leverage Drone Test Site Data for More Effective Integration

The U.S. Government Accountability Office (GAO) wrote in a report published last week that the Federal Aviation Administration (FAA) has facilitated approximately 15,000 drone research flights since 2015, but that the FAA could make better use of the data it collects from these drone test flights.

Since 2015, both public and private entities have used the FAA test sites to assess technologies for numerous unmanned aircraft systems (UAS) activities, including inspecting utilities, carrying passengers, and delivering packages. The GAO says this research provides a plethora of data on the performance of various drone capabilities and technologies that could greatly benefit the FAA’s drone integration efforts. The GAO further says that without a data analysis plan, the FAA could miss an opportunity to better use the data to inform and enhance overall integration and operational standards.

Additionally, the FAA reports only limited public information about how the research relates to its integration plans. The GAO believes more information on test sites’ research and results would be helpful for industry stakeholders’ own research efforts, which are typically used in conjunction with the FAA’s research to harmonize integration. The GAO further believes that if more information were available to stakeholders, then more stakeholders might use the FAA test sites to conduct their own research, which would, in turn, increase the data available to the FAA for the government’s integration efforts.

Overall, the GAO report recommended that the FAA:

  1. Develop a data analysis plan for test site data; and
  2. Share more information on how this program informs integration, while protecting proprietary data.

The FAA partially agreed with the first recommendation and fully agreed with the second. We will monitor the FAA’s efforts in this regard as the agency continues its drone integration efforts over the next year.

Privacy Tip #222 – The Dating App Privacy Secret

I don’t know much about dating apps. I met my husband decades ago, long before the Internet, and the old-fashioned way—in college. But I know people who have used them, have been happy with them, have found their life partner through them, and have funny stories about using them and about the people they met through them. I even know about swiping left and right.

I know there are different apps depending on your sexual orientation, your sexual preferences, and whether you are looking for a long-term relationship or just a hookup. I also wrote extensively on the blog when Ashley Madison experienced its notorious data breach. But the recent stories in the news about dating apps compelled me to make sure that those who are using dating apps are aware of how their information is being used.

It is clear that when someone decides to use a dating app, they have to provide a lot of personal information so the app’s algorithms can properly match them with others who may be of interest. I also know that most people who use dating apps do not believe their personal data are being shared, sold or used to profile them.

According to several news stories this week, the most popular dating apps are precisely tracking users and disclosing highly personal and sensitive user information to third parties, and there are allegations that this tracking and sharing violates privacy laws.

For instance, the New York Times (Times), citing a recent report released by the Norwegian Consumer Council, reported on January 15 that popular dating apps are disclosing “dating choices and precise location to advertising and marketing companies” and that “Grindr, the world’s most popular gay dating app, transmitted user-tracking codes and the app’s name to more than a dozen companies, essentially tagging individuals with their sexual orientation.” Another assertion was that OkCupid shared “ethnicity and answers to personal profile questions—like ‘have you used psychedelic drugs?’ to a firm that helps companies tailor marketing messages to users.” The Times also found that “the OkCupid site had recently posted a list of more than 300 advertising and analytics ‘partners’ with which it may share users’ information.”

When these dating apps share this sensitive information with marketing and advertising companies, those companies are free to share it with many other businesses, which essentially means that this highly sensitive information can be shared well beyond what the user intended and can be used to profile them.

In response to this proliferation of sensitive information, this week the Norwegian Consumer Council (Forbrukerrådet) filed a complaint in Oslo against Grindr and five other tech companies alleging violations of the GDPR.

The 25-page Complaint details the tracking capabilities of Grindr and other apps, and tells a detailed and quite interesting tale of the data sharing between Grindr and Twitter’s MoPub, and of MoPub’s sharing of the data with AppNexus and OpenX. If you have never heard of these companies, I recommend you read the Complaint. It is a detailed, easy-to-understand and sordid trail of how personal information is shared in data dumps, and of the precise way in which these data dumps can then aggregate data and identify the user with keywords such as “social network, gay, bi, bi-curious, chat, dating, nearby….”

In the U.S., a coalition of consumer advocacy groups has sent letters to U.S. regulators, including the California Attorney General, requesting investigations into these practices to determine whether they violate state or federal law. With the California Consumer Privacy Act now in effect as of January 1st, it will be interesting to see if the California AG takes the lead.

In the meantime, if you are using a dating app, pay close attention to the app’s privacy policy and what it says about sharing your data, exercise any rights you may have as provided in the privacy policy, and choose the app you use carefully—with your personal privacy as a strong factor in that decision.

Knowledge is Power: California Attorney General Issues Advisory on the CCPA

California Attorney General Xavier Becerra said last week that “knowledge is power, and in today’s world knowledge is derived from data. When it comes to your own data, you should be in control…” These words came in an Advisory highlighting California consumers’ rights under the California Consumer Privacy Act (CCPA). The Advisory outlined several areas of consumer rights under the CCPA, which went into effect on January 1, 2020, including a description of the new data broker registry law. We’ve written extensively on the CCPA this past year and what it means for California consumers and businesses.

The Attorney General’s office also released a CCPA fact sheet stating that the CCPA will protect more than $12 billion worth of personal information that is used each year for advertising purposes in California, according to estimates from the Standardized Regulatory Impact Assessment for the CCPA regulations. That’s a staggering estimate of the worth of our personal information.

It’s no secret that our data are a valuable commodity in today’s world, and the Attorney General’s message is that the CCPA gives consumers some power over the use of their data. The states of Maine and Nevada have already enacted similar (but different) data privacy laws. Maine enacted the Act to Protect the Privacy of Online Consumer Information, which requires internet service providers in Maine to obtain consumer consent before selling, using, or permitting access to a customer’s personal information. In Nevada, covered operators of Internet websites or online services must allow consumers to opt out of the sale of their personal information. It will be interesting to see what additional legislation states enact in 2020.

Health Information Sharing and Analysis Center Warns Health Systems to Be Wary of Iranian Cyber-Attacks

Following the escalation of tensions between the United States and Iran in the past week, the Health Information Sharing and Analysis Center (H-ISAC) is warning hospitals and health systems that Iran could attack health organizations, which are considered critical infrastructure, and is urging them to make sure their systems are being updated with patches.

H-ISAC further recommended that healthcare organizations keep their backup data off-site in the event of an intrusion or data breach.

Health systems may consider warning their employees of the increased risk and urging them to be on heightened alert for Iranian-backed attacks, including more frequent phishing campaigns and attacks through social media [view related post]. Many employees are not thinking that Iran’s threat of retaliation could come in the form of a cyber-attack, so educating them may help mitigate this substantial risk both professionally and personally.
