CCPA Amendment Exempts Deidentified Medical Information

The California legislature recently passed AB 713, an amendment to the California Consumer Privacy Act of 2018 (CCPA). If Governor Gavin Newsom signs the legislation, which he has until September 30, 2020 to do, the bill will take effect immediately. AB 713 adds Section 1798.146 to the CCPA, which states that the CCPA shall not apply to medical information governed by the California Confidentiality of Medical Information Act (CMIA) or to protected health information collected by a covered entity or business associate governed by the federal Health Insurance Portability and Accountability Act (HIPAA) and the federal Health Information Technology for Economic and Clinical Health Act (HITECH).

Section 4(A) of AB 713 states that to be exempt, the information must meet both of the following conditions:

  1. It is deidentified in accordance with the requirements for deidentification set forth in Section 164.514 of Part 164 of Title 45 of the Code of Federal Regulations (the HIPAA regulations).
  2. It is derived from patient information that was originally collected, created, transmitted, or maintained by an entity regulated by HIPAA, the CMIA, or the Federal Policy for the Protection of Human Subjects, also known as the Common Rule.
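
The deidentification standard cited in the first condition, 45 CFR 164.514, can be satisfied through the "Safe Harbor" method of removing 18 categories of identifiers (an expert-determination method is the alternative). The sketch below is a rough, simplified illustration of the Safe Harbor idea only; the field names and the restricted-ZIP list are hypothetical, and real compliance requires handling all 18 categories and consulting counsel.

```python
# Illustrative sketch of the HIPAA Safe Harbor approach (45 CFR 164.514(b)(2)):
# drop direct identifiers, generalize dates to the year, and truncate ZIP codes
# to the first three digits. Simplified; NOT a legally sufficient implementation.

# Fields to drop outright (a subset of the 18 Safe Harbor categories).
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "account_number", "device_id", "photo_url",
}

# Under Safe Harbor, three-digit ZIP prefixes covering 20,000 or fewer people
# must be replaced with "000". This list is an illustrative placeholder.
RESTRICTED_ZIP3 = {"036", "059", "102", "203", "556", "692", "821", "823"}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with identifiers removed or generalized."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # remove direct identifiers entirely
        if field.endswith("_date"):
            out[field] = value[:4]  # keep only the year; fuller dates must go
            continue
        if field == "zip":
            zip3 = value[:3]
            out[field] = "000" if zip3 in RESTRICTED_ZIP3 else zip3
            continue
        out[field] = value  # non-identifying clinical data passes through
    return out

record = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "admission_date": "2020-03-14",
    "zip": "06103",
    "diagnosis": "J45.40",
}
print(deidentify(record))
# prints {'admission_date': '2020', 'zip': '061', 'diagnosis': 'J45.40'}
```

Note that AB 713's second condition is independent of this mechanical step: the data must also originate with an entity regulated by HIPAA, the CMIA, or the Common Rule.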

Additional provisions of the bill prohibit a business or other person from reidentifying information that was deidentified, unless a specific exception applies. Beginning January 1, 2021, the bill also requires contracts for the sale or license of deidentified information to include specific provisions prohibiting the reidentification of that information.

Specifically, Section 2 of the bill requires that businesses that sell or disclose medical information that was “deidentified in accordance with specified federal law, was derived from protected health information, individually identifiable health information, or identifiable private information to also disclose whether the business sells or discloses deidentified patient information derived from patient information and, if so, whether that information was deidentified pursuant to specified methods.”

So, what are the key takeaways from this amendment? Businesses that sell or license deidentified medical information will be required to update their privacy policies and to add specific provisions to contractual agreements regarding the prohibition of reidentification of medical information.

VA Alerting 46,000 Veterans of Compromise

The U.S. Department of Veterans Affairs Office of Management (VA) has announced that it is notifying approximately 46,000 veterans that their personal information was compromised when hackers accessed an online application and diverted payments, which were designated for community health care organizations that provide medical care to veterans, to the hackers’ own bank accounts.

It is being reported that the hacker(s) used social engineering methods to exploit user authentication protocols in order to access the application and change payment information to divert the payments to new bank accounts. The VA took the application offline and is investigating the incident.

The VA is mailing letters to the veterans (or, as applicable, their next of kin) whose information was compromised and is offering credit monitoring to those whose Social Security numbers may have been involved.

Cyber Claim Trends Outlined in Coalition Report

Cyber liability insurers are in a good position to provide insight into the types of cyber incidents hitting the industry. Coalition, a global provider of cyber insurance that “serves over 25,000 small and midsize organizations across every sector of the US and Canada,” issued its Cyber Claims Report this week detailing the claims trends it is experiencing and an analysis of cyber risk based upon those claims.

According to the report, after analyzing thousands of reported incidents, it found that “the majority of losses” fell under breach response coverage, cyber extortion costs coverage, and funds transfer fraud coverage. According to the report, “[T]hese three loss types accounted for 87 percent of reported incidents and 84 percent of claims payouts.”

It further confirmed what we are seeing in the industry—that “the types of attack techniques criminal actors used to target our policyholders are also highly concentrated. Phishing, remote access, and social engineering attacks accounted for 89 percent of all known attack techniques.”

If this doesn’t tell you where to put your resources for prevention and resiliency, I don’t know what does. According to the report, 54 percent of all claims came from email/phishing schemes, 29 percent resulted from remote access, 6 percent were attributable to “other social engineering,” and 3 percent each (9 percent total) were attributable to third-party compromise, brute-force authentication attacks, and “other.”

The report notes that ransomware is becoming increasingly sophisticated, which we have repeatedly reported from our experience, and that it has increased 47 percent in severity from Q1 to Q2 in 2020. This means that the ransomware criminals are increasing their ransom demands and “the complexity and cost of remediation is growing. The average ransom demand amongst our policyholders increased 100 percent from 2019 through Q1 2020, and increased another 47 percent from Q1 to Q2 in 2020.”

The report, and the reality that we are seeing, is grim. Ransomware strains such as Maze, Ryuk, Sodinokibi, and DoppelPaymer are taking ransomware attacks to a new level: exfiltrating data before demanding the ransom, showing proof that they have the data in their possession, and then threatening to publish the data unless a ransom is paid in exchange for a certificate of destruction. According to Coalition, average ransom demands range from a high of $420,000 for Maze down to $73,920 for Sodinokibi.

The Coalition report paints a stark picture, one that organizations must confront in order to put incident response planning, prevention, and resiliency practices in place.

Privacy Tip #252 – Deepfakes Easy to Make and Can Be Used to Microtarget Specific Groups

It is sometimes surprising how gullible well-intentioned folks are, and how we all can be manipulated by social media. That is the basic conclusion of researchers at the University of Amsterdam’s School of Communications Research and Institute of Information Law, who recently completed a study on “deepfakes.”

Deepfakes are audio and video recordings that have been manipulated by software to appear real. According to the researchers, even poorly made deepfakes can fool people, who fail to realize that the recordings are fake. The researchers point out that if people can be fooled by bad deepfakes, well-made ones can be incredibly effective.

The researchers point out that the use of deepfakes to microtarget specific groups on social media and other platforms is concerning. They came to this conclusion by creating a deepfake video of a Dutch politician that was filled with false statements. The researchers had 287 people view the video and then asked them whether they found it credible. According to the researchers, “[I]n a short period of time and with relatively limited technical resources, we were able to construct a deepfake video that was unquestioningly accepted as genuine by most of the participants in our experiment.” They also pointed out that there are apps that will help users make deepfakes.

The researchers stated that they are concerned about the use of deepfakes to microtarget specific groups of people on social media. If they are fake, mainstream news may point out the falsity of the claims, but the social media platform may not, and therefore, the people who are relying on social media for their information may be intentionally fed inaccurate information.

In addition, the researchers point out that deepfakes can be used for criminal behavior, including online scams, blackmail and cyberbullying, which “could lead to an undermining of trust across society as a whole, making it easier to cast doubt on any and all online information sources.”

Be mindful of deepfakes if you rely solely on social media platforms for information. Diversify your sources across news outlets, traditional media, and social media platforms, and check the authenticity of information against several sources before you believe it wholesale or share it with others.

OCR Settles Five Investigations Under “Right of Access” Initiative

The Office for Civil Rights (OCR) announced yesterday that it has settled five investigations under its HIPAA “Right of Access” Initiative (Initiative), which OCR had stated would be an enforcement priority starting in 2019. The Initiative is intended “to support individuals’ right to timely access to their health records at a reasonable cost under the HIPAA Privacy Rule.”

The addition of the five recent settlements brings the total for OCR’s enforcement of the Initiative to seven. OCR’s press release states that the recent settlements involve five entities: Housing Works, Inc., All Inclusive Medical Services, Inc., Beth Israel Lahey Health Behavioral Services, King MD, and Wise Psychiatry, PC.

Housing Works has agreed to pay OCR $38,000 and to adopt a corrective action plan, resulting from a complaint by an individual that it failed to provide him with a copy of his medical records. OCR provided technical assistance to Housing Works and closed the complaint. A month later, the individual complained to OCR that Housing Works still had not provided the records to him. OCR started an investigation and determined that a violation had occurred. The individual received his records three months later.

All Inclusive Medical Services, Inc. (AIMS) settled the potential violations of HIPAA with a payment of $15,000 to OCR and agreed to adopt a corrective action plan. In that case, OCR received a complaint from an individual that AIMS refused to give her a copy of her records. As a result of the OCR’s investigation, AIMS sent the individual her medical records two years after the initial complaint.

Beth Israel Lahey Health Behavioral Services (BILHBS) has settled allegations of failing to provide access to records by paying $70,000 to OCR and adopting a corrective action plan. The allegations against BILHBS are that a personal representative of a patient requested the medical records of her father and that BILHBS failed to provide them, which OCR indicated was a potential violation of the HIPAA right-of-access standard. Following OCR’s investigation, the records were sent to the personal representative eight months after they were requested.

King MD, a small provider of psychiatric services, has agreed to pay OCR $3,500 and to adopt a corrective action plan. OCR received a complaint that King MD failed to respond to a request for access to medical records in August 2018. OCR provided technical assistance to King MD, but the individual complained in February 2019 that she still had not been provided with her medical records. OCR started an investigation and determined that the failure to provide access to the records was a potential violation of the HIPAA right-of-access standard. The patient received her medical records in July 2020.

Finally, Wise Psychiatry, PC, a small provider of psychiatric services, has agreed to pay OCR $10,000 and to adopt a corrective action plan. OCR received a complaint that Wise failed to provide a personal representative with access to his son’s medical records. OCR provided technical assistance and closed its investigation. Unfortunately, OCR received a second complaint from the individual that he had not received the records, so OCR initiated an investigation and found that the “failure to provide the requested medical records was a potential violation of the HIPAA right of access standard.” As a result of OCR’s investigation, Wise Psychiatry sent the personal representative his son’s medical records in May 2019.

Messages from these settlements:

  • Comply with the HIPAA right of access requirements.
  • If OCR provides technical assistance, listen, follow and comply with the HIPAA right-of-access requirements.
  • If the right-of-access requirement is not followed after OCR provides technical assistance, and the patient complains to OCR again, OCR is not likely to simply close the complaint a second time; there is a high risk of an investigation and an eventual monetary settlement with OCR.

OCR publicly stated on multiple occasions that it would focus on enforcement of the right-of-access requirements starting in 2019, so covered entities may wish to review processes in place around patients’ access to records, as review of compliance is timely in light of these recent settlements.

City of Hartford Hit with Ransomware Attack, Causing School Delay

Cyber-attackers know that city and town officials have spent all summer gearing up for the start of school and the potential for remote, in-person, or hybrid learning models. The daily monitoring of the coronavirus has kept officials alert and flexible as they focus on the start of school.

Cyber-attackers also know that cities and towns often have not devoted as much time and resources to cybersecurity as private companies have. So the Labor Day weekend, right before the start of school scheduled for Tuesday, September 8, was a perfect time for cyber criminals to hit the City of Hartford, Connecticut with a ransomware attack.

As a result of the ransomware attack, city officials had to delay the start of school, which was a major disruption to the schools, teachers, administrators, parents and students. The attack affected a majority of the city’s servers, and was certainly a distraction from other priorities.

I often hear from information technology professionals in cities and towns that they have other priorities that are perceived as more important than incident response planning, and that they are challenged by the lack of funding prioritized for cybersecurity by city officials.

The planning for the start of school was certainly a priority over the summer, but if an incident response plan is not in place, the timing of a cyber attack can throw all that careful planning right out the window.

What happened to Hartford is happening repeatedly across the country and the attacks are coming faster and with more teeth. City officials may wish to consider making cybersecurity, including appropriate budgeting and implementing an incident response plan, a priority, because it is not a matter of if, but when that ransomware attack will occur.

ViSalus to Pay $925 Million Award for Alleged TCPA Violations

Last month, an Oregon federal judge refused ViSalus’ request to decrease the $925 million jury award against it for alleged violations of the Telephone Consumer Protection Act (TCPA). ViSalus, a health supplement maker, allegedly made approximately 1.8 million unsolicited robocalls. The award came after ViSalus decided not to settle the class action, instead facing statutory damages of between $500 and $1,500 per unwanted text or call. The jury award should be a warning to other companies whose strategy in TCPA class actions is to bypass settlement negotiations and argue at trial that the Constitution’s due process clause should protect them from damage awards that far exceed the harm actually caused by the alleged conduct.

This case ended up in the hands of a jury in April, after the court certified a nationwide class of about 800,000 individuals. The judge determined that ViSalus did not need to pay more than $500 per call for the award to be a sufficient deterrent, but also declined to go below the statutory minimum.

Although the judge issued a final judgment approving the $925 million award, ViSalus plans to move forward with post-trial motions and to appeal the case to the Ninth Circuit Court of Appeals. While other courts have weighed in on this issue in the past, the Ninth Circuit has yet to make a determination on such TCPA awards. We will continue to watch this case as it moves through the appeals process.

New California Privacy Rights Act on the 2020 Ballot

The California Privacy Rights Act (CPRA) recently qualified for the November 2020 ballot, and if California voters approve this initiative, the CPRA will expand the rights of California residents under the current (stringent) California Consumer Privacy Act (CCPA), beginning on January 1, 2023.

So what will change under the CPRA?

  1. Creation of the California Privacy Protection Agency (CPPA): If the CPPA is created, it would be the first of its kind in the United States. The CPPA would be governed by a five-member board that would have full administrative power, authority and jurisdiction to implement and enforce the CCPA (instead of the California Attorney General).
  2. Stricter Definitions: The CPRA creates a new category, “sensitive personal information,” which is treated more strictly than “personal information.” “Sensitive personal information” includes government-issued identifiers (e.g., Social Security numbers, driver’s license numbers, passport numbers), account credentials, financial information, precise geolocation, race or ethnic origin, religious beliefs, the contents of certain types of messages (e.g., mail, email, text), genetic data, biometric information, and other types of information.

The CPRA also would create new obligations for companies and organizations processing sensitive personal information. It also would allow consumers to limit the use and disclosure of their sensitive personal information.

The CPRA would also expand consumer rights under the CCPA. Specifically, under the CPRA, consumers would have the right to:

  1. Correct personal information;
  2. Know the length of data retention;
  3. Opt-out of advertisers using precise geolocation; and,
  4. Restrict usage of sensitive personal information.

The CPRA also would extend the exemption for employee data until January 1, 2023; currently, under the CCPA, employee data are exempt only until January 1, 2021. Note that California AB-1281, which was enrolled on September 1, 2020, extends the current exemption for employee data to January 1, 2022 in the event that the CPRA is not voted into law.

Lastly, in addition to the private right of action for data breaches under the CCPA, the CPRA would expand this private right of action to include the unauthorized access or disclosure of an email address and password or security question that would permit access to an account if the business failed to maintain reasonable security safeguards.

While many companies are still grappling with the nuances of the CCPA, if the CPRA gets the green light from voters in November, it will bring yet another wave of compliance issues and the implementation of new policies, procedures, and processes for many businesses in and outside of California. We will watch this ballot question closely as we near the November election.

Portland City Council Bans Use of Facial Recognition Technology

On September 9, 2020, the Portland, Oregon City Council voted unanimously to ban the use of facial recognition technology by the city government, including the police department, following similar actions by the cities of Boston and San Francisco. According to one Council member, “[T]his technology just continues to exacerbate the over-criminalization of Black and brown people in our community.” The ordinance requires the Bureau of Planning and Sustainability and the Office of Equity and Human Rights to make sure that all city agencies are aware of the ordinance.

The Council stated that the use of biased facial recognition algorithms by law enforcement may cause “irreversible damage due to false identification from a face recognition process,” and its use may prevent city residents from being able to access city services.

In a second ordinance approved the same night, the Council also prohibited private entities from using facial recognition technology in places open to the public, such as grocery stores and shopping malls, and via security cameras on public streets. This is reportedly the first time a city in the United States has enacted such a measure.

In voting to prohibit the use of facial recognition technology by private companies in public areas, the Council noted that facial recognition technology often misidentifies women and people of color, and that the technology can be used contrary to civil liberties and rights of citizens. According to the Council, “there is a risk of discrimination and harm, because face recognition technologies collect sensitive personal information that may lead to different decisions about access for those people for which those technologies are biased against.” The ACLU of Oregon noted “Face surveillance is an invasive threat to our privacy, especially to Black people, Indigenous people, people of color and women, who frequently are misidentified by the technology.”

Privacy Tip #251 – DOJ Charges Four Men with Defrauding Thousands of Senior Citizens in Mail Schemes

The Department of Justice recently indicted four men—two of whom are located in Canada and two in New York—for a mass-mailing scheme that bilked thousands of senior citizens out of tens of millions of dollars.

According to the indictments, the accused Canadian fraudsters sent mail to thousands of elderly individuals whose names and addresses they obtained through mailing lists. The mailings promised cash prizes in exchange for a fee of $19.95 to $39.95 and included a return envelope addressed to mailboxes across the United States that the fraudsters had rented “under false identities.”

The indictment against the U.S. citizens alleges that the accused fraudsters mailed hundreds of thousands of prize notices claiming that recipients would receive millions of dollars in cash rewards if a fee of between $19.99 and $24.99 was paid. Most of those who sent money and were swindled were elderly. These defendants also obtained the names and addresses of their victims through mailing lists, and allegedly netted $7.5 million per year from the scheme.

Fraudsters use any means by which to obtain personal information, including names and addresses, in order to perpetrate their crimes. The selling of our names and addresses is rampant in the marketing industry, and fraudsters can easily obtain that information in massive quantities and use it to launch a scheme.

Alert the senior citizens in your life to these types of frauds so they can recognize them when they see them and avoid falling victim.