The rise of large language models (LLMs) such as ChatGPT has created novel legal questions surrounding the development and use of artificial intelligence (AI) systems. One of the most closely watched AI cases currently is New York Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023), in which the New York Times (NYT) alleges that OpenAI, the developer of ChatGPT, impermissibly used NYT-copyrighted works to train the ChatGPT LLM. Though the case centers on questions of intellectual property, a recent development has raised significant data privacy concerns as well.

The Preservation Order

In the course of the ongoing litigation, NYT asserted that, if ChatGPT user data were saved, that data could contain evidence supporting NYT’s position. In a May 13, 2025, preservation order, U.S. Magistrate Judge Wang of the Southern District of New York agreed with NYT and instructed OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court.”

This is a sweeping demand, as ChatGPT receives and hosts a vast volume of data. ChatGPT sees over 4.5 billion visits per month, and the LLM receives approximately 2.5 billion prompts each day. In its opposition, OpenAI asserted that the order would require the retention of 60 billion conversations that would be nearly impossible to search, adding that less than 0.01% of the data would be relevant to NYT’s copyright assertions.

Moreover, OpenAI explained that ChatGPT users expect their deleted chats to be unavailable. Indeed, prior to receiving the preservation order, OpenAI had touted its data retention policies, under which a chat deleted by a user is removed from the user’s account immediately and scheduled for permanent deletion from OpenAI systems within 30 days (absent a legal or security reason to preserve it). However, after a late June hearing, U.S. District Judge Stein denied OpenAI’s objections to Magistrate Judge Wang’s preservation order. Judge Stein concluded that OpenAI’s terms of use allow for preservation of data for legal requests, a category into which this case falls.

The preservation order applies to ChatGPT Free (along with some other versions) but not to ChatGPT Enterprise. This means that data input by individuals who use the free version of ChatGPT might be subject to NYT’s review, while data from organizations that use ChatGPT Enterprise falls outside the order’s scope. Still, if an employee has used a personal version of ChatGPT to input any employer information, that information could now be subject to the order, too.

What’s Next?

OpenAI continues to raise privacy concerns in response to the preservation order. In a June tweet, OpenAI CEO Sam Altman floated the concept of an “AI privilege,” suggesting that AI conversations should receive the same protections as conversations with lawyers and medical providers. Of course, this is not a legally recognized privilege, and OpenAI has not raised it in any legal briefs. Even if the company did so, it is unlikely any court would be willing to create this new category of privilege for generative AI interactions.

While many people following the case are alarmed by the broader constitutional privacy concerns it raises, it is also unlikely that all 60 billion conversations subject to the preservation order will become available to the public at large. For now, NYT lawyers are expected to gain access to and begin searching OpenAI logs in support of NYT’s copyright case.

OpenAI will surely bolster its security practices in response to the preservation order, but the fact that data that would otherwise have been deleted is now being retained presents a heightened risk in itself.

Considerations for Organizations

From a data governance perspective, this preservation order raises several considerations for organizations:

  • Review your own internal data retention clauses – OpenAI’s terms of use state: “We may share your Personal Data, including information about your interaction with our Services, with government authorities, industry peers, or other third parties in compliance with the law…if required to do so to comply with a legal obligation…”

Such language is common in most businesses’ privacy policies and terms of use. Companies often need data retention clauses that carve out legal exceptions for certain situations, but this case has demonstrated how such clauses can be turned on their heads.

Organizations should be aware that “maintaining data for legal purposes” can also include court orders such as the OpenAI preservation order, which in OpenAI’s case ended up running counter to the company’s privacy promises to its users.

  • Segregate data where technically feasible – Organizations should segregate their data into distinct buckets based on the data’s sensitivity and/or purpose. For OpenAI, one hurdle in arguing against the preservation order was that the high volume of data was not flagged in any manner, so the company was unable to determine which output logs were relevant to the matter.

It is quite possible that OpenAI’s assertion is correct and only a small fraction of the total data is relevant to NYT’s case. Yet the court had no viable means of making such a determination. If organizations can separate or flag data based on sensitivity and usage, as in the sketch below, they can isolate the data relevant to a specific issue rather than sweeping all organizational data into the evaluation of every issue that arises.
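As a loose illustration of that idea, here is a minimal TypeScript sketch in which each log record carries sensitivity and purpose tags at write time, so that a legal hold can target a single bucket. The record shape and tag names are assumptions for illustration, not OpenAI’s actual architecture.

```typescript
// Tag records at ingestion so they can be isolated later.
type Sensitivity = "public" | "internal" | "confidential" | "regulated";

interface LogRecord {
  id: string;
  sensitivity: Sensitivity;
  purpose: string; // e.g., "analytics", "support", "legal-hold"
  payload: unknown;
}

// Segregate records into buckets keyed by sensitivity.
const buckets = new Map<Sensitivity, LogRecord[]>();

function store(record: LogRecord): void {
  const bucket = buckets.get(record.sensitivity) ?? [];
  bucket.push(record);
  buckets.set(record.sensitivity, bucket);
}

// A legal hold can then target one bucket instead of forcing
// preservation of every log the organization keeps.
function recordsForLegalHold(): LogRecord[] {
  return buckets.get("regulated") ?? [];
}
```

The design point is that labels are applied when the data is written; retrofitting them onto billions of unlabeled records after a dispute arises is far harder.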

  • Evaluate your vendor contracts – Though ChatGPT Enterprise user data is not affected by this preservation order, the matter serves as a reminder to all organizations to review their vendor contracts. Businesses might consider zero data retention agreements for certain vendors so that those vendors do not store data – even for a legal purpose – after it has been used for its originally intended purpose. Data minimization generally limits the likelihood of information exposure.
  • Further raise employee awareness – Organizations should remind their employees that, because of the preservation order, deleted personal ChatGPT conversations are no longer permanently erased after 30 days and that even “temporary chats” are being retained.

That means that any personal ChatGPT input could become part of discovery in this matter. Employees should exercise heightened caution before inputting any proprietary, confidential, or otherwise sensitive business information into tools such as ChatGPT.

  • Establish an AI Governance Program – It is also prudent for organizations’ legal and IT teams to understand AI use across their organizations. Often, individuals or departments use AI without the legal or IT department knowing. Usually this is not nefarious activity; people are simply unaware of the risks of AI tools and of the need for organizational awareness regarding their use.

Once businesses understand which departments are using which AI tools, they can circulate an AI use questionnaire to encourage responsible and informed use of those tools. Employee AI use is all but inevitable, but a strong AI governance program and institutionalized policies can increase employee awareness and serve as an organizational safeguard to mitigate AI-related risks.

The reality is that no business can entirely prevent unauthorized AI use, but with a robust governance program and related AI policies, it can at least train staff at the individual level and manage the risk holistically as an organization.

Enforcement of California’s Delete Act is accelerating. The California Privacy Protection Agency (CPPA) recently sent a clear message to data brokers: register, pay the required fee, and be prepared to defend your data practices, especially when they involve sensitive populations.

The CPPA announced settlements with two data brokers totaling more than $100,000 for failing to register as required under the Delete Act:

  • Datamasters (Texas-based reseller): $45,000 settlement; and
  • S&P Global (New York-based market intelligence company): $62,600 settlement.

Datamasters was also ordered to stop selling all personal information about Californians, effectively preventing it from operating as a data broker in the state.

The Datamasters case was not only about a registration failure, but also about the nature of the data involved. According to the decision, in 2024, Datamasters:

  • Bought and resold names, addresses, phone numbers, and email addresses of millions of people with certain health conditions, including Alzheimer’s disease, drug addiction, and bladder incontinence;
  • Marketed audience segments for targeted advertising based on sensitive or potentially discriminatory categorizations, including “Senior Lists” and “Hispanic Lists”; and
  • Maintained additional lists based on political views, banking activity, and grocery- and health-related purchases.

Enforcement head Michael Macko framed the risk in terms of downstream misuse, not just advertising compliance: “Reselling lists of people battling Alzheimer’s disease is a recipe for trouble… History teaches us that certain types of lists can be dangerous.” The takeaway is that regulators are treating sensitive list-based targeting as high-risk because it can enable profiling, discrimination, manipulation, or the targeting of vulnerable individuals.

S&P Global similarly failed to register and lacked certain compliance controls. As a result, it must adopt registration and compliance auditing procedures.

The Delete Act’s core requirement is straightforward: companies that acted as data brokers in the previous year must register annually and pay a fee. These enforcement actions show that a failure to register can escalate quickly, particularly where the business model involves sensitive personal data or audience lists tied to health, demographics, or beliefs.

Data brokers should take note that:

  • Registration is not optional. Unintentional failures can still trigger penalties and mandated process changes;
  • Sensitive-data monetization invites scrutiny. Health, age, perceived race, and political views are treated as inherently higher risk;
  • Controls matter. Expect pressure for durable compliance systems such as internal audits and documented procedures; and
  • Enforcement can restrict operations. Consequences can extend beyond fines, as happened to Datamasters.

The 2025 California legislative session ended without passing critical reforms to the California Invasion of Privacy Act (CIPA), leaving businesses vulnerable and scrambling to manage escalating compliance challenges and legal exposure on their own.

Why Was Reform Needed?

CIPA, originally enacted in 1967 to protect against telephone wiretapping, has recently been used to challenge how websites collect and process user data using tools like Google Analytics, Meta Pixel, and session replay software. Plaintiffs allege these tools “intercept” online communications without proper user consent, invoking CIPA’s provisions on eavesdropping and signal tracing even though the law predates the digital era by decades.

Despite the uncertainty, most courts have not dismissed these claims early, opening the door to expensive litigation. Statutory damages can reach $5,000 per violation, and potential exposure balloons rapidly for businesses with significant web traffic.

What Happened with SB 690?

Senate Bill 690 (SB 690) was introduced as a modernization effort, aiming to exempt routine data collection for business operations or analytics from being treated as illegal wiretapping under CIPA. The bill cleared the Senate but stalled in the Assembly Judiciary Committee amid calls for further negotiation between privacy advocates, industry groups, and consumer-rights organizations.

With SB 690 in limbo, companies must continue to navigate the ambiguities and aggressive lawsuits that have become commonplace since plaintiffs’ firms began targeting legacy tracking technologies and years-old analytics integrations.

Essential Compliance Action Steps for Businesses

Until state lawmakers act, businesses should consider taking the following steps to mitigate risk and demonstrate good faith if challenged:

  1. Conduct a Comprehensive Privacy Audit
    • Inventory all data-collection tools, including analytics, marketing pixels, session replay, chat, and plug-ins; and
    • Determine what information is being collected and who has access to it (including third parties).
  2. Obtain Clear and Affirmative Consent
    • CIPA requires explicit, affirmative opt-in consent before collecting user data. Use action-based consent banners (e.g., “By clicking Accept, you agree…”);
    • Passive consent such as “by continuing to browse” is insufficient; do not collect personal information before explicit consent is given; and
    • Some tools, like Google Analytics, now offer a “consent mode” that restricts data collection until consent is given. This can be applied to all California-based IP addresses visiting your website (see the first sketch after this list).
  3. Update Privacy Disclosures
    • Accurately describe all data practices and third-party tool usage in easy-to-understand language in your privacy policy and consent pop-ups; and
    • Ensure public disclosures match actual practices; discrepancies can increase liability.
  4. Strengthen Vendor Agreements
    • Technology vendor contracts should require compliance, limit data use, and include indemnification where possible.
  5. Implement Role-Based Data Controls
    • Restrict access to personal data to only necessary personnel and systems; retain records only as long as needed (see the second sketch after this list).
  6. Educate and Align Internal Teams
    • Ensure marketing and IT teams understand CIPA risks and consent requirements. Many issues stem from misunderstanding rather than intentional disregard of these risks.
  7. Review Insurance, Indemnification, and Reputational Risk
    • Most general liability and cyber insurance policies exclude coverage for statutory privacy violations like CIPA claims. This gap may leave businesses financially exposed to high defense costs and settlements. Review policy language with brokers or counsel and seek amendments where possible.
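To make the consent-mode item concrete, here is a minimal TypeScript sketch of a default-deny consent state scoped to California using Google’s consent mode. The gtag consent calls are the standard Google tag API; the banner handler name is illustrative.

```typescript
// Declare the global gtag function installed by the Google tag snippet.
declare function gtag(...args: unknown[]): void;

// Deny analytics and ad storage by default for California visitors
// before any tags fire; "US-CA" is the ISO 3166-2 code for California.
gtag("consent", "default", {
  analytics_storage: "denied",
  ad_storage: "denied",
  region: ["US-CA"],
});

// Upgrade consent only after an affirmative user action,
// such as clicking "Accept" on the consent banner.
function onAcceptClicked(): void {
  gtag("consent", "update", {
    analytics_storage: "granted",
    ad_storage: "granted",
  });
}
```

Because the default state is set before any tag fires, no analytics data is collected from California visitors until they affirmatively opt in, which is the action-based consent these suits argue CIPA requires.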
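For the role-based data controls item, here is a minimal sketch, with hypothetical role and record names, showing raw personal data restricted by role and a simple retention purge:

```typescript
// Hypothetical roles and a flag for which may read raw personal data.
type Role = "marketing" | "support" | "privacy";

interface PersonalRecord {
  id: string;
  collectedAt: Date;
  data: Record<string, string>;
}

const RETENTION_DAYS = 365; // illustrative retention period

const canReadPersonalData: Record<Role, boolean> = {
  marketing: false, // marketing sees aggregates only, never raw records
  support: true,
  privacy: true,
};

function read(role: Role, record: PersonalRecord): PersonalRecord {
  if (!canReadPersonalData[role]) {
    throw new Error(`Role "${role}" may not access personal data`);
  }
  return record;
}

// Drop records older than the retention period.
function purge(records: PersonalRecord[], now = new Date()): PersonalRecord[] {
  const cutoff = now.getTime() - RETENTION_DAYS * 24 * 60 * 60 * 1000;
  return records.filter((r) => r.collectedAt.getTime() >= cutoff);
}
```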

Beyond direct costs, reputational harm can be significant, as plaintiffs’ firms often publicize lawsuits to exert pressure on companies and attract copycat claims. Transparent, user-friendly communication about data practices is the best defense.

What’s Next?

Many expect SB 690 or similar reform efforts to reappear in the next legislative session, and California courts will continue grappling with conflicting interpretations of CIPA. Until then, regulatory uncertainty will persist, and plaintiffs’ firms will actively exploit it. Preparation and transparency remain businesses’ best shields: proactive audits, updated disclosures, and robust consent mechanics are essential. Audit before you are accused. Legacy laws like CIPA now pose modern threats, and with reform delayed, compliance is a business-wide mandate, not just a legal question. Companies that act now to align practices, communications, and governance will be best positioned to avoid costly disputes and reputational damage.

A recent ruling from the U.S. District Court for the Northern District of California underscores the limits of state privacy statutes, particularly when plaintiffs reside outside the state and the alleged misconduct lacks a clear connection to California. The decision by Judge Jacqueline Scott Corley dismissed a proposed class action against California-based analytics company Samba TV Inc., clarifying the reach of both state and federal privacy protections. Steve Dellasala, et al., v. Samba TV, Inc., No. 3:25-CV-03470-JSC, 2025 WL 3034069 (N.D. Cal. Oct. 30, 2025).

Plaintiffs from North Carolina and Oklahoma claimed that Samba TV, whose technology is installed on certain Sony televisions, intercepted their private video-viewing data in real time and without consent. The data allegedly included unique device identifiers such as IP addresses.

These individuals brought their suit in federal court in California, asserting claims under:

  • The Comprehensive Computer Data Access and Fraud Act (CCDAFA)
  • The California Invasion of Privacy Act (CIPA)
  • The federal Video Privacy Protection Act (VPPA)
  • The common law tort of intrusion upon seclusion

Judge Corley found that the texts of both the CCDAFA and CIPA indicate the California legislature’s intent that the statutes not apply extraterritorially; that is, they do not reach conduct occurring wholly outside California. Because the alleged collection and interception of data took place from televisions in North Carolina and Oklahoma, and the complaint did not sufficiently allege that the conduct occurred within California, the court held that the California statutes were inapplicable.

The plaintiffs also invoked the federal VPPA, a law designed to protect video rental records from unauthorized disclosure. Judge Corley ruled these claims failed as well, holding that Samba TV does not qualify as a “video tape service provider” under the statute. Instead, Samba was deemed an analytics provider that uses information about video viewing rather than one that distributes, rents, or sells video materials.

Lastly, the court evaluated whether the invasion of privacy was sufficiently “highly offensive” to sustain a claim for intrusion upon seclusion. Collecting an IP address alone, without more, the court held, does not meet the legal threshold for such a claim, especially without allegations showing exactly how the data was used or disclosed.

This decision provides a few key takeaways:

  • Geographic Limitations of State Laws: California’s privacy statutes are meant to protect California residents and activities occurring within the state’s borders. Out-of-state plaintiffs cannot easily reach for these statutes in California federal court if the alleged misconduct occurred elsewhere.
  • VPPA’s Narrow Scope: Plaintiffs should be cautious in applying the VPPA to technology or analytics firms unless those entities directly provide video rental, sale, or similar services.
  • Heightened Requirements for Privacy Claims: Simply collecting identifiers like IP addresses, without more “highly offensive” conduct or misuse, may not clear the hurdle for intrusion upon seclusion or comparable claims under tort law.

This decision highlights the continued challenges in holding technology and analytics companies accountable under a patchwork of state and federal privacy laws, especially for consumers outside states with robust data privacy protections. Plaintiffs seeking redress for alleged privacy violations must pay close attention to jurisdictional limits and the scope of relevant statutes, as courts may remain vigilant in enforcing these boundaries. As data privacy concerns continue to grow, both legislatures and courts will likely face ongoing pressure to clarify and expand the reach of these protections.

The Federal Trade Commission (FTC) announced on February 1, 2023, that it has settled, for $1.5 million, its first enforcement action under its Health Breach Notification Rule against GoodRx Holdings, Inc., a telehealth and prescription drug provider.

According to the press release, the FTC alleged that GoodRx failed “to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.”

Under the proposed federal court order (the Order), GoodRx will be “prohibited from sharing user health data with applicable third parties for advertising purposes.” The complaint alleged that GoodRx told consumers it would not share their personal health information, yet monetized that information by sharing it with third parties such as Facebook and Instagram to target users with personalized health- and medication-specific ads.

The complaint also alleged that GoodRx “compiled lists of its users who had purchased particular medications such as those used to treat heart disease and blood pressure, and uploaded their email addresses, phone numbers, and mobile advertising IDs to Facebook so it could identify their profiles. GoodRx then used that information to target these users with health-related advertisements.” It also alleged that those third parties then used the information received from GoodRx for their own internal purposes to improve the effectiveness of their advertising.

The proposed Order must be approved by a federal court before it can take effect. To address the FTC’s allegations, the Order prohibits the sharing of health data for ads; requires user consent for any other sharing; stipulates that the company must direct third parties to delete consumer health data; limits the retention of data; and mandates implementation of a privacy program.

Medical information sits in the top three on the list of highly sensitive personal data to be concerned about. It’s so sensitive because it is so personal. It used to be that our medical information was located in paper charts at our doctor’s office, the hospital, the pharmacy, and our health insurer. Now it’s digital and is accessible by any of our medical providers (which is good for our treatment), pharmacies, and wearable technology and ingestible device manufacturers. And the concern is not just the medical information protected by HIPAA, but also the medical information that falls outside HIPAA, including the genetic and health data we voluntarily provide to companies like 23andMe, Fitbit, and the makers of sleep monitors.

Our medical providers send our medical information to their business associates for analytics, including utilization review, predictive analysis of our health conditions, and aggregation of the data to determine better ways to treat us, and they also share it with medical device companies in order to monitor our health. Although all of this data sharing is designed to make medical treatment more efficient, less costly, and more comprehensive, it also means that our medical information is being transmitted digitally more than ever before. Add to that the fact that our non-HIPAA-covered medical information can be aggregated with it, and, well, you get the picture. You can tell a whole lot about someone, and find out their most personal information, if that information is aggregated and then compromised.

Unfortunately, when it comes to reportable data breaches, April 2019 was the worst month since the Office for Civil Rights (OCR) began requiring covered entities and business associates to report them in 2010. Last month, 44 data breaches were reported to OCR by covered entities and business associates.

Those data breaches included the compromised medical records of 686,953 people. They were not the largest breaches in history, but they were the most reported in a single month since 2010. About two-thirds of the incidents were caused by hacking or IT incidents. This simply didn’t happen back in the day when all of our information was on paper, and medical providers have not implemented security measures robust enough to keep up with the sophisticated hacking schemes we are seeing in the industry.

That is disappointing, but not surprising. We have been reporting for years about how the healthcare industry is a target, particularly of ransomware. The two largest breaches reported last month involved a medical billing company and a radiology provider.

So how do we protect our medical information? We probably can’t have much impact on the security practices our medical providers, health insurers, and pharmacies implement. However, we can put pressure on them by asking questions about data security when we go to a provider, to show that it is a priority and concern. (Although I will admit that when I ask my provider and dentist about data security, they look at me like I am crazy.) But think about it: if we all start to ask our providers every time we go to the doctor, hospital, or pharmacy about data security, maybe they’ll start talking about it, too, and look into their data security practices. I know it’s a long shot, but if it becomes the “buzz” of the rest of 2019, maybe we can have an impact so that April 2019 goes down as, and remains, the worst month in history. Of course, OCR is the agency that enforces HIPAA (including in cases of data breaches) and investigates these incidents, but we can help put pressure on providers, too, so that data security becomes a top priority.

Other things to consider:

  • Shred all paper medical records before discarding them
  • If any medical records are stored on a CD or thumb drive, make sure the media is encrypted and destroyed when no longer needed
  • Avoid emailing medical records in an insecure way (use encryption)
  • Consider whether you want to share your medical records with genetic testing companies, health monitoring companies, or fitness apps, and read the privacy policy before you agree to participate
  • Research the privacy and security posture of medical device companies, including whether they have had any recalls or reported any data breaches
  • Ask your providers about their data security processes and tell them it is a priority for you
  • If you are storing your medical information in apps or your personal email account, encrypt the data at rest (see the sketch after this list)
  • If you are given an option when sharing your information to refrain from disclosing it to others, take that option and limit the sharing
  • Consider requesting restrictions on the access and disclosure of your medical information when you present it to the provider
  • Consider requesting an accounting of disclosures from your medical provider so you can see who the provider has shared your information with (understand that under HIPAA the provider does not have to account for disclosures made for treatment, payment, or operations)
  • Be careful about sharing your medical information on social media sites
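On the encrypt-at-rest suggestion, here is a minimal sketch using Node’s built-in crypto module to encrypt a record file with AES-256-GCM; the file names and passphrase source are purely illustrative.

```typescript
import { createCipheriv, randomBytes, scryptSync } from "crypto";
import { readFileSync, writeFileSync } from "fs";

// Derive a 256-bit key from a passphrase (passphrase source is illustrative).
const passphrase = process.env.RECORDS_PASSPHRASE ?? "change-me";
const salt = randomBytes(16);
const key = scryptSync(passphrase, salt, 32);
const iv = randomBytes(12); // standard GCM nonce length

// Encrypt a hypothetical medical record file.
const plaintext = readFileSync("lab-results.pdf");
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
const tag = cipher.getAuthTag();

// Store salt, IV, and auth tag alongside the ciphertext;
// all three are needed to decrypt and verify the file later.
writeFileSync("lab-results.pdf.enc", Buffer.concat([salt, iv, tag, ciphertext]));
```

Key management is the hard part this sketch leaves aside: keep the passphrase in a password manager or OS keychain, not in the script itself.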

The health care industry is getting attacked because medical records are worth more on the dark web than any other record. As patients, we can do our part to protect our medical information by using good data security practices and by pressuring our providers to do more when it comes to data security. Let’s ask questions about data security every time we go to the doctor, hospital, or pharmacy to let our medical providers know that our medical information is important to us and that we expect them to protect it. If all patients do this, perhaps the message will get across to the health care industry to ramp up data security measures, and April 2019 will stay behind us and remain the worst medical information data breach month in history.

I hang out with Chief Information Security Officers (CISOs) and Chief Information Officers (CIOs). I support them because they have thankless jobs and a mountain of responsibilities to protect an organization, most of the time without complete support from the organization. I try to help CISOs and CIOs get the budget they need to protect their organizations and to bridge the gap between the IT folks and the C-suite folks. That work helps protect the organization from a security incident and a potential reportable data breach, and I prefer proactive guidance to after-the-fact reporting. Therefore, I always try to give them love and support.

You should hang out with your CISO and give him/her your love and support too. They are under incredible stress, are not feeling the love, and could use some support. They spend every day, day after day, trying to protect our organizations from malicious people all over the world whose only goal is to penetrate our organizations to steal our data or sabotage our systems. Our CISOs and IT professionals are silent cyber warriors. They battle every day but we don’t see their battles and are clueless about their constant fight behind the scenes. And they are not feeling the love.

Over the past six weeks, I have been involved in a rash of O365 intrusions at organizations that had not implemented multi-factor authentication. I have also seen an increase in malicious and nasty intrusions targeted at CISOs. Phishing attacks are constant and sophisticated. It is taking a heavy toll, and I have heard anecdotally that the extreme pressure is affecting our CISOs’ mental health.

Protecting an organization is very difficult and stressful. It is causing CISOs, CIOs, and IT professionals to suffer mental health effects, including depression, anxiety, and suicidal thoughts. We can’t begin to understand the pressure they are under, and we need to give our IT professionals our support. Listen to their guidance. Think before you click. Don’t download macros. Use multi-factor authentication. Follow your organization’s policies and procedures. Help them protect your organization. And give them a hug and some love. They deserve it and need it.

There has been lots of talk about the ripple effects of the Trump travel ban. But here’s a new twist I hadn’t heard before: U.S. Customs and Border Protection (CBP) agents are detaining U.S. citizens and requiring them to unlock their phones at the border.

According to The Verge, a U.S.-born NASA scientist spent several weeks in South America pursuing his passion for racing solar-powered cars. Sounds like fun. He left for South America under the Obama administration and came back two weeks later into the Trump administration. When he arrived from Chile at customs in Houston, he was detained. My recollection is that Chile was not covered by the Trump travel ban.

According to the scientist, although he was enrolled in Global Entry and has worked for a NASA department for 10 years, he was detained and pressured by CBP agents to hand over his NASA phone and access PIN. (Did I mention that his last name is Bikkannavar?) NASA policy did not allow him to divulge the information on the phone, but the CBP agents waved a blue paper in front of him entitled “Inspection of Electronic Devices,” saying they had authority to search the phone, and threatened that if he did not give them his PIN, he would not be able to leave. Although the law does not require passengers to give their PINs, reports are that if you do not divulge your PIN, you will be detained for hours at a minimum.

The scientist divulged his PIN and a border patrol officer took the device and came back in 30 minutes. This is what we have been warning U.S. companies about for years with foreign governments. But in the U.S.? With U.S. citizens?

When he brought his phone to his IT department at NASA, they were not happy. Nor would any other organization’s IT department. This is a problem for individuals and companies.

Although savvy security experts have taken extreme travel precautions when traveling to Russia or China, it appears that they are now providing advice for U.S. citizens traveling abroad and returning to their homeland in this environment.

Wired has issued “A Guide to Getting Past Customs With Your Digital Privacy Intact,” a step-by-step guide for U.S. citizens to protect their privacy when hitting the U.S. border after travel abroad. The experts interviewed posited that U.S. citizens should be as paranoid about CBP as about traveling to Russia or China. The ACLU reports that customs agents are demanding passwords to phones and social media accounts. Department of Homeland Security Secretary John Kelly has stated that foreign travelers from the seven Muslim-majority countries will be required to provide their social media account passwords or be denied entry. Many are wondering whether the Fourth Amendment has disappeared.

Here are what the experts are saying we should consider when returning home from travel abroad to protect privacy of our own and our company data:

  • Lock Down Devices
  • Keep Passwords Secret
  • Phone Home to Have a Lawyer Ready to Help
  • Make a Travel Kit of a Device that Stores Minimal Information
  • Deny Yourself Access So They Can’t Get Access

What do we have to hide? Actually, nothing personal. So what’s the big deal? Like the NASA scientist, we are all under an obligation to protect the data of our companies and our clients, and handing over our phones and passwords, with company data on them, to CBP for 30 minutes is unsettling and intrusive. It is something that happens in other countries, not the United States. Other countries search their citizens without a warrant; we don’t. Well, that’s just not true anymore.

Companies are under tremendous pressure to reduce IT costs. Cloud and Software as a Service (SaaS) offerings promise significant cost reductions through shared infrastructure and standardized software. However, significant concerns often arise when the service or application stores or processes personally identifiable information, important intellectual property, or other sensitive information, when the system is business-critical, or when the solution opens avenues into a company’s core systems.

A new application of software “container” technology offers a potential approach that may reduce many of the risks in current SaaS offerings while allowing for more security and control. Containers as a Service (CaaS), primarily built on software from the open source Docker Project, allows software to be embedded in a container and delivered to a party without regard to the recipient’s particular infrastructure. This allows the purchaser of the software to choose among different models of software operation, from fully hosted cloud to on-premises behind a firewall.

As more software is developed using the Docker framework, there are expected to be increased choices for software deployment within and outside an organization. This will require software providers to develop new pricing models that better reflect the resources necessary to support a customer, and customers to understand the shifting risk issues that result from licensing and running software in a new manner. New licenses need to be developed, and the license compliance implications of adding existing software to containers must also be addressed. Using Docker security and trust services would provide an extra layer of protection, as would requiring SDLC controls and a SOC 2 report as minimum requirements.

This article courtesy of guest blogger Prof. Peter Margulies of Roger Williams University School of Law and originally appeared in the Privacy blog of The Lawfare Institute.

If the devil is in the details, then the announcement early Monday of the inner workings of the new US-EU data-transfer agreement, Privacy Shield, may lack the granularity the deal needs to flourish. There is much to applaud in the new agreement, including extraordinary transparency from the US and a new safeguard to address EU privacy complaints in the form of a State Department Ombudsperson. Those virtues, however, may not be sufficient to ensure the viability of Privacy Shield, which replaces the Safe Harbor framework invalidated by the Court of Justice of the European Union (CJEU) in Schrems v. Data Protection Commissioner.

The CJEU struck down Safe Harbor on the grounds that it lacked both substantive and independent procedural protections against US intelligence collection. The Privacy Shield roll-out is short on concrete information regarding the State Department Ombudsperson’s authority and is instead reliant on broad US “representations” regarding substantive limits on foreign intelligence collection. The CJEU may not be impressed, especially since the CJEU rarely provides European officials with the deference supplied by the European Court of Human Rights (ECHR).

First, the good in Privacy Shield: ODNI General Counsel Bob Litt’s letter reinforces a salutary trend toward transparency that ODNI has championed since the Snowden revelations. To my knowledge, no intelligence service has provided close to the level of detail about intelligence community (IC) structure and decision making that the ODNI letter provides, as it builds on the commitment announced by President Obama in his PPD-28 initiative. The ODNI letter painstakingly describes several layers of review within the IC, including the setting of priorities by the National Signals Intelligence Committee (SIGCOM). In comparison, most European states continue to keep mum about their own internal processes.

The ODNI letter also reaffirms substantive limitations in PPD-28. Bulk collection abroad, which ODNI says may sometimes be necessary to “identify new or emerging threats” concealed in the forest of global data, is limited to the grounds specified in PPD-28, including counterterrorism, combating weapons proliferation, addressing transnational illegality including sanctions evasion, detecting threats to US or allied forces, and learning about certain activities of foreign powers. The US also reiterates its PPD-28 pledge not to collect information in bulk for the purposes of suppressing dissent, disadvantaging individuals or groups based on criteria such as race, gender, or religion, or supplying US firms with a competitive advantage. Moreover, the IC cannot engage in the “arbitrary or indiscriminate collection” of data regarding “ordinary European citizens.”

The ODNI letter commits the IC to tailoring collection. Analysts will focus on “specific foreign intelligence targets or topics through the use of discriminants (e.g., specific facilities, selection terms and identifiers)” whenever that specific approach is “practicable.” Moreover, the IC has multiple layers of internal review, including the ODNI Civil Liberties and Privacy Office. I would add that my own conversations with ODNI and NSA privacy officials—who regularly engage with the public and the privacy community—reinforce my view that this internal control is indeed robust. Other constraints within the executive branch include inspectors general who report regularly to Congress, and the Privacy and Civil Liberties Oversight Board (PCLOB), which has authored well-received reports on U.S. surveillance. In addition, ODNI notes that the Foreign Intelligence Surveillance Court (FISC) now has statutory authority to appoint independent advocates, including noted privacy advocates. And, of course, Congress can also monitor the IC, exerting budgetary pressure if it sees something untoward. The FISC’s authority to appoint independent attorneys stems from statutory changes, including the USA Freedom Act, negotiated with the Administration in the wake of Snowden’s disclosures.

That’s the good in the Privacy Shield roll-out; now for the bad. First, the US representations that it won’t engage in “arbitrary or indiscriminate” collection on Europeans are described only in general terms. The European Commission (EC) statement that the new framework has “adequate” protections for Europeans relies on “explicit assurances” provided by the US. However, the EC statement shares nothing on what those assurances entail. Since the US and the EC have significant business interests dependent on a new privacy agreement, some may question whether those assurances are as robust as the CJEU or EU privacy regulators would prefer. There is simply no way to judge, based on the materials disclosed thus far.

Moreover, the ODNI letter does not address a central EU concern with the status quo: the vagueness of the “foreign affairs” basis for collection under section 702 of the Foreign Intelligence Surveillance Amendments Act (for more, see Tim Edgar’s analysis). I’ve written previously that the foreign affairs prong of section 702 is limited by language that confines such collection to matters concerning a “foreign power” or “territory.” I continue to believe that this language focuses the foreign affairs prong on collection relating to foreign officials and does not extend to monitoring of foreign persons’ routine activities. Perhaps the assurances that US officials provided to the EC confirm this view. Moreover, perhaps the FISC can provide a check to unduly broad interpretation of this provision, since the EC adequacy analysis states that the IC has agreed to a PCLOB recommendation to provide the FISC with a random sample of analysts’ tasked searches. However, the lack of public reassurance on this score underlines a concern of the EC Working Group that the CJEU highlighted in Schrems.

Furthermore, procedural safeguards outlined by ODNI may not be as robust as the CJEU wishes. The inspectors general, for example, are hampered by a recent Justice Department Office of Legal Counsel opinion that allows executive branch agencies to limit disclosure of data to inspectors general conducting investigations. Moreover, the FISC has no control over the United States’ biggest foreign collection program, which is based on Executive Order 12333. The State Department Ombudsperson may have the authority to address complaints that involve EO 12333, but the announcement is not clear on this point. The Ombudsperson description in Annex III of the roll-out says that this official will “work closely” with other government officials. Nevertheless, the description does not specify that the Ombudsperson will have full access to IC data and procedures.

Similarly, according to the EC statement, the Ombudsperson will have to “confirm” that each complaint received has been “properly investigated.” To confirm this, the Ombudsperson must ascertain that surveillance has complied with US law, including the “representations” and “explicit assurances” that the US has provided, or that any violation has been remedied. However, this confirmation brings us back to the lack of specificity in the public version of those US “representations.” It is difficult to see how robust the Ombudsperson’s review will be, when so much depends on assurances that are not accessible to the public, the CJEU, or European data regulators.

As Privacy Shield is implemented, the Ombudsperson may develop a course of dealing with the IC that addresses these concerns. Experience might demonstrate that the Ombudsperson has access to all the information that she needs, and uses that information to keep the IC honest. But that experience will be outside of the four corners of the Privacy Shield’s founding documents, making consideration of experience’s teachings a tougher sell with skeptical actors such as the CJEU.

That brings us to the ugly. The CJEU should provide some deference to the EC, particularly on matters involving national security. That deference is apparent in decisions of the ECHR on surveillance, such as Weber v. Germany, which upheld a substantial overseas surveillance program conducted by the German Republic. However, the CJEU has in practice diminished deference to near-microscopic levels in cases like Schrems and Kadi v. Council, which invalidated the EU’s implementation of the UN’s terrorist sanctions framework. Indeed, the framework invalidated in Kadi II also involved an ombudsperson, who had been effective in ensuring fairness to subjects of sanctions. This real-world efficacy made no difference to the CJEU. Instead, the CJEU insisted on a more formal due process mechanism, which was unworkable because of states’ reluctance to disclose intelligence sources and methods supporting terrorist designations.

The CJEU may also have concerns about the independence of the State Department Ombudsperson for Privacy Shield. True, that official will not formally be part of the IC, and in this sense will be independent. Nevertheless, the State Department is also an executive branch department, and is a customer of the IC, making use of intelligence that the IC provides.  The President can fire the Ombudsperson, as he or she can fire IC officials. The Ombudsperson may as a practical matter retain independence, as inspectors general do, because of her different constituency. But that belief hinges on institutional culture more than formal legal guarantees. Institutional culture may be too weak a reed to support Privacy Shield, particularly for a court as activist as the CJEU.

In sum, Privacy Shield brings much to the table, including a welcome US candor that will hopefully rub off on our more reticent European allies. The Ombudsperson proposal has significant promise. However, it is too early to tell whether the Ombudsperson can develop a track record of effectiveness that persuades the CJEU and European regulators who found Safe Harbor wanting.