According to a new LayerX report, most users are logging into GenAI tools through personal accounts that are not supported or tracked by an organization’s single sign-on policy. These logins to AI SaaS applications are unknown to the organization and are “not subject to organizational privacy and data controls by the LLM tool.” This is because most GenAI users are “casual, and may not be fully aware of the risks of GenAI data exposure.” As a result, a small number of users can expose large volumes of data. LayerX concludes that “[a]pproximately 18% of users paste data to GenAI tools, and about 50% of that is company information.” LayerX also found that 77% of users rely on ChatGPT among online LLM tools.

We have outlined on several occasions the risk of data leakage with GenAI tools, and this report confirms that risk.

In addition, the report notes that “most organizations do not have visibility as to which tools are used in their organizations, by whom, or where they need to place controls.” Further, “AI-enabled browser extensions often represent an overlooked ‘side door’ through which data can leak to GenAI tools without going through inspected web channels, and without the organization being aware of this data transfer.”

LayerX provides solid recommendations to CISOs, including:

  • Audit all GenAI activity by users in the organization (a minimal sketch of this step follows the list below)
  • Proactively educate employees and alert them to the risks of GenAI tools
  • Apply risk-based restrictions “to enable employees to use AI securely”
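
For the audit recommendation, one practical starting point is scanning web proxy or secure gateway logs for traffic to known GenAI domains. The following is a minimal sketch in Python, not LayerX’s method; the log format, file name, and domain list are illustrative assumptions that would need to be adapted to your environment.

```python
# Minimal sketch: flag GenAI traffic in a web proxy log.
# Assumptions (hypothetical): logs are CSV lines of
# "timestamp,user,destination_host", and the domain list below
# is illustrative, not exhaustive.
import csv
from collections import Counter

GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com", "grammarly.com",
}

def audit_genai_activity(log_path: str) -> Counter:
    """Count GenAI destinations per user from a proxy log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 3:
                continue  # skip malformed lines
            _timestamp, user, host = row
            # Match the host or any parent domain on the watchlist.
            parts = host.lower().split(".")
            candidates = {".".join(parts[i:]) for i in range(len(parts))}
            if candidates & GENAI_DOMAINS:
                hits[(user, host.lower())] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in audit_genai_activity("proxy.log").most_common():
        print(f"{user} -> {host}: {count} requests")
```

Note that a sketch like this only sees traffic that passes through inspected web channels; as the report observes below, AI-enabled browser extensions can bypass those channels entirely.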

Employees must do their part as well. CISOs can implement operational measures to attempt to mitigate the risk of data leakage, but employees should follow organizational policies around the use of GenAI tools, collaborate with employers on the appropriate and authorized use of GenAI tools within the organization, and take responsibility for securing company data.

I always watch what the federal government requires of its employees’ use of technology to get a feel for the risks and for what is coming down the pike from a regulatory standpoint, and I have done so for years. That’s why I was one of the first to get a cover for my laptop camera, why I have been concerned about traveling to foreign countries with laptops, and why I was worried about geofencing and location-based services long before they were commonly understood (and I would argue they are still NOT commonly understood).

A perfect example is that the federal government was the first to ban the use of TikTok by its employees. Then states followed, prohibiting state employees from using TikTok on state-issued phones. Why? Because it is spyware. Not long after, Congress passed a bipartisan bill that would ban TikTok nationwide unless its parent company divests it.

I predict the same will be true with GenAI tools. In April, the U.S. House “set a strict ban on congressional staffers’ use of Microsoft Copilot,” after restricting staffers’ use of ChatGPT last year. The reason? “The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services.” Therefore, it “will be removed from and blocked on all House Windows devices.”

There are several risks associated with using GenAI tools in the workplace, including the risk of exposing company data. Although these tools can make our work lives more efficient, it is essential to understand the risks and manage them personally and professionally. I thought this article by Wired did a good job of explaining the risks in a cogent and efficient way, and it is worth a read.

A new US National Cybersecurity Alliance (NCA) survey shows that over one-third (38%) of employees “share sensitive work information with artificial intelligence (AI) tools without their employer’s permission.” Not surprisingly, “Gen Z and millennial workers are more likely to share sensitive work information without getting permission.”

The problem with employees sharing workplace data with chatbots is that if a worker inputs sensitive personal information or proprietary information into the model, that information may then be used to train the model. If another user later enters a query to which the original information is responsive, the sensitive or proprietary data can be provided in the response. That’s how generative AI works: the data disclosed is used to teach the model and is no longer private.
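
One operational way to blunt this risk is to screen prompts for sensitive patterns before they ever leave the organization. Below is a minimal sketch, assuming a few illustrative regex patterns; it is not any vendor’s product, and a real data loss prevention control would be far more thorough.

```python
# Minimal sketch: redact obviously sensitive patterns from a prompt
# before it is sent to an external GenAI tool. The patterns below are
# illustrative; real data-loss-prevention rules are more extensive.
import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders; report which rules fired."""
    fired = []
    for label, pattern in REDACTION_PATTERNS.items():
        prompt, n = pattern.subn(f"[{label} REDACTED]", prompt)
        if n:
            fired.append(label)
    return prompt, fired

clean, fired = redact("Client John Doe, SSN 123-45-6789, emailed j.doe@example.com")
print(clean)   # Client John Doe, SSN [SSN REDACTED], emailed [EMAIL REDACTED]
print(fired)   # ['SSN', 'EMAIL']
```

A filter like this does not eliminate the risk; it only catches well-formed identifiers, not free-text trade secrets, which is why training and policy still matter.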

According to Dark Reading, several cases illustrate how significant the risk is when employees share confidential information with chatbots:

“A financial services firm integrated a GenAI chatbot to assist with customer inquiries, …Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools.”

Another real-world example involves the inadvertent disclosure of proprietary and confidential information by a misinformed employee:

“An employee, for whom English was a second language, at a multinational company, took an assignment working in the US…. In order to improve his written communications with his US based colleagues, he innocently started using Grammarly to improve his written communications. Not knowing that the application was allowed to train on the employee’s data, the employee sometimes used Grammarly to improve communications around confidential and proprietary data. There was no malicious intent, but this scenario highlights the hidden risks of AI.”

These examples are more common than we think, and the percentage of employees using generative AI tools is only growing.

To combat the risk of inadvertent disclosure of company data by employees, it is essential for companies to develop and implement an AI Governance Program and an AI Acceptable Use Program, and to provide training to employees about the risks and appropriate uses of AI in the organization. According to the NCA survey, more than half of all employees have NOT been trained on the safe use of AI tools. According to the NCA, “this statistic suggests that many organizations may underestimate the importance of training.”

Employees’ use of unapproved generative AI tools poses a risk to organizations because IT professionals cannot adequately secure the environment against tools that fly under their radar. Now is the time to develop governance over AI use, determine appropriate and approved tools for employees, and train them on the risks and safe use of AI in your organization.
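
In practice, determining appropriate and approved tools often takes the shape of a tool registry that IT can enforce and employees can consult. A minimal sketch follows, with hypothetical tool names and policy fields invented purely for illustration.

```python
# Minimal sketch of an approved-tools policy check. The tool IDs and
# policy fields are hypothetical placeholders for a real registry.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolPolicy:
    name: str
    approved: bool
    allowed_data: str  # e.g., "public only", "internal", "confidential"

POLICY = {
    "chatgpt-enterprise": ToolPolicy("ChatGPT Enterprise", True, "internal"),
    "personal-chatgpt": ToolPolicy("ChatGPT (personal account)", False, "none"),
    "grammarly-free": ToolPolicy("Grammarly (free tier)", False, "none"),
}

def check_tool(tool_id: str) -> str:
    policy = POLICY.get(tool_id)
    if policy is None:
        # Anything not in the registry is treated as unapproved by default.
        return f"'{tool_id}' is not in the registry: treat as unapproved."
    if not policy.approved:
        return f"{policy.name} is not approved for company data."
    return f"{policy.name} is approved for {policy.allowed_data} data."

print(check_tool("personal-chatgpt"))  # not approved for company data
```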

The State of California, under the leadership of Governor Gavin Newsom, has taken the lead among its sister states in mobilizing resources to investigate the risks of generative artificial intelligence (GenAI) tools and to develop policies addressing them.

Following in the footsteps of Colorado, this week the Governor signed into law an amendment to the California Consumer Privacy Act that includes neural data as protected data covered by the law. The law applies to any devices that can record or alter nervous system activity, including implants and wearables. The amendment protects neural data collected through neurotechnologies and equates it to other sensitive data collected by companies, such as fingerprints, iris scans, and other biometric information.

The bill was supported by the Neurorights Foundation, which stated that the law sends a “clear signal to the fast-growing neurotechnology industry” to protect people’s mental privacy. This means that private companies collecting brain data must provide consumers with notice of collection, the opportunity to limit disclosure to third parties, and the ability to request deletion.

The amendment provides privacy guardrails applicable to neurotechnologies where other laws, like HIPAA, may not apply, protecting the data from unauthorized collection, use, and disclosure.

In addition to signing the neural data amendment into law, Governor Newsom announced that he has signed 17 bills “covering the deployment and regulation of GenAI technology…cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation.” He has convened experts in the field to study the threats of GenAI and develop “workable guardrails for deploying GenAI,” and “explore approaches to use GenAI technology in the workplace.”

The initiatives in California are designed to “protect Californians from fast-moving and transformative GenAI technology.” We have closely watched California’s efforts to tackle data privacy and security threats and issues over many years, as well as its responses to them. California is usually at the forefront of these issues, and other states usually follow its lead (e.g., data breach notification, the California Online Privacy Protection Act, and the California Consumer Privacy Act). Watching California’s progress in responding to the risks of GenAI is probably a good predictor of how other states will respond.

It would be preferable for Congress to take the lead on this issue, but as we have seen in the past, the hope of a national law in the face of fast-moving technology and its risks has never materialized. Because Congress is too slow to move, states are stepping in to protect their consumers, and we are poised to have a patchwork system of regulation for GenAI technology. This is not sound public policy for companies or consumers. Let’s hope Congress can get ahead of the curve, but for now, based on our long experience watching the development of data privacy and security laws, we will continue to watch California’s efforts.

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512 titled “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical considerations lawyers are required to consider when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, and client communication; raising only meritorious claims; candor toward the tribunal; supervisory responsibilities over others; and the setting of fees.

Competence

The opinion reiterates previous ABA opinions that lawyers are required to have a reasonable understanding of the capabilities and limitations of the specific technologies they use, and to remain “vigilant” about the benefits and risks of technology, including GenAI tools. It specifically notes that attorneys must be aware of the risk of inaccurate output or hallucinations from GenAI tools and that independent verification is necessary when using them. According to the opinion, users must evaluate the tool being used, analyze its output, and not rely solely on the tool’s conclusions; they cannot replace their own judgment with that of the tool.

Confidentiality

The opinion reminds lawyers that they are ethically required to make reasonable efforts to prevent inadvertent or unauthorized access to, or disclosure of, information relating to the representation of a client. It suggests that, before inputting data into a GenAI tool, a lawyer must evaluate not only the risk of unauthorized disclosure outside the firm, but also possible internal unauthorized disclosure in violation of an ethical wall or access controls. The opinion stresses that if client information is uploaded to a GenAI tool within the firm, the client data may be disclosed to and used by other lawyers in the firm, without the client’s consent, to benefit other clients. The client data input into the GenAI tool may be used for self-learning or teaching an algorithm that then discloses the client data without the client’s consent.

The opinion suggests that before submitting client data to a GenAI tool, lawyers must review the tool’s privacy policy, terms of use, and all contractual terms to determine how the GenAI tool will collect and use the data in the context of the ethical duty of confidentiality with clients.

Further, the opinion suggests that if lawyers intend to use GenAI tools to provide legal services to clients, they are required to obtain informed client consent before using the tool. The lawyer is required to inform the client of the use of the GenAI tool and the risks of its use, and then obtain the client’s informed consent prior to use. Importantly, the opinion states that “general, boiler-plate provisions [in an] engagement letter” are not sufficient to meet this requirement.

Communication

With regard to lawyers’ duty to effectively communicate information that is in the best interest of their client, the opinion notes that—depending on the circumstances—it may be in the best interest of the client to disclose the use of GenAI tools, particularly if the use will affect the fee charged to the client, or the output of the GenAI tool will influence a significant decision in the representation of the client. This communication can be included in the engagement letter, though it may be appropriate to communicate directly with the client before including it in the engagement letter.

Meritorious Claims + Candor Toward Tribunal

Lawyers are officers of the court and have an ethical obligation to put forth meritorious claims and to be candid with the tribunal before which such claims are presented. In the context of the use of GenAI tools, as stated above, there is a risk that without appropriate evaluation and supervision (including the use of independent professional judgment), the output of a GenAI tool can sometimes be erroneous or considered a “hallucination.” Therefore, to reiterate the ethical duty of competence, lawyers are advised to independently evaluate any output provided by a GenAI tool.

In addition, some courts require that attorneys disclose whether GenAI tools have been used in court filings. It is important to research and follow local court rules and practices regarding disclosure of the use of GenAI tools before submitting filings.

Supervisory Responsibilities

Consistent with other ABA Opinions relevant to the use of technology, the opinion stresses that managerial responsibilities include providing clear policies to lawyers, non-lawyers, and staff about the use of GenAI in the practice of law. I think this is one of the most important messages of the opinion. Firms and law practices are required to develop and implement a GenAI governance program, evaluate the risk and benefit of the use of a GenAI tool, educate all individuals in the firm on the policies and guardrails put in place to use such tools, and supervise their use. This is a clear message that lawyers and law firms need to evaluate the use of GenAI tools and start working on developing and implementing their own AI governance program for all internal users.

Fees

The key takeaway of the fees section of Opinion 512 is that a lawyer cannot bill a client for the time spent learning how to use a GenAI tool. Consistent with other opinions relating to fees, only extraordinary costs associated with the use of GenAI tools may be billed to the client, and only with the client’s knowledge and consent. In addition, the opinion points out that any efficiencies gained through the use of GenAI tools (with the client’s consent) should benefit the client through reduced fees.

Conclusion

Although consistent with other ABA opinions related to the use of technology, an understanding of ABA Opinion 512 is important as GenAI tools become more ubiquitous. It is clear that the ABA, as well as state bar associations, will issue additional opinions related to the use of GenAI tools, and that this is a topic of interest in the context of adherence to ethical obligations. A clear message from Opinion 512 is that now is a good time to consider developing an AI governance program.