Many companies are exploring the use of generative artificial intelligence technology ("AI") in day-to-day operations. Some companies prohibit the use of AI until they get their heads around the risks. Others allow AI and are waiting to see how things shake out before determining a company stance on its use. And then there are the companies doing a bit of both, permitting limited beta testing.

No matter which camp you are in, it is important to set a strategy for the organization now, before users adopt AI and the horse is out of the barn, much like we are seeing with the issues around TikTok. Once users grow accustomed to the technology in day-to-day operations, it will be harder to pull them back. And users don't necessarily understand the risks they pose to their organizations when they use AI in their work.

Hence, the need to evaluate the risks, set a corporate strategy around the use of AI in the organization, and disseminate the strategy in a clear and meaningful way to employees.

We have learned much from the explosion of technology, applications, and tools over the last few decades, through our experience with social media, tracking technology, disinformation, malicious code, ransomware, security breaches, and data compromise. As an industry, we responded to each of those risks in a haphazard way. It would be prudent to learn from those lessons and get ahead of AI adoption to reduce the risk its use poses.

One suggestion is to form a group of stakeholders from across the organization to evaluate the risks posed by the use of AI, determine how the organization can reduce those risks, set a strategy for AI use within the organization, and put controls in place to educate and train users. Setting a strategy around AI is no different than addressing any other organizational risk, and similar processes can be used to develop a plan and program.

There are myriad resources to consult when evaluating the risk of using AI. One I found helpful is A CISO's Guide to Generative AI and ChatGPT Enterprise Risks, published this month by the Team8 CISO Village.

The report describes the risks to consider, categorizes each as High, Medium, or Low, and then explains how to make risk decisions. It is spot on and a great resource if you are just starting the conversation within your organization.