Businesses are increasingly using artificial intelligence (AI) to innovate. This trend brings about new risks to intellectual property (IP), including new challenges in procuring IP protection and new risks of IP infringement. Part I of this two-part post will focus on IP procurement.

Patenting an invention requires pinning down the correct inventorship. Yet, the U.S. Court of Appeals for the Federal Circuit recently determined that only humans can be named as inventors on a patent. Thaler v. Vidal, 43 F.4th 1207, 1210 (Fed. Cir. 2022), cert. denied, No. 22-919 (U.S. Apr. 24, 2023). This development has significant implications for any company seeking to protect AI-generated innovations. The U.S. Patent and Trademark Office is currently seeking public comments on AI inventorship.

Patents must also sufficiently describe the invention so as to enable a person of ordinary skill in the art to carry out the invention. This is uniquely challenging for AI inventions, due to the “black box” nature of some AI engines. There is potential for near-term evolution in this area of patent law. How can businesses ensure that patent applications filed today will meet future standards? Companies should be aware of these potential shifts and adapt their IP strategies accordingly.

Copyrighting AI-generated content is also topical. Presently, whether AI-generated subject matter is copyrightable may turn on the level of human contribution. Moreover, determining who owns the copyright may depend on contractual provisions (e.g., website terms of service).

Obstacles in IP procurement can pose significant competitive risks. Companies utilizing AI for innovation should develop IP procurement strategies that are adapted to their evolving business model.

While speaking recently at a conference hosted by Vanderbilt University, Jen Easterly, the Director of the Cybersecurity and Infrastructure Security Agency (CISA), urged the development of regulations around the use of artificial intelligence (AI). According to reporting by Reuters, Easterly recalled the lessons learned from the lack of security in the design of the Internet and software, and from the relationship between social media platforms and mental health issues. Reuters also reported that Easterly commented that “the failure to identify risks before past technologies were widely deployed has left policy-makers and cyber defenders scrambling to address the worst threats from those developments.”

She further stated, “AI will be the most powerful capability of our time, and I believe it will also be the most powerful weapon of our time, and we cannot afford to make the same mistakes with this epoch-defining technology that we’ve made with the Internet and with software and with social media.”

She emphasized that China has already “established guardrails to ensure that AI represents Chinese values, and the U.S. should do the same,” which gives the U.S. “an opportunity to govern AI in a way that embeds democratic values in its use and its deployment.”

There are many lessons to be learned from the past few decades of technological development and the unintended consequences of innovative technologies. We need to learn those lessons rather than repeat past mistakes.

Not a moment goes by without a new alert of some sort about artificial intelligence (AI). The proliferation of articles and commentary about AI is astounding. It is a hot topic, to say the least.

I am thinking this is a brilliant career path. Private industry, government agencies, consulting firms: everyone is looking for talent in the AI space.

As an example, the Department of Defense just announced that it is preparing to release a new data and AI implementation strategy, allowing it to add 10 new data- and AI-related roles to the Pentagon’s cyber workforce. Sounds pretty awesome.

There is clearly a need to prepare students and young professionals with skills to address the complexities and governance of the use of AI in organizations.

I know I am preparing to devote significant time during my Privacy Law class at Roger Williams University School of Law to discussing AI, to start preparing our future talent in this space. I urge others to do the same. There will be a tremendous need for talent with knowledge of AI to help organizations navigate the complexities of its use. It is an exciting opportunity for those who seize it.

Researchers at the cybersecurity firm WithSecure have seen two malware attacks against Veeam Backup & Replication servers believed to be initiated by the cybercrime group FIN7 (also known as Carbon Spider), which has been linked to the Darkside, BlackMatter, and BlackCat/ALPHV ransomware variants.

The WithSecure investigators believe that the attacks may be part of a larger campaign, but that the scope of the attack so far appears limited. Nonetheless, because of FIN7’s sophistication, WithSecure recommends that companies using Veeam’s solutions follow Veeam’s recommendations and guidelines to patch and configure their backup servers against the recently discovered vulnerability outlined in KB4424 (CVE-2023-27532), and watch for signs of compromise.
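One of Veeam’s recommended mitigations for CVE-2023-27532 is blocking external access to the backup server’s vulnerable service port, reported as TCP 9401. The following is a minimal sketch (the hostname is a placeholder; substitute your own backup server) for checking whether that port is reachable from a given vantage point. It only tests network exposure, not whether the patch is installed.

```python
import socket

# Placeholder hostname -- set this to your Veeam backup server.
HOST = "backup.example.internal"
PORT = 9401  # TCP port reported in Veeam's CVE-2023-27532 advisory


def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False


if __name__ == "__main__":
    exposed = port_open(HOST, PORT)
    print(f"{HOST}:{PORT} reachable: {exposed}")
```

If the port is reachable from untrusted networks, firewall it off per the advisory in addition to applying the patch.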

OpenAI, the developer of ChatGPT, stated that ChatGPT suffered a potential data breach caused by a vulnerability in an open-source library the software uses. OpenAI “took ChatGPT offline…due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history…the same bug may have caused the unintentional visibility of payment-related information” of some ChatGPT subscribers who were using ChatGPT on March 20, 2023. The information that may have been accessible included “name, email address, payment address, credit card type and the last four digits (only) of a credit card number, and credit card expiration date.”

According to OpenAI, “the bug was discovered in the Redis client open-source library,” which OpenAI used to cache user information.
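OpenAI’s description points to a class of bug where a request cancelled mid-flight leaves a connection out of sync, so the next caller receives the previous caller’s response. The following is a deliberately naive toy client illustrating that failure mode; it is not OpenAI’s or the Redis library’s actual code, and the names and delays are invented for the sketch.

```python
import asyncio


class NaiveCacheClient:
    """Toy client: one shared reply queue, nothing ties replies to requests."""

    def __init__(self):
        self._replies = asyncio.Queue()

    async def _request(self, key):
        # Simulate a server that replies after a short network delay.
        await asyncio.sleep(0.01)
        await self._replies.put(f"value-for-{key}")

    async def get(self, key):
        # Fire off the request...
        asyncio.ensure_future(self._request(key))
        # ...and wait for *a* reply. BUG: if a previous caller was
        # cancelled while waiting, its orphaned reply is delivered here.
        return await self._replies.get()


async def demo():
    client = NaiveCacheClient()
    task = asyncio.create_task(client.get("user-A-history"))
    await asyncio.sleep(0)     # let user A's request go out
    task.cancel()              # user A's request is cancelled mid-flight
    await asyncio.sleep(0.05)  # A's reply still lands in the shared queue
    # User B now receives user A's cached data.
    return await client.get("user-B-history")


leaked = asyncio.run(demo())
print(leaked)  # prints "value-for-user-A-history" -- B sees A's data
```

The fix for this class of bug is to discard the connection (or match replies to request IDs) whenever a request is cancelled before its reply is consumed.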

There is concern not only that threat actors will use artificial intelligence tools to attack victims, but also that the very companies developing those tools will be directly attacked, as OpenAI was, giving threat actors the ability to further weaponize chatbots. According to Security Intelligence, “Only time will tell if the technology will be the victim of attacks or the source.”

Researchers at Meta, the owner of Facebook, released a report this week indicating that since March 2023, Meta “has blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies.” The links were unique ChatGPT-themed web addresses designed to deliver malicious software to users’ devices.

According to Meta’s report, “to target businesses, malicious groups often first go after the personal accounts of people who manage or are connected to business pages and advertising accounts. Threat actors may design their malware to target a particular online platform, including building in more sophisticated forms of account compromise than what you’d typically expect from run-of-the-mill malware.”

In one recent campaign, the threat actor “leveraged people’s interest in OpenAI’s ChatGPT to lure them into installing malware…we’ve seen bad actors quickly pivot to other themes, including posing as Google Bard, TikTok marketing tools, pirated software and movies, and Windows utilities.”

The Meta report provides useful tools to guard against these attacks, as well as steps to take in the event a device is affected.

Bad actors will use the newest technology as weapons. According to Cyberscoop, Meta researchers have said “hackers are using the skyrocketing interest in artificial intelligence chatbots such as ChatGPT to convince people to click on phishing emails, to register malicious domains that contain ChatGPT information and develop bogus apps that resemble the generative AI software.”
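The malicious-domain pattern Meta and Cyberscoop describe (brand-themed lookalike addresses) can be screened with a simple heuristic. The sketch below is illustrative only, not Meta’s actual system; the allowlist is an assumption you would adjust to your own policy.

```python
# Hosts treated as legitimate -- an assumption for this sketch;
# expand this set to match your organization's policy.
LEGITIMATE_HOSTS = {"openai.com", "chat.openai.com"}


def is_suspicious(hostname: str) -> bool:
    """Flag hosts that invoke the ChatGPT/OpenAI brand but are not allowlisted."""
    host = hostname.lower().rstrip(".")
    if host in LEGITIMATE_HOSTS or host.endswith(".openai.com"):
        return False
    return "chatgpt" in host or "openai" in host


print(is_suspicious("chat.openai.com"))             # False
print(is_suspicious("chatgpt-app-download.example"))  # True
```

A substring check like this is crude (it misses typosquats such as “chatgtp”), but it shows how a brand-lure filter can be layered into email or web gateways.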

With any new technology comes new risk. Staying abreast of these risks and understanding how threat actors can pivot from personal accounts to business accounts may prevent attacks against individuals and their companies.

The Foundation for Defense of Democracies issued a Report late last week entitled “Time to Designate Space Systems as Critical Infrastructure,” which cogently outlines the risks associated with space systems (which are basically the same as those of any other electronic system) and argues for designating space systems as the seventeenth critical infrastructure sector.

Space systems are defined in the Report as “the ecosystem from ground to orbit, including sensors and signals, data and payloads, and critical technologies and supply chains.” The Report outlines why designating space systems as critical infrastructure is a matter of national security: “the threat from Russia and China is growing. Both those authoritarian powers have placed American and partner space systems in their crosshairs, as demonstrated by their testing of anti-satellite (ASAT) capabilities.”

The Report lists recommendations for the Executive Branch, Congress, Industry, and Industry and Government Together. Its conclusion is that “the United States needs a more concerted and coherent approach to risk management and public-private collaboration regarding space systems infrastructure.” Space systems face risks similar to those of any other system; they are simply in a different location. Those risks need to be addressed just as they are in other critical infrastructure sectors.

Many companies are exploring the use of generative artificial intelligence technology (“AI”) in day-to-day operations. Some companies prohibit the use of AI until they get their heads around the risks. Others are allowing the use of AI technology and waiting to see how it all shakes out before determining a company stance on its use. And then there are the companies that are doing a bit of both and beta testing its use.

No matter which camp you are in, it is important to set a strategy for the organization now, before users adopt AI and the horse is out of the barn, much like we are seeing with the issues around TikTok. Once users get used to the technology in day-to-day operations, it will be harder to pull them back. Users don’t necessarily understand the risk posed to organizations when they use AI while performing their work.

Hence, the need to evaluate the risks, set a corporate strategy around the use of AI in the organization, and disseminate the strategy in a clear and meaningful way to employees.

We have learned much from the explosion of technology, applications, and tools through our experience over the last few decades with social media, tracking technology, disinformation, malicious code, ransomware, security breaches and data compromise. As an industry, we responded to each of those risks in a haphazard way. It would be prudent to learn from those lessons and try to get ahead of the use of AI technology to reduce the risk posed by its use.

A suggestion is to form a group of stakeholders from across the organization to evaluate the risks posed by the use of AI, determine how the organization may reduce those risks, set a strategy around the use of AI within the organization, and put controls in place to educate and train users on its use. Setting a strategy around AI is no different from addressing any other risk to the organization, and similar processes can be used to develop a plan and program.

There are myriad resources to consult when evaluating the risk of using AI. One I found helpful is A CISO’s Guide to Generative AI and ChatGPT Enterprise Risks, published this month by the Team8 CISO Village.

The Report outlines risks to consider, categorizes them as High, Medium, or Low, and then explains how to make risk decisions. It is spot on and a great resource guide if you are just starting the conversation within your organization.

Colorado is poised to become one of the first states to regulate how insurers can use big data and AI-powered predictive models to determine risk for underwriting. The Department of Insurance recently proposed new rules that would require insurance companies to establish strict governing principles for how they deploy algorithms and to submit to significant oversight and reporting requirements.

The draft rules are enabled by Senate Bill (SB) 21-169, which protects Colorado consumers from insurance practices that result in unfair discrimination on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. SB 21-169 holds insurers accountable for testing their big data systems – including external consumer data and information sources, algorithms, and predictive models – to ensure they are not unfairly discriminating against consumers on the basis of a protected class.

The draft rules regulate insurers’ use, in underwriting, of predictive models based on nontraditional factors (including credit scores, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, and occupation) that do not have a direct relationship to mortality, morbidity, or longevity risk. Insurers that use this sort of nontraditional information, or algorithms based on it, will need to implement an extensive governance and risk management framework and submit documentation to the Colorado Division of Insurance. New York City recently postponed enforcement of its AI bias law amid criticism of vagueness and impracticability, as we recently reported. In contrast, Colorado’s draft insurance rule is among the most detailed AI bias regulations to emerge yet. AI regulation is a rapidly growing landscape, and these draft rules may be a sign of what’s to come.

Slow down when adopting and using artificial intelligence (AI) tools. A number of issues have been raised in the literature regarding the use of AI tools, one of which centers on ethical concerns. The questions we learned to ask about social media platforms (whether to allow technology companies to track you through cookies or to share your camera, voice, and location) remain relevant in assessing the risk of using AI technology.

Start researching how the use of AI technology can affect you or your employer so you stay up to speed on the risks. This will be a benefit to both you and your employer.

We have provided some resources to start researching the capabilities and risks of AI technology and will continue to provide resources to our readers. This week, take a look at this blog by Cem Dilmegani, Generative AI Ethics: Top 6 Concerns, this blog by Harry Guinness, or this article by VentureBeat.

We will continue to provide resources on AI to assist our readers with evaluating its use, both personally and professionally.