I hang out with a lot of Chief Information Security Officers (CISOs), so this piece is for them. Of course, it will also interest any security professional struggling to assess the risk of large language models (LLMs).

According to DarkReading, Berryville Institute of Machine Learning (BIML) recently issued a report entitled “An Architectural Risk Analysis of Large Language Models: Applied Machine Learning Security,” which is designed “to provide CISOs and other security practitioners with a way of thinking about the risks posed by machine learning and artificial intelligence (AI) models, especially LLMs and the next-generation large multimodal models so they can identify those risks in their own applications.”

The core issue addressed in the report is that users of LLMs do not know how developers have collected and validated the data used to train the models. BIML found that the “lack of visibility into how artificial intelligence (AI) makes decisions is the root cause of more than a quarter of risks posed by LLMs….”

According to BIML, risk decisions are being made by large LLM developers “on your behalf without you even knowing what the risks are…We think that it would be very helpful to open up the black box and answer some questions.”

The report concludes that “[s]ecuring a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This architectural risk analysis is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.”

CISOs and security professionals may wish to dive into the report by requesting a download from BIML. The 28-page report is full of ideas.