The Urgency of Securing AI Workloads for CISOs

By Loris Degioanni - MAY 21, 2024


Media attention on various forms of generative artificial intelligence (GenAI) underscores a key dynamic that CISOs, CIOs, and security leaders face: the need to keep current with the fast pace of technological change and the risk factors that this change brings to the enterprise. Whether it’s blockchain, microservices in the cloud, or GenAI workloads, security leaders are not just tasked with keeping their organizations secure and resilient; they are also the key players in understanding and managing the risks associated with new technologies and new business models. While each novel technology brings new considerations and risks to evaluate, there are a handful of constants that the security profession must address proactively.

Temporal considerations

Our businesses and the applications that underpin them run at network and machine speed. Web services, APIs, and other interconnections are designed for near-instantaneous response. It’s not only lawyers who note that “time is of the essence”; it’s every colleague we support and the business applications and services that we use collectively to run the organization. The focus on speed and response times permeates business transactions and the application development environments they rely upon. The rush to respond and deliver has undermined more traditional risk assessments and evaluations, which were effectively point-in-time analyses. Security today demands real-time context and actionable insights delivered at machine speed. Runtime telemetry and runtime insights are required to speed up our security operations.

Automation

The evening news is awash in stories suggesting that AI systems will displace workers with machines and applications that do the job more effectively than we sentient beings can. Automation is not new. Almost every industry has invested in it: we see robots building cars, kiosks at banks and retail outlets, and automation within the cybersecurity profession itself. We will witness new forms of automation as GenAI tools are rolled out to support businesses; we already see this with system, code, and configuration reviews within infrastructure, operations, and security programs. Automation should be welcomed within our security programs and integral to the program’s target operating model.

Algorithms and mathematical models

The third constant we witness with technological change is the use of algorithms and mathematical models to contextualize and distill data. We live in an algorithmic economy. Data and information drive our businesses; algorithms inform business models and decision-making. Like the other constants of speed and automation, algorithms are also used in our cybersecurity profession: they evaluate processes, emails, network traffic, and many other data sets to determine whether behavior is malignant or benign. A notable challenge with algorithms is that, in most cases, they are considered the manufacturer’s intellectual property. Algorithms and transparency are at odds. Consequently, addressing the fidelity and assurance of an algorithmic outcome is less a science than a leap of faith. We assume the results are fine, but there’s no guarantee that two plus two does not equal 4.01 after the algorithm executes.

How to assess new technologies

This context of speed, automation, and algorithmic use should be front and center for CISOs as they evaluate how their organization will deploy AI tools for both their business and the security of its operations. Having a methodology to contextualize new technologies, like GenAI, and their commensurate risks is integral to the CISO and CIO roles. Technology leaders must effectively operate their respective programs and support the business while governed by these constants of speed, automation, and the widespread use of algorithms for decision-making and data analysis. 

A methodological approach to rapidly assessing new technologies is required to avoid being caught flat-footed by technological change and the inherent risks that this change brings to the business. While each business will have its own approach to evaluating risk, some effective techniques should be part of the methodology. Let’s take a quick look at some important elements that can be used to evaluate the impacts of GenAI. 

Engage the business

New technologies like GenAI have pervasive organizational impacts. Ensure that you solicit feedback and insights from key organizational stakeholders, including IT, lines of business, HR, general counsel, and privacy. CISOs who routinely meet with their colleagues throughout the business can avoid being blindsided by the new tools and applications those colleagues employ. CISOs should ask their colleagues, and their counterparts within the security community, how they currently use AI and how they intend to use it for specific functions within the organization.

Conduct a baseline threat model using STRIDE and DREAD 

Basic threat modeling complements more traditional risk assessments and penetration tests, and can be done informally where expediency is required. CISOs and their staff should walk through prospective AI use cases and ask how user activity within an AI application could be spoofed, how information could be tampered with, how transactions could be repudiated, where information disclosure could occur, how services could be denied, and how privileges could be elevated within the environment. The security team should take an inquisitive approach to these questions and think like a threat actor attempting to exploit a given system or application. A basic STRIDE model (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) ensures that key risks are not omitted from the analysis. DREAD (Damage, Reproducibility, Exploitability, Affected users, Discoverability) looks at the system’s impact and complements the STRIDE context. The CISO and security team should evaluate the potential damage if an AI workload or service were compromised, how easily the attack could be reproduced, the degree of skill and tooling required to exploit the system, which users and systems would be affected, and how hard the attack would be to discover.
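The STRIDE checklist and DREAD scoring described above can be captured in a lightweight structure so the team records the same questions for every AI use case. The sketch below is a minimal illustration of that idea; the category comments, the prompt-injection example, and all scores are assumptions for illustration, not the author’s tooling or a standard rating.

```python
# Minimal STRIDE/DREAD review sketch for a GenAI use case.
# Categories, comments, and scores are illustrative assumptions.
from dataclasses import dataclass

STRIDE = [
    "Spoofing",                # Can user activity in the AI app be impersonated?
    "Tampering",               # Can prompts, data, or outputs be altered?
    "Repudiation",             # Can a transaction be denied after the fact?
    "Information disclosure",  # Where could sensitive data leak?
    "Denial of service",       # How could the service be denied?
    "Elevation of privilege",  # Could a user gain rights they shouldn't have?
]

@dataclass
class DreadScore:
    """DREAD rates a threat 1-10 on five axes; higher means riskier."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    @property
    def total(self) -> float:
        # Simple average of the five axes.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Example: scoring a hypothetical prompt-injection threat.
prompt_injection = DreadScore(damage=8, reproducibility=9, exploitability=7,
                              affected_users=6, discoverability=8)
print(f"Prompt injection DREAD score: {prompt_injection.total:.1f}")
```

Averaging the five axes is one common convention; a team may equally choose to weight damage more heavily or keep the raw sum.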

Evaluate telemetry risks

Newer applications and technologies, like the current forms of GenAI, may lack some of the traditional telemetry of more mature technologies. The CISO and security team members must ask basic questions about the AI service. A simple open-ended question may start the process: “What is it that we don’t see that we should see with this application, and why don’t we see it?” Delve a bit deeper and ask, “What is it that we don’t know about this application that we should know, and why don’t we know it?” Lean into these questions from the runtime, workload, and configuration perspectives. These types of open-ended questions have led to significant improvements in application security. If questions like these were not asked, security professionals would not have seen the risks applications encounter at runtime, that service accounts are too often over-permissioned, or that third-party code may introduce vulnerabilities requiring remediation or additional controls.
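The “what don’t we see that we should see” question can be made concrete by diffing the telemetry an AI workload actually emits against what the team expects. The sketch below is a hypothetical gap check; the signal names are invented for illustration and do not correspond to any specific product.

```python
# Hypothetical telemetry-gap check for an AI workload.
# Signal names are illustrative assumptions, not a real product's schema.

EXPECTED_SIGNALS = {
    "process_events", "network_flows", "file_access",
    "api_audit_log", "prompt_log", "model_config_changes",
}

def telemetry_gaps(observed: set) -> set:
    """Return the 'what don't we see that we should see' set."""
    return EXPECTED_SIGNALS - observed

# Example: a workload that only emits three of the expected signals.
observed = {"process_events", "network_flows", "api_audit_log"}
print("Gaps to investigate:", sorted(telemetry_gaps(observed)))
```

Each gap then becomes a follow-up question: is the signal missing because the service cannot emit it, or because no one has turned it on?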

Use a risk register for identified risks

CISOs and their teams should document concerns about using GenAI applications and how these risks should be mitigated. GenAI may present many forms of risk, including issues with the fidelity and assurance of responses; data and intellectual property loss when this information is fed into the application; the widespread use of deepfakes and sophisticated phishing attacks against the organization; and polymorphic malware that quickly contextualizes the environment and attacks accordingly. GenAI dramatically expands the proverbial attack surface of an organization in that large language models (LLMs) can quickly create organization-specific attacks based on a dossier of the organization’s employees and publicly available information. In effect, while the algorithms that these AI tools use are obfuscated, the data they use is in the public domain and can be quickly synthesized for both legitimate and nefarious purposes. Use a risk register to document these potential risks when using AI tools and applications. Ultimately, the business will decide whether the upside benefits of a specific AI function or application outweigh the identified risks; risk treatment should remain with the business. Our job as security leaders is to ensure that our colleagues in the C-suite are aware of the risks, the potential remediations, and the resources required.
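A risk register need not be elaborate; even a simple structured record per risk keeps ownership and mitigation visible. The sketch below shows one minimal shape for such an entry, assuming invented field names and two illustrative GenAI risks drawn from the categories above; it is not a prescribed schema.

```python
# Minimal risk-register entry sketch for GenAI risks.
# Field names and example entries are illustrative assumptions.
from dataclasses import dataclass, asdict

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: str      # e.g. "low" / "medium" / "high"
    impact: str
    mitigation: str
    owner: str           # risk treatment stays with the business
    status: str = "open"

register = [
    RiskEntry("GENAI-001",
              "IP or sensitive data loss via prompts sent to an external LLM",
              "high", "high",
              "Block unsanctioned AI endpoints; review data-handling policy",
              "Line of business + CISO"),
    RiskEntry("GENAI-002",
              "Deepfake-assisted phishing targeting finance staff",
              "medium", "high",
              "Out-of-band verification for payment changes; awareness training",
              "HR + Security"),
]

for entry in register:
    print(asdict(entry))
```

Keeping an `owner` field per entry reflects the point above: the security team surfaces and documents the risk, but the business decides on treatment.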

Focus on training and critical thinking

AI has the potential to fundamentally change our economy, just as the internet modernized business operations via ubiquitous connectivity and near-real-time access to information. The proverbial genie is out of the AI bottle. Creative new uses of AI are being developed at breakneck speed, and there is no fighting market forces and innovation. As security professionals, we must proactively embrace this change, evaluate sources of risk, and make prudent recommendations to remediate risks without interrupting or slowing the business. This is not an easy charge for our profession and our teams. However, by adopting a proactive approach, ensuring that our colleagues are well-trained in critical thinking, and exploring how services may be targeted, we can make our organizations more resilient as they embrace what AI may bring to the enterprise.

As AI’s presence in our enterprises and the economy expands, new business models and derivative technologies will undoubtedly emerge. CISOs and security leaders will need to use this context to evaluate the efficacy of their current and future security practices and security tooling. Our adversaries are highly skilled and use automated techniques to compromise our organizations. These adversaries are already using nefarious forms of GenAI to create new zero-day exploits and other highly sophisticated attacks, frequently using social engineering to target key roles and stakeholders. In short, our adversaries continue to up their game. As security leaders, it’s incumbent upon us to do the same. We know that the pace and speed of our security operations must improve to confront risks executed at runtime and at network speed. 
