Practical AI security in multi-cloud environments

By Dan Belmonte - JUNE 4, 2025


As artificial intelligence solutions become ubiquitous, AI security is a key consideration for organizations that want to leverage AI as a competitive advantage.

Security teams face considerable obstacles as AI proliferates through various implementation models: externally managed AI platforms (AWS Bedrock, GCP Vertex, Azure OpenAI), custom-built AI services within cloud infrastructures, and on-premises AI systems. Larger organizations frequently operate with a mixed ecosystem, where certain units utilize AI platforms while critical systems incorporate purpose-built AI functionalities.

Regardless of which AI solution you’re using to deploy Generative AI (GenAI) or agentic AI products, the key challenges remain the same: limited visibility into AI platforms and a lack of security approaches tailored to them.

In addition, depending on your country and your company sector, you may need to meet certain regulatory requirements. As AI regulation is rapidly evolving, establishing a compliance monitoring process is essential to stay current with new requirements.

In this article, we will share some best practices and tips for AI security in multi-cloud environments.

Starting with a strong security foundation for your AI infrastructure

Starting with a proactive security foundation is key. So before we get started, let’s level set on a security baseline for what we want to achieve. You should have the ability to:

  • Get visibility of any AI components deployed within your infrastructure.
  • Manage related risks within your environment.
  • Strengthen your security posture: the combination of policies, controls, and monitoring keeps your AI infrastructure resilient against evolving threats.
  • Implement continuous security monitoring for proactive detection of zero-day vulnerabilities.

Once you have this, you can integrate a reactive component with cloud detection and response (CDR) to create a robust overall security strategy. Let’s dig a little more into each baseline.

Visibility into AI across your infrastructure

To effectively manage and secure AI, we need solid visibility. Without it, we’re looking at a “Shadow AI” situation, where ungoverned, potentially risky AI components can appear anywhere in your environment.

To keep your AI security tight, you’ve got to quickly spot and fix any problems, ensuring comprehensive security coverage.
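One simple way to think about shadow AI detection is as a diff between what you discover running and what you have approved. The sketch below illustrates the idea in plain Python; the asset identifiers and the approved inventory are hypothetical, and a real implementation would populate both sides from cloud APIs and runtime scanning.

```python
# Minimal sketch: flag "shadow AI" by diffing discovered AI assets against an
# approved inventory. Asset names and discovery sources are illustrative only.

APPROVED_AI_ASSETS = {
    "bedrock:claude-prod",    # managed platform deployment
    "vertex:search-ranker",   # custom model on GCP
}

def find_shadow_ai(discovered_assets):
    """Return discovered AI assets that were never registered or approved."""
    return sorted(set(discovered_assets) - APPROVED_AI_ASSETS)

# Example: assets surfaced by runtime scanning (hypothetical identifiers)
discovered = ["bedrock:claude-prod", "bedrock:llama-dev", "vertex:search-ranker"]
print(find_shadow_ai(discovered))  # ['bedrock:llama-dev']
```

Anything the diff surfaces is a candidate for review: either register and secure it, or shut it down.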

Continuously monitoring existing and new assets (especially Large Language Models) is crucial for protection against emerging threats, but it must be paired with robust threat management to be effective.

Stay ahead of risks in your infrastructure

When vulnerabilities combine with other risk factors, such as misconfigurations, internet exposure, excessive permissions, or suspicious runtime activity, the potential business impact grows sharply. In essence, risk represents the likelihood of unwanted incidents and the severity of their consequences.

With insights into your AI infrastructure now available, you can prioritize and manage identified risks based on their severity.

If you’re not sure where to start, Sysdig Secure groups all findings into a section called Risks, so you can prioritize and address the most critical issues by identifying affected resources and potential threat impacts.

The Risk section shows you affected resources with broader context, including details such as whether a workload has an exploit, is used at runtime, contains an AI package, is exposed to the internet, etc.
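The combination of factors described above lends itself to a simple prioritization model. The sketch below ranks workloads by a weighted sum of risk factors; the weights and workload records are illustrative assumptions, not values from any product.

```python
# Minimal sketch: prioritize workloads by combining the risk factors described
# above. Weights are illustrative assumptions, not real product scoring.

RISK_WEIGHTS = {
    "has_exploit": 5,          # a known exploit exists for a vulnerability
    "internet_exposed": 4,     # reachable from the internet
    "in_use_at_runtime": 3,    # vulnerable package actually loaded at runtime
    "contains_ai_package": 2,  # workload ships an AI/LLM package
}

def risk_score(workload):
    """Sum the weights of every risk factor present on the workload."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if workload.get(factor))

workloads = [
    {"name": "billing-api", "has_exploit": True, "internet_exposed": True},
    {"name": "llm-gateway", "has_exploit": True, "internet_exposed": True,
     "in_use_at_runtime": True, "contains_ai_package": True},
]
ranked = sorted(workloads, key=risk_score, reverse=True)
print([w["name"] for w in ranked])  # ['llm-gateway', 'billing-api']
```

The point is not the specific weights but the shape of the approach: stacked risk factors on the same resource should float it to the top of your queue.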

Before this analysis can be meaningful, though, you must establish a security baseline that provides context for evaluating these potential threats.

Establishing a resilient security posture for AI: AI-SPM

Global regulators are introducing AI governance frameworks to address security, privacy, and ethical concerns. 

Securing AI in multicloud environments requires proactively implementing the necessary internal security measures, policies, and regulatory standards.

List of policies assigned to a “Zone” within Sysdig Secure

A structured approach to AI security involves adopting a risk management framework based on MITRE ATLAS and the OWASP AI guidelines.

Furthermore, you should create your own controls tied to your specific AI components. These controls can be attached to a custom policy to enforce a best practice, such as one related to AWS sensitive data.
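To make the idea of a custom control concrete, here is a hedged sketch of the kind of check such a control might encode: flagging storage buckets that hold AI training data but lack basic protections. The bucket record and field names are hypothetical; a real control would read this state from the cloud provider’s APIs.

```python
# Minimal sketch of a custom control for sensitive AI data. The bucket record
# and its fields are hypothetical stand-ins for real cloud configuration state.

def check_sensitive_data_control(bucket):
    """Return a list of findings for a bucket holding AI training data."""
    findings = []
    if bucket.get("contains_training_data"):
        if not bucket.get("encryption_enabled"):
            findings.append("training data bucket is not encrypted")
        if not bucket.get("block_public_access"):
            findings.append("training data bucket allows public access")
    return findings

bucket = {"name": "ml-train-data", "contains_training_data": True,
          "encryption_enabled": False, "block_public_access": True}
print(check_sensitive_data_control(bucket))
# ['training data bucket is not encrypted']
```

Each finding can then feed the policy that the control is attached to, so posture violations surface in the same place as the rest of your compliance results.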


Establishing a robust security stance enables your AI environment to satisfy necessary regulatory requirements while simultaneously building trust with your customers. In a nutshell, security posture is your defensive stance – it defines whether your organization is exposed or equipped to handle threats.

Behavioral threat detection for cloud AI security

Effective AI security requires more than static posture assessments – it demands real-time awareness of how components behave at runtime. Unlike traditional systems, AI-driven services can behave unpredictably, making it difficult to define what’s normal. This increases the risk of undetected misconfigurations, anomalies, or compromises.

Therefore, continuous monitoring and behavior-based detection are critical to identify when workloads deviate from expected patterns or enter undesired states.

Using cloud detection and response and runtime security policies capabilities, we can detect when an AI service enters an undesired configuration, when a workload or instance is compromised, or when any other potential security issue arises.
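Behavior-based detection ultimately comes down to comparing current activity against a learned baseline. The sketch below uses a simple mean/standard-deviation threshold as a stand-in for whatever anomaly model a real detection pipeline would use; the invocation counts are invented for illustration.

```python
# Minimal sketch: flag activity that deviates from a historical baseline.
# A simple z-score threshold stands in for a real anomaly-detection model.

from statistics import mean, stdev

def is_anomalous(history, current, n_sigmas=3.0):
    """Flag `current` if it deviates from the baseline by > n_sigmas std devs."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > n_sigmas * max(sigma, 1e-9)

# Hourly model-invocation counts for an AI service (illustrative numbers)
history = [100, 105, 98, 102, 99, 101, 103, 97]
print(is_anomalous(history, 104))  # False: within normal variation
print(is_anomalous(history, 900))  # True: sudden spike worth investigating
```

A sudden spike in model invocations, for example, is one of the behavioral signals associated with LLMjacking-style abuse of AI services.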

Continuous threat detection for emerging AI risks

Sysdig’s agent, powered by Falco, together with cloud audit logs, enables behavioral monitoring across assets and components, triggering alerts to designated endpoints when suspicious activity is identified.

With Sysdig Secure, you get managed Falco rules directly within your runtime policies, providing real-time threat detection. 

These curated rules are continuously maintained and updated by our Threat Research Team to keep pace with the latest emerging threats. The Sysdig team is built for responsiveness. For example, a recent malware variant was detected and addressed in our product within just 24 hours.

Brief list of managed Falco rules for AWS Bedrock within Sysdig Secure

Falco rules in general are defined with conditions that trigger events potentially linked to threats.

Depending on the use case, each event can carry a different severity level, directly influencing the associated risk. Hence, we advise reviewing which rules you want to enable and tuning them to your needs.

For example, here is a managed Falco rule example for detecting AWS Bedrock agent invocation:

```yaml
- rule: Bedrock Invoke Agent
  id: 111111
  description: Detect invoking an Amazon Bedrock agent to run inference using the prompt and inference parameters provided.
  condition: ct.src="bedrock.amazonaws.com" and ct.name="InvokeAgent" and not ct.error exists
  output: An Amazon Bedrock agent %json.value[/requestParameters/agentId] was invoked to run inference …
  source: awscloudtrail
```
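To unpack what that condition matches: the rule fires on CloudTrail events whose source is `bedrock.amazonaws.com`, whose API call is `InvokeAgent`, and which carry no error code (i.e., the invocation succeeded). The Python predicate below mirrors that logic against a raw CloudTrail record for illustration; the sample event (including the agent ID) is made up.

```python
# Illustrative only: a Python predicate mirroring the Falco rule's condition
# against a raw CloudTrail record. Field names follow CloudTrail's JSON schema.

def matches_bedrock_invoke_agent(event):
    """True if the event is a successful Bedrock InvokeAgent API call."""
    return (
        event.get("eventSource") == "bedrock.amazonaws.com"
        and event.get("eventName") == "InvokeAgent"
        and "errorCode" not in event       # no error => the call succeeded
    )

# Hypothetical CloudTrail record
event = {
    "eventSource": "bedrock.amazonaws.com",
    "eventName": "InvokeAgent",
    "requestParameters": {"agentId": "AGENT123"},
}
print(matches_bedrock_invoke_agent(event))  # True
```

In production you would rely on the managed Falco rule itself rather than re-implementing it, but walking through the condition this way makes it easier to decide how to tune or scope it.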

Using Falco rules based on audit logs is an important step, but we can expand the focus to workloads, allowing for continuous investigation at the compute level for your AI infrastructure.

List of events triggered based on previous rules shown for AWS Bedrock

This approach brings fuller protection when a threat bypasses detection at the cloud level (such as missed audit logs) but is identified at the compute level (e.g., through suspicious tools being executed, LLMjacking, etc.), or vice versa.

Closing thoughts

Visibility across all AI components within your environment is critical: it ensures you know what’s deployed, the risks tied to each component, and the actions necessary to minimize them. After gaining visibility, you should adopt an AI risk management framework for a structured approach to AI security (AI-SPM). Using a framework also contributes to meeting AI compliance standards, such as the EU AI Act and the NIST AI Risk Management Framework, if applicable to your organization.

We then addressed behavioral threat detection at runtime using Falco as our primary tool, extending protection beyond cloud infrastructure (audit logs) to include the application runtime layer (containers and virtual machines).

This setup enables the detection of malicious behavior triggered by zero-day vulnerabilities, which often evades conventional preventative measures, providing an essential layer of defense against sophisticated attacks targeting your AI infrastructure and data.


Ready to dig more into AI security with Sysdig?

This whitepaper will help you to effectively communicate AI threats, regulatory implications, and required controls to executive leadership with clear business impact analysis.

For a broader view of AI security, explore the Top 8 AI Security Best Practices, covering everything from data integrity to supply chain protection.

👉 Request a demo to see how Sysdig works.
