What is dark AI?
Dark AI is a new and growing threat to cybersecurity with the potential to impact your digital business operations and sensitive data. AI is now being applied to create cyberattacks specifically tailored to exploit the weaknesses of your organization and infrastructure. This new threat requires the adoption of new security technologies and practices.
This article explains what dark AI is, what makes it different from traditional AI, the new threats it poses, and the technical measures organizations can employ to mitigate this increasing risk.
What is dark AI?
Dark AI is the use of artificial intelligence, including generative AI and large language models (LLMs), for malicious purposes. On a technical level, dark AI is no different from mainstream AI (which was built to improve productivity and automate repetitive tasks) — it’s the application of the technology that differentiates it. Any AI tool can be used for “dark AI” purposes, such as crafting a phishing email or writing malicious code.
However, AI tools specifically created for malicious purposes are emerging, creating a distinct category that can be defined as dark AI. These tools are built without the guardrails that (attempt to) prevent mainstream AI tools from being used to generate deceptive or harmful content, and their feature sets are instead tailored to cybercrime use cases.
Dark AI: A significant threat
Dark AI is considered a significant threat due to its ability to amplify the abilities of cyber attackers by:
- Improving social engineering: Dark AI tools can be tailored to create convincing phishing and scam emails, and to replicate login pages that trick users into handing over their credentials (a simple lookalike-domain check is sketched after this list).
- Tailoring attacks to your network environment: AI tools can be used to analyze configurations and network environments for exploitable vulnerabilities.
- Optimizing attacks against security tools: AI can optimize malicious code to evade known protection measures, allowing it to move laterally through your infrastructure and potentially leave back doors for later intrusion.
- Increasing the frequency of attacks: Brute force attacks can be intelligently adapted to find and exploit weaknesses, human or technical, in an organization.
- Creating convincing deepfakes: AI can clone voices and even completely replicate a person's appearance in live video, making it possible to impersonate people on a video call.
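On the defensive side, even simple automation can catch some of these lures. The sketch below flags links whose domain is a near miss for a known brand's real domain, a common trait of phishing login pages. It is a minimal illustration only: the brand list, distance threshold, and the `flag_lookalike` helper are assumptions made for this example, not a production detection rule.

```python
# Minimal sketch: flag URLs whose host is a near miss for a legitimate domain.
# The brand list and distance threshold are illustrative assumptions.
from urllib.parse import urlparse

LEGITIMATE_DOMAINS = ["paypal.com", "google.com", "microsoft.com"]

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def flag_lookalike(url, max_distance=2):
    """Return the brand a URL's host imitates, or None if no near miss."""
    host = (urlparse(url).hostname or "").lower()
    for legit in LEGITIMATE_DOMAINS:
        if host == legit or host.endswith("." + legit):
            return None  # it really is the legitimate domain
        if 0 < edit_distance(host, legit) <= max_distance:
            return legit  # close enough to be a deliberate lookalike
    return None

print(flag_lookalike("https://paypa1.com/login"))  # -> paypal.com
```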
AI has made cyberattacks viable for even inexperienced cybercriminals. By making attack methods more accessible (requiring less technical knowledge and time investment to understand targets and establish tool chains), it increases the number of attacks overall. While many of these AI-enhanced attacks may be amateurish, the increased scale does significantly raise the chance of a successful hack or data leak.
Tools and tactics used in dark AI attacks
The power of AI was quickly recognized by the developers of hacking and malware tools, who have developed and released their own solutions, often for profit.
| AI Tool | What It Is |
|---|---|
| FraudGPT | A generative AI model for social engineering, used to create targeted phishing emails, fake websites, and login pages that skim passwords from users. |
| DarkBERT | An LLM trained on data from the dark web. It is intended to help security researchers navigate the venues used by cybercriminals (including understanding their jargon and codewords), but it can also be used by criminals themselves to find the resources they need to craft attacks. |
| WormGPT | Similar to FraudGPT, and also used to generate malicious code for malware attacks. |
| Mainstream LLMs | Just because mainstream LLMs like Google Gemini, Microsoft Copilot, and ChatGPT aren't intended to be used for malicious purposes doesn't mean that they can't be — guardrails can only go so far in preventing bad actors from leveraging AI to further their goals. |
These tools are tactically deployed using a combination of methodologies. Dark AI may be used to direct the attack itself (for example, by directly identifying vulnerabilities in software or cloud configurations, and writing scripts that target them), or it may be used to craft the elements of an attack, such as fraudulent emails, landing pages, and other deceptive content.
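Defenders can run the same kind of configuration analysis continuously, before an attacker's tooling finds the gap. As a minimal sketch of the idea, the following checks AWS S3 buckets for missing public-access blocks; it assumes boto3 is installed and that credentials with s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock permissions are configured. A full CNAPP automates checks like this across the entire cloud estate.

```python
# Minimal sketch: flag S3 buckets that are not fully locked down against
# public access. Assumes boto3 and suitable read-only AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        # All four settings must be True for public access to be fully blocked.
        if not all(config.values()):
            print(f"[WARN] {name}: public access only partially blocked: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[WARN] {name}: no public access block configured")
        else:
            raise
```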
The impact dark AI is having on cybersecurity
The only way for organizations to counter the negative impact of dark AI is to take a vigilant, proactive security stance. This is especially important in always-online cloud environments that lack the well-defined perimeter of traditional on-premises IT infrastructure.
AI tools are also having an impact on supply chain security, as they provide a mechanism for obfuscating the intent of code, increasing the possibility that malicious code can go undetected in open source software and dependencies.
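One practical countermeasure is automated dependency auditing. As a hedged sketch, the code below queries the public OSV.dev vulnerability database for each pinned package; the `pinned` dict is a stand-in for whatever a real pipeline would parse from your lockfile, and the `requests` package is assumed to be available.

```python
# Sketch: look up each pinned dependency in the OSV.dev vulnerability database.
# The `pinned` dict is a placeholder for a parsed lockfile.
import requests

pinned = {"requests": "2.19.0", "flask": "0.12"}  # illustrative pins only

for name, version in pinned.items():
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    for vuln in resp.json().get("vulns", []):
        print(f"{name}=={version}: {vuln['id']} - {vuln.get('summary', 'no summary')}")
```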
Whether an attack is AI-driven or not, the increasing complexity and real-time nature of attacks targeting cloud environments (and the workloads that run on them) have highlighted the need for a multi-faceted approach. Development teams, as well as those in charge of IT security, must employ robust DevSecOps practices, including runtime security and automated threat detection and response (especially in scaling containerized environments where visibility can be an issue).
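To make runtime detection and response concrete: Falco, the open source runtime security engine created by Sysdig, can emit alerts as JSON. The sketch below consumes that stream and escalates high-severity events. It assumes Falco is run with JSON output enabled, and the `escalate` hook is a placeholder you would wire into paging or automated response tooling.

```python
# Sketch: triage Falco's JSON alert stream and escalate Critical-or-worse
# events. The escalate() hook is a placeholder for real response tooling.
import json
import sys

SEVERITY_ORDER = ["Emergency", "Alert", "Critical", "Error",
                  "Warning", "Notice", "Informational", "Debug"]

def escalate(event):
    # Placeholder: page on-call, open a ticket, or kill the offending pod.
    print(f"[ESCALATE] {event.get('rule')}: {event.get('output')}")

for line in sys.stdin:
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip any non-JSON log lines
    priority = event.get("priority", "Debug")
    if priority in SEVERITY_ORDER and \
            SEVERITY_ORDER.index(priority) <= SEVERITY_ORDER.index("Critical"):
        escalate(event)
```

Alerts can be piped straight in, for example with `falco -o json_output=true | python triage.py`.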
How to protect yourself from dark AI
How you protect against dark AI depends on whether you are defending yourself as an individual or the IT infrastructure of an organization or business.
- For individuals, the biggest threats from dark AI are social engineering and scams. You should be wary of phishing emails crafted to look like real messages from your bank, email provider, or social networks, and avoid following links in suspicious emails, which may lead to fake login screens designed to steal your credentials. You should also watch for attackers using dark AI and data from the public web (such as social media profiles and company directories) to impersonate your friends and colleagues.
- Organizations face increased risks from dark AI. In addition to the risk of staff being manipulated through social engineering, the infrastructure you use to provide your products and services can also be an attack vector. Cloud infrastructure is particularly vulnerable, as orchestrated, automatically scaling containers can harbor unseen threats.
In these cases, a cloud-native application protection platform (CNAPP) should be deployed to provide a combination of protections against dark AI cyberattacks, including:
- Configuration scanning to identify exploitable weaknesses and privilege misconfigurations in cloud platforms.
- Vulnerability management for identifying known vulnerabilities in the supply chain and other software.
- Runtime security, including monitoring activity within containers and orchestration platforms.
- Posture management for prioritizing risks (a simple risk-scoring sketch follows this list).
- Compliance checking to ensure that you are adhering to required data and user privacy regulations, including GDPR and CCPA.
- AI-powered detection and response to ensure the fastest possible detection and remediation of exploitable weaknesses or active threats.
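As a toy illustration of the posture management point above, prioritization often comes down to weighting severity by real-world exposure. Everything in this sketch (the findings, the weights, and the fields) is invented for illustration; real platforms draw on much richer signals, such as whether a vulnerable package is actually loaded at runtime.

```python
# Toy risk scoring: weight finding severity by exposure and runtime usage.
# Findings, weights, and fields are invented for this illustration.
SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4, "low": 1}

findings = [
    {"id": "vuln-001",      "severity": "high",     "internet_exposed": True,  "in_use": True},
    {"id": "vuln-002",      "severity": "critical", "internet_exposed": False, "in_use": False},
    {"id": "weak-iam-role", "severity": "medium",   "internet_exposed": True,  "in_use": True},
]

def risk_score(finding):
    score = SEVERITY_WEIGHT[finding["severity"]]
    if finding["internet_exposed"]:
        score *= 2    # reachable from the internet: more urgent
    if not finding["in_use"]:
        score *= 0.5  # not active at runtime: can often wait
    return score

for finding in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(finding):5.1f}  {finding['id']}")
```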
The value provided by AI in cybersecurity isn't hypothetical: Organizations leveraging automation and AI in their cybersecurity platforms saved an average of USD 2.22 million compared with those that relied on traditional cybersecurity technologies.
But AI is also an attack vector that must be protected. LLMjacking targets organizations that use public AI platforms (such as ChatGPT and Microsoft Copilot), stealing their account credentials to run up their bills, or abusing the large language models that underpin generative AI to steal information. The variety of AI attacks is growing, requiring up-to-date threat intelligence in security tools that can identify and secure LLM usage (whether sanctioned or not) within a business.
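Detection here can start simply. The sketch below scans an egress or proxy log for requests to well-known public LLM endpoints, as a first pass at surfacing unsanctioned or anomalously heavy usage. The one-record-per-line log format (client IP, then destination host) is an assumption made for this example; a real deployment would feed this from proxy or flow logs and alert on deviations from a baseline.

```python
# Sketch: count requests to well-known public LLM endpoints in an egress log.
# Assumed log format: "<client_ip> <destination_host>" per line.
import sys
from collections import Counter

LLM_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",  # Gemini API
}

hits = Counter()
for line in sys.stdin:
    parts = line.split()
    if len(parts) < 2:
        continue
    client, host = parts[0], parts[1].lower()
    if host in LLM_HOSTS or "bedrock-runtime" in host:  # AWS Bedrock endpoints
        hits[(client, host)] += 1

for (client, host), count in hits.most_common():
    print(f"{client} -> {host}: {count} requests")
```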
How Sysdig helps to protect the cloud from dark AI
Dark AI poses an ongoing threat to cloud security, and requires an elevated security posture across your infrastructure. Active monitoring of configurations, processes, and user activities — whether they are running on-premises or in the cloud, and even if they are running in containers or on serverless platforms — is critical to this.
Sysdig provides a unified CNAPP that combines cloud security posture management and runtime protection for your cloud workloads. It includes Sysdig Sage: AI and automation that helps you quickly and effectively counter new threats to your infrastructure and data, including threats crafted by or implementing dark AI.
Read our Sysdig 555 Guide for Security Practitioners to find out what measures you should be taking to secure your organization from cloud threats, and how to make sure the infrastructure and data you are responsible for are protected.
FAQ
What is dark AI?
Dark AI is the application of artificial intelligence technologies for malicious purposes. For example, AI can be used to craft convincing scam emails, generate code for viruses and worms, and analyze data for exploitable information or weaknesses.
How is dark AI different from traditional AI?
Technologically, dark AI is no different from regular AI except for its application. However, AI tools specifically developed for performing malicious or criminal acts are emerging, making them distinct from traditional AI tools that are developed to improve productivity and create value.
Why is dark AI a threat?
Dark AI, the application of AI technologies to cybercrime, is an increasing threat to individuals and businesses. AI can greatly reduce the effort and knowledge required to mount targeted social engineering campaigns and cyberattacks by writing code, extracting information, and adapting to the specific infrastructure, organizational structure, and cybersecurity measures deployed to mitigate online threats.
How do you protect against dark AI?
The rapid evolution of AI technologies, including dark AI, requires an enhanced security posture. Traditional cybersecurity protection measures that are limited to periodically scanning processes and files are no longer enough. Runtime security and active monitoring of user and process behavior are required to identify and close off the attack vectors that AI tools exploit.
Sysdig 2024 Year-in-Review Threat Report
See how attackers are exploiting AI for financial gain