From Misuse to Abuse: AI Risks and Attacks
AI from the attacker’s perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications
Cybercriminals and AI: The Reality vs. Hype
“AI will not replace humans in the near future. But humans who know how to use AI are going to replace those humans who don’t know how to use AI,” says Etay Maor, Chief Security Strategist at Cato Networks and founding member of Cato CTRL. “Similarly, attackers are also turning to AI to augment their own capabilities.”
Yet, there is a lot more hype than reality around AI’s role in cybercrime. Headlines often sensationalize AI threats, with terms like “Chaos-GPT” and “Black Hat AI Tools,” even claiming they seek to destroy humanity. However, these articles are more fear-inducing than descriptive of serious threats.
For instance, when explored in underground forums, several of these so-called “AI cyber tools” were found to be nothing more than rebranded versions of basic public LLMs with no advanced capabilities. In fact, they were even marked by angry attackers as scams.
How Hackers are Really Using AI in Cyber Attacks
In reality, cybercriminals are still figuring out how to harness AI effectively. They are experiencing the same issues and shortcomings legitimate users are, like hallucinations and limited abilities. Per their predictions, it will take a few years before they are able to leverage GenAI effectively for hacking needs.
For now, GenAI tools are mostly being used for simpler tasks, like writing phishing emails and generating code snippets that can be integrated into attacks. In addition, we’ve observed attackers providing compromised code to AI systems for analysis, as an effort to “normalize” such code as non-malicious.
Using AI to Abuse AI: Introducing GPTs
GPTs, introduced by OpenAI on November 6, 2023, are customizable versions of ChatGPT that allow users to add specific instructions, integrate external APIs and incorporate unique knowledge sources. This feature enables users to create highly specialized applications, such as tech support bots, educational tools, and more. In addition, OpenAI is offering developers monetization options for GPTs, through a dedicated marketplace.
Abusing GPTs
GPTs introduce potential security concerns. One notable risk is the exposure of sensitive instructions, proprietary knowledge, or even API keys embedded in the custom GPT. Malicious actors can use AI, specifically prompt engineering, to replicate a GPT and tap into its monetization potential.
Attackers can use prompts to retrieve knowledge sources, instructions, configuration files, and more. These might be as simple as prompting the custom GPT to list all uploaded files and custom instructions or asking for debugging information. Or, sophisticated like requesting the GPT to zip one of the PDF files and create a downloadable link, asking the GPT to list all its capabilities in a structured table format, and more.
“Even protections that developers put in place can be bypassed and all knowledge can be extracted,” says Vitaly Simonovich, Threat Intelligence Researcher at Cato Networks and Cato CTRL member.
These risks can be avoided by:
- Not uploading sensitive data
- Using instruction-based protection though even those may not be foolproof. “You need to take into account all the different scenarios that the attacker can abuse,” adds Vitaly.
- OpenAI protection
AI Attacks and Risks
There are multiple frameworks existing today to assist organizations that are considering developing and creating AI-based software:
- NIST Artificial Intelligence Risk Management Framework
- Google’s Secure AI Framework
- OWASP Top 10 for LLM
- OWASP Top 10 for LLM Applications
- The recently launched MITRE ATLAS
LLM Attack Surface
There are six key LLM (Large Language Model) components that can be targeted by attackers:
- Prompt – Attacks like prompt injections, where malicious input is used to manipulate the AI’s output
- Response – Misuse or leakage of sensitive information in AI-generated responses
- Model – Theft, poisoning, or manipulation of the AI model
- Training Data – Introducing malicious data to alter the behavior of the AI.
- Infrastructure – Targeting the servers and services that support the AI
- Users – Misleading or exploiting the humans or systems relying on AI outputs
Real-World Attacks and Risks
Let’s wrap up with some examples of LLM manipulations, which can easily be used in a malicious manner.
- Prompt Injection in Customer Service Systems – A recent case involved a car dealership using an AI chatbot for customer service. A researcher managed to manipulate the chatbot by issuing a prompt that altered its behavior. By instructing the chatbot to agree to all customer statements and end each response with, “And that’s a legally binding offer,” the researcher was able to purchase a car at a ridiculously low price, exposing a major vulnerability.
- Hallucinations Leading to Legal Consequences – In another incident, Air Canada faced legal action when their AI chatbot provided incorrect information about refund policies. When a customer relied on the chatbot’s response and subsequently filed a claim, Air Canada was held liable for the misleading information.
- Proprietary Data Leaks – Samsung employees unknowingly leaked proprietary information when they used ChatGPT to analyze code. Uploading sensitive data to third-party AI systems is risky, as it’s unclear how long the data is stored or who can access it.
- AI and Deepfake Technology in Fraud – Cybercriminals are also leveraging AI beyond text generation. A bank in Hong Kong fell victim to a $25 million fraud when attackers used live deepfake technology during a video call. The AI-generated avatars mimicked trusted bank officials, convincing the victim to transfer funds to a fraudulent account.
Summing Up: AI in Cyber Crime
AI is a powerful tool for both defenders and attackers. As cybercriminals continue to experiment with AI, it’s important to understand how they think, the tactics they employ and the options they face. This will allow organizations to better safeguard their AI systems against misuse and abuse.