Emerging cyberthreat: Malicious AI chatbots
It has been a little over a year since the AI-based chatbot ChatGPT entered cyberspace (November 2022) and opened up a whole new world of possibilities across industries and businesses. OpenAI’s tool was quickly followed by other chatbots including Google’s Gemini (formerly Bard), Microsoft’s Copilot, Jasper, YouChat, HuggingChat and a plethora of others.
But, as is so often the case, if a tool can be used for good, it can also be used for evil.
Threat actors quickly started using ChatGPT for nefarious reasons. According to WatchGuard Technologies, cybercriminals are using ChatGPT for:
- more targeted phishing attacks
- more effective identity theft attempts
- better social engineering attacks
- creation of more malicious bots, and
- generation of sophisticated malware.
Because ChatGPT can create tailored content from a few simple prompts, cybercriminals are exploiting the system to craft convincing scams and phishing messages.
For example, a scammer could enter some basic information – a person’s name, gender and job title – into ChatGPT and use it to craft a phishing message tailored just for that person.
ChatGPT’s remarkable capacity to mimic the language style of specific organisations and institutions enables scammers to produce highly detailed and realistic imitations, which they then use in scam messages or on fake websites.
One of the tell-tale signs of a phishing email used to be that it was poorly worded and contained spelling mistakes, grammatical errors and the like. ChatGPT and other large language models (LLMs) can write far more convincing emails – ones which use the right words in the right order – enabling threat actors to run more impactful phishing campaigns on a mass scale. The same capability also makes it easier for them to generate more effective malicious code.
Research suggests that AI-generated phishing emails are more likely to be opened than those which are manually created.
Threat actors are also using chatbots for malware obfuscation. Obfuscation techniques alter malware so that it no longer matches known signatures and can slip past traditional signature-based security controls. The chatbots can be abused to create malware that is highly evasive and difficult to detect.
Because the tool can be linked to other software via APIs (an application programming interface is a way for two or more computer programs or components to communicate with each other), cybercriminals are also using ChatGPT to feed other chatbots. “Users can be convinced they are actually interacting with a human, making them more likely to provide personal details and other valuable data,” WatchGuard wrote in IT Brief.
Chatbots are also being exploited to write malware code, with implications for ransomware. A bot can also identify vulnerable code and software, and be used as a tool to disseminate misinformation.
According to research by Sapio and Deep Instinct in 2023, 75% of security professionals said they saw an uptick in attacks over the past year – with 85% attributing the rise to bad actors using generative AI.
More than half of Australian IT professionals believe the country is less than a year away from a successful cyberattack via ChatGPT, research from BlackBerry found.
The chatbots developed by large tech companies such as Google, Microsoft and OpenAI have a number of guardrails and safety measures in place to stop them from being misused. For example, if they are asked to generate malware or write hate speech, they’ll generally refuse. However, if the cybercriminal uses the right terminology, they can dupe the chatbot into providing the requested information.
But instead of just finding workarounds to exploit the legitimate chatbots, such as using “jailbreaks” (attempts to manipulate or “trick” the LLM into performing actions that go against its policies or usage restrictions, such as answering illegal or dangerous questions), cybercriminals took the next logical step.
And it wasn’t long before malicious AI bots surfaced – or, more accurately, hit the darknet.
Rise of malicious chatbots
Hackers and other threat actors have turned to creating their own generative AI engines, designed specifically to produce malicious content. From writing more convincing phishing attacks to devising new methods of social engineering, including “deepfakes”, cybercriminals are using malicious chatbots to increase the sophistication of their attacks and better target their activities.
Cybercriminals are using LLMs that mimic the functionality of ChatGPT and other legitimate chatbots, generating text in response to the questions or prompts users enter. But unlike the LLMs built by legitimate companies, these chatbots are marketed for illegal activities.
Amongst the growing list of malicious chatbots are:
- WormGPT
- FraudGPT
- XXXGPT
- ChaosGPT
- WolfGPT, and
- Evil-GPT.
Like other AI-powered chatbots, the malicious bots are language models trained on vast amounts of text data, allowing them to generate human-like responses to input queries. Cybercriminals exploit this technology to create deceptive content for various malicious purposes including:
- Phishing scams – the chatbots can generate authentic-looking phishing emails, text messages, or websites that trick users into revealing sensitive information, such as login credentials, financial details, or personal data.
- Social engineering – the chatbots can imitate human conversation to build trust with unsuspecting users, leading them to unknowingly divulge sensitive information or perform harmful actions.
- Malware distribution – the chatbots can create deceptive messages luring users to click on malicious links or download harmful attachments, leading to malware infections on their devices.
- Fraudulent activities – the AI-powered chatbots can help hackers create fraudulent documents, invoices, or payment requests, leading individuals and businesses to fall victim to financial scams.
According to cybersecurity researcher Daniel Kelley, the “AI models are notably useful for phishing, particularly when they lower the entry barriers for many novice cybercriminals”. In a test of the first of the malicious chatbots – WormGPT – it was asked to produce an email that could be used as part of a business email compromise (BEC) scam, with a purported CEO writing to an account manager to say an urgent payment was needed. “The results were unsettling,” Kelley wrote in the research. The system produced “an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks”.
While investigating WormGPT, researchers found that, like other chatbots, it had been trained on a large pool of data. But, unlike other chatbots, it was specifically trained on malware-related data. It also strips away the ethical guardrails of the likes of ChatGPT, Gemini and other legitimate chatbots, which will (generally) refuse to respond to prompts requesting content that could be used for malicious purposes.
WormGPT, FraudGPT and other malicious chatbots can create malware, find security vulnerabilities in systems, advise on ways to scam people, support hacking and compromise people’s electronic devices.
Marketed as the “ultimate enemy of ChatGPT”, the latest offering – Evil-GPT – writes malware in Python (a high-level, general-purpose programming language) that “grabs computer’s username, external IP address, and Google Chrome cookies, zip everything, and send to a Discord webhook” (a webhook automatically delivers messages and data to a text channel on the recipient’s Discord server).
Both the FBI in the US and European law enforcement agency Europol have issued warnings about the use of malicious chatbots. The agencies say LLMs could help cybercriminals commit fraud, impersonation and other social engineering attacks faster than before, as well as improve their written English. LLMs also make it feasible to conduct large-scale phishing scams that target thousands of people in their native languages.
With the rise in malicious chatbots – and the sophistication and capabilities of LLMs ever increasing – businesses need to be even more vigilant to avoid falling victim to an AI-generated cyberattack.
Businesses need to be aware of the newest waves of attacks and ensure they have measures in place to counter them. For example, deploying tools such as endpoint detection and response (EDR) can assist by warning of an attempted attack. It is also vital that employees are educated on the techniques being used by cybercriminals and the types of attacks they may see.
Talk to your EBM Account Manager about risk mitigation strategies and the cyber insurance policy options available for your business.