Weaponised AI – how cybercriminals are exploiting the tech
Artificial intelligence (AI) uptake within Australian businesses is ever-increasing. Around 85% of businesses are currently implementing, or intend to implement, AI, according to data cited by All About AI. Some 53% of Australian professionals are actively using or experimenting with generative AI in their work (up from 36% in 2023), and 35% of employees report that their organisations are investigating how best to deploy AI technologies. The Department of Industry, Science and Resources notes that by Q4 2024, some 40% of SMEs in the country had adopted AI.
AI is big business. By the end of 2025, the AI market in Australia is projected to be worth $9.41 billion; growing at an annual rate of 28.55% between 2025 and 2030, it is set to reach $20.34 billion by 2030. The Productivity Commission has estimated AI could add $116 billion to the Australian economy over the next decade.
How businesses are using AI
The Department of Industry notes the top five applications favoured by SMEs adopting AI are:
- Generative AI assistants.
- Data entry and document processing.
- Fraud detection.
- Marketing automation.
- Customer support and chatbots.
Businesses looking to introduce AI into their operations favoured data entry and document processing, and fraud detection.
Services, hospitality, distribution, and retail trade businesses are most likely to have adopted generative AI assistants, while retail trade and hospitality businesses led in marketing automation.
The top three business outcomes SMEs believed AI could help achieve were:
- Faster access to accurate data to inform decision-making (22% said it was likely; 49% said it was possible).
- Improved customer experience/engagement (18% likely; 46% possible).
- Stronger security, data protection and fraud detection (18% likely; 53% possible).
However, it isn’t only legitimate businesses that are tapping the tech. Cybercriminals are also readily adopting AI to launch more efficient and effective attacks.
How cybercriminals are using AI
Defined in the simplest terms, AI “refers to a machine’s ability to combine computers, datasets and sets of instructions to perform tasks that usually require human intelligence, such as reasoning, learning, decision-making and problem-solving”, notes Forbes.
It can be categorised as a ‘general-purpose tool’ (i.e. it can be used for almost every task), and cybercriminals can – and do – use it for malicious purposes.
Rapid advancements in machine learning (ML), natural language processing (NLP), and automation have led to a rise in AI-assisted cybercrime.
The Australian Signals Directorate’s (ASD) 2024-25 Cyber Threat Report notes: “The prevalence of artificial intelligence almost certainly enables malicious cyber actors to execute attacks on a larger scale and at a faster rate. The potential opportunities open to malicious cyber actors continue to grow in line with Australia’s increasing uptake of – and reliance on – internet-connected technology.”
AI is making it easier for threat actors to commit various cybercrimes – from precision-targeted phishing campaigns and compromised supply chains to automated malware creation and deepfake impersonations. The technology is making campaigns more sophisticated and credible, and their deployment broader and more rapid.
As a result, these crimes are more effective – and harder to detect.
Forbes notes some ways in which AI can be used in cybercrime include:
- Enhancing existing attacks – making it more difficult for antivirus software/spam filters to detect threats.
- Creating new attacks – AI can be used to manipulate or fabricate data to sow confusion or impersonate officials.
- Automating and scaling attacks – cybercriminals can use AI to automate large-scale attacks with very little effort.
Key uses of AI in cybercrime
Cybercriminals are using AI-powered attacks including:
Deepfakes
The term ‘deepfake’ combines ‘deep learning’ and ‘fake’: AI is used to craft or manipulate audio-visual media so that it appears authentic. The technology is increasingly used in social engineering scams, allowing criminals to impersonate individuals to trick victims into providing sensitive information or making unauthorised financial transactions. According to Pindrop’s 2025 Voice Intelligence & Security Report, deepfake fraud surged 1,300% in 2024. “AI-generated media is not just a future risk, it’s a real business threat. We’re seeing executives impersonated, hiring processes compromised, and financial safeguards bypassed with alarming ease,” Andrew Philp, ANZ Field CISO at Trend Micro, said in a statement.
Phishing attacks
AI is used to craft highly personalised phishing emails (voicemails, SMS, QR codes) by analysing social media profiles and email patterns – making the phishing attempts more convincing and therefore more likely to succeed. Phishing emails crafted by large language models (LLMs) mimic individual writing styles, include personal details, and bypass spam filters. Almost gone are the days of easy-to-spot scams, full of spelling mistakes, dodgy logos and unrealistic ‘bait and lures’, as cybercriminals deploy attacks that are increasingly harder to distinguish from legitimate messages.
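On the defensive side, even simple heuristics catch a share of these messages. The following is a minimal sketch – not a production filter – that flags sender domains sitting within a small edit distance of a trusted domain, a hallmark of convincing lookalike phishing. The trusted-domain list and distance threshold are illustrative assumptions.

```python
# Minimal sketch: flag sender domains that closely resemble a trusted
# domain. The trusted list and threshold below are assumptions for
# illustration, not a production control.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = ["amazon.com", "mybank.com.au"]  # assumed examples

def is_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """Flag domains a small edit away from a trusted domain; an exact
    match is the legitimate domain itself, so it is not flagged."""
    for trusted in TRUSTED_DOMAINS:
        distance = levenshtein(sender_domain.lower(), trusted)
        if 0 < distance <= max_distance:
            return True
    return False

print(is_lookalike("amaz0n.com"))  # True  - one-character swap
print(is_lookalike("amazon.com"))  # False - the genuine domain
```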
Social engineering scams
AI is supercharging social engineering by transforming how cybercriminals manipulate human behaviour. In addition to deepfake technology, threat actors are:
- Using dynamic vector switching – starting with a benign email, measuring engagement, then pivoting to deliver a voice or video payload.
- Creating personas at scale – building credible personas from data aggregated from social media and breach dumps, complete with names, roles and tone of voice, to infiltrate businesses.
- Personalising spear-phishing – synthesising public information across all channels connected to an individual to create highly targeted and specific phishing campaigns.
Cybercriminals can also use NLP-based AI chatbots to manipulate individuals into revealing confidential information. Read our article Hacking the Human.
Business email compromise (BEC)
BEC and funds transfer fraud (a type of phishing attack) are being enabled by AI. Businesses are targeted with the aim of stealing money and/or critical information. AI algorithms can analyse communication patterns and generate convincing phishing emails (or deepfakes imitating an executive’s voice and appearance) which impersonate high-level executives or business partners to deceive employees into transferring funds or disclosing sensitive information. Read our article BEC Concerns SMEs.
Business identity compromise (BIC)
Cybercriminals are increasingly using AI to enhance their BIC tactics. Among the AI-enabled techniques used for BIC are deepfakes, phishing, BEC, and credential theft (whereby AI is used to generate working credential sets that cybercriminals use to gain access to systems or take over accounts). Read our article BIC & the AI Threat.
Cyber fraud
AI has enabled fraudsters to develop more sophisticated scams and fraud attempts. Roger Darvall-Stevens, head of fraud and forensic services at RSM Australia, told Accountants Daily that fraud typologies often sit at the intersection of IT security concerns, classic fraud and cyber fraud. “This is technology enabling further perpetration of the classic fraud types such as theft, identity theft and fraud, account takeover and cheque fraud. Cyber fraud examples in addition to deepfakes include business email compromise, identity theft and fraud, synthetic identity fraud, pharming (a cyberattack through a website that looks legitimate), and methods of sending various communication types to entice and deceive victims to part with money such as phishing (emails), vishing (via voice by phone), smishing (via text), and quishing (via QR code).”
Chatbots
LLMs like OpenAI’s ChatGPT, Microsoft’s Bing Chat and Google’s Gemini are readily being adopted by the general public and businesses – and by cybercriminals. Threat actors can use these chatbots to enhance and augment their attacks. In addition to using mainstream chatbots, a plethora of ‘evil chatbots’ have also been developed with the explicit purpose of creating malicious content. Malicious LLMs, such as GhostGPT, accelerate the creation of polymorphic malware that adapts to avoid detection, notes Security Daily Review. Read our article Rise of Evil Chatbots.
Adversarial AI
Adversarial AI (also known as adversarial ML) involves the study of attacks on ML algorithms and the development of defences against such attacks. These attacks exploit vulnerabilities in ML models to manipulate their behaviour, often leading to incorrect predictions or classifications. “Cybercriminals can distort the output generated by AI by feeding them inaccurate data or tampering with the settings of the AI model. The inaccuracies introduced can lead to dangerous biases or instructions being created that suit the threat actor’s objectives”, notes Kaspersky. Examples include evasion attacks (modifying input data to evade detection by the model), data poisoning attacks (contaminating the training data with malicious data to corrupt the model), Byzantine attacks (occurring in distributed learning environments where some participants act maliciously to disrupt the learning process), and model extraction (probing an ML system to reconstruct the model or the data on which it was trained).
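To make the evasion idea concrete, here is a minimal sketch of the widely published fast gradient sign method (FGSM), included for defensive awareness: a perturbation too small for a human to notice can flip a classifier’s output. The toy model and input are assumed placeholders, not any particular production system.

```python
# Minimal FGSM evasion sketch (PyTorch). The classifier and input are
# toy stand-ins; the point is how little perturbation is needed.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_evasion(model, x, true_label, epsilon=0.03):
    """Nudge x one signed-gradient step in the direction that
    maximises the model's loss, encouraging misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    adversarial = x + epsilon * x.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in valid range

# Toy stand-in classifier and input, purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)          # a fake 28x28 'image'
x_adv = fgsm_evasion(model, x, torch.tensor([3]))
print((x_adv - x).abs().max())        # perturbation bounded by epsilon
```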
Ransomware attacks
Ransomware encrypts business-critical data and demands a ransom for the decryption keys. Attacks can be optimised using AI – for example, AI algorithms can automate and enhance ransomware distribution and selectively target valuable assets. AI can also analyse potential victims, allowing cybercriminals to tailor their attacks and make them more effective.
Fraudulent transactions
Sophisticated AI algorithms can be employed to automate fraudulent transactions targeting businesses. By mimicking legitimate transaction patterns, AI can be used to evade traditional fraud detection systems and exploit weaknesses in payment processes.
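The defensive flip side is also ML. Below is a minimal, hedged sketch of unsupervised anomaly detection over transaction features with scikit-learn; the features, figures and contamination rate are illustrative assumptions, not a tuned fraud model.

```python
# Minimal anomaly-detection sketch: an IsolationForest trained on
# 'normal' transactions flags ones that deviate. All figures are
# synthetic, assumed for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per transaction: [amount_dollars, hour_of_day] (assumed)
normal = np.column_stack([rng.normal(120, 30, 500),   # everyday amounts
                          rng.normal(13, 2, 500)])    # business hours
suspicious = np.array([[4800, 3.0],   # large transfer at 3am
                       [5200, 4.0]])  # large transfer at 4am

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))  # -1 = flagged anomalous, 1 = normal
```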
Investment scams
Cybersecurity experts have noted a sharp rise in investment scams driven by AI. NordVPN’s chief technology officer Marijus Briedis told Cyber Daily scammers are “using AI not just to automate attacks but to make them deeply convincing”. AI has transformed traditional scams, merging automation with social engineering to create fake investment opportunities, voice calls, and phishing campaigns that appear genuine. Cybercriminals are now using AI-generated voices to impersonate trusted contacts or financial institutions, while fake online shops – many built with AI-designed templates – have proliferated, with more than 120,000 malicious sites impersonating Amazon alone in just two months, noted the publication.
Payment gateway fraud
AI technology can be used to automate and leverage various aspects of payment gateway fraud. Techniques such as generating realistic synthetic identities, analysing patterns to evade detection systems or conducting targeted phishing attacks using AI-generated content can make the fraud more sophisticated and challenging to detect.
Intellectual property (IP) theft
AI can help automate the process of targeting businesses to steal valuable IP. Forbes notes: “AI algorithms can analyse a high volume of data and identify high-value trade secrets or sensitive information, facilitating their theft for competitive advantage or financial gain.” Read our article IP vs AI.
Autonomous malware
AI can be used to generate malware that adapts to evade detection by antivirus software – for example, by creating variants of existing malware that are harder to identify. Self-adapting malicious software is capable of altering its behaviour in real time to circumvent static security controls.
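A small demonstration shows why static signature matching struggles against such variants: flipping a single byte of a payload yields an entirely different cryptographic hash, so a hash blocklist misses every mutation. The ‘payloads’ below are harmless stand-ins.

```python
# Harmless demonstration of signature evasion: two payloads differing
# by one byte produce unrelated SHA-256 digests, so hash-based
# blocklists cannot keep up with self-modifying malware.

import hashlib

variant_a = b"harmless stand-in payload"
variant_b = b"harmless stand-in payloaX"  # a single byte changed

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
# The digests share nothing, which is why defenders pair signatures
# with behavioural detection rather than relying on hashes alone.
```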
Automating large-scale attacks
AI can enable cybercriminals to automate and scale attacks, thereby allowing them to launch large-scale operations with minimal effort. For example, they can automate the distribution of phishing emails or the execution of denial-of-service (DoS) attacks.
Advanced Persistent Threats (APTs)
APTs use sophisticated techniques to breach business networks, remain undetected and exfiltrate sensitive information over an extended period. AI helps these attackers rapidly detect exploitable systems and misconfigurations; alongside reconnaissance and vulnerability scanning, AI algorithms enable them to adapt their tactics, evade security measures and exploit vulnerabilities in business systems.
Password cracking
By employing ML and AI to improve password-guessing algorithms, criminals can crack passwords more efficiently – gaining access to accounts and systems. AI algorithms are used to analyse large password datasets and generate likely password variations.
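One practical defence is screening passwords against known breach corpora before they are accepted. Below is a minimal sketch using the Have I Been Pwned range API, which uses k-anonymity – only the first five characters of the password’s SHA-1 hash ever leave the machine. The example password is illustrative, and the `requests` package is assumed to be installed.

```python
# Minimal sketch: check whether a password appears in known breach
# data via the Have I Been Pwned range API (k-anonymity: only a
# 5-character hash prefix is sent over the network).

import hashlib
import requests

def times_breached(password: str) -> int:
    """Return how many times the password appears in the HIBP corpus."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():     # lines look like SUFFIX:COUNT
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_breached("password123"))  # a large number: reject this password
```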
Hacking
In addition to password cracking, cybercriminals are also using AI to automate and enhance various hacking activities. “AI algorithms enable automated vulnerability scanning, intelligent system weaknesses detection and exploitation, adaptive malware development, etc.”, notes Forbes.
Supply chain attacks
AI can also be used to compromise an organisation’s software or hardware supply chain, for example by inserting malicious code or components into legitimate products or services.
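A basic safeguard is to verify every third-party artifact against a vendor-published digest before it enters the build. In the minimal sketch below the ‘vendor package’ and its known-good digest are stand-ins created locally; in practice the digest would come from the vendor over a separate trusted channel.

```python
# Minimal supply-chain check: stream a file through SHA-256 and compare
# it to a known-good digest before installing or executing it.

import hashlib

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    return sha256_of(path) == expected_hex

# Stand-in 'vendor package' created locally purely for illustration.
with open("vendor_package.bin", "wb") as f:
    f.write(b"stand-in vendor artifact")

known_good = sha256_of("vendor_package.bin")      # would come from the vendor
print(verify("vendor_package.bin", known_good))   # True - safe to proceed
```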
Distributed Denial of Service (DDoS) attacks
The scale and intensity of DDoS attacks against business websites and online services can be enhanced by AI. For example, AI-powered botnets can coordinate massive volumes of malicious traffic to overwhelm servers and disrupt business operations.
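Application-level throttling is only one layer of DDoS defence – the heavy lifting happens upstream at CDNs and scrubbing services – but a per-client token bucket illustrates the principle. The capacity and refill rate below are illustrative assumptions.

```python
# Minimal per-client token bucket: a burst beyond the bucket's capacity
# gets throttled while steady traffic passes. Parameters are assumed
# for illustration.

import time

class TokenBucket:
    def __init__(self, capacity: float = 10, refill_per_sec: float = 5):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Grant the request if a token is available, else reject it."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket()
results = [bucket.allow() for _ in range(20)]  # near-instant burst of 20
print(results.count(True), "allowed,", results.count(False), "throttled")
```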
Manipulating information
AI can be used to create fake news or disinformation campaigns designed to manipulate public opinion or create confusion during critical events. For example, interference in election campaigns has become a reality. State-backed groups are also using AI to conduct reconnaissance, probe for vulnerabilities, harvest credentials and extract sensitive data, notes AI developer Anthropic.
Smear campaigns
Cybersecurity experts have warned of a mounting risk to corporate reputations as AI-powered smear campaigns become more prevalent. “A growing number of companies are confronting coordinated disinformation attacks, often orchestrated by ex-employees, corporate rivals, or activist groups,” reported Insurance Business. According to London-based law firm Schillings, which specialises in crisis response and reputation management, there has been a 150% rise in smear campaigns over the past three years targeting high-performing firms and executives.
Protecting businesses from AI threats
Using AI tools enables cybercriminals to automate processes, analyse vast amounts of data, and execute complex strategies.
To combat the risks, businesses are encouraged to:
- Keep up to date with the emerging AI threat landscape.
- Implement robust cybersecurity measures.
- Investigate risk transfer options.
AI and insurance
The AI landscape is rapidly evolving, as are the threats to businesses from cybercriminals using the technology for nefarious purposes. The insurance industry is attempting to keep pace, but it is a dynamic and challenging environment, and few policy options specific to countering AI risks are currently available. In light of this, it is important for businesses to work with their EBM Account Manager to review their insurance program, see where cover for AI risks may be provided, and look at available options where protection gaps are identified. Read our article AI vs Insurance.
Key takeaway
The integration of AI into cybercrime represents a significant evolution in the tactics used by cybercriminals. By tapping the tech – using ML algorithms to craft more convincing phishing lures, automating attacks at scale, or deploying AI-generated deepfakes for extortion and misinformation – cybercriminals are achieving higher success rates.
As AI technology continues to advance, businesses need to stay informed and implement robust cybersecurity measures to mitigate these emerging threats.
Need expert guidance?
Talk to your EBM Account Manager about the AI risks facing your business. While AI-specific cover is not currently mainstream, your broker can work with you to find ways to mitigate some AI risks through your insurance program.
Together we can:
- Review policies – assess whether a cyber insurance policy explicitly addresses AI-driven threats, including deepfakes and social engineering attacks. Other policies within your insurance program may also offer some protection against AI risks.
- Discuss risks – keeping up to date with the evolving threat landscape is critical, and your broker can also provide guidance on proactive cyber hygiene and insurance coverage.
- Assess risks – adopt strong internal controls, such as multi-factor authentication (a minimal verification sketch follows below), voice verification protocols and employee training, to detect and respond to AI-enabled threats.
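As a flavour of what one such control looks like in code, here is a minimal sketch of verifying a time-based one-time password (TOTP) as a second factor. It assumes the pyotp package; the secret is generated on the spot purely for illustration and would normally be provisioned once per user and stored securely.

```python
# Minimal TOTP second-factor sketch using the pyotp package. The secret
# here is illustrative; real secrets are provisioned once and stored
# securely server-side.

import pyotp

secret = pyotp.random_base32()   # shared with the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                # what the authenticator app displays
print(totp.verify(code))         # True  - fresh, matching code
print(totp.verify("000000"))     # False - wrong or stale code
```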