Insurance in the age of AI

In just the last couple of years, the adoption of artificial intelligence (AI) among businesses has skyrocketed. It has been estimated that around 40% of companies worldwide are actively using AI in their operations, and another 42% are exploring its adoption. The AI market is estimated to have reached US$184.04 billion at the end of 2024.

In Australia, over 35% of businesses have adopted AI or automation technologies, and businesses’ AI-related spending grew by 20% in 2024, reaching an estimated A$3.5 billion. The market is set to grow at an annual rate of 28.55% to reach a value of A$20.34 billion by 2030.

Even the Australian Government is pushing for greater use of AI, having allocated A$124 million toward AI research and development to accelerate adoption. Little wonder, as greater utilisation of AI in key Australian industries is expected to deliver a short-term boost to GDP of more than A$200 billion per year and create an additional 150,000 jobs between 2023 and 2030, according to Professor John Mangan, an emeritus professor of economics at The University of Queensland.

Larger businesses are leading the charge in adopting the technology – around 60% of large companies are investing in AI, compared with 35% of SMEs. Common applications include predictive analytics, customer service chatbots, and process automation. Data from the Department of Industry, Science and Resources showed businesses adopting AI favoured:

  • data entry
  • document processing
  • fraud detection
  • generative AI assistants, and
  • marketing automation.

The 2025 Australian Tech Leaders Survey found AI is set to play a central role in shaping Australian business strategies this year, with one-third of respondents identifying AI as the most significant opportunity for business growth and 67% citing AI as the defining technology for 2025.

Key advantages in using AI include increased efficiency and productivity, enhanced decision-making, reduced human error, cost savings, efficient handling of big data, improved customer experience, improved safety, and automation of repetitive tasks and processes. Consequently, 48% of businesses report a positive ROI within the first year of implementing AI solutions.

AI presents opportunities – but also risks.

The adoption of AI is not without risks.

In using AI, businesses could face losses from risks such as cybersecurity threats, copyright infringement, incorrect or biased outputs, misinformation or disinformation, and data privacy breaches, Deloitte notes.

AI incidents are rising. According to Stanford University’s AI Index, there were 260 AI incidents and controversies in 2023 – up 2,500% from just 10 in 2012. And a World Economic Forum report found AI was the biggest technology risk organisations face.

According to research from NTT Data, 89% of executives are concerned about AI security risks. The rapid adoption of AI also poses risks for directors and officers, particularly with respect to disclosures and regulation, Allianz found, with D&Os citing AI-related risks and litigation as a key concern in 2025. Clyde & Co’s 2024 corporate risk radar found cyber threats and AI disruption were the top business risks: 76% of respondents cited cyber threats as their top concern, while 29% highlighted AI-driven disruption as a high-impact risk.

CrowdStrike’s 2024 State of AI in Cybersecurity Survey found just 39% of respondents believe AI’s benefits outweigh its risks.

AI is not just a tool – it is a target.

It is not only legitimate businesses that are adopting AI; criminals are also exploiting the technology.

AI is reshaping the cyber risk landscape. Hackers are leveraging AI to enhance traditional attacks like phishing, email extortion and other social engineering tactics, while also developing new methods of attack like deepfake scams. Ransomware-as-a-service, where criminals deploy pre-developed ransomware tools, is also expected to increase with the support of AI. AI serves as a primary tool for cybercriminals to research their targets more effectively, allowing them to gather information and launch attacks in shorter timeframes.

AI also allows cybercriminals to adapt quickly to cybersecurity measures by automating malicious activities and strategies. According to Gartner, by 2027, 17% of all cyberattacks and data incidents will involve generative AI tools.

AI tools lower the barrier for cybercriminals and enable them to increase the sophistication of their attacks. The rapid evolution of these AI-enabled attack techniques makes it increasingly difficult for businesses to detect and defend against threats.

The rising and evolving threat is having a flow-on effect on insurance.

According to S&P Global Ratings, AI will be a “key focus” for the cyber insurance industry as attackers and defenders aim to deploy it.

AI can be integrated into diverse applications, affecting various aspects of a business’ operations. Such integration can alter existing risk profiles and introduce new risks.

The increasing use of AI across industries is prompting both policyholders and insurers to assess how coverage can address AI-related liabilities.

Businesses adopting AI need to ask: What specific risks does AI introduce for the business? How can AI affect traditional insurance policies? And does insurance cover AI? Your EBM Account Manager can help you answer these questions.

The insurance industry is addressing AI-related risks by adapting traditional policies and developing AI-specific coverage options.

Law firm Herbert Smith Freehills notes that AI risks may be covered under traditional policies that do not expressly exclude them, including:

  • Professional indemnity (PI) – liabilities arising from AI-related services or the use of AI in the provision of professional services.
  • Directors’ and Officers’ (D&O) – where executives face scrutiny over AI governance issues.
  • Public and products liability (PPL) – if an AI-powered product malfunctions and causes harm to consumers.
  • Cyber – may address data breaches, security incidents, system failures and potential ransomware attacks linked to AI.
  • Employment practices liability (EPL) – claims related to AI-driven decisions that result in workplace discrimination or unfair treatment.
  • Property damage and business interruption (BI) – if AI contributes to property damage or operational disruptions.

It is important to note that cyber insurance coverage focuses on mitigating operational losses stemming from cyberattacks and data breaches. At this stage, it is not clear exactly how AI will cause loss under a cyber insurance policy.

A challenge with cover lies in determining liability. The lawyers note that liability policies typically require a causal link between any claim against the policyholder and the perils insured by the policy (for example, acts or omissions in the performance of professional services in PI policies, or wrongful acts by persons in their capacity as directors or officers in a D&O context). AI decision-making raises questions about accountability – that is, who is responsible. Does liability fall on the business, the AI provider, or another party? This ambiguity can affect policy coverage.

A case in point was a chatbot incident involving Air Canada in 2024. The airline was found liable and ordered to pay damages to a passenger after its chatbot gave him misleading information. The chatbot assured the passenger that he could book a full-fare flight for his grandmother’s funeral and apply for a bereavement fare later. This was not the case. The tribunal rejected the airline’s argument that the passenger should have followed the link provided by the chatbot, where he would have seen the correct policy, as well as its assertion that the chatbot was “responsible for its own actions”. In this case, no insurance product existed to cover that liability, and cyber policies did not apply – highlighting a coverage gap that insurers will need to address.

AI exclusions

As new risks develop, insurers will likely determine whether to price in or exclude AI risk.

Options for insurers include adjusting coverage, offering specific endorsements, or excluding certain risks, as has happened in the cyber insurance space over the years.

While most policies do not yet include AI-specific exclusions, insurers may introduce them as risks become more defined.

As the situation evolves, your EBM Account Manager can discuss if and how exclusions are being adopted in the market (either as standalone clauses or within existing cyber exclusions).

AI-specific policies

AI poses a broad range of risks, including data and privacy risk, model manipulation, inherent model bias, supply chain risk, overreliance on AI for cybersecurity, and regulatory risk. These risks span the full scope of a business’ operational, commercial, and regulatory activities and have the potential to create significant losses and liabilities.

Given the wide scope of risk, some insurers may determine that AI risks are too unpredictable to cover under existing policies – giving rise to AI-specific policies.

Affirmative AI insurance is emerging in the market and is expected to expand as businesses seek protection against AI-related failures.

Several insurers have introduced products designed to address AI-related losses. This includes Munich Re, which has developed a policy that covers losses when an AI model fails to perform as expected. AXA XL launched an endorsement to cover businesses developing their own generative AI models. Armilla Insurance offers warranties ensuring that AI models perform as intended by developers. Meanwhile, Coalition has added an AI endorsement to its cyber policies, broadening coverage for AI-driven incidents.

AI-specific policies are expected to grow significantly. The Deloitte Center for Financial Services projects that, by 2032, insurers could write approximately US$4.77 billion in annual global AI insurance premiums.

While affirmative AI policies are now emerging, most businesses need to focus on ensuring that existing policies adequately address AI-related exposures.

Businesses implementing AI should talk with their EBM Account Manager about how AI affects their risk profiles and, consequently, their insurance coverage. Together we can:

  • Assess AI-related exposures – understanding how AI integrates into operations and identifying potential risks.
  • Communicate AI governance – presenting the business’ governance framework to insurers.
  • Review and adjust coverage – ensuring policies are aligned with the evolving risk landscape.
  • Monitor policy language – notifying you about new exclusions and negotiating terms to maintain comprehensive coverage.

Your broker can help you determine how liability may arise and look at the ways cyber insurance and liability policies may provide protection for the business.