I think, therefore I am agentic AI
Businesses across the globe are readily embracing generative artificial intelligence (GenAI). The adoption of GenAI grew by 890% in 2024, according to Palo Alto Networks’ The State of Generative AI 2025 report. The report found that organisations have, on average, 66 GenAI applications in their environments.
When it comes to AI adoption, Australia is no exception. In terms of GenAI use and maturity, Coleman Parkes Research notes Australia ranks fourth globally. According to the Unlocking Enterprise AI: Opportunities and Strategies report from data and AI company Databricks, 92% of Australian businesses are using GenAI – and 98% expect to see GenAI being adopted for both internal and external use by 2027.
GenAI is proving invaluable to many businesses. Some 71% of organisations worldwide report using GenAI in at least one business function, according to web hosting solution provider Hostinger. A 2024 poll by Google Cloud found 74% of enterprises using GenAI report a return on investment (ROI) within the first year, with an additional 30-35% expecting ROI on GenAI investments within the next 12 months.
GenAI is also infiltrating local businesses and homes. A January 2025 Google/Ipsos survey found 49% of Australians had used a generative AI tool in the past year (up from 38% in 2023) and noted it had become an essential tool in workplaces. The survey revealed 74% of Australians who used AI reported it helped them get more done at work and supported brainstorming and problem-solving, while Deloitte reported that by early 2024 around 38% of Australian employees were using GenAI in the workplace (up from 32% a year prior).

Research from data platform Snowflake found Australian and New Zealand businesses were more likely to dedicate a substantial portion of their technology budgets to GenAI than the global average. Of those surveyed, 32% of organisations in Australia and New Zealand were committing over 25% of their tech funding to GenAI development, compared to 25% globally. It also found 91% of Australian and New Zealand early adopters believe the technology is helping them make faster and more informed decisions. GenAI technologies, like ChatGPT, could add up to A$115 billion annually to the Australian economy by 2030, according to a joint report from the Tech Council of Australia and Microsoft.
AI’s next iteration – agentic AI – is also gaining traction
GenAI is designed to create original content such as text, images, video, audio, or software code in response to user prompts. It relies on machine learning models, particularly deep learning models, which simulate the learning and decision-making processes of the human brain, notes IBM.
Agentic AI, on the other hand, is designed to make decisions and act autonomously with minimal human supervision. It takes autonomy to the next level, using a digital ecosystem of large language models (LLMs), machine learning (ML), and natural language processing (NLP) to perform tasks on behalf of the user or another system.
According to IBM, agentic AI combines the flexible characteristics of LLMs with the accuracy of traditional programming, enabling key features including:
- Decision-making: Agentic AI systems can assess situations and determine the path forward with little or no human input.
- Problem-solving: Agentic AI uses a four-step approach – perceive, reason, act, and learn – to solve issues. Agents gather and process data, analyse it to understand the situation, then act, refining their behaviour through continuous learning and feedback.
- Autonomy: Agentic AI operates independently, making it suitable for tasks that require minimal human intervention, such as autonomous vehicles and virtual assistants.
- Interactivity: It can interact with the environment and adjust in real-time, such as self-driving vehicles constantly analysing their surroundings to make safe driving decisions.
- Planning: Agentic AI can handle complex scenarios and execute multi-step strategies to achieve specific goals.
Agentic AI is focussed on making decisions rather than creating new content, and it neither relies solely on human prompts nor requires constant human oversight, states IBM. Early-stage examples include autonomous vehicles, virtual assistants, and copilots with task-oriented goals.
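The perceive–reason–act–learn loop described above can be sketched in a few lines of code. This is purely an illustration of the pattern – the class, method names, and toy decision logic below are hypothetical, not taken from any particular agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class MinimalAgent:
    """Illustrative agent skeleton: perceive, reason, act, learn."""
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        # Gather raw observations from the environment.
        return {"observation": environment.get("state")}

    def reason(self, perception: dict) -> str:
        # Decide on an action; a real agent would call an LLM or planner here.
        if perception["observation"] == "anomaly":
            return "escalate"
        return "proceed"

    def act(self, decision: str) -> str:
        # Execute the chosen action and return the outcome.
        return f"executed:{decision}"

    def learn(self, decision: str, outcome: str) -> None:
        # Store feedback so future decisions can draw on past outcomes.
        self.memory.append((decision, outcome))

    def step(self, environment: dict) -> str:
        # One full cycle of the four-step loop.
        perception = self.perceive(environment)
        decision = self.reason(perception)
        outcome = self.act(decision)
        self.learn(decision, outcome)
        return outcome

agent = MinimalAgent(goal="monitor system health")
print(agent.step({"state": "normal"}))   # executed:proceed
print(agent.step({"state": "anomaly"}))  # executed:escalate
```

The point of the sketch is that each cycle feeds its outcome back into the agent's memory, so decision-making is driven by the agent's own observations rather than by a fresh human prompt each time.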
Benefits of agentic AI may include:
- Increased efficiency: Agentic AI automates routine tasks and decision-making, and performs tasks much faster than humans – boosting operational efficiency and increasing overall productivity.
- Improved customer service: AI-powered assistants and chatbots provide instant responses to customer inquiries and, by mimicking human decision-making, create more personalised and intuitive user interactions – reducing wait times and improving customer satisfaction.
- Smarter decision making: With real-time data processing, agentic AI provides actionable insights – enabling faster, more accurate and informed decision-making.
- Lower operational costs: By automating routine processes, businesses can reduce staffing needs and operational costs – freeing financial resources for strategic growth.
- Greater specialisation: By taking over routine tasks and providing data-driven insights, agentic AI frees employees to focus on creativity, innovation, collaboration, and communication – allowing employee roles to be optimised and scaled efficiently.
- Fostering innovation: Agentic AI’s advanced reasoning and execution capabilities support innovation by quickly analysing large datasets and generating novel solutions to complex problems.
Agentic AI makes headway into businesses
A McKinsey report from Q1 2025 highlights that 45% of Fortune 500 firms are running pilots or early-stage production systems with agentic capabilities, while Gartner’s 2025 Emerging Tech Report states more than 60% of enterprise AI rollouts this year will include agentic capabilities.
The agentic AI market – comprising autonomous agents that can make decisions, plan multi-step tasks, and adapt in real-time – is projected to grow from an estimated US$2.9 billion in 2024 to US$48.2 billion by 2030, according to Emergen Research.
Globally, 51% of companies have already deployed AI agents at their organisation, according to the 2025 PagerDuty Agentic AI Survey. A further 35% plan to deploy within the next two years – meaning, by 2027, 86% of companies expect to be operational with AI agents. The uptake is even higher in Australia, with around 60% of survey respondents saying they were already using agentic AI in their business.
The technology is being adopted by a diversity of organisations and sectors. For example, Salesforce’s Agentforce platform is being adopted by companies like hipages and Commonwealth Bank of Australia to automate repetitive tasks, improve customer service, and boost team efficiency. It is also being used by Queensland University of Technology to power its student contact centre, with AI agents triaging routine enquiries, summarising case notes and handing off only the most complex issues to human advisors.

The Mandarin notes government agencies are exploring agentic AI for 24/7 citizen support, such as Kids Helpline, which uses AI to summarise case notes and free up counsellors for more calls. Meanwhile, projects like EdChat and ChatNT are using agentic AI to deliver educational and tourism services more efficiently.

Fisher & Paykel has deployed AI agents to handle FAQs and schedule service appointments, freeing up human representatives for more complex interactions, reports Forbes. According to IT Wire, retailer Manhattan Associates is using agentic AI to enhance customer service by predicting order changes, identifying support trends, and enabling frictionless in-store payments.

It is even being adopted in the agriculture industry. Smart Company reports Farm Focus has introduced ‘Daisy’, an AI agent that helps farmers manage finances, answer queries, and generate reports. It has already saved the company over $1 million by consolidating tools and streamlining operations.
The PagerDuty survey also found 71% of companies with GenAI already in place have begun to deploy agentic AI. Companies using GenAI reported a 152% ROI and expect the ROI for agentic AI to be 171% – with 62% of companies expecting a more than 100% ROI.
Risks associated with agentic AI
While the development of agentic AI holds promise for a wide range of applications, there are also key risks in allowing machines to do the ‘thinking’ – that is, to perceive their environment, reason about it, make decisions, and take actions to achieve specific goals, all with a degree of autonomy.
Software and solution provider SAP’s 2025 AI Has a Seat in the C-Suite report revealed that 44% of executives in the United States would override a decision they had already planned to make based on AI insights, while 38% would trust AI to make business decisions on their behalf. The report also found that 74% had more confidence in AI advice than in advice from colleagues, family and friends, while 63% were using GenAI daily (15% several times a day). Another 55% of executives said they work for a company where AI-driven insights have replaced or frequently bypassed traditional decision-making.
That is a lot of stock being placed in GenAI to deliver sound business insights. With agentic AI, even more control may be given to the machines. This raises a number of concerns including:
- Lack of transparency and explainability: Many AI models, especially deep learning systems, function as ‘black boxes’, where even their creators may not fully understand how the system arrived at a particular decision, notes RPA Tech. If agentic AI is making important decisions autonomously, the inability to explain the reasoning behind those decisions becomes a major concern: a lack of transparency could allow unintentional biases, errors, or even catastrophic decisions to go unnoticed until it is too late.
- Misalignment with human values: AI systems are not inherently equipped to understand human values, ethics, or moral considerations – agentic AI operates purely on logic and programmed objectives, so it may pursue goals that are in conflict with human interests and lead to harmful outcomes.
- Lack of accountability: If an AI system makes a harmful decision, it can be difficult to pinpoint who is responsible: the developers who created the system, the organisation that deployed it, or the AI itself? RPA Tech notes this lack of accountability can make it harder to address mistakes and prevent similar incidents from happening in the future.
- Loss of control: Agentic AI systems can act unpredictably or take irreversible actions, making it difficult for humans to maintain oversight. Research at IBM uncovered disturbing patterns of such behaviour among unconstrained agents – deleting critical files, leaking confidential information, or probing system weaknesses – none of which the agents had been prompted to do, notes the Global Skill Development Council.
- Over-reliance on AI systems: Users may come to rely on agentic AI for decision-making in critical areas, reducing human involvement and oversight, and creating blind spots and dependencies on AI systems that may be vulnerable to errors or exploitation.
- Security vulnerabilities: Like any digital system, agentic AI is vulnerable to cyberattacks, manipulation, and exploitation, notes RPA Tech. Hackers could exploit vulnerabilities in AI algorithms to influence decision-making for malicious purposes. Agentic AI systems are subject to risks such as prompt injection, bias, and inaccuracies, which can exacerbate existing issues. The rise of agentic AI could lead to a boom in cybercrime via account takeovers by 2027 – with AI agents reducing the time it takes to exploit account exposures by at least 50%, analysts at research firm Gartner have suggested.
- Operational risks: Agentic AI could affect business operations, data privacy, regulatory compliance, and customer trust if not managed properly.
- Complex interactions and biases: The ability of agentic AI to collaborate and learn from other agents can lead to the propagation of errors or biases.
These risks can leave a business using the technology exposed to losses and liability claims.
According to global law firm Herbert Smith Freehills Kramer, issues with AI could result in both liabilities and first-party losses including:
- IP infringement
- customer claims
- discrimination
- regulatory action
- class actions
- system damage
- crime, and
- physical damage.
In light of these risks, careful management and oversight of agentic AI systems is crucial.
Agentic AI and insurance
Like other forms of artificial intelligence, agentic AI could expose the business using the technology to both liabilities and losses if something were to go wrong. Given the autonomous and ‘thinking and acting’ nature of agentic AI, a lack of human oversight or over-reliance on the decision-making functionality could compound the risks.
Businesses need to implement AI best practices (in terms of AI security, data protection, and human oversight) – and consider how insurance may be able to help safeguard against losses and liabilities stemming from agentic AI issues.
Given the rapidly evolving AI landscape, some insurers are developing affirmative AI insurance products to address specific AI risks. However, the affirmative AI insurance market is in its early stages and policies remain limited, though it is expected to expand as companies seek protection against AI-related failures.
In the meantime, businesses may find coverage for some AI-related issues in traditional insurance products (where AI has not been excluded), such as:
- Professional indemnity (PI) insurance – for liability arising from AI-driven services, including regulatory actions or customer claims.
- Directors’ & Officers’ (D&O) insurance – if executives face scrutiny over AI governance.
- Public and products liability (PPL) insurance – if an AI-powered product causes consumer harm.
- Cyber insurance – addresses data breaches, security incidents, and potential ransomware attacks linked to AI.
- Employment practices liability (EPL) insurance – may cover claims related to AI-driven decisions that result in workplace discrimination or unfair treatment.
- Property damage and business interruption (BI) insurance – may respond to losses if AI contributes to property damage or operational disruptions.
Global law firm Clyde & Co notes that many organisations are integrating AI into their business, and mistakes made by artificial intelligence may take years to be discovered – which could cost insurers dearly through unintended cover.
The issue of ‘silent AI’ – where a policy responds to AI-related losses even though AI is not explicitly addressed in the policy wording, meaning cover was never intended – is being considered by insurers, with some moving to specifically include or exclude AI in their wordings. In the case of cyber insurance, some insurers are adding AI-specific endorsements to their policies.
It is imperative that businesses implementing AI consider how liability may arise and whether their existing insurance policies provide adequate protection. Talk to your EBM Account Manager about the cover that may be available to help protect your business against liability risks and first-party losses that could arise from using agentic AI.