Australia is looking to regulate the use of AI – what businesses need to know 

The use of artificial intelligence (AI) is growing exponentially. Figures from Forbes show the AI market is projected to reach a staggering US$407 billion by 2027 – a substantial increase from its estimated US$86.9 billion revenue in 2022. According to PwC’s Global Artificial Intelligence Study, AI could contribute up to US$15.7 trillion to the global economy in 2030 – more than the current output of China and India combined. 

Locally, digital technologies, including AI, could be worth $315 billion to the Australian economy by 2028, according to the CSIRO. A report from the Tech Council of Australia and Microsoft notes generative AI, such as ChatGPT, could contribute up to $115 billion a year to Australia’s economy by 2030. Management consultancy firm McKinsey estimates the integration of automation technologies could add a further $170 billion to $600 billion a year to Australia’s Gross Domestic Product (GDP) by 2030. Research from data hub Statista revealed the industries most serviced by AI firms are aerospace, defence and security (50.6%), followed by mining, energy and resources (43.4%), and agriculture and healthcare (both 33.7%). 

According to a report from IT consultancy firm Avanade, Australians are embracing AI in the workplace faster than any other country. The AI Readiness Report found that, globally, an average of 57% of workplaces make daily use of AI – in Australia, that figure jumps to 76%. Australian businesses are also concerned about adopting AI fast enough: 96% believe an AI-first policy is required to remain competitive, and that it must be put in place within the next 12 months to avoid falling behind in the marketplace. 

The advance of AI has its challenges.  

Proofpoint’s 2023 Board Perspective Report revealed that the majority of Australian board members see the rise of AI as a security risk – with 71% viewing applications such as ChatGPT as a potential problem for their company. 

The World Economic Forum’s Global Risks Report 2024 highlights the dangers of AI-driven misinformation and disinformation – naming them the top short-term risk. 

Research from Versent found 11% of survey respondents were “terrified” of AI and 42% said they were “concerned”. Respondents were particularly wary of businesses using AI in their operations, with only 45% saying they trust companies to use AI appropriately. Data security was another major concern – 85% said they were either “worried” or “very worried” about their data being stolen. 

A study conducted by BSI across nine countries found Australians have the lowest trust in AI, with 64% wanting international guidelines for the safe use of AI to be put in place. 

Along with the rapid growth in AI adoption have come calls for greater regulation. 

Even tech powerhouses have weighed in on the debate.  

Bill Gates, whose Microsoft operates the Azure AI platform, floated the idea of establishing a “global government” or global regulatory body to keep potential abuses of AI at bay. 

Elon Musk stated that there is an “overwhelming consensus” that there should be some AI regulation. The tech mogul, whose AI company xAI has created a chatbot named Grok, said that AI regulation “will be annoying” but, ultimately, “having a referee is a good thing”. 

Sam Altman, CEO of ChatGPT creator OpenAI, proposed the establishment of an International Atomic Energy Agency (IAEA)-like body to regulate AI. Google CEO Sundar Pichai, whose company developed the Bard chatbot, also called for a global regulatory framework for AI, similar to the treaties used to regulate nuclear arms. Apple CEO Tim Cook, meanwhile, believes regulation and guardrails are necessary. 

Those calls are being addressed by organisations around the globe – including in Australia. 

Companies are under immense pressure to identify innovative ways to leverage AI and differentiate themselves to stay ahead of the competition. According to McKinsey, “over the last three years, the spread in digital and AI maturity between leaders and laggards has increased by 60%”. It notes evidence that companies with leading digital and AI capabilities outperform laggards by two to six times on total shareholder returns across every sector analysed. 

With businesses rushing to implement AI, unchecked use of the technology carries risks. These include privacy, intellectual property and security issues; misinformation and disinformation; discrimination and unfair practices; data poisoning; input manipulation; and AI “hallucinations” (responses generated by an AI that present false or misleading information as fact). 

In response, businesses and boards are increasing their focus on AI governance.  

According to the Human Technology Institute at the University of Technology Sydney, around two-thirds of Australian organisations are already using, or actively planning to use, AI systems to support a wide variety of functions. The State of AI Governance in Australia report notes that corporate leaders are largely unaware of how existing laws govern the use of AI systems in Australia and that they need to rapidly appreciate their existing legal duties and emerging responsibilities regarding AI.  

According to one law firm, companies need to start applying guardrails to AI development, use and deployment. Governance aspects should include: 

  • accountability
  • visibility
  • regulatory compliance
  • risk management systems
  • policies and processes
  • data governance, quality and privacy
  • transparency, explainability and interpretability
  • consumer engagement
  • supplier risk management
  • accuracy, robustness and security
  • AI incidents and resilience, and
  • training.

The Government is also stepping in to regulate the use of AI. 

In June 2023, the Australian Government opened consultation on its “Safe and Responsible AI in Australia” discussion paper. Consultation closed on 4 August 2023 (more than 500 submissions were received) and an interim response was released on 17 January 2024. 

Minister for Industry and Science, the Hon. Ed Husic, said: “Australians understand the value of artificial intelligence, but they want to see the risk identified and tackled … Australians want stronger guardrails to manage higher-risk AI”. 

The response is targeted towards the use of AI in high-risk settings (where harms could be difficult to reverse), while ensuring the vast majority of low-risk AI continues to flourish largely unimpeded. 

The Government’s consideration of mandatory guardrails for AI development and deployment may result in amendments to existing laws or new, AI-specific legislation. 

The first step in developing the safeguards, the Government noted, is to identify what risks AI tools present, what mandatory guardrails would appropriately address those risks, and how best to implement them. 

Part of the risk-based approach includes: 

  • introducing voluntary measures, such as labelling and watermarking of AI-generated content, for low-risk AI; and 
  • imposing mandatory rules such as independent testing and audits for high-risk products such as self-driving cars. 

Immediate actions being undertaken include: 

  • working with industry to develop voluntary AI Safety Standards; 
  • working with industry to develop options for voluntary labelling and watermarking of AI-generated materials; and 
  • establishing an expert advisory group to support the development of options for mandatory guardrails. 

Mandatory guardrails to promote the safe design, development and deployment of AI systems will be considered, including possible requirements relating to: 

  • Testing – testing of products to ensure safety before and after release. 
  • Transparency – transparency regarding model design and data underpinning AI applications; labelling of AI systems in use and/or watermarking of AI-generated content. 
  • Accountability – training for developers and deployers of AI systems, possible forms of certification, and clearer expectations of accountability for organisations developing, deploying and relying on AI systems.   

The Government said it is closely observing similar regulatory efforts being undertaken in the EU, the US and Canada, and it is working with other countries to shape international efforts in this area.  

The use of AI in the insurance, advice, banking and credit sectors is also being reviewed by the Australian Securities and Investments Commission (ASIC) amid concerns that the rapid uptake of the technology is increasing risks for consumers and outpacing regulation. 

Speaking at a University of Technology Sydney symposium on 31 January 2024, ASIC chair Joe Longo expressed concerns about the adequacy of current regulations to prevent AI-related harms before they occur. Mr Longo said, “the rapid pace of AI advancement raises crucial questions about transparency, explainability, and the capacity of existing regulations to adapt promptly”.  

Regulation of AI brings additional liability risks for business. 

Businesses using AI need to be aware of the current issues surrounding the technology’s use. The Australian Cyber Security Centre has released a comprehensive set of guidelines on how to safely take advantage of generative AI platforms, particularly in the workplace. 

Owners, directors and boards also need to prepare for any regulatory changes that are likely to come into effect. Any changes to regulations have the potential to introduce additional liabilities for businesses. Talk to your EBM Account Manager about risk mitigation strategies and the insurances available to help protect the business, including cyber and liability covers such as Directors & Officers and statutory liability.