Why businesses need to make addressing risks part of the chatbot conversation

ChatGPT was released in November 2022 and, though barely six months old, businesses around the globe have been quick to jump on the bandwagon. It is the latest technology to make headlines, and many business leaders are integrating it into their products and services as quickly as possible. But such rapid adoption into business plans and operations rarely allows time to prioritise key aspects of risk management.

What is ChatGPT?

Developed by OpenAI, ChatGPT (GPT stands for generative pre-trained transformer) is a natural language processing tool driven by artificial intelligence (AI) technology that allows users to have human-like conversations with the chatbot – and much more.

You may be familiar with chatbots in online customer service settings, where the language model enables them to answer questions – with varying degrees of success! Now, ChatGPT is also being used to assist with tasks such as research, composing emails and other correspondence, preparing speech notes, drafting business plans, writing essays, and generating code.

Use is currently open to the public free of charge while ChatGPT is in its research and feedback-collection phase. A paid subscription (ChatGPT Plus) was launched in February 2023. In April this year, OpenAI announced that ChatGPT would be upgraded to allow users to communicate with the chatbot using both text and images.

According to Swiss bank UBS, ChatGPT is the fastest-growing app of all time, with analysts estimating it had 100 million active users in January – just two months after its launch.

ChatGPT works by drawing on the data it was trained on – written by all sorts of people (who may or may not be qualified enough to be considered ‘authorities’) – and using statistical prediction to answer the questions and queries users input. The replies it generates are prompted by textual requests and information, from which the chatbot ‘learns’ more about different subjects and how to discuss them.

Given the capabilities of the chatbot, many owners and managers are considering ways to incorporate the technology into their businesses to enhance workflows, streamline operations and improve customer experience.

ChatGPT threat landscape

As businesses increasingly embrace products and services based on AI, such as ChatGPT, the risks associated with the technology need to be understood so they can be adequately mitigated.

Cyber

In a survey by BlackBerry, 74% of IT professionals said they were concerned about the potential cybersecurity threats posed by ChatGPT. The concern is well founded: researchers report that hackers have been successfully using the chatbot in cyberattacks since at least December 2022. The use of generative AI by both businesses and threat actors could result in more effective cyberattacks, data breaches and other risks.

AI technology can make it harder for businesses to defend against certain cyberattacks such as social engineering (phishing) and business email compromise attacks. “Just imagine how much more powerful phishing attacks will be, for instance, when AI as sophisticated as ChatGPT sends emails to staff that appear to come from the boss, that use information only the boss would normally know, and that even use the boss’s writing style,” notes an April 2023 article in Forbes by Glenn Gow. The article goes on to highlight the potential for “the use of deepfake technology like voice clones in cyber swindles” to increase cyber risk.

According to a report from Team8, chatbots could be exploited by hackers to access sensitive corporate information or perform actions against the company.

A Salesforce survey found 67% of IT leaders were prioritising generative AI for their organisations, but 71% believed the technology was likely to “introduce new security risks to our data”.

For businesses using the technology, there is the risk of data breaches if the tool itself falls victim to a cyberattack. And because ChatGPT adds input from its users to the data it is trained on, the tool could potentially resurface confidential data entered by a business if properly prompted.

Technology limitations

Businesses must also consider the technological limitations of ChatGPT. While AI technology can replicate many human-like behaviours and capabilities, it lacks essential skills like critical thinking, strategic decision-making and creativity.

ChatGPT will often write plausible-sounding but incorrect or nonsensical answers, with a great deal of confidence. Chatbots have also been known to ‘hallucinate’, or make up content (fabricating facts and sources), where they do not have access to sufficient data. The ‘hallucination’ rate of ChatGPT has been estimated at between 15% and 20% – a significant margin of error. In short, it can get things wrong, and that is obviously risky.

AI models like ChatGPT require extensive training and fine-tuning before businesses can be assured of an acceptable level of reliability and effectiveness. At present, ChatGPT and other AI chatbots cannot accurately assess the information they provide to users, so businesses need to be cautious about how they use these tools.

Errors

The chatbot is trained on books, websites and articles to generate questions, answers, summaries, translations, calculations, code, conversations and more. Its knowledge is limited to the information available when it was trained (data up to June 2021), and it is unable to access new information. As a result, some of the information and answers ChatGPT provides may be of low quality, outdated or simply wrong. Businesses therefore cannot be certain that what the technology provides or produces will be accurate. In some cases, AI-generated errors could prove costly, subjecting businesses to government audits, fines and penalties, or legal action.

Legal

The technology can also create legal and privacy issues that businesses must consider. For example, AI-generated content can violate copyright laws or compromise individuals’ privacy.

ChatGPT has been trained on large datasets, including existing articles, literature, quotes and websites, so when a user prompts it for a response, the output may incorporate material subject to copyright. A business distributing that output may be infringing copyright.

Privacy concerns have also been raised. Incidents of ChatGPT leaking chat histories of people’s private conversations with the bot have already occurred. Should a business input sensitive or private data, and that data is breached, there could be serious ramifications.

In addition, the conversations users have with AI chatbots may be reviewed by AI trainers, inadvertently disclosing sensitive and confidential business information and trade secrets to third parties. This could potentially expose businesses to legal risks under privacy laws.

Data bias can result in legal and financial issues for the business. When data is collected and used to train machine learning models, these models may inherit the biases of the people building them, resulting in unexpected and potentially harmful outcomes.

Reputation

If a business’s AI behaves in a way that is not in accordance with its values, the result can be reputational damage.

The use of chatbots may also ruin relationships with customers. According to Forrester Research, 75% of consumers are disappointed by customer service chatbots, and 30% take their business elsewhere after a poor AI-driven customer service interaction.

Mitigating ChatGPT liabilities

While risk management around ChatGPT and AI adoption is in its infancy, there are a number of core principles that can help businesses effectively manage the evolving threats related to security, privacy and intellectual property.

According to PwC, best practice includes, but is not limited to, the following good advice:

  • Set generative AI usage policies. As businesses look to integrate generative AI with their own content, including their intellectual property and other assets, it is important to set policies that prevent confidential and private data from going into public systems, and to establish safe and secure environments for generative AI within the business.
  • Focus on data hygiene. Identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information to an attack. All information being inputted should be thoroughly vetted.
  • Assess the risk of data bias. As AI outputs rely on the quality of the data that is input, the business should evaluate any outputs that may indicate inherent bias. According to the Data Bias: The Hidden Risk of AI survey by Progress, 66% of Australian organisations suffer from data bias. And while most businesses were aware of the importance of mitigating data bias, they were unsure how to tackle it effectively.
  • Manage access to generative AI. Privileged access management programs need to identify and limit the individuals permitted to use generative AI for content creation.

Businesses also need an AI risk management plan and mitigation strategies, including appropriate insurances.

Role of insurance

The use of ChatGPT presents businesses with a number of risks, many of which can have significant consequences, including financial, legal and reputational costs. Businesses may look to insurance to help mitigate those risks. Cyber insurance and other policies, such as professional indemnity (PI), directors’ and officers’ (D&O), statutory liability, and public and product liability (PPL), may help businesses impacted by claims related to the use of this emerging technology. Cover may be available to protect against claims arising from specific cyber incidents, cyberattacks or alleged wrongful collection and/or sharing of information (whether directly or indirectly through a vendor), errors and omissions, or other liabilities. Talk to your EBM Account Manager about cyber and liability policies that may help protect your business as you embrace the latest technology.