Feature

Generative Artificial Intelligence (AI) tools can power up a business, but there are watch-outs as attendees of an IBANZ webinar on the topic learned recently.

One of the first things to understand is the difference between AI and generative AI. AI is a broad field encompassing a range of technologies where machines perform tasks that typically require human intelligence. Generative AI is a specific type of AI that focuses on creating new content, such as text or images. All generative AI is AI, but not all AI is generative.

Putting AI tools to work

The most common way businesses use AI tools is to automate routine tasks. Generally, this means translating content from one form into another – anything from data analysis to streamlining client reporting and producing reports. Adoption is widespread: even if a business has not formally adopted AI tools, it should assume staff are already using them.

The benefit of AI is that it can free staff from mundane, repetitive tasks to focus on higher-value strategic work. However, it is often while doing those everyday tasks that employees spot when something is amiss. An AI tool lacks that human intuition and doesn't necessarily know what to look for.

Accuracy, reliability and legal risks

One of the realities of generative AI is that when it makes mistakes, it makes them more confidently than we do. Outputs might appear accurate, but they can include convincing errors known as hallucinations. These tools are built to produce an answer, and they are rarely good at telling you they don't know it.

Imagine these scenarios:

• An executive needs to write something for a professional publication but is up against a deadline, so they ask an AI tool to do a first draft. It might look great, but how do they know whether the piece infringes on existing rights or ownership? Imagine if the content was actually just pulled from a competitor's website and the executive offers it up as their own.

• A junior member of staff is helping pull together a strategic plan for their employer and wants the language to be more professional, so they copy and paste it into an AI tool. The result is great, but the unintended consequence is that the business's confidential information now forms part of the AI tool's knowledge base and is potentially available to anyone – including competitors.

• A consultant has a question about the regulations or standards governing an industry sector that's unfamiliar to them and asks an AI tool for the answer. The problem here is that AI tools all rely on different sets of data, and you can't be sure the tool you're using has access to the latest version of the relevant standard or regulation. AI tools are always eager to help and will confidently rely on the most recent version they have access to, but that could mean the answer is out of date.

Policy implications

An assessment of AI usage needs to form part of coverage reviews. A standard business description question is a great place to start. Have the client clearly explain what the business does and what the people in the business do. If they say they’re not using AI, it’s a sign to dig deeper – they probably don’t realise the extent to which their staff already rely on these tools. The bottom line is that clients have a duty to disclose AI use to insurers to ensure accurate risk assessment and proper coverage. 

Claim implications

Cover for cyber and technology risks is likely to become harder to get and more expensive over time. Expect insurers to start asking more detailed questions about how AI is being used. And in a claims situation where AI is involved, if the client can't show it was reasonable to rely on that AI, the claim could be denied. Further, some AI tools provide cheap and effective means to carry out fraudulent attacks.

Here are a couple of examples of how a client might come unstuck.

Failing to stay in their lane – the ease of using AI tools means insureds may take on risks they wouldn't have otherwise. For example, there's a relatively low level of risk when an experienced builder uses AI to sense-check something they already know and can independently verify. Compare this with a junior tradie using it to find an answer they cannot sense-check. If something goes wrong, the assessment of carelessness might be quite different for the experienced builder than for the junior.

There are also advantages in spreading the risk. There are times when the experienced builder should call the engineer, not because they don't know the answer, but because they can then rely on that person's duty of care, effectively using them as an additional layer of insurance. Sometimes there are good reasons to resist the appeal of AI and just call an expert. AI has no gut instinct to trust.

DIY legal documents – using AI to write legal documents can be a slippery slope. Ask ChatGPT to draft a legal contract and it will produce something that superficially looks good but may well create headaches for a business when it needs to rely on the document down the track. AI will always try to be helpful and will produce something that looks convincing but is not necessarily correct.

Helping clients to be AI savvy

Brokers have an important role to play in helping clients understand the insurance implications of AI.

A good place to start is to encourage businesses to have governance policies in place covering the use of AI, including the selection of appropriate tools, how they’ll be used and how responsible use will be monitored. 

The next step is staff training, making sure employees understand the limitations of AI, possible risks, and ethical challenges. Regulatory and ethical compliance are good examples of areas that can’t necessarily be outsourced to AI. A robust staff training programme will help satisfy insurers that all reasonable steps have been taken in the event of a claim.

Remind clients of the importance of staying in their lane. While AI represents an opportunity for entrepreneurship and freeing up resources, beware of situations where it actually changes the nature of the business and leaves it operating outside its cover.

Brokers have an important role to play in continuing to update their knowledge in this fast-evolving area so that they can work collaboratively with clients to manage AI risk and secure appropriate insurance coverage for their needs.


Disclaimer: The content of this article is based on a seminar for IBANZ members by law firm Duncan Cotterill. It is general in nature and not intended as a substitute for specific professional advice on any matter and should not be relied upon for that purpose.



December 2025