Artificial intelligence (AI) is no longer a future-state discussion in insurance; it is now core infrastructure for the industry in both Australia and New Zealand. Global surveys suggest the industry is well past the experimentation phase: research cited by Vonage notes that 11% of insurers have fully adopted AI and a further 49% are in the process of implementing it at scale. A separate 2024 trends survey of 431 insurance executives across regions, including Australasia, found “rapid use of AI” is now a defining feature of the operating environment.
A joint CSIRO–Insurance Council of Australia report argues AI is “set to redefine the way Australia’s insurance industry operates,” from fraud detection to complex claims management. At the same time, PwC’s AI Jobs Barometer shows how quickly the required skill mix is shifting: AI-related postings in Australia have jumped from around 2,000 in 2012 to 23,000 in 2024, and in financial and insurance activities, 11.8% of job ads now explicitly demand AI skills.
New Zealand insurers are part of this global AI ramp‑up, already experimenting with AI in fraud detection, pricing and operations. For brokers and underwriters, that surge in tooling is colliding with an older, slower craft – learning how to read risk.
AI can’t replace the craft of risk
Michael Lewis (main picture, left), cyber development manager for Australia with CFC, is concerned about the future for people who still haven’t engaged with AI. “I think it won't replace jobs as much as it will make people better at their jobs, their current job, and the ones who don't embrace it may fall behind,” said Lewis.
That line aligns with what the global data is already telling boards: the competitive threat is not an abrupt wave of redundancies but the quiet, rapid divergence between teams that build AI into their workflows and those that do not. AI-powered models are already cutting claim processing times by weeks and improving routing accuracy and customer outcomes in major markets.
But Lewis’ deeper fear is arguably more urgent for the Australian and New Zealand markets. “Previously, it's taken years to understand risk and be a decent underwriter,” he said. That long apprenticeship depended on junior staff absorbing patterns by reading proposals, examining wordings, sitting in on claims disputes and watching senior underwriters make judgment calls at the edge of the data.
As AI takes over more of the rote work – triaging submissions, extracting data, auto-populating referrals – that “on-the-job osmosis” disappears. If the industry does not redesign its training, a generation of recruits could become highly efficient systems operators who never fully learn what the system is doing.
Lewis’ prescription is clear: “So the training we give to new entrants should include the fundamentals of understanding risk and using AI as the tool it is to help make those decisions,” he said. That means treating AI literacy and risk fundamentals as twin pillars of early-career development, not as something juniors are expected to “pick up” while the machine does the heavy lifting.
“Still a people business”: what brokers must protect
Trent Nihill (main picture, right), Coalition’s general manager for Australia, sits in a similar camp: he sees AI as an amplifier, not a replacement. Used well, he argued, it can “analyse more data than people can; it can do in seconds what takes days” and fundamentally change how the market understands and prices risk.
Yet Nihill is adamant that the human side cannot be automated away. “Insurance is still a people business; you still need your reputation, you still need all the relationships,” he said. That observation should land heavily with Australian and New Zealand intermediaries. Even as AI adoption rises in local insurance and healthcare, consumer trust in AI remains mixed – customers see potential benefits but harbour concerns about fairness and reliability.
In that environment, the broker who can explain how an AI-assisted decision was reached, challenge a model-driven declinature and translate complex outputs into plain language will be worth more, not less. Conversely, the broker who simply relays what the system says, without the grounding to question it, is easily substituted.
For training managers, the implication is that technical AI skills are now table stakes, including understanding how tools work, where they fail and which tasks they should never own. But those skills must sit on top of explicit teaching in policy construction, wordings, causation, aggregation, regulatory expectations and claims behaviour in local conditions – precisely the tacit knowledge that used to be acquired slowly at the desk next to a seasoned insurance professional.
As AI becomes more deeply embedded in insurance on both sides of the Tasman, the industry’s real test won’t be how quickly it can spin up new models. It will be whether brokers and insurers can redesign training fast enough to preserve the craft of underwriting and risk assessment in a world where a machine handles the “easy” bits – and much more.
Insurance Business NZ