Eighteen months into the AI boom, adoption in New Zealand remains uneven. Some organisations are experimenting with automation, while others are still asking, “How do I write a good prompt?”
The risk is not that everyone needs to be cutting-edge overnight. The risk is that the gap between early adopters and laggards is growing. Businesses that embed AI in their operations early will move faster and learn more, creating advantages that are hard to claw back. For those ahead of the curve, the opportunity is not just efficiency but laying the foundations for long-term growth.
A recent Public Service Association survey suggested that only 19 percent of New Zealand organisations feel fully prepared for AI, with a further 13 percent partially prepared, leaving the majority still on the sidelines. Leaders from boardrooms to factory floors are often asking us: how should we use AI, and how fast should we move? Beneath the excitement are realities about risk, capability, and culture that businesses cannot ignore.
One of the clearest risks in enterprise AI is overreliance. Generative models are powerful, but they are also prone to “hallucinations”: outputs that sound convincing but are wrong. Without people in the loop, errors can flow into software code, presentations, and strategy documents. For enterprises, that is not a theoretical risk; it is a compliance failure, a reputational hit, and potentially millions of dollars lost.
The challenge runs deeper than catching mistakes. If businesses treat AI as a replacement for junior staff, they risk hollowing out the next generation. Junior employees are tomorrow’s experts: they apply critical thinking, learn to spot flaws, and eventually step into senior roles. Investing in them is not just about today’s tasks; it is about building tomorrow’s workforce. There is also a cognitive dimension. If people stop practising critical thinking in everyday, low-risk situations, their ability to apply it when it really matters will weaken. This challenge was recognised decades ago in studies of automation and has re-emerged with generative AI, where too much reliance on technology can gradually erode human judgment.
Another pressing risk is data leakage. In the rush to experiment, staff often copy sensitive information into free AI tools, unaware they may be exposing customer data or intellectual property. Real-world examples already exist, from companies’ intellectual property being leaked online to sensitive information being inadvertently exposed on the open internet.
This is not theoretical. It is happening now, and the reputational and regulatory consequences could be severe. Enterprises must put clear guardrails and secure platforms in place, and educate their staff, to prevent costly vulnerabilities. Another, subtler risk arises during business transitions: without adequate safeguards, confidential information can be exposed or carried outside the organisation. Large corporations are tightening controls here, recognising that security is as much about managing change as it is about protecting systems.
Adoption is not just about rolling out software; it is about showing how AI can make work easier and the business stronger. Take transport and logistics: drivers may still prefer their usual routes, even when software suggests a faster one. Or consider farming, where drones and machine learning can complete stock counts in a fraction of the time it takes by hand. These examples highlight that AI works best when it feels like support, not replacement.
Leaders have a role in framing AI as a partner. By engaging staff early and asking where technology could remove frustrations or free up time, businesses can build trust and adoption naturally. Effective governance strengthens that trust further, protecting against misuse and ensuring AI adoption is sustainable and responsible. The result is a workforce that sees AI as an ally in delivering better outcomes.
Enterprises must strike a balance between seizing the benefits and safeguarding against pitfalls. That means:
Keeping humans firmly in the loop, particularly in oversight and early-career roles.
Putting guardrails around data and tool usage.
Recognising that uneven adoption is creating divides.
Framing AI as an enabler that supports people, rather than a tool that replaces them.
The right questions to ask are simple: where are the real frustrations in your business, and what long-term goals do you want to achieve? The harder part is working out how to get there. That is where the right expertise makes the difference. Starting with small steps and drawing on specialist guidance can help turn AI from a buzzword into a business advantage.