Reality Bytes: AI isn’t the answer to everything

Written by Natassja Savidge | Feb 18, 2026

In tech and IT, everyone has an opinion. But sometimes the most interesting insights come from the ideas no one else dares to say out loud. Unpopular opinions push us to think differently, challenge assumptions, and question the “standard” way of doing things. In this series, we’re sharing the bold ideas and contrarian takes that keep us thinking, learning, and improving, and sometimes shaking up the status quo. Think of it as a safe space for bold, evidence-backed perspectives.

Whether you agree or not, these opinions spark discussion and often uncover new ways of thinking. In 2025, we ran an event called Reality Bytes, where five leading IT experts shared their straight-shooting, unfiltered insights on the industry’s most pressing and taboo challenges.

The response to Reality Bytes confirmed something we already suspected: there’s real value in open, practical conversations that challenge assumptions. That’s why we’re turning these sessions into blogs, capturing the ideas worth exploring further. If you’d like to stay across future posts or be the first to hear the next perspective that questions the status quo, subscribe to receive our latest blogs and event updates.

With the scene set, it’s time to dive into our first hot take: 

AI isn't the answer to everything

By Natassja Savidge, Technical Director – AI, Automation and Integration at Inde

Personal hot take: Crocs are an excellent footwear choice.
Professional hot take: AI isn’t the solution to every technical problem.
Audience survey results*: 98% agree, 2% disagree
*Inde survey at Reality Bytes event 

Even though most organisations report widespread interest in and initial adoption of AI, the reality of meaningful, scaled integration is far more nuanced. According to the AI Forum’s 2025 Productivity Report, adoption has surged, with a large majority of New Zealand businesses reporting efficiency gains (91%) and operational cost savings (77%) from AI tools, and setup costs falling sharply for many organisations.

Yet widespread enthusiasm doesn’t automatically equate to effective or comprehensive use: much of this adoption remains focused on specific efficiency wins rather than organisation-wide transformation, and challenges like building trust, creating capability, and aligning AI with real business needs persist. At Inde, we sometimes have clients come to us with deep technical issues wanting an AI solution. Whilst there are plenty of scenarios where we can build an innovative AI solution (and we love doing it), throwing AI at every IT challenge without clear purpose can lead to overengineering, wasted resources, and missed opportunities to solve problems more simply and strategically.
 
So here are three unpopular opinions on why I think AI isn’t the solution to every technical problem:

Don’t use AI to compensate for bad data

What often happens is that teams want to “shortcut” established processes, hoping AI will magically fix messy data or poor documentation. This idea is tempting, especially when leadership pressures you to adopt AI quickly. But AI should amplify good processes and accurate data, not work around poor ones.

Research confirms this: poor data quality remains one of the biggest barriers to effective AI projects, with organisations struggling when their data isn’t ready for advanced automation or machine learning.

A common temptation is to use AI to paper over poor data rather than fixing the root cause. If your data is inaccurate, inconsistent, or undocumented, AI will either repeat or amplify those problems. That means AI might produce outputs you don’t trust, or worse, decisions that don’t align with your business needs.

For example:
A team introduced an automated system designed to speed up internal decision-making by pulling information from existing documentation and process guides. During testing, the outputs were inconsistent, and trust in the system dropped quickly. The assumption was that the automation itself wasn’t fit for purpose.

In reality, the problem sat much further upstream. The documentation it relied on hadn’t kept pace with how the organisation actually worked. Responsibilities had shifted, escalation paths had changed, and processes had evolved without being properly updated. When the system surfaced information about who owned what, it was technically doing its job — it was just working from inputs that no longer reflected reality.

The initial response was to ask whether AI could cross-check or validate the information before returning it. But that approach only adds complexity. It doesn’t solve the real problem. Anyone accessing that same documentation outside of the AI experience would still receive the wrong information.

This is why AI shouldn’t be used to compensate for bad data. Fixing the source of truth is always more effective than building layers of intelligence on top of inaccurate inputs. Clean, accurate data benefits every system, not just AI.

Before introducing AI, simplify the process, update documentation, and make sure ownership and data are correct. AI works best when it amplifies good foundations, not when it’s asked to work around them.
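
As a minimal sketch of what fixing the foundations can look like in practice, here’s the kind of audit you might run over documentation before any AI system is allowed to index it. The record fields, the one-year review cycle, and the rules themselves are illustrative assumptions, not a prescription:

```python
from datetime import datetime, timedelta

# Hypothetical documentation records; the field names are illustrative only.
docs = [
    {"title": "Escalation process", "owner": "ops-team", "last_reviewed": "2023-04-12"},
    {"title": "Onboarding guide", "owner": None, "last_reviewed": "2025-11-02"},
]

MAX_AGE = timedelta(days=365)  # assumed review cycle: anything older is "stale"

def audit(doc: dict) -> list[str]:
    """Return the data-quality problems to fix before this doc feeds any AI system."""
    problems = []
    if not doc["owner"]:
        problems.append("no owner assigned")
    reviewed = datetime.strptime(doc["last_reviewed"], "%Y-%m-%d")
    if datetime.now() - reviewed > MAX_AGE:
        problems.append(f"stale (last reviewed {doc['last_reviewed']})")
    return problems

for doc in docs:
    issues = audit(doc)
    verdict = "ready to index" if not issues else "fix first: " + "; ".join(issues)
    print(f"{doc['title']}: {verdict}")
```

The specific checks matter less than where they sit: cleaning the source of truth benefits every consumer of that documentation, human or AI.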

Don’t introduce AI into situations where you don't want ambiguity

AI models are probabilistic by nature: they make predictions, not guarantees. If your process cannot tolerate ambiguity, AI might not be the right fit. In those cases, automation or a process change can deliver better results with less risk.

For example:
One practical example I often share is around using AI to process manually submitted requests. On the surface, it sounds like a great efficiency gain — and in some contexts, it absolutely could be. If you’re selling something low-risk, like T-shirts, the impact of a mistake is usually manageable.

But when you apply the same thinking to regulated industries like financial services, the risk profile changes completely. Requests often involve specific instructions, approval thresholds, and strict compliance requirements. If AI misinterprets a request — for example, executing the wrong transaction or applying incorrect terms — the consequences are serious. That’s not just a customer experience issue; it’s a compliance, financial, and legal one.

This is a good example of where AI introduces ambiguity into a process that cannot tolerate it. In situations like this, deterministic systems, strong validation rules, or human oversight are far more appropriate than probabilistic AI models.

The key lesson is simple: don’t introduce AI into processes where getting it wrong isn’t an option. Start by understanding the risk, then choose the technology that best fits the outcome you need.
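
To make the distinction concrete, here’s a minimal sketch of how a request pipeline might layer those safeguards. The approval limit, confidence gate, and stub classifier are all illustrative assumptions rather than a recommended design; the shape that matters is that hard rules run first and the probabilistic part never gets the final say on anything ambiguous:

```python
APPROVAL_LIMIT = 10_000  # illustrative compliance threshold, not a real policy
CONFIDENCE_GATE = 0.95   # illustrative; real gates depend on your risk appetite

def model_classify(instruction: str) -> tuple[str, float]:
    # Stub standing in for any probabilistic classifier: returns (label, confidence).
    # A real system would call an ML model here; the point is that it can be wrong.
    if "transfer" in instruction.lower():
        return ("funds_transfer", 0.97)
    return ("unknown", 0.40)

def route_request(request: dict) -> str:
    """Deterministic rules run first; anything ambiguous escalates to a human."""
    # Hard validation rules: no probability involved, no room for ambiguity.
    if request["amount"] <= 0:
        return "reject: invalid amount"
    if request["amount"] > APPROVAL_LIMIT:
        return "escalate: above approval threshold, human sign-off required"
    # The probabilistic model only acts inside the confidence gate.
    label, confidence = model_classify(request["instruction"])
    if confidence < CONFIDENCE_GATE:
        return "escalate: ambiguous instruction, human review required"
    return f"auto-process as {label}"

print(route_request({"amount": 500, "instruction": "Transfer to savings"}))
print(route_request({"amount": 500, "instruction": "Do the thing we discussed"}))
print(route_request({"amount": 50_000, "instruction": "Transfer to savings"}))
```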

Don’t forget, human connection matters

While AI is powerful, it can never replace the human judgement and connection essential in many business scenarios. Use AI as a tool, not a telephone game that passes information around without understanding. AI should augment human capability, not replace it.

For example:
Another lesson I’ve seen firsthand is why human connection still matters, especially when complexity is involved.

When requirements are unclear or nuanced, it’s tempting to rely on AI tools to interpret and summarise them instead of asking clarifying questions. One person uses a chat tool to make sense of the inputs they’ve been given, then passes that interpretation on. The next person, unsure they fully understand it, does the same.

Before long, the process becomes a game of telephone. Each handoff introduces new assumptions and subtle changes in meaning, and the final output drifts further away from the original intent. What started as a clear idea ends up diluted — not because the tools failed, but because human understanding was never properly aligned in the first place.

Research suggests that over-reliance on AI can dull human cognitive skills if we stop engaging deeply with information and problem solving ourselves. As the Harvard Gazette reports, while AI excels at processing large datasets and generating outputs, leaning on it too heavily can weaken our own critical thinking and decision-making abilities. That’s why human oversight, conversation, and context remain essential. AI should augment our thinking, not replace it.

Sometimes the most effective solution isn’t another tool or another layer of interpretation; it’s simply picking up the phone, asking questions, and aligning as humans.

AI should support understanding, not replace conversation. When the goal is clarity, direct human connection is still one of the most powerful tools we have.

Ultimately, AI is most useful when applied purposefully, to solve defined problems where the outcome is clear, measurable, and aligned with organisational goals. Before investing in AI, make sure you understand the challenge you’re trying to solve, clean up your data, and ensure your processes support the outcomes you want.

AI might be exciting, but it isn’t the right tool for every job, and that’s okay.