
Every business needs an AI hero.

Why AI Reports Are Only as Good as the Human Behind Them

G’day Rankers, welcome back.

AI tools are everywhere right now, and chances are you're already using them. If you're not, you should be. But as useful as these tools are, there's a growing issue with how they're being used, and it's one we're seeing play out with our own clients every week.

The Rise of the AI-Generated Site Report

We’re getting a lot of clients who are plugging their websites into tools like ChatGPT, Claude, and similar platforms, and coming back to us with AI-generated reports. And honestly, that’s great — that’s exactly what they should be doing. The problem, though, is that these reports often lack context, and sometimes the person reading them doesn’t fully understand how the tool works.

We had a report come in from a client just this week that’s a perfect example of this. Over a year ago, we had our whole team run these kinds of reports for our existing clients because we wanted to understand what the AI chatbots were seeing — and more importantly, what they were missing. So we’ve built up a pretty solid understanding of the pitfalls.

The 27% Problem

In this particular report, there was roughly a 27% failure rate. Now, that might sound bad, but it actually means 73% of the findings were usable — which is a decent result. The issues that came up in the inaccurate 27% included things like the AI flagging a Christmas sale banner that was no longer running, because it was looking at cached or stale versions of the site. It also recommended implementing live chat — which was already on the site and had been for some time.

You can reduce these kinds of errors by refining your prompts, and there are plenty of tools out there to help generate better reports. But the bigger question is: even when the report is technically accurate, does it actually understand what’s happening with the business?

The Human Layer You Can’t Afford to Skip

This is where the concept of the “human layer” becomes critical. We’ve always said: keep a human in the loop. And that’s never been more important than it is right now.

When you’re working through AI-generated recommendations, you need someone who can ask the right questions. Have some of these suggestions already been tried before? Have major site changes just gone live from a separate project or report? What else is happening across the business? Are other departments making changes? Has the brand voice shifted?

These are things an AI tool simply cannot know. And without that context, even the most accurate recommendation can lead you in the wrong direction.

That person — your AI-savvy operator, your “AI hero” as I like to call them — needs to understand how the tools work, the kinds of mistakes they make, and the opportunities they create. But they also need to understand the broader business picture. Because at any given time, you might be fielding 20 different reports from 20 different stakeholders, each with their own interests and priorities. Someone needs to be able to cut through that noise and say: here’s what we should actually focus on.

Give the Best Tool to the Best Operator

We’re building all of this thinking into our own processes right now, including working it directly into our SKAW platform. The principle is simple: the best paintbrush should go to the best painter. Whoever that AI-literate, context-aware person is in your organisation, make sure they’re across all the tools, understand what those tools might miss, and know how the recommendations dovetail with everything else you’re working on.

Because the volume of information available to us now is immense. What becomes truly valuable is no longer just knowing what you can implement — it’s knowing what you should implement.

If you don’t have that person internally, make sure your agency is stepping into that role. Every single team member at our agency has their own instance of Claude Code configured specifically for their role, so everything we do comes through that filter. When it tells us something we know is wrong or not applicable, we tell it — and because our instances have memory, it learns from that over time.

What Shopify’s New AI Tool Is Telling Us

On a related note, keep an eye on what’s happening in Shopify right now. Sidekick — their AI shopping simulator — has been rolling out to a whole range of stores, free of charge. It uses virtual agents that browse and shop on your site the way a real human would, and the recommendations it’s surfacing are quite telling.

For the clients we've looked at, a lot of the advice coming through is consistent with what we've been recommending for the past 10 years around user experience. That's reassuring. I remember a conversation with someone from Shopify about a decade ago on exactly this topic: speed to purchase. The easier you make it to buy, the more people will buy, and average order value goes up. These fundamentals haven't changed; the AI tools are just starting to catch up to what good operators already know.

The Bottom Line

AI recommendations are almost always generated in isolation. They rarely know the full picture — not just across your whole site, but certainly not across your whole organisation. That gap is where human expertise, experience, and context become the real competitive advantage.

Hopefully that's been helpful. If you've got any questions, or if you'd like to test SKAW for WordPress (we're rolling that out on a couple of client sites right now), feel free to get in touch. I'm always open to feedback. You can reach me at jim@stewartmedia.biz.

As always, like, share, and subscribe, and we’ll see you next week. Thanks very much. Bye!

The post Every business needs an AI hero. appeared first on StewArt Media.