Almost every agency website in 2026 has an “AI” page. Most of them shouldn’t. The pattern is so consistent we’ve started using it as a diagnostic — show us your AI page and we can usually tell whether you actually do AI work or whether you’re just performing.
This is what’s gone wrong, and what serious AI work in an agency context actually looks like.
The fake AI page
The fake AI page reads like marketing copy with the word “AI” added. The headline talks about “AI-powered transformation.” The services list reads “AI Strategy, AI Integration, AI Consulting.” The case studies are vague: “We helped Client X leverage AI.” There’s a paragraph about how AI is changing everything.
What’s missing: any specific AI capability the agency has built, deployed, and maintained. No mention of which models. No mention of what infrastructure. No screenshots of working systems. No discussion of cost or latency. No examples of what didn’t work.
This isn’t an AI service. It’s a page about AI.
What’s actually happening
The agencies that talk this way are usually doing one of three things, none of which they’d admit to on the AI page:
- Reselling ChatGPT. Client wants help with content, agency uses ChatGPT to draft it, charges market rates for the work. This is fine — humans should use AI tools, and clients pay for outcomes — but it’s not “AI services.”
- Reselling someone else’s chatbot. Client wants a chatbot, agency configures Intercom or Drift, charges a setup fee. Again, fine. Not “AI services.”
- Adding AI to the site copy because the market expects it. No actual AI work is happening. The page exists because every competitor has one.
Each of these is legitimate work or legitimate marketing. The problem is the gap between what’s being delivered and how it’s described. Clients pay for “AI integration” and get tool configuration. Six months later they realise they could have done it themselves with a free trial and a weekend.
What real AI work looks like
An agency genuinely doing AI work has at least one of these pieces, and usually several:
Custom models or fine-tuning. Not just calling OpenAI’s API — actually training, fine-tuning, or building from scratch. This is rare and expensive. Agencies that do it well usually have at least one ML engineer and have the case studies to back it up.
Production RAG systems. Real retrieval-augmented generation pipelines deployed to production traffic. This means a vector database, an embedding pipeline, a retrieval orchestration layer, evals, and ongoing monitoring. Our chatbot implementation guide covers what this actually involves.
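The moving parts listed above can be sketched in miniature. This is a toy, not a production pattern: the "embedding" is a bag-of-words counter standing in for a real embedding model, and the in-memory list stands in for a vector database; the actual LLM call, evals, and monitoring are omitted.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Placeholder: bag-of-words counts. A real pipeline would call an
    # embedding model and persist the vectors in a vector database.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class TinyRAG:
    """In-memory stand-in for the index + retrieval layers of a RAG system."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def index(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

    def build_prompt(self, query: str) -> str:
        # Retrieved chunks become grounding context for the LLM call,
        # which is deliberately left out of this sketch.
        context = "\n".join(self.retrieve(query))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Every box in the sketch is something an agency claiming "production RAG" should be able to name a real implementation of: which embedding model, which vector store, which orchestration layer, and what the eval suite checks.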
Custom AI products. Built-from-scratch AI-native products where the AI is core to the product, not an add-on. These are software products with a substantial team behind them.
AI search optimisation. Real work on getting clients cited by ChatGPT, Perplexity, and Gemini — structured content, schema, llms.txt, semantic SEO, citation-friendly pages. Measured by actual citation appearances over time.
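"Structured content" in the paragraph above is concrete work, not copywriting. One common tactic is emitting schema.org JSON-LD so answer engines can parse a page's Q&A content cleanly. A minimal generator, with an invented example question, might look like this:

```python
import json


def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render a schema.org FAQPage JSON-LD block for embedding in a page.

    One structured-content tactic among several (llms.txt, semantic
    headings, citation-friendly summaries are others).
    """
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return (
        '<script type="application/ld+json">\n'
        + json.dumps(data, indent=2)
        + "\n</script>"
    )


# Hypothetical content for illustration:
snippet = faq_jsonld(
    [("Which models do you use?", "Hosted models from the major providers.")]
)
```

The measurable part is the last clause of the paragraph above: none of this markup matters unless citation appearances are actually tracked over time.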
An agency claiming AI capability should be able to show specific work in at least one of these areas, with measurable outcomes.
The buyer’s checklist
If you’re hiring an agency for AI work, three questions filter most of the bad fits:
“Show me a production AI system you’ve built and currently maintain.” Not a slide deck. Not a hypothetical. A working system, with real users, that I can see.
“Walk me through what failed in your last three AI projects.” Anyone who’s done real AI work has stories. Hallucinations, latency problems, cost blowouts, model deprecations. An agency that can’t surface specific failures probably hasn’t done specific work.
“What’s the monthly run-cost of this system at our scale?” Real AI work has real run-costs. Inference isn’t free. An agency that can’t quote run-costs hasn’t operated production AI systems.
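The run-cost question has a back-of-envelope answer any operator can produce on the spot. The arithmetic is just tokens times price; all the numbers in the example call below are assumptions, including the per-million-token rates, which vary by provider and model.

```python
def monthly_run_cost(
    requests_per_day: int,
    input_tokens: int,       # avg prompt size, incl. any retrieved context
    output_tokens: int,      # avg completion size
    price_in_per_m: float,   # $ per 1M input tokens (assumed rate)
    price_out_per_m: float,  # $ per 1M output tokens (assumed rate)
    days: int = 30,
) -> float:
    """Rough monthly inference cost: per-request token cost x volume."""
    per_request = (
        input_tokens * price_in_per_m / 1_000_000
        + output_tokens * price_out_per_m / 1_000_000
    )
    return requests_per_day * days * per_request


# Hypothetical scale: 2,000 chats/day, ~3,000 prompt tokens, ~300
# completion tokens, at assumed rates of $3 / $15 per million tokens
# -> roughly $810/month before infrastructure and monitoring costs.
estimate = monthly_run_cost(2000, 3000, 300, 3.0, 15.0)
```

An agency that has run such a system will also know the costs the formula leaves out: vector-store hosting, monitoring, retries, and eval runs.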
What we tell our clients
We do AI work, and we tell clients exactly what kind. We build production RAG chatbots. We optimise sites for AI search. We integrate LLM-powered features into custom products. We don’t fine-tune our own models — we use the major providers — and we say so. We don’t sell “AI strategy” — we either build something or we don’t.
This narrowness is a feature. The clients who hire us for AI work get capabilities we’ve used in production. The clients who need fine-tuned models or custom infrastructure get referred to specialists who actually do that.
The agencies that win in AI over the next five years won’t be the ones with the most impressive AI page. They’ll be the ones who said the least and shipped the most. Our small-agency philosophy is built on this kind of restraint.
If you’re choosing an AI vendor, choose for evidence, not announcements. The agencies that have done the work talk like engineers describing systems. The ones who haven’t talk like marketers describing futures.