Every founder has heard the pitch: “Add an AI chatbot. It’ll handle 70% of your support tickets.” That number is mostly nonsense, but the underlying idea isn’t. A well-implemented chatbot can deflect repetitive questions, qualify leads before they reach a human, and surface information buried three clicks deep in your help center. A badly implemented one frustrates every visitor who touches it and damages the trust your homepage spent six months building.
This guide walks through what actually works in 2026 — what to build, what to skip, and what the budget really looks like.
Two kinds of chatbots, two completely different costs
The first decision shapes everything else: are you building a scripted bot or a retrieval-augmented bot?
A scripted bot follows a flowchart. The visitor clicks “I want a quote,” the bot asks three questions, and a lead goes into your CRM. It’s cheap, predictable, and useful for narrow tasks. Intercom, Drift, and Tidio all do this well. Budget: $50–$200/month for the platform, plus a few hours of setup.
A retrieval-augmented bot — the kind people mean when they say “AI chatbot” — uses a large language model that has been given access to your content. It can answer questions in natural language, cite the right page, and handle questions you didn’t anticipate. Budget: $1,200–$5,000 for the build, plus $80–$400/month in API costs depending on traffic.
If you’re a B2B service business with a knowledge base of 50+ articles, the retrieval bot is the one. If you’re a single-product e-commerce site, a scripted bot is probably enough.
The architecture that works
Most production AI chatbots in 2026 follow the same pattern, called RAG — retrieval-augmented generation. It’s three pieces:
- Index your content. Crawl your site, your help center, and any internal docs you want the bot to know about. Convert each page into vectors and store them in a vector database — Pinecone, Weaviate, or pgvector if you’re already on Postgres.
- Retrieve relevant chunks. When a visitor asks a question, embed the question, find the 4–8 most similar chunks of your content, and pass them to the LLM as context.
- Generate the answer. The LLM (Claude, GPT-4, or an open-source model) writes a response grounded in your retrieved content, with citations back to the source pages.
This pattern matters because it sharply reduces the hallucination problem that killed the first wave of AI chatbots. The model isn't inventing answers from scratch; it's summarising your content. If a question can't be answered from your indexed content, the bot says so and offers to connect a human.
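The three pieces above can be sketched in a few dozen lines. This is a toy version: the bag-of-words `embed` and the `call_llm` stub are stand-ins for a real embedding model and a real LLM client, and the `threshold` value is illustrative. The shape of the loop, though, is the same one production RAG bots use, including the "no relevant content, no guessing" fallback.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words count vector. A real pipeline uses
    # dense vectors from an embedding model, stored in Pinecone, Weaviate,
    # or pgvector as described above.
    return Counter(re.findall(r"[a-z0-9$]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, index: list[dict], k: int = 4, threshold: float = 0.2) -> list[dict]:
    # Embed the question, return up to k most-similar chunks above a floor.
    q = embed(question)
    scored = sorted(index, key=lambda c: cosine(q, c["vec"]), reverse=True)
    return [c for c in scored[:k] if cosine(q, c["vec"]) >= threshold]

def call_llm(question: str, context: str) -> str:
    # Hypothetical stand-in for a real LLM call (Claude, GPT, or an
    # open-source model) that writes an answer grounded in `context`.
    return f"Based on our docs: {context.splitlines()[-1]}"

def answer(question: str, index: list[dict]) -> str:
    chunks = retrieve(question, index)
    if not chunks:
        # The grounding rule: nothing relevant retrieved, no guessing.
        return "I couldn't find that in our docs. Let me connect you to a human."
    context = "\n\n".join(f"[{c['url']}]\n{c['text']}" for c in chunks)
    return call_llm(question, context)

index = [
    {"url": "/pricing",
     "text": "Plans start at $49 per month, billed annually.",
     "vec": embed("Plans start at $49 per month, billed annually.")},
]
```

Note the empty-retrieval branch: that one `if not chunks` check is the difference between a bot that escalates gracefully and one that improvises pricing.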
What to index, and what to leave out
The temptation is to index everything. Resist it. The bot’s quality drops sharply when retrieval pulls back marketing copy that talks around a question instead of answering it.
Index: help center articles, product documentation, pricing pages, FAQ pages, technical guides, and anything that answers a “how do I” or “what does this mean” question.
Skip: blog posts (mostly), case studies (mostly), about pages, careers pages, and anything written for SEO rather than answering questions. These pages bloat your index and pull retrieval toward marketing language when the user wants a specific answer.
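In practice this decision lives in one small filter in the crawler. A minimal sketch, assuming your site follows common path conventions (the prefixes here are placeholders; swap in your own):

```python
from urllib.parse import urlparse

# Illustrative path prefixes; adjust to your own site's URL structure.
INCLUDE_PREFIXES = ("/help/", "/docs/", "/pricing", "/faq")
EXCLUDE_PREFIXES = ("/blog/", "/case-studies/", "/about", "/careers")

def should_index(url: str) -> bool:
    # Exclusions win over inclusions, so a post at /docs/blog/ stays out.
    path = urlparse(url).path
    if any(path.startswith(p) for p in EXCLUDE_PREFIXES):
        return False
    return any(path.startswith(p) for p in INCLUDE_PREFIXES)
```

Defaulting to "not indexed" unless a page matches an include prefix is the safer posture: a missing docs page is a fixable gap, while an indexed marketing page silently degrades every retrieval it appears in.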
If you’re following along, this is also why a clean llms.txt file is worth half a day of work — it tells AI agents exactly which pages to prioritise when they crawl you.
Where chatbots go wrong
The most common failure mode isn’t bad AI — it’s bad UX. A chatbot that opens automatically, plays a sound, and demands an email address before answering anything is a worse experience than no chatbot at all.
Some patterns we’ve seen kill conversion in production:
- Auto-open after 5 seconds. Visitors get maybe one paragraph in before the interruption. Disable this. Let them open the bot themselves.
- Email gate before the first answer. Catastrophic. Visitors leave. Ask for email after the bot has provided value, not before.
- Pretending to be human. Names like “Sarah” and stock photos of customer-service reps. Visitors figure it out in 10 seconds and trust you less for the deception. Be honest: it’s an assistant.
- No human handoff. Even the best bot can’t handle every question. A “talk to a human” button needs to exist on every screen, and it needs to actually connect to someone within an hour during business hours.
Measuring whether it’s working
The vanity metric is “questions answered.” The real metrics are:
- Deflection rate. Of visitors who asked the bot a question, how many didn’t then submit a contact form or open a support ticket? Anything above 40% is good for a B2B site.
- Conversion rate of bot users. Are visitors who interact with the bot more or less likely to become customers? It should be more — if the bot is hurting conversion, fix it or remove it.
- Hallucination rate. Sample 50 conversations a month and check whether the bot ever invented information. Even one fabricated fact is a problem; track this aggressively for the first six months.
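The deflection rate, at least, is simple enough to compute from session logs. A sketch, assuming a per-visitor record shape you'd adapt to whatever your analytics actually captures:

```python
def deflection_rate(sessions: list[dict]) -> float:
    # Denominator: only visitors who actually asked the bot something.
    asked = [s for s in sessions if s["asked_bot"]]
    if not asked:
        return 0.0
    # Deflected: asked the bot and then contacted no human channel.
    deflected = [s for s in asked
                 if not (s["opened_ticket"] or s["submitted_form"])]
    return len(deflected) / len(asked)

sessions = [
    {"asked_bot": True,  "opened_ticket": False, "submitted_form": False},  # deflected
    {"asked_bot": True,  "opened_ticket": True,  "submitted_form": False},  # escalated
    {"asked_bot": False, "opened_ticket": True,  "submitted_form": False},  # never used bot
]
print(deflection_rate(sessions))  # 1 of 2 bot users deflected -> 0.5
```

The third record is the easy mistake: visitors who never touched the bot belong in neither the numerator nor the denominator, or the metric inflates.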
The build: what an honest project plan looks like
For a typical B2B service business with 50–100 indexable pages, a production-quality AI chatbot takes 3–4 weeks. Here’s the rough breakdown:
Week 1: Discovery and content audit. Decide what to index. Identify the 20 most common visitor questions and write or rewrite the source pages so the bot can answer them clearly. This is the unglamorous work that determines whether the bot will be good.
Week 2: Build. Set up the embedding pipeline, the vector database, the retrieval logic, the LLM integration, and the chat UI. Most of this is plumbing.
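To give a feel for the plumbing: one small piece of it is splitting each page into overlapping chunks before embedding, so that an answer straddling a chunk boundary still gets retrieved. A minimal character-based splitter (the sizes are illustrative; many teams split on headings or sentences instead):

```python
def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    # Slide a window of `size` characters, stepping by size - overlap so
    # consecutive chunks share `overlap` characters of context.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk then goes through the embedding model and into the vector database alongside its source URL, which is what makes citations possible later.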
Week 3: Internal testing. Have your team ask the bot the questions they actually get asked. Fix the failure cases — usually missing content, not bad AI.
Week 4: Soft launch and monitoring. Roll out to 10% of traffic. Watch the conversation logs for the first few days. Refine system prompts, adjust retrieval thresholds, fix obvious bugs. Then go to 100%.
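Much of that week-4 refinement happens in the system prompt. A bare-bones starting point, wording entirely illustrative, might look like:

```text
You are the support assistant for <Company>. Answer ONLY from the provided
context passages. If the context does not contain the answer, say you don't
know and offer to connect the visitor to a human. Cite the source URL for
every factual claim. Never invent product details, prices, or policies.
```

Each sentence here maps to a failure mode from the conversation logs: the grounding rule, the human handoff, the citations, and the hard ban on invented specifics.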
If you want help thinking through any of this — what to index, which model to use, how to scope it — our AI chatbot service covers the full build at a fixed price. And for context on how AI is changing the broader web-development picture, our take on what real custom builds look like is a good place to start.
The goal isn’t to have an AI chatbot. The goal is to handle a real customer problem. Build for that, and you’ll quietly outperform the competitors who built for the marketing announcement.