Tijdo Koster
AI & Work · 13 min read

What are AI agents? A plain-English guide for businesses

Every vendor demo I have sat through in the last 18 months opens with the same phrase: "and this is powered by our AI agent." I started tallying. Six in one afternoon is my personal record. I told myself that was either impressive or a sign I needed better vendor selection criteria. The criteria needed work.

Robot hand and human hand reaching toward each other — AI agents and what they can do for business

Photo: Pexels

So what are AI agents, actually? Here is the direct answer: an AI agent is software that takes a goal, figures out how to reach it, uses tools to get there, and adjusts when something changes — without you specifying every step. That is the whole definition. The rest of this post is the context that matters when you are deciding whether to spend money on one.

I have been building automation since 2018. In the last 18 months, the shift from "this tool does one thing" to "this tool can orchestrate a whole workflow" has been the most significant change I have seen. Whether it is useful for your business depends entirely on what your process actually looks like.

TL;DR

AI agents are the step beyond chatbots and basic automation: they plan, act across multiple tools, and adapt mid-task. For most businesses, the honest question is not "should we use agents" but "do we actually need something this autonomous, or will a well-configured Zapier workflow do the same job for €50 a month." The checklist near the end helps you answer that.

An AI agent acts. A chatbot answers.

A chatbot answers a question and stops. Ask it "what is our refund policy," get an answer, done. Useful for a narrow set of things.

An AI agent is given a goal and works out how to reach it. "Research these five companies, find the decision-maker at each one, summarise what they do, and flag the two most relevant for follow-up." The agent makes a plan, uses the tools it has access to — search, browser, CRM — and works through it step by step. When something unexpected happens, it adjusts.

The technical term is "agentic AI." The "agent" part means it takes consecutive actions toward an outcome. The "AI" part means a large language model — the same kind powering ChatGPT or Claude — is doing the reasoning: figuring out what to do next, interpreting results, deciding when it is done.

What it is not: magic. An agent is only as useful as the tools it has access to and the clarity of the goal you give it. Vague goal in, vague outcome out. This has not changed from any other technology I have worked with in 22 years.

How an agent decides what to do

Every AI agent runs the same basic loop: perceive, plan, act, check.

  • Perceive — the agent reads the context: the goal it was given, any data it has access to, the result of its last action.
  • Plan — it decides what to do next.
  • Act — it calls a tool: searches the web, reads a document, writes a draft, updates a record.
  • Check — it looks at what happened and decides whether it is done or needs to try something else.

The "brain" is an LLM. The "hands" are integrations — access to a browser, a search API, your CRM, your email, your ERP. Agents do not have these capabilities built in. They are plugged in. Which means the quality of an agent depends heavily on what it is connected to and how well those connections are maintained.
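If you want to see how little machinery the loop itself involves, here is a deliberately simplified Python sketch. It is not any vendor's implementation: the planner is a stub that works through a fixed playbook, where a real agent would ask an LLM to choose the next step, and the "tools" are placeholder functions standing in for a search API, a CRM, and so on.

```python
# Illustrative sketch of the perceive-plan-act-check loop.
# The planner below is a stub with a fixed playbook; in a real
# agent an LLM makes this decision based on the goal and history.

def plan_next_step(goal, history):
    """Stub planner: picks the first playbook step not yet done."""
    playbook = ["search", "read", "draft"]
    done = [h["step"] for h in history]
    for step in playbook:
        if step not in done:
            return step
    return "done"

def act(step):
    """Stub tool call: a real agent would hit a search API, CRM, etc."""
    return f"result of {step}"

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):                 # hard cap: agents need limits
        step = plan_next_step(goal, history)   # perceive + plan
        if step == "done":                     # check: goal reached?
            return history
        result = act(step)                     # act
        history.append({"step": step, "result": result})
    return history

steps = run_agent("research five leads")
print([s["step"] for s in steps])  # → ['search', 'read', 'draft']
```

The `max_steps` cap is the one detail worth copying into any real deployment: an agent that can decide its own next action also needs a hard limit on how long it gets to keep deciding.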

For the full technical breakdown, AWS has a useful overview of agent architecture covering planning modules and memory systems in detail. I will stay focused on the business implications — the part that changes what you should actually do.

Business team reviewing a workflow diagram before deciding on automation tools

Photo: Pexels

Three ways agents differ from standard automation

"AI agents vs automation software" is the question I get most from business owners trying to figure out where to spend money. Here is a specific answer.

Standard automation tools — Power Automate, Make.com, Zapier — follow fixed rules. Invoice arrives: extract fields, update ERP, send confirmation. They do this reliably and cheaply, right up until the input is unexpected. A supplier changes their invoice format. A field moves. The automation breaks and waits for someone to fix it.

Agents handle ambiguity

An agent can read a document and understand what it says even when the format varies. A rule-based system needs every case defined upfront. This matters most when your inputs are unpredictable — different suppliers, different formats, different customer enquiry types.
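To make the fragility of fixed rules concrete, here is a toy example with made-up invoice text and field labels. The extractor is keyed to one supplier's layout and returns nothing the moment a second supplier labels the same field differently.

```python
import re

# Toy example: a rule-based extractor hard-coded to one invoice layout.
# The invoice text and labels are invented for illustration.

def extract_total_rule_based(text):
    """Fixed rule: only works if the exact label 'Total:' is present."""
    match = re.search(r"Total:\s*€?([\d.,]+)", text)
    return match.group(1) if match else None

invoice_a = "Supplier A\nTotal: €1.250,00"
invoice_b = "Supplier B\nAmount due ... 1.250,00 EUR"

print(extract_total_rule_based(invoice_a))  # → 1.250,00
print(extract_total_rule_based(invoice_b))  # → None: new label, rule breaks
```

An agent sidesteps this by having a language model read the document and find the amount regardless of how it is labelled — at the cost of needing a review step, because the model can also be confidently wrong.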

Agents chain tools together dynamically

An agent can decide mid-task that it needs to look something up before it can proceed, then act on what it finds, then write a response based on both. Standard automation follows a fixed sequence. Agents improvise within a defined scope — which is both their strength and their risk.

Agents adapt when the first approach fails

If a tool call fails or the result is not what was expected, an agent can try a different approach. An automation either succeeds or stops. For workflows where edge cases are common and important, that adaptability is the practical difference.

The pattern I keep seeing across 100–200 projects: companies buy agents when they need standard automation, and buy standard automation when they have not documented their process well enough to know what they actually need. In both cases the result is the same — months of tweaking what should have been a 2–4 week build. This is not an agent problem. It is a "solution before diagnosis" problem, and it predates AI by about twenty years.

Where agents actually earn their money

Four places where, in my experience, the business case for an AI agent holds up:

Variable-format document processing. Invoices, contracts, insurance forms — anything where the layout differs by supplier or source. Standard automation requires templating every format. An agent reads the document, understands what it says, and extracts the right fields regardless of layout. For businesses handling hundreds of documents a week, the time saving is real and compounding. BCG's breakdown of AI agent use cases covers the full range of industries where this applies if you want a wider picture.

Sales outreach that requires research. If your process involves finding companies matching a profile, researching them, and writing personalised outreach, that is two to four hours of manual work per batch. An agent does the research and drafting in minutes. A human still reviews and edits. But starting from a draft rather than a blank page is a meaningful productivity shift — at least until your competitors start doing the same thing and everyone's inbox looks identical.

Customer support triage. Not replacing support staff — routing and drafting. An agent reads an incoming enquiry, categorises it, pulls the relevant account information, and drafts a response for a human to approve. The human spends two minutes reviewing instead of twenty minutes researching and writing. At 200 support tickets a week, that compound saving matters.

Internal knowledge retrieval. Every company has documents sitting in a shared drive that nobody can find. An agent with access to that archive can answer questions, surface relevant policies, and retrieve specific information without a 20-minute search. Unglamorous. Genuinely useful. (My own consultancy has 47 folders labelled "OLD" in various stages of relevance. I am told this is ironic given what I do for a living. I prefer "comprehensive filing system." My wife prefers "ironic." We disagree.)

Team at a whiteboard mapping their business process before buying automation software

Photo: Pexels

When to save your money and use simpler tools

Here is where I lose points with vendors and gain them back with clients.

Nine times out of ten, when a business owner tells me they need an AI agent, what they actually need is to stop manually copying data between three spreadsheets. That is a Zapier workflow. It costs €50 a month and I can have it running in two days. The agent they were quoted costs €15,000 and will take six weeks to configure. The workflow solves 80% of the pain.

Do not use an AI agent if:

  • Your process is simple and consistent. Email arrives → extract data → update system → send confirmation. That is standard automation. Build it with standard automation tools. They are cheaper, faster to configure, and considerably easier to audit when something goes wrong.
  • You have not mapped your process first. If you do not have a clear picture of your workflow today — who owns what, where the exceptions are, what a correct output looks like — an agent will not save you. It will make the chaos move faster.
  • Nobody will review the output. An AI agent running unsupervised on consequential tasks — invoices, contracts, customer commitments — is not a time-saver. It is a liability with a slow burn. Build the human review step into the workflow before you build the agent.
  • You are choosing a tool before you understand the problem. Agents are the current most expensive version of the software-before-process trap. The pattern is the same regardless of how impressive the demo looked.

I have written the longer version of "document first, buy second" in process mapping before automation and how to choose automation software. The principle is the same whether the tool is an agent, a workflow platform, or an ERP. Know what you need before you spend on what sounds good.

What an AI agent will actually cost you

Honest numbers, three tiers.

Off-the-shelf platforms — Microsoft Copilot agents, Salesforce Agentforce, Google's Agentspace — run roughly €30–150 per user per month depending on plan and existing licensing. Setup is relatively accessible for non-technical teams. The limitation: they work best within the vendor's ecosystem. If your workflow lives in Microsoft 365, Copilot agents are genuinely powerful. If it does not, you are spending money on integration before you get to the agent. IBM's overview of agentic AI has a useful comparison of platform architectures if you are evaluating options.

Build-it-yourself with n8n, Make.com, or a custom API setup: €50–200 per month for infrastructure, plus the time to configure it. If you have someone technical in-house, this is the most flexible option. If you do not, this is a project that will take longer than the documentation suggests. (It always does. I have watched this play out dozens of times. Budget the learning time as a real cost, not an optional one.)

Custom-built agents — connecting your specific ERP, document workflow, or CRM — typically fall in the €5,000–€15,000 range depending on scope. That covers design, integration, testing, and the first round of iterations when real-world inputs turn out to be messier than the initial brief suggested. They always are. Once the process is clearly defined, implementation takes 2–4 weeks. If it is not clearly defined, double that number and add some aspirin.

The decision is not complicated: if your team can use an off-the-shelf tool without customisation, start there. If your workflow is specific enough that no existing tool fits, a custom build is worth the money. If you are unsure which category you are in, spend an afternoon mapping the process before you talk to anyone.

Five questions to answer before you commit

Before spending anything on an AI agent, get clear answers to these:

Can you describe the workflow on paper right now?

Not the ideal version — the actual one, with exceptions and edge cases included. If the answer is no, process mapping comes first. Agents built on undocumented workflows are the most expensive way to automate a problem you have not fully understood yet.

Have you tried simpler automation first?

For most workflow pain, a rule-based setup clears 60–70% of the manual work. Do that first. The remaining fraction is where an agent might earn its cost. Skipping this step is the single most common way to overspend on a problem with a cheap solution.

Is the output consequential enough to require human review?

If getting it wrong matters — financially, legally, reputationally — build in a review step before you build anything else. 'Mostly right most of the time' is not an acceptable spec for invoices, contracts, or customer commitments.

Do you know what correct looks like?

If you cannot define what a good output is, you cannot configure, test, or improve an agent. 'It feels right when I see it' is not a spec. Write the spec first. This applies to every automation project I have done, and it still catches people out on project 100.

Who owns this when it goes wrong?

Not if — when. Every automated system produces errors eventually. Decide in advance who catches them, what the escalation path looks like, and how you would detect a systematic failure before it compounds quietly downstream. Name that person before the build starts.

For the broader picture of what AI can and cannot replace, there is more on the blog if you want to keep reading. The posts on business process automation and automation ROI sit well alongside this one.

Frequently asked questions

What is the difference between an AI agent and a chatbot?

A chatbot answers. An AI agent acts. Ask a chatbot a question and it generates a response and stops. Give an AI agent a goal — 'research these five leads and flag which ones look relevant' — and it makes a plan, uses the tools it has access to, and works through the task. The difference is autonomy: an agent takes consecutive actions toward an outcome without you specifying every step.

What is the difference between an AI agent and automation software?

Standard automation (Zapier, Power Automate, Make.com) follows a fixed rule: if X happens, do Y. It breaks when the input is unexpected. An AI agent can handle ambiguity — it reads context, adjusts its approach, and picks between different tools to reach a goal. The trade-off: agents are more expensive, harder to audit, and require a human reviewing the output. For predictable, high-volume, consistent tasks, standard automation is usually the better choice.

What can AI agents do for a small business?

The most practical applications right now: variable-format document processing (invoices and contracts where the layout differs by supplier), sales outreach that involves researching leads and personalising messages at scale, customer support triage where an agent drafts responses for a human to approve, and internal knowledge retrieval from shared drives nobody can navigate. Start with tasks that are annoying and medium-stakes — not critical and high-stakes.

Are AI agents reliable?

Reliable enough for defined tasks with a human checking the output — yes. Reliable enough to run unsupervised on anything financially or legally consequential — not yet. The failure mode for agents is not dramatic breakdown. It is confident, plausible-looking errors that a human would catch in ten seconds. Build your workflow assuming the agent will occasionally be wrong, and make those errors visible rather than hidden downstream.

How much does an AI agent cost?

Off-the-shelf platforms (Microsoft Copilot agents, Salesforce Agentforce) run roughly €30–150 per user per month depending on plan and existing licensing. DIY with n8n or Make.com plus an LLM API costs €50–200 per month for a small operation. A custom-built agent connected to your specific ERP, document workflow, or CRM typically falls in the €5,000–€15,000 range depending on complexity. The cheap options require technical setup time. The expensive options require you to know exactly what problem you are solving.

Do I need to be technical to use an AI agent?

For off-the-shelf tools, no. Microsoft Copilot, Salesforce Agentforce, and similar platforms are built for business users without engineering backgrounds. For anything custom — connecting an agent to your ERP, your invoicing system, your CRM — you will need someone technical or a consultant who has done it before. The gap between 'click and configure' and 'make this work with our existing stack' is where most SMBs get stuck.

When should I use an AI agent instead of basic automation?

When the path to the outcome is not always the same. If your process is: receive email → extract data → update system → send confirmation, that is basic automation. If your process is: receive enquiry → figure out which team should handle it → draft a contextual response → flag anything requiring human judgment, that is agent territory. Rule of thumb: if you would need more than twenty 'if/then' rules to cover all the cases, you probably need an agent.

What is a multi-agent system?

A multi-agent system is a setup where several AI agents work together, each handling a different part of a task. One agent researches, another drafts, another reviews. For most SMBs, a single well-configured agent solves 80% of the use cases that multi-agent marketing materials make sound essential. Start with one, make it work, and expand only if there is a clear reason to.


Tijdo Koster

Automation consultant since 2009. 100–200 projects. Still answers his own emails.

If you have made it to the bottom and your main takeaway is that "AI agent" is a fancy way of saying "software that keeps going after you stop paying attention to it," that is a reasonable starting point. The term will keep evolving. The underlying problems it solves will not.

There is more on the blog if you want to keep going. The products page has the AI toolkit for when you are ready to go from understanding the tools to actually using them.