
What to Do Before You Add AI to a Workflow

Adding AI to a workflow feels productive. Sometimes it is. A lot of the time it just makes the mess faster.

Before you wire a model into anything that matters, slow down and do this first.

A Simple Pre-AI Checklist

If you can’t answer these clearly, adding AI will not fix the problem. It will mostly hide it.


1. Define the Actual Job

Not the tool. Not the output. The job.

Ask:

  • What is the task trying to accomplish?
  • Who uses the result?
  • What decision or action depends on it?

Bad definition:

  • “Summarize customer feedback”

Better definition:

  • “Produce a short list of recurring issues that a product manager can review weekly”

If the job isn’t clear, AI will happily produce something that looks useful and isn’t.


2. Identify the Inputs (Precisely)

AI does not work on “information.” It works on inputs.

List:

  • Where the input comes from
  • What format it’s in (text, notes, tickets, PDFs, etc.)
  • How consistent it is

Ask:

  • Is this input clean, messy, incomplete, biased, outdated?

If the inputs are garbage, the output will be impressively confident garbage.


3. Decide What “Good Enough” Means

Perfection is not an option. Define what counts as acceptable.

Write down:

  • What a usable output looks like
  • What errors are tolerable
  • What errors are not

Example:

  • Minor wording issues are fine
  • Missing a legal disclaimer is not
  • Inventing facts is absolutely not

If you don’t set boundaries, AI will invent its own. You won’t like them.


4. Map the Failure Modes

Assume it will fail. Decide how.

Common failure modes:

  • Hallucinated facts
  • Overconfident wrong answers
  • Missing edge cases
  • Inconsistent tone or structure

Ask:

  • How would I know this failed?
  • Who notices first?
  • What’s the consequence if no one notices?

This step is boring. It’s also the one that prevents damage.


5. Decide Where Humans Stay in the Loop

“Human-in-the-loop” is vague. Be specific.

Define:

  • Who reviews the output
  • What they are checking for
  • When they can safely skip review (if ever)

If no human is responsible for the result, the system owns you. Not the other way around.


6. Start With a Small, Reversible Test

Do not roll this into production immediately.

Instead:

  • Run it on old data
  • Compare AI output to existing work
  • Measure time saved vs. errors introduced

If the experiment fails, you should be able to turn it off without a meeting, a migration, or a minor existential crisis.


7. Write Down What You Learned

This is the part everyone skips.

Capture:

  • What worked
  • What broke
  • What assumptions were wrong
  • Where humans still add the most value

This becomes your real documentation. Not the prompt. Not the diagram.


The Shortcut That Never Works

If the workflow is unclear, the inputs are unstable, and no one owns the output, AI will not “optimize” anything. It will just accelerate confusion.

Do the thinking first. Add AI second.
