Most teams build the wrong AI workflow first.

The one some manager keeps pushing. The flashy demo. The loudest Slack thread.

Three weeks later, the output is shaky. The cleanup lands on someone else. The team quietly stops trusting any of it.

Most teams are not short on AI ideas. They are short on workflow judgment.

Why the wrong workflows get built first

Sales wants AI to summarize deal context. Support wants AI to classify tickets. Ops wants AI to clean CRM notes before handoff. Leadership wants AI to help with quarterly reporting.

None of those are bad ideas on their own.

The problem is that some of those workflows are ready for AI help, and some are messy process problems wearing an AI costume.

Strong operators do not say yes to every request. They know what to build, what to template, what to defer, what to fix manually first, and what to kill.

The filter

Here's the quick filter I use before giving any workflow AI attention. Five questions:

1. Is it frequent enough to matter?
2. Is the pain real?
3. Who cleans it up if it goes wrong, and can you live with that cost?
4. Does it have a real source of truth, or does it only work because one person knows the exceptions?
5. Can you tell whether it worked?

The source-of-truth question is the one that saves people time.
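
If it helps to make that concrete, here is a minimal sketch of the filter as a pass/fail check. Everything in it is illustrative: the names and fields are mine, not from any real tool, and the honest work is in answering the questions, not running the code.

```python
from dataclasses import dataclass

@dataclass
class WorkflowCandidate:
    """One workflow under consideration. Every field is a judgment call, not a metric."""
    name: str
    frequent_enough: bool          # runs often enough for savings to compound
    pain_is_real: bool             # people already escalate it, unprompted
    cleanup_cost_acceptable: bool  # if it misfires, the downstream mess is survivable
    has_source_of_truth: bool      # inputs live in fields, not in one person's head
    measurable: bool               # you could tell afterward whether it got better

def passes_filter(w: WorkflowCandidate) -> bool:
    """One honest 'no' on any question means don't build the AI version yet."""
    return all([
        w.frequent_enough,
        w.pain_is_real,
        w.cleanup_cost_acceptable,
        w.has_source_of_truth,
        w.measurable,
    ])

triage = WorkflowCandidate(
    name="deal desk first-pass triage",
    frequent_enough=True,
    pain_is_real=True,
    cleanup_cost_acceptable=True,
    has_source_of_truth=True,  # only if required fields are actually filled in
    measurable=True,
)
print(passes_filter(triage))  # True
```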

The fastest way to waste a quarter is to AI-enable a workflow that only works today because one experienced person knows the exceptions. The AI will not fix that. It will scale the confusion.

A worked example: deal desk triage

Say your deal desk or RevOps team wants AI to do first-pass triage on incoming submissions. Flag missing approver fields. Call out non-standard discounting or payment terms. Route known exception patterns to the right reviewer before a human touches the record.

At first glance, that sounds like a great AI workflow. High volume. Repeated work. Pattern matching.

Sometimes it is. Run the filter.

Frequent enough to matter? Yes. First-pass review burns real hours and creates routing bottlenecks.

Pain real? Yes. Sales escalates slow reviews. Finance gets blamed for delays.

Who cleans it up if it goes wrong? Sales, Finance, Legal, and Order Management may all inherit the mess. That cost is real.

Real source of truth? Maybe. If required fields are consistently filled in, this works. If reps dump half the context into Slack or leave approval fields blank, it breaks fast.

Can you tell whether it worked? Yes. Track triage time, routing accuracy, resubmission rate, and exception catch rate.

Passes. This is a good candidate for AI-assisted triage.

Not full automation. AI-assisted triage.

The difference matters. A first pass that flags likely issues for human review is one thing. Letting the workflow route or decide on its own when source fields are messy is how you create downstream cleanup and lose trust fast.
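
For a sense of what "AI-assisted, not automated" looks like in shape, here is a minimal sketch. It is illustrative only: the field names and thresholds are assumptions, not anyone's real deal desk schema, and the output is a list of flags for a human reviewer, never a routing decision.

```python
# Illustrative first-pass triage: flag likely issues, let a human decide.
# Field names and thresholds are hypothetical, not a real deal desk schema.

REQUIRED_FIELDS = ["approver", "discount_pct", "payment_terms"]
STANDARD_TERMS = {"net30", "net45"}
MAX_STANDARD_DISCOUNT = 0.20  # assumed cutoff for "standard" discounting

def triage_flags(submission: dict) -> list[str]:
    """Return human-readable flags for a reviewer. Empty list means nothing found."""
    flags = []

    # Missing approver or other required fields
    for field in REQUIRED_FIELDS:
        if not submission.get(field):
            flags.append(f"missing required field: {field}")

    # Non-standard discounting
    discount = submission.get("discount_pct")
    if discount is not None and discount > MAX_STANDARD_DISCOUNT:
        flags.append(f"non-standard discount: {discount:.0%}")

    # Non-standard payment terms
    terms = submission.get("payment_terms")
    if terms and terms not in STANDARD_TERMS:
        flags.append(f"non-standard payment terms: {terms}")

    return flags

# The deliberate limit: this surfaces flags next to the record.
# It does not route, approve, or reject anything on its own.
print(triage_flags({"approver": "", "discount_pct": 0.35, "payment_terms": "net90"}))
# ['missing required field: approver', 'non-standard discount: 35%',
#  'non-standard payment terms: net90']
```

A model call could replace or augment the rules above, say to read unstructured notes, but the shape stays the same: the output is a flag for review, not an action.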

Now contrast that with something teams also love to suggest: using AI to help with quarterly pipeline review.

It looks important. Leadership sees it. It feels strategic. It usually fails the filter. Cadence is too low, the source of truth lives in comments and side conversations, the real logic is rep judgment and manager caveats, and if the output is wrong you are not cleaning up a draft. You are arguing about the business in a meeting.

That is not an AI workflow. That is a signal to clean up process, definitions, and data first.

Where teams get this wrong

They pick the workflow that sounds smartest, not the one stable enough to help.

They count time saved in one step and ignore cleanup created two teams downstream.

They mistake process debt for AI opportunity.

And once you push a bad AI workflow to production, turning it off gets political. People get attached to the idea long before the workflow earns trust.

What to do when the answer is no

A workflow failing the filter does not mean doing nothing. Usually, the move is one of four: document it, fix the process, template it, or defer it.

Deferring matters more than people think.

People count AI workflow builds as progress because something shipped. That is not progress if the workflow was never ready in the first place.

Good ops judgment looks quieter than that. It looks like saying yes to the boring, high-frequency workflows with clear inputs and limited cleanup cost. It looks like saying no to the messy, hard-to-measure workflows that will eat time and trust. And it looks like knowing the difference before the build starts.

Where to start

Pick one workflow your team is already talking about. Not five. One.

Run it through the five questions. Be honest about weak spots, especially source of truth and cleanup cost.

If it fails, pick the right non-AI move. Document it. Template it. Fix the process. Or defer it.

If it passes, baseline it before you build. If you cannot tell whether the AI version got better, you are guessing.
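
A baseline can be as light as pulling a few numbers from the records you already have. A minimal sketch, with hypothetical field names and made-up values, reusing the four metrics from the triage example:

```python
from statistics import median

# Hypothetical records from the manual process, pulled before any AI work.
# Field names and values are illustrative.
records = [
    {"triage_minutes": 25, "routed_correctly": True,  "resubmitted": False, "exception_caught": True},
    {"triage_minutes": 40, "routed_correctly": False, "resubmitted": True,  "exception_caught": False},
    {"triage_minutes": 18, "routed_correctly": True,  "resubmitted": False, "exception_caught": True},
]

def rate(records: list[dict], key: str) -> float:
    """Share of records where the given boolean field is true."""
    return sum(r[key] for r in records) / len(records)

baseline = {
    "median_triage_minutes": median(r["triage_minutes"] for r in records),
    "routing_accuracy": rate(records, "routed_correctly"),
    "resubmission_rate": rate(records, "resubmitted"),
    "exception_catch_rate": rate(records, "exception_caught"),
}
print(baseline)

# Re-measure the same four numbers after the AI-assisted version ships.
# If they do not move, the build did not help, whatever the demo looked like.
```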

If nobody owns it, stop there. A workflow without an owner does not become safer because AI touched it.

Better tools do not fix weak workflow judgment.

They just make bad bets faster.

Until next week,

@OpsJzn

AI should mean fewer steps, not more tools.
