The Art of Saying No to Helpful AI
I've been building with AI coding agents lately — and they're surprisingly good. But one thing's become clear: you still have to say no. A lot.
Recently, I was working on a lesson plan generator app. I asked an AI agent to help improve the architecture. What followed was a firehose of well-intentioned suggestions: new abstractions, reorganized parameters, helper classes, schema changes — the works. Everything looked clean and organized. But it wasn't what I needed.
The Pitch: Clean Code and Abstractions
One of the first changes it proposed was a LessonPlanConfig dataclass, a wrapper to avoid passing multiple parameters to functions. On paper, it made sense. Type safety. Easier to extend later. Less clutter.
But here's the thing: I didn't need it.
The current setup passed three arguments. That's it. The config class added complexity without any real benefit. It was "future-proofing" at the cost of clarity.
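To make the trade-off concrete, here's roughly what the proposal looked like next to what I already had. The names are made up for illustration; the real parameters differ, but the shape is the same:

```python
from dataclasses import dataclass

# What the agent proposed (parameter names are illustrative, not my actual ones):
@dataclass
class LessonPlanConfig:
    topic: str
    grade_level: str
    duration_minutes: int

def generate_lesson_plan(config: LessonPlanConfig) -> str:
    # Every caller now builds a config object just to pass three values through.
    ...

# What I already had: three arguments, obvious at every call site.
def generate_lesson_plan_simple(topic: str, grade_level: str, duration_minutes: int) -> str:
    ...
```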
So I said no.
The Real Problem: It Also Dropped What Mattered
After I pushed back on the overengineering, I noticed something else: the AI had quietly dropped the database schema changes.
That was a much bigger problem.
The agent had proposed adding a lesson evaluation system — an LLM-as-a-judge that scores the quality of generated lesson plans. But without saving those scores to the database, the whole feature was pointless. No tracking, no analysis, no sorting by quality. Just ephemeral output.
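For context, the missing piece looked something like this. The table and column names are invented for the sketch, and I'm assuming a SQLite backend, but it shows why the scores have to land in the database at all:

```python
import sqlite3

# One-time schema change so the judge's score lives next to each plan.
MIGRATION = "ALTER TABLE lesson_plans ADD COLUMN quality_score REAL"

def save_score(conn: sqlite3.Connection, plan_id: int, score: float) -> None:
    # Without this step the LLM-as-a-judge output is ephemeral:
    # nothing to track, analyze, or sort by quality later.
    conn.execute(
        "UPDATE lesson_plans SET quality_score = ? WHERE id = ?",
        (score, plan_id),
    )
    conn.commit()
```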
This reminded me: AI agents are great at proposing "nice" changes, but they often miss what's necessary.
What I Kept (and Why)
After pruning the fluff, here's what made the cut:
- A call_llm() helper to standardize prompt calls (see the sketch below)
- Thoughtful use of system prompts to improve LLM responses
- An evaluation module that scores lesson plans via LLM
- A DB schema change to persist those scores
- Basic logging for observability
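Here's roughly what the call_llm() helper boils down to. I'm assuming the OpenAI Python client and model name for this sketch; the actual client and model in the app may differ:

```python
import logging

from openai import OpenAI  # assumed client for this sketch; swap in whatever you use

logger = logging.getLogger(__name__)
client = OpenAI()

def call_llm(system_prompt: str, user_prompt: str, model: str = "gpt-4o-mini") -> str:
    # One entry point for every prompt call: consistent system-prompt handling,
    # one place to log, and one place to change the model later.
    logger.info("LLM call: model=%s prompt_chars=%d", model, len(user_prompt))
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```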
Nothing flashy — just focused improvements that actually moved the needle.
What I Learned
Working with an AI agent is like pairing with a junior dev who's read all the right books but doesn't yet know what matters. They'll give you a clean pull request with perfect abstractions… that solve the wrong problem.
You have to lead. That means:
- Asking "why" for every change
- Pushing back on abstraction for abstraction's sake
- Prioritizing what delivers value right now
- Keeping things observable and traceable
- Saying no, even when the suggestion is technically "good"
Final Thought
AI agents are incredibly useful — and incredibly eager. They'll suggest things that sound right and look right, but aren't always what your product needs.
Don't be afraid to say no.
Saying no is what makes you the engineer. The agent can generate ideas, but you decide what ships.