Epistemic status: Personal observation, not a generalizable claim. The pattern is something I've started to see in my own work and life over the past several months. The specific examples are real. The unifying frame may not generalize beyond me.


I've been noticing a pattern in my own behavior that connects work and life in a way I didn't expect. It started, of all places, on a walk.

I was on autopilot, walking the loop I always walk. End of the block, turn, get home, next thing. About halfway through I noticed, almost by accident, that the wind was on my face. The leaves were rustling. The spring flora had colors I hadn't seen yet this year. None of this was happening for me. It was happening anyway. The walk was already there. I hadn't been at the right resolution to receive it.

The same week, in a project I'm working on called Stella, I caught myself doing the same thing in code.

Two readings of "experiencing more"

I have an old notes file with a three-word fragment: "Growth in experiencing more." Wrote it months ago, forgot about it.

When I came back to it, the phrase admitted two readings. The first: experiencing more means variety. Wider exposure, more cities, more cuisines, more projects. Growth as breadth. This is the dominant cultural reading.

The second: experiencing more means depth. Perceiving more in the same moment. More attention, not more events. Growth as resolution. The wind was already there during my walk. I just hadn't sampled it.

I picked the second. Once I picked it, the title I'd been carrying around for this piece, "Slow and Fast," meant something different than I had thought.

Slow and fast are sampling rates, not pace

If growth is about resolution rather than variety, slow and fast aren't pace. They're sampling rates.

Fast mode is low-resolution sampling. You move through the day, hit milestones, complete tasks. The events happen. They don't fully register. Time compresses in memory because there wasn't enough texture to mark it.

Slow mode is high-resolution sampling. The same minute, the same walk, but more of it lands. Time expands because there's more there to remember.

This is not the same as moving slowly. You can move quickly through a day at high resolution. You can move slowly through a day at low resolution. Pace and sampling rate are independent variables, even if our culture treats them as correlated.

So far this is a personal aesthetic, the kind of thing you might find in a meditation book. I'd have left it there if it weren't for what happened on Stella.

The Stella moment

Stella is an AI safety simulation tool I've been building. It tests how an AI system behaves in high-stakes situations, the kind where a wrong response has real consequences. The question that started the project was concrete: does the system actually catch the failure modes that matter most, especially the ones that are easy to miss?

A few weeks in, I caught myself in the middle of an obvious next step. I had been building out the surrounding infrastructure: more scenario coverage, richer test blueprints, an evaluation pipeline that scored across multiple dimensions. Each piece had been the locally rational next move. AI-native development made each piece cheap to add. The next addition was twenty minutes away.

Something made me stop. Probably tiredness. I asked myself, almost annoyed, what I was actually trying to do. Not the next infrastructure piece. The thing the infrastructure was supposed to serve.

The honest answer was that the original safety question had been quietly demoted. I had a sophisticated platform. I had not actually answered, with that platform, the question that motivated it: does the system catch the disguised failure cases or doesn't it? The platform had become its own end. Each cheap step had carried me one notch further from the question I'd set out to ask.

That's when the walk and Stella connected for me.

Same shape

Walk:

  • "Keep walking" is the locally rational move at every step.
  • Each step costs almost nothing, so I don't evaluate it.
  • Twenty minutes later I'm home. The walk happened. I wasn't really in it.

Stella:

  • "Add the next piece of infrastructure" is the locally rational move at every step.
  • AI-native development has driven the cost of each next step toward zero.
  • Three weeks later I have a working platform. The platform exists. It hasn't yet answered the question I built it to answer.

The structure repeats:

A locally rational fast default at every step. A globally impoverished outcome in aggregate.

The pattern, stated plainly

The mechanism is small enough to write out:

  1. An action has a per-step cost.
  2. The per-step cost is what triggers the question "should I take this step?"
  3. Drop the cost below the threshold of attention, and the question stops being asked.
  4. You keep moving. The direction drifts. The output looks productive.
  5. The aggregate is misaligned with what you actually wanted.

I don't think every cheap step is a problem. Most are fine. The danger is specifically when the cost of the next step is low enough that the question of whether to take it stops getting asked. The metabolic cost of the next step was the regulatory mechanism. Take that cost to zero, and the regulator is gone.
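To check my own reasoning, I wrote the mechanism out as a toy simulation. Nothing in it comes from Stella; the threshold, the costs, and the drift numbers are all made up for illustration. The only point it makes is step 3 above: once the per-step cost drops below the attention threshold, the direction check stops firing, and the small per-step biases accumulate.

    import random

    random.seed(0)  # deterministic toy run

    STEPS = 100
    ATTENTION_THRESHOLD = 5.0  # hypothetical: minutes of effort below which I don't stop to ask "should I?"

    def run(step_cost_minutes: float) -> float:
        """Take STEPS steps; each one drifts slightly off-target.

        If a step is expensive enough to notice, the cost forces the
        "is this worth taking?" question, which corrects the drift.
        Cheap steps just get taken.
        """
        drift = 0.0
        for _ in range(STEPS):
            if step_cost_minutes >= ATTENTION_THRESHOLD:
                drift = 0.0  # friction triggered a direction check
            drift += random.uniform(-1.0, 1.5)  # each step is slightly biased away from the goal
        return drift

    print("costly steps, final drift:", round(run(20.0), 1))  # the check fires every step; drift stays small
    print("cheap steps,  final drift:", round(run(0.5), 1))   # the check never fires; drift accumulates

The absolute numbers mean nothing. The shape is the point: the same per-step bias, with and without a cost large enough to trigger the question.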

Why AI-native development is the extreme case

Before AI assistance, the cost of writing the next ten lines of code was non-trivial. Twenty minutes of typing, looking up syntax, dealing with errors. Twenty minutes is enough friction that you tend to ask, before paying it, whether the ten lines are worth writing. The friction itself was a check on direction.

With AI assistance, those ten lines arrive in thirty seconds. The friction is gone. The check the friction was performing is gone with it.

What's left is a system where you can produce code as fast as you can describe it, where description is much faster than reflection, and where you therefore produce more than you reflect on. The code compiles, the tests pass, the demo works. But the question of fit between what you're building and what you actually wanted has been quietly skipped at every step.

The pattern isn't new. Cars made it easier to live far from work, which made it harder to ask whether the commute was worth it. Email made it cheap to send messages, which made it harder to ask whether each one needed to be sent. AI just sharpens the dynamic to where it becomes hard to ignore.

I'm part of what I'm describing. I build with these tools. I write enthusiastic posts about velocity and the collapsing cost of software. The critique here is being voiced by someone who has been benefiting from velocity for years. That doesn't make the analysis wrong, but it does make it implicated.

Slow as protocol

Slow isn't virtue. It isn't the contemplative life as an end in itself. It's something more functional.

Slow is the protocol for inserting reflection into a system that has otherwise been optimized for action. It's the artificial reintroduction of a check that the friction of effort used to provide for free.

Slow isn't the opposite of fast. Slow is the protocol for keeping fast aligned with what you actually wanted.

This is more useful than "slow is good, fast is bad." It tells you when slowness matters and when it doesn't. Slowness matters at decision points that affect direction. It doesn't particularly matter when you're executing within a direction you've already chosen.

The walk is about direction. The point of going outside, for me, isn't to traverse distance. It's to be in the world for twenty minutes. If I'm fast about that, I've done it wrong. The fast mode of walking accomplishes none of what walking is for.

The Stella project was also about direction, even though it didn't look that way. Each infrastructure addition felt like execution within a direction. Because the direction itself had drifted, every fast execution step was carrying me further from the safety question that started the project. The slow question, "what am I actually trying to answer here?", was the only thing that could detect the drift.

The practical question isn't "should I be slower in life?" It's "where in my system have the friction-based checks broken, and what's the artificial replacement?"

What I'm trying

I haven't solved this. I have one practice that's been useful enough to keep, and a lot of open questions.

The practice: before opening a coding session, I write down in one sentence what I'm trying to accomplish at the level of intent, not implementation. Not "add the next piece of evaluation tooling." But "find out whether the system actually catches the disguised failure cases, not just the obvious ones." This adds maybe ninety seconds. It isn't heroic. The days I do it feel different from the days I don't. The fast steps still happen. They just happen against a target.
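If I ever wanted to enforce the habit mechanically, it might look like the sketch below. Everything here is hypothetical, not something I actually use: the log path and the idea of refusing to start without a sentence are illustrations of the practice, nothing more.

    #!/usr/bin/env python3
    """Sketch of the intent-first habit as a tiny gate before a coding session."""
    from datetime import datetime
    from pathlib import Path

    LOG = Path.home() / "session-intents.log"  # hypothetical log location

    def main() -> None:
        intent = input("One sentence: what is this session actually for? ").strip()
        if not intent:
            print("No intent written down. Not starting.")
            raise SystemExit(1)
        # Keep the sentence around so the fast steps happen against a visible target.
        with LOG.open("a", encoding="utf-8") as f:
            f.write(f"{datetime.now():%Y-%m-%d %H:%M}  {intent}\n")
        print(f"Working against: {intent!r}")

    if __name__ == "__main__":
        main()

Wired in front of whatever opens the editor, it costs the same ninety seconds, just with the refusal built in.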

The same logic, generalized, is what I'm trying to apply to the rest of life. Less successfully, so far.

What I'm still uncertain about

Is perceptual resolution trainable, or is it temperament? Some people seem to live in slow mode by default. Others live in fast mode. I don't know whether this is a learned capacity or a stable trait. I'd like to believe it's trainable, but that's the convenient belief, and convenient beliefs deserve extra scrutiny.

Is scheduled slowness enough, or do you need a shifted default? I can put a thirty-minute walk on the calendar. That costs almost nothing. The walk itself might still happen at low resolution if my default mode hasn't shifted. The open question is whether scheduling time for slow mode actually changes the default, or whether you need something deeper.

What's the failure mode of too much slow? I distrust frames that don't have a downside. The failure mode of fast mode is the one I've described: drift, low resolution, building the wrong thing. The failure mode of slow mode is probably paralysis, endless deliberation, rumination instead of perception, failure to act when action is what's called for. I don't have a good intuition for where the optimal mix sits.

Is this an AI-era problem, or older? The pattern of cheap action displacing reflection is old. But AI feels like a phase change rather than a continuation. The cost reduction is large enough, and arrived fast enough, that the equilibrium between action and reflection has been disrupted in a way previous technologies didn't quite manage. I'm not certain. It might just look that way because it's happening to me right now.

The atrophy worry is the one I'd most like someone to study. If years of AI-native development trains a default mode where the next step is always taken without evaluation, does the slow capacity weaken from disuse? I don't know. The people I find most thoughtful tend to have explicit practices that protect time for slow mode. That might be aesthetic. It might also be load-bearing.

Where this leaves me

I started by writing down "Growth in experiencing more" without really knowing what I meant. After thinking about it, I'm reasonably sure I meant: growth happens when you're at high enough resolution to actually receive what's happening, in your work and your life.

Resolution gets attacked by anything that lowers the cost of the next step below the cost of asking whether the next step is worth taking. AI-native development is the most dramatic example I've encountered, but the pattern is broader. Anywhere a system is optimized to produce action without producing reflection, the reflection has to be reintroduced deliberately, or the alignment between what you're doing and what you wanted will quietly drift.

I don't know yet whether scheduled slowness is enough, or whether the default mode itself has to shift. That's the thing I'm watching in myself. The honest answer right now is that, in much of my work and life, nothing is asking whether I should be moving in this direction. That's the gap I'm trying to fix.