The Skills Decay Curve
Benn Stancil said something I had to sit with.
On the High Signal podcast, he compared prompt engineering and context scaffolding to learning Google search syntax in the early days. Remember that? site:stackoverflow.com "exact phrase" -pinterest. We thought we were power users. Kids today just type questions and Google figures it out.
His claim: "People who are over-optimizing everything about how to use Claude Code are wasting their time. Not exactly—they're going faster now—but in six months none of that stuff will be useful."
I have a lot of scaffolding. Skills files. CLAUDE.md instructions. Hooks that catch bad patterns. Deliberation workflows that force multiple perspectives before committing to a plan. I've spent real hours on this infrastructure. And Stancil's argument is that it's all going to get absorbed—baked into the models, obviated by better defaults, forgotten like search syntax.
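For a sense of how small these pieces are, here's a hedged sketch of a "catch bad patterns" hook. The filename, the pattern list, and the JSON-on-stdin shape are all my assumptions for illustration, not a description of any particular tool's real interface: the idea is just a script that scans proposed content and exits nonzero to block it.

```python
# bad_pattern_hook.py -- illustrative sketch, not a real Claude Code API.
# Assumes the caller passes proposed file content as JSON on stdin,
# e.g. {"content": "..."}; the actual wiring depends on your tooling.
import json
import re
import sys

# Hypothetical patterns this hook rejects; tune to your own codebase.
BAD_PATTERNS = [
    (re.compile(r"except\s*:\s*pass\b"), "bare except that swallows errors"),
    (re.compile(r"\bprint\("), "stray debug print"),
]

def check(content: str) -> list[str]:
    """Return human-readable descriptions of every bad pattern found."""
    return [reason for pattern, reason in BAD_PATTERNS if pattern.search(content)]

if __name__ == "__main__":
    payload = json.load(sys.stdin)
    violations = check(payload.get("content", ""))
    for violation in violations:
        print(f"blocked: {violation}", file=sys.stderr)
    # A nonzero exit signals the caller to reject the proposed change.
    sys.exit(1 if violations else 0)
```

Each hook is a few dozen lines at most; the value is in knowing which patterns recur often enough to be worth blocking.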
He's probably right about the absorption part.
But the key word in his quote is over-optimizing. The question isn't whether scaffolding is waste; it's where the line falls between productive scaffolding and diminishing returns. And that depends on where you sit.
If you're a leader in this space—building the tooling others will use, pushing the frontier—your scaffolding informs what gets absorbed. You're not wasting effort; you're doing R&D. The search operators we learned became the query understanding Google built. Someone had to figure out what "good search" meant before the machine could learn it.
If you're a follower trying to keep up, the calculus is different. Maybe you wait. Maybe you let the leaders absorb the pain and ship the defaults.
I'm somewhere in between. But here's what Stancil's framing misses: the scaffolding isn't expensive anymore.
In The New Stem, I argued that agent orchestration is becoming the core skill—knowing what constraints to apply, what guardrails to build, what scaffolding you need. But I was still thinking of scaffolding as something I build. That's shifted.
Claude builds my scaffolding now. And maintains it. And improves it.
I wrote about this in Closing the Loop on /insights. Claude ran a retrospective on 496 sessions, identified friction patterns, convened a panel of itself to debate solutions, proposed changes to my operational constraints, deployed them, and tested until they stuck. I was there—reviewing, steering—but the cognitive work was almost entirely on the other side of the conversation.
The cost of that efficiency gain? A few prompts and some review time.
Stancil assumes scaffolding is expensive human effort that gets wasted when models improve. But if Claude builds the scaffolding, the investment is trivial. The ROI calculation inverts. Even short-term gains pay off when the cost is near zero.
The skill that transfers forward isn't building scaffolding—it's knowing what scaffolding you need. The judgment about where friction lives, what constraints would help, what patterns keep recurring. That understanding doesn't decay when the implementation gets absorbed. It informs the next round.
Maybe the scaffolding I built today becomes a default tomorrow. Good. That's the best possible outcome—my R&D became everyone's baseline. And I'll have moved on to the next set of constraints that aren't default yet.
The search operators got absorbed. The instinct for when you need precision didn't. The next wave will absorb something else—and I'll do my best to keep adapting.