Signal vs. Noise at Scale
The minute I shipped Timbers, a $60M competitor landed in my inbox.
Entire, from GitHub's former CEO, hooks into your git workflow and captures complete AI agent sessions (prompts, responses, tool calls, token counts) on every push. It stores everything on a separate branch so your main history stays clean. Git-native. MIT licensed. The works.
The overlap with Timbers is obvious. Both are git-native. Both address the gap between "what changed" and "why it changed." Both recognize that AI-generated commits need more context than a diff provides.
But the philosophy is inverted. Entire captures everything automatically. Timbers captures what you decide to keep. Raw transcripts versus curated decisions. The haystack versus the needle.
Here's my problem with the haystack: I've created roughly 600 agent sessions in the last month across a few projects. Solo. A typical session transcript runs 50-200KB. That's 30-120MB of raw material per month, just from me. For a team of agent-heavy devs, you're looking at gigabytes accumulating on that branch.
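The back-of-envelope math above is easy to sanity-check. A minimal sketch, using the post's own numbers plus a hypothetical team size and time horizon for the extrapolation:

```python
# Illustrative storage math for "capture everything".
# Inputs from the post: ~600 sessions/month solo, 50-200KB per transcript.
sessions_per_month = 600
kb_low, kb_high = 50, 200

solo_low_mb = sessions_per_month * kb_low / 1000
solo_high_mb = sessions_per_month * kb_high / 1000
print(f"One dev: {solo_low_mb:.0f}-{solo_high_mb:.0f} MB/month")  # 30-120 MB

# Hypothetical: a 10-dev agent-heavy team, one year, at the high end.
team, months = 10, 12
team_gb = sessions_per_month * kb_high * team * months / 1_000_000
print(f"Team, upper bound: {team_gb:.1f} GB/year")
```

At the high end that's double-digit gigabytes per year riding along on a branch in every full clone.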
The separate branch keeps your working tree clean. It doesn't help with clone times, backup sizes, or CI/CD pipelines that fetch fresh. Git repos don't scale like databases. "Capture everything" has a carrying cost that compounds.
Which, presumably, is where Entire's platform comes in. Store it with us. Query it with us. The MIT-licensed CLI is the hook; the storage problem is the moat.
Timbers makes a different bet: most of the data in most sessions doesn't matter. The interesting decisions, the ones worth remembering, are a fraction of what gets captured. Extract the signal at commit time, throw away the noise, and your repo stays a repo instead of becoming an ELT platform.
And when you pay the extraction cost upfront, query time gets cheap. timbers draft changelog --since 7d runs in seconds because there's no haystack to search, just structured entries ready to transform. Capture is expensive; retrieval should be free.
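To make the retrieval claim concrete: a sketch of why querying curated entries is cheap. The entry shape here is hypothetical (the post doesn't show Timbers' actual format); the point is that a "--since 7d" style query over pre-extracted decisions is a linear scan of small records, not a search through megabytes of transcript.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical curated entries: one small structured record per commit,
# instead of a 50-200KB raw transcript per session.
entries = [
    {"ts": (now - timedelta(days=2)).isoformat(),
     "decision": "switched retry loop to exponential backoff"},
    {"ts": (now - timedelta(days=30)).isoformat(),
     "decision": "dropped ORM layer for raw SQL in the hot path"},
]

def since(entries, days):
    """Keep entries newer than the cutoff -- a plain scan, no indexing needed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [e for e in entries if datetime.fromisoformat(e["ts"]) >= cutoff]

recent = since(entries, 7)  # the rough equivalent of --since 7d
```

Because extraction already happened at commit time, the query is trivial; the expensive model ran once, up front.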
The tradeoff: you want a strong model doing the extraction, since you can't easily re-process later. Today that means Opus or similar, which isn't cheap. But today's premium is tomorrow's baseline. The cost curve only goes one direction.
$60M buys a lot of needles-from-haystacks extraction. But I'd rather not store the haystack in the first place.