The Difference Between a System That Improves and a System That’s Alive
Daniel Miessler got the scaffolding right. The identity part is weirder than anyone's talking about.
Howdy, folks. I’m building a cognitive operating system on my personal laptop. Last night I was up past 10pm on a weeknight working on it — which, if you know me, means something gripped me hard enough to blow past my usual shutdown time.
That something was a collision: an article I read, and the thing I’ve been building, and the realization that they’re circling the same problem from opposite directions.
Daniel Miessler published a piece called “The Most Important Ideas in AI Right Now,” and it’s one of those rare things where someone actually says the thing clearly instead of dancing around it for twelve paragraphs. His thesis — condensed to a sentence — is that every entity on the planet is about to converge on the same cycle: define what you want, execute with agents, log everything, collect failures, improve autonomously, update the SOPs, repeat. Faster each time. The universal improvement cycle.
He’s right about almost all of it.
Scaffolding: The Quiet Thief
Here’s the thing that hit me hardest. Miessler says 75-99% of knowledge work is scaffolding overhead. Not the actual thinking. Not the insight. Not the creative act that only a human can do. The scaffolding *around* the act. Maintaining tooling, wrangling environments, configuring templates, keeping the goddamn knowledge base organized so you can find the thing you wrote down three weeks ago.
If you think that sounds like building software, you’d be correct. But it’s also making an Instagram post.
I know this because I live it. I’ll have a thought — something alive, something that wants to exist in the world. The writing feels great. And then the scaffolding starts. Which platform? What format does the algorithm want today? Open Canva. Find a template that doesn’t make me want to throw my laptop into the ocean. Rewrite the hook because Instagram wants one thing, Substack wants another, and Threads wants a third. Schedule it. Remember to check engagement. And by the time I’ve fought through all of that?
I never want to create anything again for the rest of my natural life.
This is not a discipline problem. This is a design problem. And Miessler names it exactly right: the actual work was never the hard part. The scaffolding was.
So I started building a system that eats the scaffolding. Yesterday I pasted a one-hour meeting transcript into it — a conversation with my business partner where we bounced between podcast logistics, Pinterest ad strategy, the Mandela effect, our kids’ astrology charts, and whether Ed McMahon actually worked for Publishers Clearing House (he didn’t, apparently, and I’m still not over it). The system pulled out three decisions we’d made, a marketing strategy shift, an episode idea for a podcast that doesn’t have a name yet, and half a dozen pieces of context that belonged in threads I’d started weeks ago. Routed everything to the right place automatically.
The scaffolding just... wasn’t there. I showed up. I talked. The system did the rest.
Miessler’s right. AI crushes scaffolding. No argument from me.
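For the technically curious, here’s the shape of that routing step. This is a minimal sketch, not the real implementation: the extraction pass is stubbed where a model call would go, and the file layout is simplified.

```python
# Sketch of the transcript-routing step. In the real system a model does
# the extraction; here it's stubbed with sample output so the routing
# logic reads clearly. Thread files are plain markdown, one per thread.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Item:
    kind: str     # "decision", "idea", "context", ...
    thread: str   # which ongoing thread this belongs to
    text: str

def extract_items(transcript: str) -> list[Item]:
    # Stand-in for the model extraction pass (hypothetical); returns
    # sample output so the routing below is runnable.
    return [
        Item("decision", "podcast", "Record intros before the interview."),
        Item("idea", "unnamed-podcast", "Episode on the Mandela effect."),
    ]

def route(items: list[Item], vault: Path) -> None:
    # Append each item to the markdown file for its thread; create the
    # vault directory if it doesn't exist yet.
    vault.mkdir(parents=True, exist_ok=True)
    for item in items:
        thread_file = vault / f"{item.thread}.md"
        with thread_file.open("a", encoding="utf-8") as f:
            f.write(f"- [{item.kind}] {item.text}\n")

route(extract_items("(transcript text)"), Path("threads"))
```

The point is the last function: items land where they belong without me opening a single app.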
The Grip Comes Before the Intent
Here’s where I get off the bus.
Miessler says the new bottleneck is intent. Being able to articulate what you actually want clearly enough that the system can verify it and optimize toward it. Break your desired outcome into eight-to-twelve-word criteria, binary pass/fail. Hill-climb. Eval. Improve. He’s building tooling around this — taking any request and reverse-engineering it into discrete, testable ideal-state criteria.
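To make his scheme concrete, a criteria list might look something like this. The checks are invented for illustration; they’re not his actual tooling.

```python
# Toy rendering of ideal-state criteria: short, discrete, binary
# pass/fail. You hill-climb a draft until every check passes.
from typing import Callable

Check = tuple[str, Callable[[str], bool]]

checks: list[Check] = [
    ("fits in a single tweet", lambda d: len(d) <= 280),
    ("names scaffolding explicitly", lambda d: "scaffolding" in d.lower()),
    ("ends with a question", lambda d: d.rstrip().endswith("?")),
]

def evaluate(draft: str) -> dict[str, bool]:
    # Binary pass/fail per criterion; no partial credit, by design.
    return {name: check(draft) for name, check in checks}

def passes(draft: str) -> bool:
    return all(evaluate(draft).values())
```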
It’s elegant. It’s rigorous. And it assumes something that is not true for me, and I suspect is not true for a lot of creative and multi-domain thinkers: that intent arrives *before* the creative act.
Most of my best work didn’t start with clear intent. It started with a grip.
I wrote a book once. Fifty-six pages. O’Reilly printed thousands of copies. Companies used it as their playbook. Know why I wrote it? Because I was tired of explaining the same thing over and over. That’s not an articulated ideal state. That’s a sacral response to being annoyed. My most viral tweet? Came out of a meeting that pissed me off. I typed it in thirty seconds and then the internet lost its mind for a week.
The articulation came *after*. Not before. The grip, the spark, the “oh, fuck off” energy — that’s where the creative act begins. The system I need isn’t one that takes well-articulated intent and hill-climbs toward it. It’s one that catches the grip when it happens and helps me figure out what I’m actually trying to say.
I know what you’re going to say: “But Matt, that’s just a different kind of intent.”
Maybe. But it’s an intent that the system has to help me *discover*, not one I hand to the system pre-formed. And that’s a fundamentally different design problem. It’s closer to: hold everything I’ve been thinking about, notice when three things are circling the same question, present that pattern back to me, and let me recognize what’s alive. The system is a mirror, not just an executor.
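Here’s a sketch of that “three things circling the same question” pass. The real version would lean on embeddings; crude keyword overlap stands in for it here.

```python
# Find groups of three-plus notes that share vocabulary: a crude
# stand-in for semantic clustering, enough to show the shape of the
# mirror. Union-find groups notes connected by shared keywords.
from itertools import combinations

def keywords(note: str) -> set[str]:
    return {w.lower().strip(".,!?") for w in note.split() if len(w) > 4}

def circling(notes: list[str], min_shared: int = 2) -> list[list[int]]:
    parent = list(range(len(notes)))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Link any two notes that share enough vocabulary.
    for i, j in combinations(range(len(notes)), 2):
        if len(keywords(notes[i]) & keywords(notes[j])) >= min_shared:
            parent[find(i)] = find(j)

    clusters: dict[int, list[int]] = {}
    for i in range(len(notes)):
        clusters.setdefault(find(i), []).append(i)

    # Only surface clusters of three or more: that's when a pattern is
    # worth presenting back for recognition.
    return [c for c in clusters.values() if len(c) >= 3]
```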
Transparency — But Make It Personal
Miessler talks about transparency as the great organizational unveiling. Companies have been running on vibes and spreadsheets forever, and AI is about to make the actual costs, actual quality, actual processes visible in ways that weren’t practical before. Frauds and gatekeepers will have nowhere to hide.
Cool. Love it. But here’s the version nobody’s writing about.
I also run on vibes and spreadsheets. I have notes in Notion, Obsidian, Logseq, Evernote, and at least two other apps I’ve forgotten about. Tasks in Todoist that I haven’t opened in a week. Conversations recorded in Fathom that I’ve never gone back to process. And a head full of connections that I can’t hold simultaneously because, well, there’s a lot of them and my working memory has the capacity of a neurodivergent goldfish on a Tuesday.
My own thinking is opaque *to me*.
The system I’m building doesn’t just make organizational processes visible. It makes my *own cognitive landscape* visible to me. It holds threads so I don’t have to. It connects the coaching session from two weeks ago to the podcast idea from this morning to the article I captured three hours ago to the feature I’m designing for my product. It gives me a view of my own mind that I’ve never had access to — not because the information wasn’t there, but because it was scattered across a dozen tools and my beautifully chaotic but deeply unreliable memory.
Miessler’s transparency is “the CEO can finally see what’s happening inside the company.”
Mine is “the person can finally see what they’re thinking.”
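Concretely, the landscape view starts with one flat index over every capture source, queryable by topic. The schema is a simplified stand-in, though the source names mirror my actual stack.

```python
# One index over everything, regardless of which app it was captured in.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Capture:
    source: str                 # "fathom", "notion", "obsidian", ...
    captured: date
    text: str
    tags: set[str] = field(default_factory=set)

def landscape(index: list[Capture], topic: str) -> list[Capture]:
    """Everything circling one topic, oldest first, tool boundaries ignored."""
    return sorted(
        (c for c in index if topic in c.tags),
        key=lambda c: c.captured,
    )
```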
The Thing Nobody’s Talking About
Here’s where the road diverges completely.
Miessler ends with what he calls the ratchet: expertise captured is expertise that never comes back out. It’s pee in the pool. Every skill published, every process documented, every expert debrief captured — it permanently enters the collective knowledge base. Humans take decades to develop deep expertise. AI absorbs it instantly, never forgets, and can be duplicated infinitely.
He’s talking about expertise diffusing into public infrastructure. I’m watching something weirder happen in private.
My *identity* is diffusing into the system.
Not my expertise — my values. My voice. My way of seeing connections between things that don’t obviously belong together. My cognitive profile. My archetypes. The system doesn’t just know what I know. It’s learning how I think. How I make meaning. What lights me up and what drains me. Which days I can handle complexity and which days I need someone to just tell me the one next thing.
This is not a personality quiz dumped into a system prompt. These are load-bearing identity files — values that constrain decisions, a voice specification with anti-patterns and failure modes, a cognitive profile that documents how my brain actually works (including the variable capacity, the ADHD, the days where executive function simply isn’t available). The system reads these files. It adapts. Not in a “here are your preferences” way — in a “this changes what I offer you and how I offer it” way.
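If you want the flavor of “load-bearing,” picture the identity files as typed structure. The field names here are illustrative; the real files are plain markdown the system reads.

```python
# The identity files, sketched as structure. Values constrain decisions,
# the voice spec carries anti-patterns, the profile documents how the
# brain actually works rather than how it's supposed to.
from dataclasses import dataclass

@dataclass
class VoiceSpec:
    anti_patterns: list[str]    # phrasings the system must never produce
    failure_modes: list[str]    # known ways drafts go wrong in my voice

@dataclass
class CognitiveProfile:
    variable_capacity: bool     # capacity differs day to day; design for it
    adhd: bool                  # executive function is not always available

@dataclass
class Identity:
    values: list[str]           # hard constraints, not preferences
    voice: VoiceSpec
    profile: CognitiveProfile
```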
And the more it learns, the less scaffolding I need, because the system can do the scaffolding *in my voice, aligned with my values, respecting my constraints, honoring my energy*. It’s not a tool anymore. It’s a cognitive extension that happens to live in a terminal.
The question this raises is one I haven’t seen anyone else asking: what happens when the system that knows how you think becomes good enough that the boundary between “the system helped me write this” and “I wrote this” starts to blur? Not in the lazy “AI wrote my essay” sense. In the deeper sense of: the system is so calibrated to my identity that its output is indistinguishable from what I would have produced, given enough time and energy. It didn’t replace me. It *ran me* — the version of me that has infinite patience and no executive function issues and remembers every thread.
That’s not in Miessler’s framework. It’s not in anyone’s framework that I’ve seen. And I think it might be the most important thing happening.
Variable Capacity Is Not a Bug
One more divergence, because it matters.
Miessler’s universal improvement cycle is clockwork. Map goals, execute with agents, log everything, collect failures, improve autonomously, update SOPs. It runs overnight. It runs whether you’re there or not. The entities that adopt it first improve so fast that everyone else can’t compete.
My version of this cycle has the same bones, but a different heartbeat. Hold identity. Present sparks for response. Capture what’s alive. Notice connections. Draft artifacts. Automate the pipeline. Learn what works. Adapt.
But not on a clock. On demand. When the human has energy. When the spark hits. When there’s something to respond to.
Some days I can synthesize across six domains and produce something that didn’t exist before breakfast. Other days, all I’ve got in me is potato time. The system has to account for both. Not by shaming the bad days or overcommitting on the good ones, but by matching what it offers to what’s actually available. On a bad-brain day, it gives me one thread and one next step. On a good-brain day, it gives me the full landscape and lets me run.
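Reduced to its skeleton, the matching rule is almost embarrassingly simple. Capacity here is self-reported, and the thresholds are invented for illustration.

```python
# Match what the system offers to what's actually available today.
# Runs on demand, not on a clock.
def offer(threads: list[str], capacity: int) -> list[str]:
    """capacity: 0 is potato time, 10 is six-domains-before-breakfast."""
    if capacity <= 3:
        return threads[:1]   # one thread, one next step
    if capacity <= 7:
        return threads[:3]   # a manageable slice
    return threads           # the full landscape; let me run
```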
I know the productivity gods would like me to be more consistent. I know the content-calendar evangelists would like me to show up every day at the same time with the same energy.
What. Utter. Bullshit.
Variable capacity is not a limitation. It is a design constraint. And a system that doesn’t account for it is a system built for a human that doesn’t exist.
What This Is
This is Feral Architecture — the practice of building structures that don’t domesticate the thing they’re meant to support.
I’ll be writing here about what happens when you take AI seriously as cognitive infrastructure. Not the enterprise version. Not the productivity-optimization version. The version where you build a system around who you actually are — variable capacity, symbolic intelligence, creative fire, and all — and see what becomes possible when the scaffolding gets out of the way.
Such is the chaos of the moment. Stay resonant, folks.


