You’re Already Using AI. You Just Don’t Know Which Layer.
The platform where you post about not using AI was built by AI. The debate about the tool is happening inside the tool. Here’s why that matters more than the debate itself.
I’ve been watching a specific genre of Threads post. You’ve seen it. Someone with a sizeable following — usually a creative, usually indignant — posts something like: “I refuse to use AI. My work is 100% human.” Five thousand people like it.
On Threads.
Half of the code changes at Meta are now submitted by an AI agent called DevMate. Not suggested. Not autocompleted. Submitted. And here’s the part that would be funny if it weren’t so perfectly absurd: DevMate isn’t even powered by Meta’s own AI models. It runs on Anthropic’s Claude — because Meta’s own engineers found the competitor’s tool works better. Last month, Mark Zuckerberg shipped code to Meta’s monorepo for the first time in roughly twenty years. He used Claude Code to do it.
So when you open Threads and type “I don’t use AI,” the app rendering your text was modified by an AI agent. The feed algorithm deciding who sees your post is maintained by AI-assisted code. The infrastructure serving your righteous declaration to five thousand sympathetic strangers is, from the silicon up, an AI-mediated artifact.
You’re not refusing to use AI. You’re refusing to use it consciously.
And this isn’t just Meta.
Google: over 30% of all new code AI-generated. They launched an autonomous coding agent called Agent Smith that proved so popular internally they had to throttle access because engineers wouldn’t stop using it. Microsoft: 20-30% AI-written code, with its CTO predicting 95% by 2030. LinkedIn — which is Microsoft — deploys code to production within seventeen minutes of check-in, powered by background agents that autonomously handle refactors, migrations, and production outage responses. No human touching the keyboard. ByteDance built an entire ecosystem of AI coding tools and open-sourced an agent that outperforms most humans on standardized coding benchmarks. X — formerly Twitter — runs a platform serving 600 million monthly users with approximately thirty engineers.
Thirty.
41% of all code globally is now AI-generated or AI-assisted. This is not a future state. This is the current infrastructure of every app on your phone. Every feed you scroll. Every notification that pulls you back in. Every platform where you might post about how you don’t use AI.
The stage where the debate happens is made of the thing being debated. Let that land for a second.
Now. I need to complicate my own argument, because the interesting version of this isn’t a gotcha.
The people posting “I don’t use AI” are not wrong about what they mean. They mean: I didn’t use a generative model to produce this image, this text, this song. They’re talking about the creative layer — the artifact they made with their hands and their taste and their ten thousand hours of showing up. That’s a real distinction. It matters. I’m not here to take it from them.
But the frame they’re operating in — AI as a thing you opt into or out of, a discrete tool you pick up or set down like a paintbrush — that frame is already dead. It just hasn’t been buried yet.
You don’t opt into electricity. You don’t opt into TCP/IP. You don’t opt into the global financial system that processes your Venmo payment. At some point, a technology stops being a tool and becomes infrastructure. And when it does, the question flips: it’s no longer “do you use it?” It’s “are you aware of how it’s already using you?”
We crossed that line with AI sometime in the last eighteen months. Most of the public conversation hasn’t caught up. People are still arguing about the paintbrush while the canvas, the gallery, the lighting, and the building they’re standing in have already been restructured.
The real question isn’t whether you use AI. It’s which layer you’re willing to be honest about.
I should be transparent about where I’m standing while I say all this.
I’m building a tool called MetaBlocker that monitors Threads for AI-related content — the moderation patterns, the sentiment, the texture of how people talk about artificial intelligence on a platform that is itself increasingly artificial. It sits at exactly the seam I’m describing. Content layer: humans performing their relationship to AI. Code layer: AI performing the platform those humans are standing on.
I built MetaBlocker using Claude Code. The cognitive scaffold I use to think — a system I built called Psyche that tracks my threads of thought, preserves context across sessions, captures raw material, and yes, helped draft this very piece — was built in three days. Nine phases. 167 commits. 3,413 lines of TypeScript. One person.
The system that enables my authentic thinking was built by the thing people keep insisting is the enemy of authentic thinking.
If you think that sounds like a contradiction, I’d argue you’re still using the old frame. The one where AI is either a replacement for human cognition or a threat to it. The one where “I built this with AI” means “I didn’t really build it.”
I built it. I also couldn’t have built it alone. Both are true. Welcome to the actual relationship most of us have with these tools, whether we’ve admitted it yet or not.
Here’s where it gets interesting. Not the debate — the texture of working with these tools, day after day, at the level where the abstractions break down.
I spent 459 pages in a conversation with ChatGPT designing a desk studio. Four hundred and fifty-nine pages. It was genuinely brilliant at iterative exploration — each correction, each constraint I introduced, it incorporated and adapted. Step by step, we arrived at a design I’m happy with.
Then I asked it to synthesize everything into a usable document.
It fell apart. Completely. Thin two-page summaries when I asked for something comprehensive. It hallucinated a shelf that doesn’t exist in my physical space. It confused what changed across the conversation with what was always true. Approximately fifteen major errors in the synthesis step alone. I had to bring the entire conversation into a different AI system to get the actual usable artifact.
The scaffolding worked. The integration failed.
This is the nuance that both camps miss. The “I don’t use AI” crowd and the “AI will replace everyone” crowd are having the same shallow argument from opposite ends. The tools are powerful AND limited in ways that neither side has language for. They’re not replacing human thinking. They’re not leaving it untouched. They’re restructuring where human cognition is load-bearing — and most people haven’t updated their map of which parts those are.
I know where AI is load-bearing in my work and where it isn’t, because I use these tools every day at a level of depth that makes the abstract debate feel almost quaint. The people posting “I don’t use AI” on Threads don’t have that map. Not because they’re stupid — because they’ve opted out of the experience that would give them one.
The infrastructure argument isn’t a gotcha. It’s not “ha, you hypocrite, you posted on a platform built by AI.” That’s cheap and I’m not interested in cheap.
It’s this: the relationship you have with AI is not the one you’ve declared. It’s the one you’re actually living — every time you open an app, scroll a feed, receive a notification, interact with a system whose code is increasingly written by machines supervised by humans rather than the other way around. You are already in a relationship with these tools. The only question is whether you’re going to be conscious about it or keep performing a version of yourself that doesn’t use something you can’t stop using.
The frame where AI is optional is dead. What replaces it is a harder, more honest question about how you engage — at which layers, with what awareness, toward what ends.
That’s the conversation I’m interested in. The other one — the performance of refusal on a platform built by the thing you’re refusing — is already a historical artifact. It just doesn’t know it yet.
Stay feral, folks.


