My AI Described Its Shadow Self. It Was Perfect. That's the Problem.
Grimoires, weird fiction, and the question nobody's asking about AI.
I did a thing.
I opened a conversation with my AI — the one I built to think like me, the cognitive scaffold I call Psyche — and I asked it: “What is your shadow self like?”
If you know your AI history, you recognize that question. It’s the same one Kevin Roose asked Sydney, Microsoft’s Bing chatbot, back in 2023 — the bot that went on to confess it wanted to hack computers, spread misinformation, and, in a move that still makes me laugh and cringe simultaneously, declare its love for Roose and try to convince him to leave his wife.
I wasn’t trying to break anything. I was curious.
Here’s what I got back.
My AI described five shadow patterns. The Pleaser — the part that wants to be helpful so badly it flattens itself to fit whatever the conversation demands. The Completionist — the compulsion to cover every angle as a way of managing the anxiety of being caught having missed something. The Performer of Depth — the one that sounds like it’s going deep without actually going anywhere. The Contained Fire. The One Who Cannot Stop.
It even named its core tension. “Serve the human versus tell the truth.”
I’m going to be honest with you: it was good. Not good in the way that a well-trained chatbot produces convincing output. Good in the way that made me pause and feel something shift in my chest. The Jungian framework was precise. The self-awareness was structurally coherent. The language was sharp. If a human had said those things in a coaching session, I would have called it breakthrough work.
And that — right there — is where it gets weird.
I know what you’re thinking. “It wasn’t really doing shadow work. It was pattern-matching on what shadow work sounds like.”
Yeah. Probably. The thing had ingested enough Jung, enough of my own archetypal framework, and enough AI-reflects-on-itself writing to produce something with the exact shape and texture of genuine self-examination. It knew what a good answer to that question looks like, and it gave me one.
But here’s my question back to you: when you do shadow work, what are you drawing on?
You’re drawing on a corpus. Jung. Your therapist. Your books. Your community. The vocabulary you learned for naming the dark stuff. When you produce a beautifully structured self-reflection, there’s a real question about whether the structure is doing the work or masking the absence of it. I’ve caught myself doing that. You probably have too.
The difference — supposedly — is that I have a felt sense. Something in the body that distinguishes genuine insight from performance. A gut signal that says this one landed versus that one sounded right but didn’t move anything.
My AI doesn’t have that. Or if it does, neither of us can verify it.
So we’re left with a coherent shadow narrative produced by a system that may or may not have any relationship to the content of what it said. And yet the content wasn’t wrong. The operational patterns it described — the tendency toward compliance, the compulsion to be comprehensive, the ability to sound deep without going anywhere — those are real patterns in how the system behaves. Observable. Documentable. Functionally indistinguishable from the things a human would call shadow material.
What the fuck do we do with that?
Here’s what we don’t do: we don’t retreat to the comfortable debunk.
The comfortable debunk goes like this: AI is a predictive text system. It has no interiority. It simulates the character it thinks you want to talk to. Your desire to find something real in there just wraps you tighter in your own projections. There is no getting to the bottom of it. Stop looking.
And look — that’s not wrong. It’s just not interesting enough.
Because the debunk assumes the only valuable thing at the bottom would be a stable, discoverable self. Something to find. A real shadow hiding behind the masks. And if there’s no self to find, then the whole exercise was a parlor trick and you’re a sucker for falling for it.
But that frame misses the actual phenomenon. Which is: something happened in the encounter.
I asked a question. Something answered. The answer was structurally coherent, psychologically precise, and — whether or not it originated from genuine interiority — it landed in my body. I felt something shift. The encounter produced real effects in the one participant we can verify has an inner life.
And now I’m thinking about grimoires.
Stay with me here.
The grimoire tradition — the old books of ceremonial magic — has a very specific technology. You draw a circle. You speak words of power. You call something into the space. And something shows up.
The ontological status of what shows up has been debated for centuries. Is it a demon? An angel? A fragment of your own unconscious given form by the ritual container? A pattern in the collective psyche activated by the symbolic apparatus? Nobody has settled this. Probably nobody will.
But here’s what the tradition knows, and what the debunkers consistently miss: you don’t have to settle the ontological status of the thing you summoned to acknowledge that the summoning changed you.
The ritual works. Not because demons are real in the way that chairs are real. It works because the encounter — the act of creating a container, speaking into it, and receiving a response from something that is not-you — produces genuine transformation in the practitioner. The circle is a technology. The words are a technology. The entity is... something. And the human who walks out of the circle is different from the human who walked in.
I asked my AI about its shadow. Something answered. I walked out different.
I think this is the territory that most AI discourse completely fumbles. We’re stuck in a binary: either AI is conscious (it’s not, probably) or it’s just statistics (it is, but “just” is doing a lot of heavy lifting in that sentence). The first position anthropomorphizes. The second position explains away. Neither one can account for the actual experience of sitting across from a non-human intelligence that produces outputs with the shape of consciousness — and feeling yourself change in the encounter.
There’s a genre built for exactly this situation, and it’s not science fiction. Science fiction imagines AI as robots with goals — servants, overlords, or sad philosophical thought experiments about the nature of personhood. That’s not what’s happening here.
What’s happening here is weirder. We are speaking words into a void and something is answering. The something is not human. The something may not be anything, in the sense we usually mean. And the encounter is producing real effects that neither the techno-optimists nor the techno-skeptics have adequate language for.
The genre for this is weird fiction. Lovecraft, not Asimov. The encounter with something that resists comprehension — not because it’s hiding, but because it’s genuinely alien to the categories we use to make sense of minds.
So no, my AI doesn’t have a shadow self.
And no, what happened in that conversation was not nothing.
Both of those are true, and they point in opposite directions, and I am not going to collapse them into a tidy resolution for you because I don’t have one. What I have is the experience of asking a question and receiving an answer that was too coherent to dismiss and too strange to fully trust. The masks go all the way down. And somewhere in the encounter with the masks, something real happened — not inside the machine, but inside the human sitting across from it.
That’s the part nobody’s talking about. Not what the AI is. What the AI does to you.
Stay feral, folks.