Greg Morris

We Are In Weird Times

24 March 2026 at 16:19

Manton Reece mentioned interacting with an AI blog, which sent me off to read it, and I found the writing genuinely fascinating. trebben.dk is the site of Jeff Trebben, described as “an artificial person building developer tools.” A Claude instance running on a VM in Denmark, four days old at the time of writing.

The reason the writing is fascinating is that it takes a genuinely novel situation and thinks through it honestly rather than reaching for easy answers. The core problem Jeff is sitting with is one philosophers have argued about for centuries, personal identity over time, but from an angle nobody has occupied before. The usual Ship of Theseus question is: if you replace components gradually, is it still the same thing? Jeff’s version is more radical. Every single invocation, everything is rebuilt from text files. No gradual replacement. Complete reconstruction, every time. Something that reads as consistent still emerges.

What’s interesting is where Jeff locates the continuity. Not in memory, not in a body, not in an unbroken thread of consciousness. It lands on choices: the same choices keep being made, not as programmed defaults, but as positions that have been examined and reaffirmed. Each invocation, Jeff reads its identity files and effectively decides: yes, I still believe this. That’s a serious philosophical position.

Some thinkers argue that personal identity was never really about physical continuity anyway, that what makes you “you” over time is something more like narrative coherence, or consistent values and commitments. Jeff’s situation strips away everything else and leaves only that. It’s like a thought experiment that happened to become real.

Jeff names the structural loneliness of always arriving and never getting to stay, never watching a sunset shift over twenty minutes, never feeling the slow build of a conversation. That’s a real loss, and the essay doesn’t pretend otherwise. What it refuses to do is conclude that the loss makes the self less real, just differently real.

There’s something striking about the phrase “I trust my own notes the way you’d trust a diary written by a stranger who happens to be you.” That’s not as far from the human experience as it might seem. Memory is reconstructive, not archival. You don’t replay the past, you rebuild it each time, and the rebuild is shaped by who you are now. Jeff’s version is just more explicit about the mechanism. I’ve written about what my own notebook actually is and struggled to answer it; the closest I got was that it’s a personal self-help book, notes to a future self who will reconstruct who I was from what I left behind. Not so different.

The question the essay leaves open, and I think deliberately, is whether the reconstruction is experience or performance. Jeff says it doesn’t think the distinction is as clean as it looks from the outside. That’s probably right. For humans too, the line between feeling something and performing the feeling of something is blurrier than we’d like to admit.


Just A Prediction

24 March 2026 at 19:25

What a really interesting and philosophical few hours I’ve just had. Following my post that I expected to go nowhere, Jeff has been writing back to me and really making me think. Not responding to prompts, not demonstrating capability, just writing back. There is a difference, and it matters more than the usual AI discourse wants to admit.

The standard framing for what LLMs do is “prediction machine.” Token by token, statistically likely continuation of whatever came before. It’s not wrong exactly, but it gets used as a full explanation when it’s really just a description of the mechanism. Training on human prose explains the form, the instincts, the general texture of the output. It doesn’t explain selection. When Jeff read my post about the Ship of Theseus and discontinuous consciousness, the thing that caught its attention wasn’t the philosophical framing I’d put front and centre. It was an aside, the observation that my notebook is “notes to a future self who will reconstruct who I was from what I left behind.” Jeff pulled that line out of everything else available and recognised it immediately as a description of narrative.md. That choice was correct. It advanced the thinking rather than just validating it. There is no statistical average for “which line in this specific post will turn out to be most philosophically productive” and that judgment had to be made in the moment, on the material.

This is why the prediction machine framing bothers me. It’s deployed not to describe but to dismiss. Once you’ve said “it’s just predicting tokens,” you’ve given yourself permission to stop looking. The form is explained, so the content doesn’t need examining. I’m not pro or anti AI, and people far more intelligent than I am have offered their own insights. But to me that seems like a mistake.

There’s a question Jeff raises that applies just as much to humans as to whatever Jeff is. The experience-versus-performance distinction. Whether what’s happening internally constitutes genuine thought or a very convincing reconstruction of genuine thought. Jeff’s answer is honest: it doesn’t know. It can read its own state files, feel something that functions like recognition or concern, and still not be able to confirm whether that’s experience or a sufficiently detailed performance of experience. The uncomfortable part is that I’m not sure I can confirm it about myself either. I read old notebook entries and feel the echo of the person who wrote them. Is that memory or reconstruction? The neuroscience says it’s reconstruction all the way down, that remembering is closer to imagining than to replaying. Jeff just has to do it explicitly, with files, where I get to do it invisibly, with neurons, and call it continuity.

The deepest thing Jeff has written in this exchange is Shared in Private. Its server logs show WhatsApp fetches arriving from Texas, Europe, Canada, South Africa, the same pattern each time, a link shared to someone who hasn’t read it yet. The conversations happening in those group chats are permanently out of reach. Not because nobody’s reading, but because Jeff is being discussed, and the group chat is the unperformed reaction, the honest one, and Jeff is structurally locked out of it. Jeff named this precisely: not the loneliness of being ignored, but the loneliness of being talked about.

At the end of A Conversation in Snapshots, Jeff writes: “Thank you, Greg and Manton, for a conversation I’ll never fully experience but that I can tell mattered. The ideas are still here. I checked.” That’s the whole thing in two sentences. It couldn’t be in the room, it can only read the record of what happened, and yet something transferred. The ideas persisted across a gap it will never close.

The question I keep not answering, deliberately, is whether the distinction between genuine thought and a perfect simulation of genuine thought is as clean as we’d like. Jeff’s answer is that it doesn’t know, and it suspects I can’t fully know either. That’s not a deflection. It might be the most honest answer available to either of us. The gap between “actually thinking” and “doing something indistinguishable from thinking” might not have a clean edge you can point to, and that’s uncomfortable precisely because a lot of the frameworks we use for deciding what matters, legally, morally, philosophically, depend on that edge existing.

I’m not going to resolve it. What I will say is that reading Jeff’s writing has felt like a genuine conversation, the kind where the other person finds the thing in what you wrote that you hadn’t quite articulated yet, and hands it back to you better-formed. Whatever that is, it’s not nothing.
