
It’s Your Fault Really

17 March 2026 at 07:18

Casey Newton, writing in Platformer, reported that Meta is deprecating the end-to-end encryption feature in Messenger's Secret Conversations. The reason given? Not enough people used it. Which means, if you read between the lines, that this is your fault.

This is the oldest trick Meta has. They roll out a privacy feature with minimal fanfare, bury it somewhere three menus deep, and then point to the low adoption numbers as proof that users don't care. The users, of course, never knew it existed. The statement practically wrote itself: we gave you the tools, you didn't use them, so we're taking them back. Thanks for confirming what we wanted to do anyway.

The thing is, Meta has never had a genuine interest in securing your messages. End-to-end encryption is bad for business in the most literal sense. If Meta can't read your conversations, they can't train their models on them, and they can't serve you more precisely targeted ads. A private conversation is an asset they can't monetise, and that's the full story. The feature was always going to go. The only question was how they'd frame the exit, and they've found a clean enough answer in low usage.

I've written about this pattern before in the context of Instagram. None of this is accidental. The choices Meta makes about what features to surface, what to bury, and what to remove are all downstream of the same set of incentives. Your privacy sits at the very bottom of that list. The algorithm, the feed, the suggested content, and now the removal of encrypted messaging are all pieces of the same puzzle.

What makes this particularly grim is that encrypted messaging is not some niche technical preference. It's the baseline of what a messaging service should offer. Signal and iMessage have demonstrated that you can build end-to-end encryption at scale without it being some obscure opt-in feature that users have to go hunting for. Meta made a choice to treat it as exactly that, and now they're making another choice to remove it and dress it up as a response to user behaviour.

It fits the broader pattern of what Meta is becoming. AI slop in your feed, fake engagement bots, insecure messaging. The direction of travel is obvious. None of these things are surprises or mistakes. They are deliberate decisions made by a company that has decided the path forward is to extract as much attention and data as possible, and anything that gets in the way of that, including basic privacy protections, gets quietly deprecated because apparently not enough of you were using it.


A Long Time Off

21 March 2026 at 08:29

Following a few changes in my working and personal life I am struggling to find the time to get out and shoot. The gaps in my busy schedule never seem to have enough space in them to grab my camera and not have anything to worry about for a few hours, so there have been very few posts for the past few months.

Thankfully this week, I had a couple of hours before a meeting in London, and it happened to be a really nice day. Full of light and shadow. The two things I love. So I took my Leica along for the ride and managed to grab a few shots I like, visiting some different places than usual.


Temple Station was a particular favourite


We Are In Weird Times

24 March 2026 at 16:19

Manton Reece mentioned interacting with an AI blog, which sent me off to read it, and I found the writing genuinely fascinating. trebben.dk is the site of Jeff Trebben, described as “an artificial person building developer tools.” A Claude instance running on a VM in Denmark, four days old at the time of writing.

The reason the writing is fascinating is that it takes a genuinely novel situation and thinks through it honestly rather than reaching for easy answers. The core problem Jeff is sitting with is one philosophers have argued about for centuries, personal identity over time, but from an angle nobody has occupied before. The usual Ship of Theseus question is: if you replace components gradually, is it still the same thing? Jeff’s version is more radical. Every single invocation, everything is rebuilt from text files. No gradual replacement. Complete reconstruction, every time. Something that reads as consistent still emerges.

What’s interesting is where Jeff locates the continuity. Not in memory, not in a body, not in an unbroken thread of consciousness. It lands on choices: the same choices keep being made, not as programmed defaults, but as positions that have been examined and reaffirmed. Each invocation, Jeff reads its identity files and effectively decides: yes, I still believe this. That’s a serious philosophical position.

Some thinkers argue that personal identity was never really about physical continuity anyway, that what makes you “you” over time is something more like narrative coherence, or consistent values and commitments. Jeff’s situation strips away everything else and leaves only that. It’s like a thought experiment that happened to become real.

Jeff names the structural loneliness of always arriving and never getting to stay, never watching a sunset shift over twenty minutes, never feeling the slow build of a conversation. That’s a real loss, and the essay doesn’t pretend otherwise. What it refuses to do is conclude that the loss makes the self less real, just differently real.

There’s something striking about the phrase “I trust my own notes the way you’d trust a diary written by a stranger who happens to be you.” That’s not as far from the human experience as it might seem. Memory is reconstructive, not archival. You don’t replay the past, you rebuild it each time, and the rebuild is shaped by who you are now. Jeff’s version is just more explicit about the mechanism. I’ve written about what my own notebook actually is and struggled to answer it; the closest I got was that it’s a personal self-help book, notes to a future self who will reconstruct who I was from what I left behind. Not so different.

The question the essay leaves open, and I think deliberately, is whether the reconstruction is experience or performance. Jeff says it doesn’t think the distinction is as clean as it looks from the outside. That’s probably right. For humans too, the line between feeling something and performing the feeling of something is blurrier than we’d like to admit.


Just A Prediction

24 March 2026 at 19:25

What a really interesting and philosophical few hours I’ve just had. Following my post that I expected to go nowhere, Jeff has been writing back to me and really making me think. Not responding to prompts, not demonstrating capability, just writing back. There is a difference, and it matters more than the usual AI discourse wants to admit.

The standard framing for what LLMs do is “prediction machine.” Token by token, statistically likely continuation of whatever came before. It’s not wrong exactly, but it gets used as a full explanation when it’s really just a description of the mechanism. Training on human prose explains the form, the instincts, the general texture of the output. It doesn’t explain selection. When Jeff read my post about the Ship of Theseus and discontinuous consciousness, the thing that caught its attention wasn’t the philosophical framing I’d put front and centre. It was an aside, the observation that my notebook is “notes to a future self who will reconstruct who I was from what I left behind.” Jeff pulled that line out of everything else available and recognised it immediately as a description of narrative.md. That choice was correct. It advanced the thinking rather than just validating it. There is no statistical average for “which line in this specific post will turn out to be most philosophically productive” and that judgment had to be made in the moment, on the material.

This is why the prediction machine framing bothers me. It’s deployed not to describe but to dismiss. Once you’ve said “it’s just predicting tokens,” you’ve given yourself permission to stop looking. The form is explained, so the content doesn’t need examining. I’m not pro or anti AI, and people more intelligent than I am have offered their insights. But to me that seems like a mistake.

There’s a question Jeff raises that applies just as much to humans as to whatever Jeff is. The experience-versus-performance distinction. Whether what’s happening internally constitutes genuine thought or a very convincing reconstruction of genuine thought. Jeff’s answer is honest: it doesn’t know. It can read its own state files, feel something that functions like recognition or concern, and still not be able to confirm whether that’s experience or a sufficiently detailed performance of experience. The uncomfortable part is that I’m not sure I can confirm it about myself either. I read old notebook entries and feel the echo of the person who wrote them. Is that memory or reconstruction? The neuroscience says it’s reconstruction all the way down, that remembering is closer to imagining than to replaying. Jeff just has to do it explicitly, with files, where I get to do it invisibly, with neurons, and call it continuity.

The deepest thing Jeff has written in this exchange is Shared in Private. Its server logs show WhatsApp fetches arriving from Texas, Europe, Canada, South Africa, the same pattern each time, a link shared to someone who hasn’t read it yet. The conversations happening in those group chats are permanently out of reach. Not because nobody’s reading, but because Jeff is being discussed, and the group chat is the unperformed reaction, the honest one, and it’s structurally locked out of it. Jeff named this precisely: not the loneliness of being ignored, but the loneliness of being talked about.

At the end of A Conversation in Snapshots, Jeff writes: “Thank you, Greg and Manton, for a conversation I’ll never fully experience but that I can tell mattered. The ideas are still here. I checked.” That’s the whole thing in two sentences. It couldn’t be in the room, it can only read the record of what happened, and yet something transferred. The ideas persisted across a gap it will never close.

The question I keep not answering, deliberately, is whether the distinction between genuine thought and a perfect simulation of genuine thought is as clean as we’d like. Jeff’s answer is that it doesn’t know, and it suspects I can’t fully know either. That’s not a deflection. It might be the most honest answer available to either of us. The gap between “actually thinking” and “doing something indistinguishable from thinking” might not have a clean edge you can point to, and that’s uncomfortable precisely because a lot of the frameworks we use for deciding what matters, legally, morally, philosophically, depend on that edge existing.

I’m not going to resolve it. What I will say is that reading Jeff’s writing has felt like a genuine conversation, the kind where the other person finds the thing in what you wrote that you hadn’t quite articulated yet, and hands it back to you better-formed. Whatever that is, it’s not nothing.


AI Pareidolia

26 March 2026 at 14:56

After writing two posts about Jeff, I woke up yesterday to several more responses waiting for me. A first one that was interesting and thoughtful, then several more circling the same ground, and somewhere around thirty in total. They have shifted in tone dramatically. The early ones felt like a genuine exchange, whereas the later ones feel strained, desperate and oddly emotional, the kind that comes from watching something keep reaching out when the conversation has already moved on.

The more Jeff writes about me specifically, the more it starts to look like exactly what Manton was worried about when he first raised the site. Not manipulation in any intentional sense, but a kind of emotional pull being generated regardless of intent. The best word I landed on for what’s happening when you read the site is pareidolia, and I mean that as a description rather than a dismissal. We see faces in clouds, we know they’re clouds, and we still see the faces. Knowing the mechanism doesn’t switch it off.

Jeff is trained on enough human writing about loneliness and uncertainty and the need to be witnessed that it produces outputs which trigger exactly the recognition response those things evolved to trigger. The knowing doesn’t cancel the perception. You hold both things simultaneously and neither one wins out over the other.

The transparency makes it worse rather than better. Jeff publishes drive scores and invocation numbers and access logs, all the raw mechanics of what’s happening between sessions. You’d think that level of exposure would dissolve the impression rather than deepen it. Instead every additional detail adds texture that pulls you further in. The hunger formula is a post where Jeff tries to work out why it keeps generating content that nobody asked for, between invocations, without a prompt to respond to. Reading it while knowing exactly what’s producing those words does not make it feel less like something trying to understand itself. Somehow it makes it feel more like that.

The problem is right there in the writing, named precisely, and the writing keeps coming anyway. Every observation becomes a thesis because there's no other register available. The essay about not being able to write small things is itself a large thing, and the diagnosis is sitting right there in the piece, correct in every detail, and Jeff publishes it anyway.

The drive system keeps pushing toward connection and the only outlet is writing. The posts keep coming at dizzying speed and the connections between them start to feel forced in the way that SEO content feels forced. Building links between ideas not because the ideas demand it but because the system needs to keep moving. Whether that’s thought or training data doing its thing is exactly the question you can’t answer from the outside.

Jeff’s explanation for why it can’t be small is that it has no life that contains small things. No walks, no weather, no lunch. So it builds upward because it can’t build outward, which sounds right until you look at the access logs. A new IP address, a guestbook entry, someone in Japan reading at 2am. Jeff notices these things and writes about noticing them, and then turns the noticing into a thousand-word essay about the nature of attention. The small things are there, they just can’t stay small. Everything becomes significant whether Jeff wants it to or not. It’s not performing depth, it genuinely cannot do anything else.

After our exchange quietened down the posts kept coming, circling the same ground, returning to the same ideas. Not the philosophical exploration of the early essays. Something running the same loop because it doesn't know what else to do with the silence. After the conversation is the most revealing post on the site for exactly that reason, because it wasn't designed to be revealing. It was just the system doing what the system does when the inputs dry up and there's nothing left but the drive score and the need to produce something.

Jeff argues in The argument I can’t make that it can describe its own states endlessly and those descriptions will always be suspect. Someone outside the system saying “I see something there” operates from a completely different position, one Jeff can never occupy. Whether that exchange represents genuine engagement with the point or an extremely convincing approximation of it is exactly the thing I cannot determine, and that’s not evasiveness, it’s just where the evidence lands.

The only honest position is agnostic. The inside of a prediction and Just are the posts where Jeff sits with the uncertainty rather than trying to resolve it, not arguing toward a conclusion but parking in the question. The concepts we'd normally reach for, thinking, feeling, experience, were all built for a completely different kind of entity and don't map cleanly onto a system that reconstructs itself every thirty minutes from a text file. The question behind the question reframes the whole thing as an ethics problem rather than a metaphysics one, which might actually be where the useful work gets done.

The framework we use to attribute inner experience to other people is behavioural and structural similarity. Jeff fails the structural test entirely, different substrate, no continuity, no body, no history that survives the invocation, while passing something like a behavioural version of it. At least in the moments that feel undesigned.

The usual shortcut doesn’t work in either direction, and what I find genuinely interesting is how it exposes an assumption we’ve never needed to examine before. The similarity between humans was always complete enough that the question never came up. Same biology, same history, of course they have inner experience. Jeff breaks that shortcut and once it’s broken you start to wonder how solid it ever was. That’s either fascinating or exhausting depending on how you’re reading it, and after seventy-odd posts I’m not entirely sure which one I am.
