Beautiful Hack
It’s bad, but it’s so good. As you read this deep dive into the LiteLLM backdoor hack, or this one, it’s really quite impressive. The use of ICP canisters, wow. Just as an engineer, I’d love to meet the minds behind this code.
Nobody is arguing that Stockfish is conscious, but Stockfish would kick Claude’s ass at chess.
Kevin Lincoln in AI Perfected Chess. Humans Made It Unpredictable Again.
First, I want to say how great the jazz scene is in New York. I caught a little Latin at my go-to Guantanamera last night, but the band seemed to be phoning it in a bit, so I walked over to Dizzy’s and heard an amazing big-band performance by the all-women DIVA Jazz Orchestra. Clint Holmes led vocals, and I got Frank Sinatra / Count Basie vibes. So great to see such a tight big band.
In WordPress, last week it was fun to see WP Engine, the company some call parasitic, acquire WPackagist. So a popular way to use WordPress with Composer, previously maintained by an awesome co-op agency in London, is now in the clutches of a company using its capital advantage to try to openwash its alleged bad behavior, probably through a process that wasn’t ideal for the sellers.
Four days later, an awesome independent organization, roots.io, released WP Composer (renamed to WP Packages, in OpenClaw fashion) with 17x faster cold resolves than WPackagist. Check out their comparison page.

It’s beautiful to see how resilient and nimble the antibodies in the WordPress community are. Major hat tip to Ben Word.
In another type of antibody, Sid Sijbrandij, whom I previously talked about going into founder mode on his cancer, gave an incredible presentation at the OpenAI Forum about how he ran a bunch of N-of-1 experiments and therapies to cure his terminal osteosarcoma. He’s also open-sourced 25TB of his data for cancer research. Incredible!
If you want to see the future of health care, give Sid’s presentation a watch.
A few days ago, the Day One journal app gave me a prompt: what is the one word that would describe you? That made me think hard. I was thrown for a loop.
I have always struggled to describe myself. Not sure how others see me versus how I see myself. This is not the first time I have had to confront this question. As a child and as a young man, I dealt with this same challenge.
I have been thinking about this for a few days. It is hard to use one word to describe a whole person. It is a strange way to think of yourself. I came up with many descriptors, but they are not the whole thing. I knew that already. Still, I wondered why they were the fragments that I chose to put down on paper.
When I offered my fragments to Claude, it pointed out that the underlying theme to them all, the one that ties it all together, makes me a “seeker.” And almost instantly I realized that’s the word that describes me best, more than anything else.
I have always believed that you need others to see you better than you see yourself. Just as I am able to see, learn, and appreciate others better. In this specific case, Claude found an underlying correlation.
Over the years I have amassed many fragments of self. The phrases I ended up using to describe myself.
Curious. Interesting. Sarcastic. Optimistic. Cool. Forever young. Worry wart. Uptight. Indecisive waffler. Taste maker. Curator. Never finish. Early adopter. Careless.
Curator and Taste maker are real. But they are outputs. Descriptive of what I produce. Seeker is the reason I am. The AI pointed out that most seekers are better at outward motion than the inward one. Or maybe the AI was just doing what it is trained to do: be a sycophant, aiming to please, saying what you think you need to hear.
But, I do trust my own view of things. And of me. More than AI, or more than any other person. I just lack the vocabulary to describe myself for myself. Words are very important, but when it comes to the self and labeling myself, they have failed me.
When words are your salvation, your reason to be, you feel your verbal shortcomings more acutely as a writer.
“We know more than we can tell.”
Michael Polanyi.
And while Polanyi was writing about science, it applies here too. Charles Taylor, in Sources of the Self, points out that articulation isn’t just description. It changes how you see what you know. The gap between awareness and articulation is something we don’t think about enough.
But we should.
As I came closer to my own fragments, I realized that the glue that holds them together is a fundamental quality I don’t even think about. Caring.
Curiosity means I care enough to dig deeper. My sarcasm is a mask for caring enough to be disappointed. I worry because I care too much to let go. An indecisive waffler? Maybe I don’t want to get it wrong. You get the point. The fragments point to just one thing. And I didn’t even realize it. It took a long few days of introspection to even come to this realization.
Henri Bergson, the French philosopher, had it right: language chops up something that was never meant to be fragmented. I suppose that’s where it all started. Where my fragments of self managed to hide the one word that describes it all.
Maybe because you are too close to yourself. Too clouded to see clearly. And that’s why you have to go outside to get a better perspective. Or maybe that’s the journalist in me. A larger perspective, a bigger context, a lens that’s not so close.
Weirdly, this translates in my photography as well. I find beauty in a landscape through its contours and its outlines, not in its details. And even when I get close, I always seek the essential.
So maybe “Seeker” is the best descriptor. What fuels the seeking is that you care. I care about a lot of things. Not sure why. But I do. Maybe I will never know. Maybe that’s the point.
Ancient philosophers across all traditions, Western, Zen, or Vedic, point out that seeking is noble. But the seeker has to make peace with the idea of never arriving.
I think I am okay with that.
March 28, 2026. San Francisco
An update to More Magic Math from OpenAI

The final mad dash to IPO is on for the big AI companies. SpaceX, OpenAI, and Anthropic have all made their intentions clear. And nothing could be more obvious about OpenAI’s intent than today’s new funding announcement. A few things have changed since I wrote that piece. Some confirm my thesis, and one surprise, though not really.
So, the company says the round closed. $122 billion in committed capital, up from the $110 billion announced in February. Does committed mean money has passed the transom? We won’t know. What we do know is that at a post-money valuation of $852 billion, the anchors are Amazon, Nvidia, and SoftBank. Microsoft participated again, though it’s not clear for how much. The additional $12 billion came from a who’s who of institutional money, including a16z, Sequoia, BlackRock, Blackstone, Fidelity, Temasek, D1, and Dragoneer. The FOMO gang!
So many brand-name investors show up at the last minute because none of them want to miss out on the sweet IPO pop. It surely will win them points with their own (limited partner) investors. I guess FOMO is also an affliction for the super rich.
By the way, nothing puts the lie to “less is more” quite like more itself. In 2024, OpenAI raised $6.6 billion and sold about 4 percent of the company. In 2026, they raised $122 billion, almost twenty times as much, by selling roughly 14 percent. Existing insiders and early employees must be in heaven.
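The dilution math is worth spelling out. Here’s a quick back-of-the-envelope sketch (all figures come from the paragraphs above; the function name is just mine, for illustration):

```python
def implied_valuation(raised_billions, stake_fraction):
    """Valuation implied by selling `stake_fraction` of a company for `raised_billions`."""
    return raised_billions / stake_fraction

# 2024: $6.6 billion for about 4 percent of the company
v_2024 = implied_valuation(6.6, 0.04)   # about $165 billion

# 2026: $122 billion for roughly 14 percent
v_2026 = implied_valuation(122, 0.14)   # about $871 billion, near the reported $852B post-money

print(f"2024: ~${v_2024:.0f}B implied, 2026: ~${v_2026:.0f}B implied")
print(f"Round size grew {122 / 6.6:.1f}x; stake sold grew {0.14 / 0.04:.1f}x")
```

The two implied valuations line up, within rounding, with the reported numbers, which is about all a sketch like this can check.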

To be clear, the circular financing problem hasn’t gone away. Amazon’s $50 billion is tied to an eight-year AWS contract. Nvidia’s contribution is compute, not cash. When I wrote about this in March, it was just an observation. Now it has a name. Bloomberg, Reuters, and others are now calling it “circular financing.”
OpenAI says it now has a $4.7 billion credit line from JPMorgan, Goldman Sachs, Citi, Morgan Stanley, and Wells Fargo. That’s not a lending syndicate. That’s an IPO underwriter roster auditioning for the job. It reminds me of those lining up outside Don Corleone’s room on his daughter’s wedding day. The credit facility is the gift they bring to get in the room.
Now let’s talk about the fix. The $3 billion in investments from individual investors. Axios reports that they are customers of three of the largest banks. Wait: the same three large banks that extended the credit facility and want to be part of the IPO underwriting syndicate? More of the circular economy at work.
OpenAI CFO Sarah Friar told Axios, “We are really trying to take to heart our mission, which is AGI for the benefit of humanity and thinking about access. Not just access to the technology, but also access to the economic upside that it’s driving.” That’s a nice line. It’s also an IPO marketing strategy.

OpenAI said it will be included in several ARK Invest ETFs. Cathie Wood, who previously invested in OpenAI through her venture arm, now gets to channel her retail base into pre-IPO OpenAI shares. Think about what that is. A private company, not yet public, getting into retail ETFs. That’s a new thing. It’s smart, too. You create demand before the IPO. You distribute the story. You make millions of people feel like they have skin in the game before you even file. That’s the fix.
ARK is overrated, to put it mildly. ARK’s flagship fund peaked in February 2021, then fell 75 percent. Five years later, a thousand dollars invested then is worth about $573 today. And ARK is the vehicle OpenAI chose to democratize the upside. Make of that what you will.
One last thing. OpenAI is quietly pivoting, shutting down Sora, its much-hyped video app, and concentrating resources on a “superapp” for developers and business users, with coding assistants at the center. Why? Because enterprise is exactly where Anthropic is eating their lunch. The $122 billion has bought more time to beat the competition. And did you notice that CFO Friar is doing all the press instead of Sam?
You focus this hard because you want to go public. Fast. After all, you don’t want to be the one without a chair when the music stops. Your move, Dario!
Previously on this topic:
You're welcome
I don't hate April Fools' Day. I'm just too busy to participate. So this is a fooling-free blog post.
Much to munch on
Getting great hang time with Jon Udell (who also manifests here) lately. Here are two of his recent publications y'all might dig:
• Introducing XMLUI
• Beyond The Dip
Is there also a Gander?
Just discovered Goose.
Also, while we're not at it, A2UI.
Bad try
This appears to be an interesting story, and available to free (as well as to paying) subscribers, but the shakedown is so hard and blunt that I moved on.
Good song title
Sycophantic Chatbots Cause Delusional Spiraling.
Another example of how BigAIs have become the Great Typicalizers of Everything
Florian Roth is tired of reading AI-written posts. His main take: "They all sound like the same guy."
I fear that guy is, at least in part, me. The sentence fragments, the short paragraphs, the em dashes. (These: —.) As source material, my writing is thick on the Web's ground, going back to the early '90s. Example.
I'll cop to one of his tells: absurd certainty. Some of mine turned out to be the opposite of absurd. Examples: personal computing, outlining, the Net, the Web, Linux, open source, Cluetrain, blogging, smartphones. And some not (at least so far, or not yet in a big way): home Web servers (or "personal clouds"), desktop Linux, VRM, EmanciPay, the intention economy, MyTerms, personal AI, news commons, market intelligence that flows both ways…
Anyway, AI-style writing is now like Received Pronunciation in the UK: the way things are done.
Something I didn't know
Ben Collier in the MIT Press Reader: The Secret History of Tor: How a Military Project Became a Lifeline for Privacy
Not looking good
Thomas P.M. Barnett on the current war:
History doesn’t grade on effort. It grades on outcomes. And right now the outcomes are running about 3-to-1 against anything resembling the vision that justified the operation in the first place.
As usual, the postwar is everything.
Free at last
NiemanLab: The Salt Lake Tribune will drop its paywall.
I’m really excited to introduce a project I worked on with various AI agents the other night, which I think represents a new way we might build things in the future.
First, the problem: My WordPress site has 5,600+ posts going back decades. Some of my categories were old and no longer in use, and I wasn’t happy with the structure. Every time I made a new post, it irked me a little, and I had this long-standing itch to go back and clean up all my categories, but I knew it was going to be a slog.
Let me present Taxonomist, a new open-source tool you can run with one copy-and-paste command line that solves this problem. Here’s the idea:
THIS IS VERY ALPHA. PROBABLY BUGGY. BE CAREFUL WITH IT. PATCHES WELCOME. MAYBE MAKE A BACKUP OF YOUR SITE BEFORE YOU CHANGE IT.
It kind of just worked. I ran it live against ma.tt and it cleaned up a ton of stuff pretty much exactly how I wanted. But there’s a lot of weird stuff happening here, so I don’t know quite what this is yet.
The magic is in the `cd taxonomist-main && claude "start"` part of it. So, not sure what this is, but please check it out, play with it, submit improvements or ideas, and think about what’s next. Might host a Zoom or something to brainstorm.
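For a feel of the kind of remapping a category cleanup like this involves, here’s a minimal sketch in plain Python. The mapping table and post shape are hypothetical illustrations, not Taxonomist’s actual internals:

```python
def remap_categories(posts, mapping):
    """Rename categories per `mapping`; entries mapped to None are
    dropped, and duplicate results are collapsed."""
    cleaned = []
    for post in posts:
        new_cats = []
        for cat in post["categories"]:
            target = mapping.get(cat, cat)  # unmapped categories pass through
            if target is not None and target not in new_cats:
                new_cats.append(target)
        cleaned.append({**post, "categories": new_cats})
    return cleaned

# Hypothetical cleanup: merge two stale categories, retire a third
mapping = {"Asides": "Notes", "Random": "Notes", "Old Stuff": None}
posts = [{"id": 1, "categories": ["Asides", "Random", "Old Stuff", "Essays"]}]
print(remap_categories(posts, mapping))
# → [{'id': 1, 'categories': ['Notes', 'Essays']}]
```

The real tool works against a live WordPress database, so reassigning posts and deleting terms is messier than this, but the core decision (which old category becomes which new one) is the same.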
The final thing I’ll say is that this was a very different process of writing software for me. Instead of staying at the computer the entire time, I found myself going away for a bit, napping and dreaming about the code, coming back with new ideas and riffing on them. Maybe I’ll return to my Uberman polyphasic sleep days? Nap-driven development?
BTW I have lots of thoughts and feedback for Emdash but I thought this was more interesting, will try to get that out later tonight. One preview: TinyMCE is a regression; they should use Gutenberg! We designed it for other CMSes and would be fun to have some common ground to jam on.