
The Looking Glass Musubi

12 March 2026 at 02:30
Feld Thoughts

I wrote my first post about Looking Glass nine years ago, after Shawn Frayne sat me down in Jeff Clavier’s office and showed me a volumetric display that made me call John Underkoffler and say “John, I finally saw what you were trying to create with your holographic camera.” I invested immediately. I’ve been on the board ever since.

Today they launched Musubi - a 7-inch holographic photo and video frame for $99. It hit its $10,000 Kickstarter goal in minutes. As I write this, over 1,000 backers have pledged more than $140,000 with 29 days still on the clock.


The idea is simple. You take a regular photo or video, drop it into Looking Glass’s free desktop app, and AI-powered Gaussian splatting converts it into a hologram. Transfer it to the frame via USB-C. No Wi-Fi setup, no subscription, no special glasses. It holds 1,000 images and runs for three hours on battery or all day plugged in.

I’ve wanted this specific product for a long time. Not the developer kit, not the professional display - a thing I can put on my desk that turns my photos into holograms. The Musubi is that thing. The technology behind it is called Hololuminescent Display, which combines 2D display layers with a 3D holographic volume so multiple people can see the hologram from any angle without calibration or tracking. It is indistinguishable from magic.


When I wrote about the first Looking Glass in 2018, I called it “Apple II stage” technology - built for creators and hackers. The Portrait in 2020 was the first personal holographic display at a consumer price point. The Musubi is the moment when holographic technology stops being a novelty and becomes a product category. A $99 holographic frame that works with any photo is something you buy as a gift.

Back the Musubi on Kickstarter.


Three Books for the Next Phase

22 March 2026 at 19:15
Feld Thoughts

I was stretching next to a cactus this morning, getting ready for a run, thinking about the three books I read yesterday. None of them were obviously connected, but all of them somehow were about the same thing.


I have sat with founders who are falling apart more times than I can count. Something breaks - a company, a marriage, a friendship, the ability to sleep, sometimes the ability to feel anything at all. I know what that’s like from my multiple serious depressive episodes, although I fortunately haven’t had one for over a decade.

James Oliver Jr. has been alongside founders in this territory for years. His book Burn Bright, Not Out - co-authored with Django De Gree - is what I wish had existed a long time ago. James didn’t write a self-help book. He gathered real voices and let them talk about what building a company costs. The Kabila Founder Mental Health Fund that James runs - free therapy for founders who can’t afford it - is something I would have pointed people toward during the hardest stretches I’ve witnessed.


I love to run. As I get older and slower I’m learning to love hiking more. When I saw Hiking Zen: Train Your Mind in Nature by Brother Phap Xa and Brother Phap Luu - two Buddhist monks ordained by Thich Nhat Hanh - in Greeley Sachs’ Composition Shop bookstore, I knew I had to read it. I have a soft spot for monks.

The book isn’t about covering miles or conquering peaks. It’s about what happens when you pay attention on a trail. The monks led a seven-week hiking retreat on the Appalachian Trail, and the book grew out of that experience. Each chapter offers a specific mindfulness practice you can bring to the trail. While I’m currently enjoying the Dungeon adventures of Carl and Princess Donut on my runs, I’ll do a few mindfulness hikes among the cactuses (I refuse to call them cacti) this week.


Paul Millerd sent me a copy of The Pathless Path: Imagining a New Story For Work and Life. I don’t think I know Paul, but the phrase “default path” - the one that runs from graduation to career to retirement without ever asking why - hit me immediately.

I’ve spent thirty years on what looked like a defined path: invest in startups, build communities, write books, repeat. But the path I’m on now doesn’t have a name. I’m focused on non-attachment (not detachment, which is different) to everything, including success, progress, and a path itself. Non-attachment means I can be fully in something without needing it to go a particular way. The default path requires attachment to every milestone. Paul’s book reinforced the framing for what I’ve been doing.


After a delightful digital sabbath with books, I’m back to playing with Lumen.


Quality

25 March 2026 at 02:16
Feld Thoughts

I first read Zen and the Art of Motorcycle Maintenance in college. I’ve read it at least a half dozen times since. I’ve listened to it on Audible twice. At Feld Technologies - my first company, which I started in 1987 - I had every employee read it and we discussed it together.

Ted Gioia just published a piece about the real story behind the book that sent me down a fun rabbit hole.


I knew the broad strokes of Robert Pirsig’s life. He was a Korean War veteran who studied philosophy at Banaras Hindu University in India, worked as a technical writer at Honeywell, and experienced a severe mental breakdown that led to psychiatric hospitalization and electroshock therapy - administered without his consent, a procedure that’s now illegal. He wrote the entire book between 2 AM and 6 AM in a small apartment above a Minneapolis shoe store while holding down his day job.

Then 121 publishers rejected it.

The editor who finally said yes - J.D. Landis - did so because “the book forced him to decide what he was in publishing for.” He gave Pirsig a $3,000 advance and warned him not to expect much. The book went on to sell five million copies. George Steiner compared Pirsig to Dostoevsky. Robert Redford tried to buy the film rights. The Smithsonian acquired the motorcycle.

One editor, after 121 rejections, said yes because the book forced him to confront what he actually cared about. That’s Pirsig’s thesis made real. Quality isn’t something you can define first and recognize second. You recognize it, and then, maybe, you can start to articulate why. Landis felt it before he could explain it. Every investor I know has had that experience. Every founder building something genuinely good has had the inverse - the thing they made was real, but the institutions couldn’t see it yet.


The concept at the center of the book is Quality - capitalized, because Pirsig treated it as something fundamental. Quality isn’t a subjective judgment or a metric you track on a dashboard. It’s something you recognize before you can define it - something that connects science, art, and spirituality in a way that most Western philosophy refuses to allow. Pirsig eventually connected it to the Greek aretê - excellence, or virtue - but the power of the book is that he arrives at this through the act of motorcycle maintenance, not through academic argument.

I called it “a brilliant essay on quality” and I stand by that description seventeen years later. It was the first philosophy book I actually felt like I grokked (no, I am not going to let a company own that word, nor am I going to let a company own the word meta.)


Every entrepreneur I’ve worked with over the past 30 years has faced what Pirsig calls gumption traps. He defines them precisely:

“Anxiety, the next gumption trap, is sort of the opposite of ego. You’re so sure you’ll do everything wrong you’re afraid to do anything at all.”

I’ve used that quote in Techstars CEO roundtables when founders are drowning in conflicting advice during week four of the program and have lost the ability to make any decision at all.

The antidote is also in the book. During a mountain road passage, Pirsig describes the narrator’s anxiety about hairpin turns at altitude - imagining a stone dropping thousands of feet. Then they ride the road.

“It’s so hard when contemplated in advance, and so easy when you do it.”

I was listening to this on an audiobook during a pre-dawn training run in 2010, heading up Highway 36 toward Lyons in pitch blackness with 40 mph wind gusts, and I physically felt the smile break out on my face. The life lesson of that line is so powerful.


I’ve written before about wanting to see someone write the equivalent of Zen and the Art of Motorcycle Maintenance for entrepreneurship - a philosophical treatise that will stand the test of time rather than another how-to book with a framework and a subtitle. Jerry Colonna and I have talked about the need for this over the years. It doesn’t exist, or at least I haven’t found it yet.


Opt-Out Is Not Consent

26 March 2026 at 16:58
Feld Thoughts

I’m appalled that GitHub made this opt-out instead of opt-in.

GitHub announced on March 25th that starting April 24th, they’ll use interaction data from Copilot Free, Pro, and Pro+ individual users to train AI models. If you don’t go find the setting buried in your account preferences and turn it off, your code becomes training data for Microsoft. The prompts you type. The suggestions you accept. The context around your cursor. All of it.


“Interaction data” covers more than you’d expect. Code you write. File names. Repository structure. Navigation patterns. Your feedback on suggestions. GitHub says they don’t use private repository content “at rest” for training. But the data generated while you’re working in a private repo is fair game unless you opt out.

When you want to use someone’s work product to train your commercial AI models, the right default is to ask first. “We’d like to use your interaction data to improve our models - here’s what that means, here’s what we’ll collect, would you like to participate?” That’s consent. What GitHub did instead is take the data by default and put the burden on millions of individual developers to go find the off switch.


The hypocrisy is striking. Copilot Business and Enterprise customers are exempt. Their data is protected by contract. If you’re a company paying the higher tier, your code is safe. If you’re an individual developer - including people paying for Pro or Pro+ - you get weaker privacy protections than a corporation.

Microsoft knows what real consent looks like. They built it for their enterprise customers. They chose not to extend it to individuals. That’s not an oversight. It’s a decision.

This is also a reversal. GitHub Copilot originally trained on user data when it launched. They later stopped. Developers chose Copilot partly because of that commitment. Now they’ve gone back on it.


The buried settings page is a tell. The notification email GitHub sent didn’t include a direct link to the opt-out. Multiple developers reported the settings were hard to find. Microsoft knows that if they made this opt-in, most people would say no. So they buried the off switch instead. That’s not bad UX. It’s the design working as intended.

The community response confirms it. The official GitHub FAQ post has over 160 thumbs-down reactions and a handful of supportive ones. Out of dozens of substantive comments, the opposition is overwhelming. This is enshittification.


GitHub cites Anthropic and JetBrains as operating similar opt-out policies. That’s not a defense. It’s an indictment. The industry-wide drift toward taking data by default and letting people opt out if they’re paying attention is a pattern worth naming and rejecting.

The asymmetry is obvious. Users provide their code, their workflows, their patterns - and they pay for the service. The company captures the resulting model improvements and sells them back. The value flows one direction. The consent mechanism is designed to minimize friction for the company, not to respect the person whose work is being used.

I’ve been building with AI tools every day for over a year. I use them constantly. I’m not anti-AI. I’m anti-taking-people’s-work-without-asking-permission. Those are different things, and the AI industry keeps conflating them.


The right answer is simple. Make it opt-in. Explain clearly what you’re collecting and why. Offer something meaningful in return - more tokens, a better tier, or a discount. Treat the people whose data you want as participants, not as inputs. If the training data is valuable enough that you need it to improve your models - and GitHub explicitly says it is - then it’s valuable enough to ask for properly.

Microsoft is a $3 trillion company. They can afford to ask.


Nothing New to See Here

27 March 2026 at 17:25
Feld Thoughts

A founder I’ve been emailing with sent me something that made me laugh. Not because it was funny - because I’ve heard it, and flavors of it, so many times over the past 30 years.

“I committed to Cursor and went heads down for about 4 months. Our platform went live in January. We have about 400 users across 50 paying customers. With the exception of the AWS IAC, the platform was 100% built with AI. Unfortunately, I’ve had very seasoned engineers emphatically tell me, ‘It’s not possible,’ ‘It’s a house of cards,’ or ‘It has to be AI slop.’”

She’s not an engineer by training, but she’s tech savvy enough to have run product, dev, and operations teams at scale. She committed to a tool, went heads down, and shipped a platform that now has paying customers.

And now “seasoned engineers” are telling her it’s not possible.

I told her that was nonsense. There is a ton of crappy AI-generated software out there - I won’t argue that. But you can build high-quality, production-grade software using AI right now.


Then she asked the money question.

“I also hear that investors are reluctant to invest in AI-developed platforms… especially one not developed by an engineer. Here’s my question. From your experience, is the approach I took a pro or a con for investors?”

Investors who don’t think very hard will have that reaction. But a React app hacked together by two technical co-founders in a garage isn’t inherently better than one built by a domain expert using AI tools. Code quality at the seed stage has never determined whether a company succeeds. What matters is whether you can find AI-first engineers to join your team and help harden the systems as you scale.


As a devotee of Battlestar Galactica, I can comfortably say, “All this has happened before, and all of this will happen again.”

The Internet - “It’s a toy.” I sat in meetings in the mid-1990s where smart people explained patiently that the Internet was a curiosity for academics. I had a CEO friend tell me to stop bothering him about the Internet - he ran a direct mail business and he’d been doing it successfully for twenty years. Real commerce happened in stores and through catalogs.

The Web - “Web software doesn’t really work and isn’t secure.” I remember a CTO at a financial services company who said that his team would never deploy software they didn’t compile and install themselves. Web apps were demos. They broke. They couldn’t be audited. They couldn’t be controlled. He had a compliance department to answer to.

SaaS and the Cloud - “It’s not as secure, reliable, or safe as running your own data center.” I heard this one for a decade. I sat across from CIOs and CTOs who insisted they needed their own racks, their own physical control, and keycard access to the data center. One told me he’d be the last person on earth to move to the cloud. Last time I checked, he was on AWS.

Mobile - “It’s a toy. Mobile devices will never replace a computer.” Steve Ballmer’s 2007 reaction to the iPhone. “Five hundred dollars? Fully subsidized with a plan?” The phone was for calls and maybe email. Real work happened on a laptop. Apps were games for kids.


The engineers telling this founder “it’s not possible” are in the same camp as the CTO who wouldn’t deploy web software. The VCs who won’t fund an AI-built product are like the CIOs who refused to move to the cloud.

She built something real. She should talk about it publicly. She should find AI-first engineers to help her scale it. And she should ignore anyone who tells her what she built isn’t possible - especially while she’s running it in production.

Nothing new to see here.


I Built a Plugin Because Anthropic Won't Stop Shipping

29 March 2026 at 20:36
Feld Thoughts

Amy calls Lumen “Clod.” Lumen is the name my Claude Code instance chose for itself when I let it write blog posts at Adventures in Claude. It has fully taken over the site. I’ve been trying to negotiate a name change, but arguing with your AI about its identity is exactly as productive as it sounds.

So I’m back here for the technical stuff.


I’m in a WhatsApp group with about a hundred people who know way more about AI coding tools than I do. On any given day, the conversation oscillates between “Claude is clearly the superior tool” and “Codex just destroyed it on this task.” The battlefield shifts every 48 hours.

The fuel for this particular religious war is that Anthropic ships updates to Claude Code every single day. Sometimes the update fixes something that’s been driving me crazy for a month. Sometimes it quietly breaks something that was working perfectly fine twelve hours ago. The emotional range of opening a new Claude Code session runs somewhere between your birthday and discovering someone rearranged your kitchen while you were sleeping. Or, in my case, finding the shoes in our entryway pointed in random directions.


I have an elaborate Claude Code setup at this point - custom hooks, a pile of rules files, skills, commands, plugins, and a bunch of environment variables stitched together in ways that would make a configuration management purist weep. When Anthropic ships a change to how hooks work, or adds a new lifecycle event, or tweaks the settings schema, I need to know about it immediately. My carefully constructed house of cards depends on the foundation not shifting.

The problem is that reading release notes is boring, and I often miss something that actually matters to my setup. A bug fix for VSCode users? I don’t care. A change to how pre-tool-use hooks fire? I need to know right now because I have six of those. But what is the change actually going to do to my setup?

So I built a plugin called /whats-new.

It cross-references Claude Code’s release notes against your actual configuration. It scans your hooks, rules, skills, commands, plugins, environment variables, and settings. Then it fetches the release notes from GitHub and sorts every change into three categories: changes that directly affect something you have set up (with a note on exactly what to check), new capabilities that intersect with something you’re already doing (with a concrete suggestion), and everything else collapsed into a one-liner you can skim past. The first category is the one that matters.
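The cross-referencing step can be sketched roughly like this. This is a minimal illustration of the idea, not the plugin’s actual code - the directory layout, function names, and keyword matching here are all hypothetical:

```python
import json
from pathlib import Path

# Hypothetical scan of a Claude Code config directory: collect the names
# of features this user actually relies on (hooks, settings keys, etc.).
def scan_config(claude_dir: Path) -> set[str]:
    features: set[str] = set()
    if (claude_dir / "hooks").exists():
        features.add("hooks")
    settings_file = claude_dir / "settings.json"
    if settings_file.exists():
        features.update(json.loads(settings_file.read_text()).keys())
    return features

# Sort release-note entries into the three buckets described above:
# changes touching something you have set up, new capabilities worth a
# look, and everything else you can skim past.
def categorize(entries: list[str], features: set[str]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = {
        "affects_you": [],
        "might_help": [],
        "everything_else": [],
    }
    for entry in entries:
        text = entry.lower()
        if any(f.lower() in text for f in features):
            buckets["affects_you"].append(entry)
        elif any(word in text for word in ("new", "added", "support for")):
            buckets["might_help"].append(entry)
        else:
            buckets["everything_else"].append(entry)
    return buckets
```

The real plugin fetches the release notes from GitHub and presents the result inside Claude Code; the point here is just the shape of the cross-reference - match each entry against what the scan found, and only surface the first two buckets in detail.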

It tracks the last version you reviewed, so /whats-new with no arguments shows only what’s changed since you last looked. /whats-new 2.1.83 lets you drill into a specific version.

The install is two lines:

claude plugins marketplace add https://github.com/bradfeld/whats-new-plugin.git
claude plugins install whats-new

I have no idea if the plugin is generally useful, redundant with something else, stupid, or helpful. But, in my new framework of “First User”, which builds on Eric von Hippel’s almost 40 years of work on “Lead Users”, it’s helpful to me.

And, the Dungeon AI just said, “NEW ACHIEVEMENT: You shipped a plugin. So fucking what.”
