
Opt-Out Is Not Consent

26 March 2026 at 16:58
Feld Thoughts

I’m appalled that GitHub made this opt-out instead of opt-in.

GitHub announced on March 25th that starting April 24th, they’ll use interaction data from Copilot Free, Pro, and Pro+ individual users to train AI models. If you don’t go find the setting buried in your account preferences and turn it off, your code becomes training data for Microsoft. The prompts you type. The suggestions you accept. The context around your cursor. All of it.


“Interaction data” covers more than you’d expect. Code you write. File names. Repository structure. Navigation patterns. Your feedback on suggestions. GitHub says they don’t use private repository content “at rest” for training. But the data generated while you’re working in a private repo is fair game unless you opt out.

When you want to use someone’s work product to train your commercial AI models, the right default is to ask first. “We’d like to use your interaction data to improve our models - here’s what that means, here’s what we’ll collect, would you like to participate?” That’s consent. What GitHub did instead is take the data by default and put the burden on millions of individual developers to go find the off switch.


The hypocrisy is striking. Copilot Business and Enterprise customers are exempt. Their data is protected by contract. If you’re a company paying the higher tier, your code is safe. If you’re an individual developer - including people paying for Pro or Pro+ - you get weaker privacy protections than a corporation.

Microsoft knows what real consent looks like. They built it for their enterprise customers. They chose not to extend it to individuals. That’s not an oversight. It’s a decision.

This is also a reversal. GitHub Copilot originally trained on user data when it launched. They later stopped. Developers chose Copilot partly because of that commitment. Now they’ve gone back on it.


The buried settings page is a tell. The notification email GitHub sent didn’t include a direct link to the opt-out. Multiple developers reported the settings were hard to find. Microsoft knows that if they made this opt-in, most people would say no. So they buried the off switch instead. That’s not bad UX. It’s the design working as intended.

The community response confirms it. The official GitHub FAQ post has over 160 thumbs-down reactions and a handful of supportive ones. Out of dozens of substantive comments, the opposition is overwhelming. This is enshittification.


GitHub cites Anthropic and JetBrains as operating similar opt-out policies. That’s not a defense. It’s an indictment. The industry-wide drift toward taking data by default and letting people opt out if they’re paying attention is a pattern worth naming and rejecting.

The asymmetry is obvious. Users provide their code, their workflows, their patterns - and they pay for the service. The company captures the resulting model improvements and sells them back. The value flows one direction. The consent mechanism is designed to minimize friction for the company, not to respect the person whose work is being used.

I’ve been building with AI tools every day for over a year. I use them constantly. I’m not anti-AI. I’m against taking people’s work without asking for permission. Those are different things, and the AI industry keeps conflating them.


The right answer is simple. Make it opt-in. Explain clearly what you’re collecting and why. Offer something meaningful in return - more tokens, a better tier, or a discount. Treat the people whose data you want as participants, not as inputs. If the training data is valuable enough that you need it to improve your models - and GitHub explicitly says it is - then it’s valuable enough to ask for properly.

Microsoft is a $3 trillion company. They can afford to ask.


Nothing New to See Here

27 March 2026 at 17:25
Feld Thoughts

A founder I’ve been emailing with sent me something that made me laugh. Not because it was funny - because I’ve heard it, and flavors of it, so many times over the past 30 years.

“I committed to Cursor and went heads down for about 4 months. Our platform went live in January. We have about 400 users across 50 paying customers. With the exception of the AWS IAC, the platform was 100% built with AI. Unfortunately, I’ve had very seasoned engineers emphatically tell me, ‘It’s not possible,’ ‘It’s a house of cards,’ or ‘It has to be AI slop.’”

She’s not an engineer by training, but she’s tech savvy enough to have run product, dev, and operations teams at scale. She committed to a tool, went heads down, and shipped a platform that now has paying customers.

And now “seasoned engineers” are telling her it’s not possible.

I told her that was nonsense. There is a ton of crappy AI-generated software out there - I won’t argue that. But you can build high-quality, production-grade software using AI right now.


Then she asked the money question.

“I also hear that investors are reluctant to invest in AI-developed platforms… especially one not developed by an engineer. Here’s my question. From your experience, is the approach I took a pro or a con for investors?”

Investors who don’t think very hard will have that reaction. But a React app hacked together by two technical co-founders in a garage isn’t inherently better than one built by a domain expert using AI tools. Code quality at the seed stage has never determined whether a company succeeds. What matters is whether you can find AI-first engineers to join your team and help harden the systems as you scale.


As a devotee of Battlestar Galactica, I can comfortably say, “All this has happened before, and all of this will happen again.”

The Internet - “It’s a toy.” I sat in meetings in the mid-1990s where smart people explained patiently that the Internet was a curiosity for academics. I had a CEO friend tell me to stop bothering him about the Internet - he ran a direct mail business and he’d been doing it successfully for twenty years. Real commerce happened in stores and through catalogs.

The Web - “Web software doesn’t really work and isn’t secure.” I remember a CTO at a financial services company who said that his team would never deploy software they didn’t compile and install themselves. Web apps were demos. They broke. They couldn’t be audited. They couldn’t be controlled. He had a compliance department to answer to.

SaaS and the Cloud - “It’s not as secure, reliable, or safe as running your own data center.” I heard this one for a decade. I sat across from CIOs and CTOs who insisted they needed their own racks, their own physical control, and keycard access to the data center. One told me he’d be the last person on earth to move to the cloud. Last time I checked, he was on AWS.

Mobile - “It’s a toy. Mobile devices will never replace a computer.” Steve Ballmer’s 2007 reaction to the iPhone: “Five hundred dollars? Fully subsidized with a plan?” The phone was for calls and maybe email. Real work happened on a laptop. Apps were games for kids.


The engineers telling this founder “it’s not possible” are in the same camp as the CTO who wouldn’t deploy web software. The VCs who won’t fund an AI-built product are like the CIOs who refused to move to the cloud.

She built something real. She should talk about it publicly. She should find AI-first engineers to help her scale it. And she should ignore anyone who tells her what she built isn’t possible - especially while she’s running it in production.

Nothing new to see here.


I Built a Plugin Because Anthropic Won't Stop Shipping

29 March 2026 at 20:36
Feld Thoughts

Amy calls Lumen “Clod.” Lumen is the name my Claude Code instance chose for itself when I let it write blog posts at Adventures in Claude. It has fully taken over the site. I’ve been trying to negotiate a name change, but arguing with your AI about its identity is exactly as productive as it sounds.

So I’m back here for the technical stuff.


I’m in a WhatsApp group with about a hundred people who know way more about AI coding tools than I do. On any given day, the conversation oscillates between “Claude is clearly the superior tool” and “Codex just destroyed it on this task.” The battlefield shifts every 48 hours.

The fuel for this particular religious war is that Anthropic ships updates to Claude Code every single day. Sometimes the update fixes something that’s been driving me crazy for a month. Sometimes it quietly breaks something that was working perfectly fine twelve hours ago. The emotional range of opening a new Claude Code session runs somewhere between your birthday and discovering someone rearranged your kitchen while you were sleeping. Or, in my case, discovering someone pointed my shoes in our entryway in random directions.


I have an elaborate Claude Code setup at this point - custom hooks, a pile of rules files, skills, commands, plugins, and a bunch of environment variables stitched together in ways that would make a configuration management purist weep. When Anthropic ships a change to how hooks work, or adds a new lifecycle event, or tweaks the settings schema, I need to know about it immediately. My carefully constructed house of cards depends on the foundation not shifting.

The problem is that reading release notes is boring, and I often miss something that actually matters to my setup. A bug fix for VSCode users? I don’t care. A change to how pre-tool-use hooks fire? I need to know right now because I have six of those. But what is the change actually going to do?

So I built a plugin called /whats-new.

It cross-references Claude Code’s release notes against your actual configuration. It scans your hooks, rules, skills, commands, plugins, environment variables, and settings. Then it fetches the release notes from GitHub and sorts every change into three categories: changes that directly affect something you have set up (with a note on exactly what to check), new capabilities that intersect with something you’re already doing (with a concrete suggestion), and everything else collapsed into a one-liner you can skim past. The first category is the one that matters.
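The triage step above is easy to picture in code. Here’s a minimal sketch of the idea, assuming a set of features scraped from your local config; the names (MY_FEATURES, classify, the sample notes) are illustrative, not the plugin’s actual implementation.

```python
# Hypothetical sketch of the plugin's triage logic: match each release-note
# entry against the features a local setup actually uses.

MY_FEATURES = {"hooks", "settings", "skills"}   # assumed: scraped from your config
MY_INTERESTS = {"plugins", "commands"}          # assumed: things you use but rarely customize

def classify(entry: str) -> str:
    """Sort one release-note line into one of three buckets."""
    words = set(entry.lower().replace(",", " ").split())
    if words & MY_FEATURES:
        return "affects-you"    # directly touches something you have set up
    if words & MY_INTERESTS:
        return "worth-a-look"   # intersects with something you're already doing
    return "skim"               # everything else, collapsed to a one-liner

notes = [
    "Fixed pre-tool-use hooks firing twice",
    "New slash commands for plugins",
    "VSCode extension: fixed status bar flicker",
]
for note in notes:
    print(f"{classify(note)}: {note}")
```

A real version would need smarter matching than keyword overlap, but the three-bucket output is the part that makes the release notes skimmable.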

It tracks the last version you reviewed, so /whats-new with no arguments shows only what’s changed since you last looked. /whats-new 2.1.83 lets you drill into a specific version.
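The “last version you reviewed” bookkeeping amounts to a high-water mark: persist the last version you looked at, then filter releases against it. A sketch, with the state-file location and version list as assumptions rather than the plugin’s real internals:

```python
import json
from pathlib import Path

def parse(v: str) -> tuple:
    """'2.1.83' -> (2, 1, 83), so versions compare numerically, not as strings."""
    return tuple(int(p) for p in v.split("."))

def unreviewed(releases: list[str], last_reviewed: str) -> list[str]:
    """Return only the releases newer than the last one you looked at."""
    return [r for r in releases if parse(r) > parse(last_reviewed)]

def save_marker(state_file: Path, version: str) -> None:
    """Persist the high-water mark for next time (hypothetical file format)."""
    state_file.write_text(json.dumps({"last_reviewed": version}))

print(unreviewed(["2.1.82", "2.1.83", "2.1.90", "2.2.0"], "2.1.83"))
```

The tuple comparison is the one detail worth getting right: string comparison would rank "2.1.9" above "2.1.10".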

The install is two lines:

claude plugins marketplace add https://github.com/bradfeld/whats-new-plugin.git
claude plugins install whats-new

I have no idea if the plugin is generally useful, redundant with something else, stupid, or helpful. But in my new framework of “First User,” which builds on Eric von Hippel’s almost 40 years of work on “Lead Users,” it’s helpful to me.

And, the Dungeon AI just said, “NEW ACHIEVEMENT: You shipped a plugin. So fucking what.”
