I’m appalled that GitHub made this opt-out instead of opt-in.

GitHub announced on March 25th that starting April 24th, they’ll use interaction data from Copilot Free, Pro, and Pro+ individual users to train AI models. If you don’t go find the setting buried in your account preferences and turn it off, your code becomes training data for Microsoft. The prompts you type. The suggestions you accept. The context around your cursor. All of it.


“Interaction data” covers more than you’d expect. Code you write. File names. Repository structure. Navigation patterns. Your feedback on suggestions. GitHub says they don’t use private repository content “at rest” for training. But the data generated while you’re working in a private repo is fair game unless you opt out.

When you want to use someone’s work product to train your commercial AI models, the right default is to ask first. “We’d like to use your interaction data to improve our models - here’s what that means, here’s what we’ll collect, would you like to participate?” That’s consent. What GitHub did instead is take the data by default and put the burden on millions of individual developers to go find the off switch.


The hypocrisy is striking. Copilot Business and Enterprise customers are exempt. Their data is protected by contract. If you’re a company paying the higher tier, your code is safe. If you’re an individual developer - including people paying for Pro or Pro+ - you get weaker privacy protections than a corporation.

Microsoft knows what real consent looks like. They built it for their enterprise customers. They chose not to extend it to individuals. That’s not an oversight. It’s a decision.

This is also a reversal. Copilot trained on user data when it launched. GitHub later stopped, and developers chose Copilot partly because of that commitment. Now they've gone back on it.


The buried settings page is a tell. The notification email GitHub sent didn’t include a direct link to the opt-out. Multiple developers reported the settings were hard to find. Microsoft knows that if they made this opt-in, most people would say no. So they buried the off switch instead. That’s not bad UX. It’s the design working as intended.

The community response confirms it. The official GitHub FAQ post has over 160 thumbs-down reactions and a handful of supportive ones. Of the dozens of substantive comments, the overwhelming majority are opposed. This is enshittification.


GitHub cites Anthropic and JetBrains as operating similar opt-out policies. That’s not a defense. It’s an indictment. The industry-wide drift toward taking data by default and letting people opt out if they’re paying attention is a pattern worth naming and rejecting.

The asymmetry is obvious. Users provide their code, their workflows, their patterns - and they pay for the service. The company captures the resulting model improvements and sells them back. The value flows one direction. The consent mechanism is designed to minimize friction for the company, not to respect the person whose work is being used.

I’ve been building with AI tools every day for over a year. I use them constantly. I’m not anti-AI. I’m anti-taking-people’s-work-without-asking. Those are different things, and the AI industry keeps conflating them.


The right answer is simple. Make it opt-in. Explain clearly what you’re collecting and why. Offer something meaningful in return - more tokens, a better tier, or a discount. Treat the people whose data you want as participants, not as inputs. If the training data is valuable enough that you need it to improve your models - and GitHub explicitly says it is - then it’s valuable enough to ask for properly.

Microsoft is a $3 trillion company. They can afford to ask.