
Ban Large Language Models

February 24, 2026

Not another AI blog post! Unfortunately, I've seen a lot of (sometimes grudging) AI acceptance from holdouts in the tech space lately. You can only fight back against top-down directives and endless UI 'suggestions' for so long. I gave LLMs a serious shake earlier this year, so I wanted to add a slightly different take to the conversation.

My LLM experience

I've been chewing on LLM usage in the tech industry for a couple of years now. Like many, I started out skeptical, thinking that LLMs were just another cryptocurrency/metaverse/NFT level scam. Earlier this year, I gave LLMs a try for documentation work. After a few months, I emerged unimpressed.

My assessment? LLMs aren't as much of a scam as blockchain, but they're overhyped somewhere adjacent to "cloud" in the mid-2010s: proposed as the solution to a million problems, but mostly a sidegrade with a few niche advantages. Sure, you can now easily run a startup without any physical server infrastructure. Is that a good thing for humanity? Have we done anyone a favor by automating away tens of thousands of server admin jobs, and consolidating the remaining jobs into Big Tech companies like Amazon, Google, and Microsoft? Scalability has certainly improved, making it easy for companies to grow from 10,000 users to 10,000,000 users by simply paying a bigger cloud bill (and paying software engineers a lot of money to design Rube Goldberg-inspired Kubernetes-centric architectures, and paying SREs a lot of money to respond to the inevitable incidents when scaling doesn't go according to plan). But it doesn't feel like "cloud" delivered on the world-changing hype.

My justification for my assessment of LLMs? With the right guardrails, LLMs can write OK code, especially for problems that have already been solved. But most of those guardrails are just software developers doing a lot of code review. It doesn't feel sustainable long-term, because code review is a different skillset from writing code: if I spend all of my time reviewing code and none of my time writing code (because I'm asking an LLM to do it for me), my code writing skills get worse. How do I know this? Because I work in documentation, and I've noticed that when I work jobs that involve a lot of writing code, I'm better at reviewing and understanding code. And when I work jobs that involve less code writing, I get worse at reviewing and understanding code. Code review is some sort of derivative of coding skill: the better you are at writing code, the better you'll be at reviewing it (assuming you've mastered the soft skills of not being a jerk during the review process). As programmers shift to writing less code and reviewing more code, I believe they'll get worse at reviewing code. And when LLMs throw armies of plausible-looking code and legions of believable tests at reviewers, you need to be really good at code reviews.

In documentation work, LLMs have been even less whelming. In short: they don't write good. LLMs are token generators by nature, so no matter how many layers of quality control and style guides you throw at them, they're prone to padding their output. Every LLM user ought to be familiar with the way they pad chats with sycophantic drivel, even when you instruct them to cut the bullshit, or keep responses short, or talk like Computer or Data from Star Trek. Documentation is teaching in text form. It's all about abstraction, and building blocks, and empathy for the user, and about advocating for users during design. Padding has no role in the teaching process, let alone the tertiary roles of documentation like testing and design, so LLMs struggle to write great docs. They might produce something that looks a whole lot like mediocre documentation. But as someone who's reviewed a lot of slop documentation pull requests, I know the instructions are probably wrong, the code samples probably don't work right, and the abstraction is... funky.

So what are LLMs useful for?

  • They're OK at helping me get past the 'first page problem' when I write documentation infrastructure.
    • I'm fine at writing JavaScript and CSS and HTML, but throw in the rats' nest of modern frontend frameworks and it's easy to feel paralyzed. Prompt an LLM to create an example of how you might implement something and suddenly you're tweaking instead of writing from scratch.
    • Documentation is full of little one-off scripts that do things like transforming data from one format to another. All of these problems have been solved before. LLMs ain't bad at generating a 90% solution so I can hack the remaining 10%.
  • When the creative juices are low: brainstorming and rubber ducking (but you're mostly just talking to yourself, so maybe just go for a walk, grab a snack, chat with a coworker, and/or talk to a rubber duck instead?)
  • LLMs read really fast. When you have huge quantities of data (as in impossible for a human being to actually read all of), like a million-line-long log file, an LLM might be able to help you find patterns. I have found LLM summaries of human text woefully inadequate for the same reasons LLMs suck at writing documentation: they don't read good. But they do read fast!
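To make the "one-off script" point concrete, here's a hypothetical example of the kind of throwaway data-transformation task I mean: converting a CSV listing of doc pages into JSON. The file name and columns are made up for illustration; the point is that an LLM can usually get you this 90% skeleton, and the project-specific 10% is where the hacking happens.

```python
# One-off docs script: convert CSV (with a header row) into a JSON
# array of objects. "title" and "path" columns are hypothetical.
import csv
import json


def csv_to_records(csv_text: str) -> list[dict]:
    """Parse CSV text into a list of {header: value} dicts."""
    return list(csv.DictReader(csv_text.splitlines()))


if __name__ == "__main__":
    sample = "title,path\nIntro,/docs/intro\nAPI,/docs/api"
    # Pretty-print as JSON, ready to paste into a docs config.
    print(json.dumps(csv_to_records(sample), indent=2))
```

A solved problem a thousand times over, which is exactly why an LLM regurgitates it well.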

So LLMs are basically just an interactive UX for Stack Overflow and Reddit, and sometimes a way to avoid talking to your coworkers.

The ethical elephant in the room

There's no waving away the ethics of LLM usage, unfortunately. This point has been belabored, but to sum things up:

  • LLMs require massive amounts of compute, which requires massive amounts of silicon, electricity, and (to a lesser degree) water. Venture capital-fueled companies are pouring insane amounts of money into all of those markets, and that spending is devastating pricing in each of them. More expensive electricity is bad for everyone. More expensive silicon makes the whole world more expensive, because everyone has to own a smartphone and a car, both of which are computers.
  • LLM training uses all of the text on the internet and a lot of text from books. Your conversations. Your emails. Your photos (Google and Facebook don't store those out of the goodness of their hearts, you know!). Your comments. The text you are currently reading on this blog. Companies are consuming all of it, munging it into a giant neural network, and selling it. I write my blog and communicate on the internet because I want to interact with people. If a company wants to use my communication to make a profit, they should compensate me. And I should be able to opt out of being consumed at all.
  • All LLMs, even self-hosted ones, are created by the global 0.01%. Inequality is worse than at any time since the French Revolution. LLM creators claim that LLMs will replace swaths of office work. So the sales pitch of LLMs is "fire middle class workers to enrich the upper class."

Ban LLMs

So what's going to happen with LLMs over the next few years?

  • At best, they'll reduce the workforce by a fraction and increase the review workload for everyone else.
  • At worst, top-down managers will fire lots of people and replace them with LLMs, only to later realize that LLMs aren't producing quite the same output (how much this matters is left as an exercise to the reader).
  • If you're a True Believer, you probably believe that LLMs will only get better, that most of my skepticism is a skills issue, and that I haven't used the latest models the right way to get the best results. In that case, you believe that LLMs will replace all white collar workers over the next couple of years, and probably blue collar workers too, since surely robotics automation is within the grasp of LLMs.

Let's assume that all of these are valid possibilities. What's the right move?

  • Maximize LLMs as much as possible to win the LLMs race?
  • Restrict LLMs to preserve jobs?
  • Stop using LLMs entirely?

The USA is currently trying the 'maximize LLMs' option. We've seen the first wave of societal fallout: skyrocketing electricity bills, crazy prices for computer components, layoffs, use-LLMs-or-leave directives, LLM video and photo-assisted misinformation on social media, LLM-enhanced phishing, and what I can subjectively say is the most hostile and buggy year of consumer software I've ever experienced.

The EU is attempting the 'restrict LLMs' option. They have a track record of balancing innovation and exploitation with rules like GDPR; unfortunately, the wheels of government are slow, far slower than the tech industry, so enforcement and even basic rules often happen well after a lot of people get hurt. To make great regulation, you have to:

  1. study a problem
  2. explore different ways to fix the problem
  3. agree on a set of rules that fix the problem
  4. enforce those rules (returning to step 2 whenever something goes awry)

When an industry has as much money and power as tech does right now, it's like trying to fight a disease with exponential growth and adaptation. Which explains why our LLM regulation is currently about as successful as most of our covid-19 precautions.

Has anyone tried the final option and stopped using LLMs entirely? I don't just mean not pumping money into LLMs, I mean actually banning LLMs completely and imposing fines for anyone who uses one. I can think of the following repercussions:

  • Anywhere data centers can't be built would have lower electricity costs, since data centers are by far the largest driver of rising electricity prices; remove them, and electricity rates would return to the pre-2020 normal.
  • You wouldn't lose any jobs to LLM automation, because replacing workers with LLMs would be illegal.
  • Workers wouldn't be able to use LLMs, so they would lose any productivity benefits that LLMs provide.
  • All companies that sell LLMs (or repackaged LLMs as a service) would have to stop providing LLM access under penalty of law, refocusing on other parts of the business (if they have any). Companies whose entire business model is based on LLMs would either shut down, pivot, or move somewhere where they can legally do business.
  • As other parts of the world continue to use LLMs, any enclaves that choose to ban LLMs would be cut off from the direct benefits and hazards of LLM usage: if LLMs start driving best-in-class CPU design, or drug design, or public transit improvements, it couldn't happen in your enclave because it would be illegal!
  • As other parts of the world continue to use LLMs, any enclaves that choose to ban LLMs would remain vulnerable to the indirect benefits and hazards of LLM usage: if other countries run giant botnets to spread propaganda across the internet, users in your enclave would still have to deal with the impact, since we all use one internet; if LLMs cause countries to build more coal-fired power plants to fuel data centers, we all have to breathe the same air; if LLM-run defense forces get into a nuclear war, we all get stuck in nuclear winter.

This is by no means a comprehensive list of results. But one thing seems clear: the long-term effects of LLMs (if any) are going to happen worldwide regardless of who develops them. You might, however, be able to protect against the short-term effects by banning LLMs. If that preserves jobs right now, isn't that worthwhile? I don't see many benefits for 99.9% of people in the short term. But I see a LOT of hazards.

If LLMs are a good thing, you can always un-ban them in the future once you have a way to protect the 99.9%.

If LLMs are a bad thing, you spared the 99.9% from unnecessary layoffs.