There are lots of shpiels like it, but this one is mine.
I don’t like AI. I really don’t. Large language models annoy me, diffusion models disgust me, whatever those video models are freak me out. I don’t want the little golems touching my blog, my hardware, my projects, or my online interactions.
It will never stop irking me that this is what “ai” has become! I may be a college student, but I’m not so young that I’ve forgotten when AI meant things like gene research and traffic optimization. But now when people say “ai” they don’t mean an image processing subsystem to clean up bad pictures of QR codes. They mean ChatGPT, or Claude Code, or Midjourney, or one of a million other deeply enshittified generalists.
What they can do is really pretty impressive. I just wrote a letter in French, uploaded a crappy webcam photo to ChatGPT, and it successfully translated! It even got my approximated formatting right:

My handwriting (and my French) is terrible, but the AI translated it flawlessly in a few seconds. This is incredibly cool. Did anyone in 2015 think that we’d ever have a computer program that could do something like this? Probably! OCR and statistical translation have been around for a while. Still, I am impressed.
So why don’t I like it?
The little golems are powerful, and I think it’s irresponsible how ChatGPT has masked the costs. Completely ignoring the environmental impact (we’ll get to that in a bit – it’s too important to cover only briefly), running large language models is expensive. Ed Zitron has posted a variety of articles on this very topic: the TLDR (and it is very long) is that OpenAI is not profitable, Anthropic is not profitable, Cursor is not profitable, and none of them ever will be because the costs are too high and are not falling.
Brief Aside: B-but the cost of inference is coming down! Theo made a video on this. Just watch it, you nerds.
The unfortunate fact is that GPUs are expensive. Building datacenters is expensive. Even power… is expensive. Tom’s Hardware kindly reports on the estimated power consumption of GPT-5: about 0.04 watt-hours per token. A small query in the range of a few hundred tokens, like the one I just executed (ignoring the image processing model), is still using 5-10 watt-hours of energy. The current top viberank leaderboard slot is 23.9 billion Claude tokens generated in the last 24 days. At that per-token figure, this works out to about 40 megawatt-hours per day, or a continuous draw of roughly 1.7 megawatts. At the residential rate where I live (call it 13 cents per kilowatt-hour), that’s about $5,000 worth of energy every day. I think anyone reading this blog is smart enough to know how insane that is. Claude Code’s highest subscription level is $100/month. That one guy would be costing them on the order of $150,000 a month at residential prices. How much dollar-kneading do we have to do to get this down? Datacenter electricity really is about 10x cheaper than residential, so call it $15,000 a month. For Claude Code to come out winning, they need to have some magic efficiency factor that makes them more than 150x cheaper to run than ChatGPT in electrical costs alone.
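If you want to check my arithmetic (or swap in your own electricity rate), here’s the whole back-of-envelope calculation as a few lines of Rust. The per-token figure and the viberank token count come from above; the 13¢/kWh residential rate and the 10x datacenter discount are my assumptions, not anyone’s disclosed numbers.

```rust
// Back-of-envelope electricity cost for the top viberank slot.
// Inputs from the post; the rates are assumptions, not disclosed figures.
fn main() {
    let tokens: f64 = 23.9e9;         // tokens over the measurement window
    let days: f64 = 24.0;             // length of that window
    let wh_per_token: f64 = 0.04;     // Tom's Hardware estimate for GPT-5
    let residential_rate: f64 = 0.13; // $/kWh, assumed
    let datacenter_discount: f64 = 10.0;

    let kwh_per_day = tokens * wh_per_token / 1000.0 / days;
    let continuous_mw = kwh_per_day / 24.0 / 1000.0;

    let residential_per_month = kwh_per_day * 30.0 * residential_rate;
    let datacenter_per_month = residential_per_month / datacenter_discount;

    println!("{:.1} MWh/day, {:.2} MW continuous", kwh_per_day / 1000.0, continuous_mw);
    println!("~${:.0}/month residential, ~${:.0}/month datacenter",
             residential_per_month, datacenter_per_month);
    // Compare that last number to a $100/month subscription.
}
```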
Does anyone not see how insane this is? The difference in electrical costs between Claude and GPT-5 might really be more than 150x. But it probably isn’t. Claude Code isn’t profitable.
Viberank claims that the total usage of everyone combined is worth $800k (notably more than the $26.5k they make if every single one of those developers has the highest plan). This is a debatable statistic because it’s calculated in terms of the cost of tokens, which is subsidized by venture capital. When VC money stops rolling in, token prices will go up. Probably by quite a lot. Assuming Claude 4 Opus is priced at something like the fair rate (which is not a safe assumption – it’s still likely a lowball) the real viberank numbers should be on the order of $35m.
This is a big part of why I don’t like AI. As Time reported, AI use may be making us dumber – pushing for incremental boosts in productivity at the expense of my long term ability to function isn’t my style. It is painfully obvious that this is a bubble, and I don’t want to shift my entire skillset over to AI tools and then lose the ability to code when the bubble pops.
Wait, these improvements aren’t incremental!
Well then.
I’ve heard from many sources, including some I actually respect, that AI is a massive productivity boost to people who use it successfully. Thomas Ptacek, who is not a source I respect, compared it to literally drinking rocket fuel, which I assume to mean “expensive and bad for the stomach” (if you haven’t read Ludic’s counterpoint to Ptacek’s article, go do that). Apparently, per every single fucking LinkedIn poster, AI is such an incredibly huge speed boost to development that all hiring is going to end forever and we’re all going to rot in an LLM-powered utopia. And finally, per some engineers I take seriously, they’ve managed to massively speed up boring tasks: writing boilerplate, cleaning up code, doing code reviews, etc.
Are you all stupid?
If AI were actually speeding this stuff up, why haven’t we seen the boost? Product launches don’t seem faster in 2025 than they were in 2022. Most software doesn’t have more useful feature growth than I’d expect from normal developers (lots of software does seem to have sprouted pointless LLMs, though). LLMs have had a few years of mainstream adoption now. We’ve been around the block a couple times. All the recent releases have been disappointing (note that Lil Sammy himself admitted that GPT-5 kinda sucks), so there probably isn’t some gigantic improvement just around the corner that will give us an actual 10x increase in developer productivity.
While I’m probably not experienced enough to say for sure, I largely agree with the consensus among nonidiots that LLMs don’t speed up programming very much because most of programming is not typing. This is a very neat open-and-shut argument. But wait, there’s more! Ever in the spirit of exploration, I decided to actually try vibe coding myself.
I used GPT-5 initially to do some basic tasks: finding and comparing library alternatives. It did a pretty good job of this! I needed to swap out bevy-rapier2d in an online game I’m making because bevy-rapier2d sucks (by no fault of rapier2d’s own, to be clear; rapier2d was never designed to work in an ECS environment), and it successfully convinced me to use avian2d. It didn’t warn me of any of the caveats of this switch (instability because it’s a comparatively new project, different scaling messing up my kinematics, etc), but that’s understandable. I swapped avian2d in for bevy-rapier2d successfully, and GPT-5 was even pretty helpful there! It hallucinated quite a bit when I needed to fix my dependencies (different parts of the project required different bevy-ecs versions… it was a bad situation), but its suggestions were ultimately helpful. It certainly didn’t consume so much time on hallucinations and generation overhead that I completed the task slower, so I don’t regret it.
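For what it’s worth, the swap itself was mostly mechanical: drop the rapier plugin, add avian’s, and rename a handful of components. Roughly this shape, as a sketch from memory rather than a drop-in diff (check the current avian2d docs before trusting any of the names):

```rust
// Rough shape of the bevy-rapier2d -> avian2d swap (sketch, not a drop-in diff).
// With bevy_rapier2d this was roughly:
//   app.add_plugins(RapierPhysicsPlugin::<NoUserData>::pixels_per_meter(100.0));
//   commands.spawn((RigidBody::Dynamic, Collider::ball(0.5), Velocity::linear(Vec2::X)));
use avian2d::prelude::*;
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins((DefaultPlugins, PhysicsPlugins::default()))
        .add_systems(Startup, spawn_ball)
        .run();
}

fn spawn_ball(mut commands: Commands) {
    // avian renames Collider::ball -> Collider::circle and Velocity -> LinearVelocity,
    // and it has no pixels-per-meter scale, which is where my kinematics broke.
    commands.spawn((
        RigidBody::Dynamic,
        Collider::circle(0.5),
        LinearVelocity(Vec2::new(1.0, 0.0)),
    ));
}
```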
Note that at no time during that refactor did I actually use any code it generated.
Lil’ Tommy Ptacek did snidely note in his article that lots of AI detractors like myself plug our task into ChatGPT, get annoyed by the hallucinations, and decide that all AI is bunk. Being a bit too much of a goof to simply accept that I have become the target of a strawman in an article I didn’t even like, I strapped Gemini 2.0 to my zed (I love zed. it’s so much less annoying than codium) and asked it to do a simple task: finishing the implementation for an API I wrote out in Rust. The end goal was a websocket server with upgrade functionality; this is not a complicated thing to implement. It had thoroughly-commented classes and several nice, simple examples.
The Good Part
It actually worked! After a bunch of re-prompting, Gemini was able to build a Rust websocket server that functioned. It was able to handle HTTP upgrading and send and receive messages. I was actually pretty impressed. Building such a server is not hard – the end result was about 300 lines, and it would have been about two hours’ work for me – but doing it in ten minutes is impressive. I saw some potential.
The Bad
It ignored my goals and wrote terrible code. Rather than implementing a sane buffering interface, like any normal person would, it used fixed buffers that were too small to hold all my browser’s headers. Its validation code was mediocre. In an early reprompt, it included a hard-coded Sec-WebSocket-Accept header: if you’re not aware, this is the kind of incredibly basic mistake one would expect from a newbie who didn’t read the manual, not the machine god end of all human computer programming.
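For the record, the correct value is derived, not hard-coded: you concatenate the client’s Sec-WebSocket-Key with a fixed GUID from RFC 6455, SHA-1 the result, and base64 the digest. It’s about ten lines; here’s a minimal sketch using the sha1 and base64 crates (the crate choice is mine):

```rust
// RFC 6455 handshake: the accept key is derived from the client's key,
// never hard-coded. Minimal sketch using the `sha1` and `base64` crates.
use base64::Engine;
use sha1::{Digest, Sha1};

const WS_GUID: &str = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

fn accept_key(client_key: &str) -> String {
    // SHA-1 over "<Sec-WebSocket-Key><GUID>", then base64 the 20-byte digest.
    let mut hasher = Sha1::new();
    hasher.update(client_key.as_bytes());
    hasher.update(WS_GUID.as_bytes());
    base64::engine::general_purpose::STANDARD.encode(hasher.finalize())
}

fn main() {
    // Example key straight from RFC 6455; should print
    // "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".
    println!("{}", accept_key("dGhlIHNhbXBsZSBub25jZQ=="));
}
```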
I asked it to use poll to support multiple clients. Guess what it did? Yeah, it blocked the thread on accept. And on every read and write. Clients connecting would have to wait until the last one was finished.
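To be clear about what I was actually asking for: a poll-based server keeps the listener and every client socket non-blocking, asks poll which descriptors are readable, and only then calls accept or read, so no single client can stall the rest. A stripped-down sketch of that loop using the libc crate (error handling, client cleanup, and the actual websocket framing all omitted):

```rust
// Minimal poll(2)-driven accept/read loop: no client can stall another.
// Sketch only: error handling, disconnect cleanup, and framing are omitted.
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    listener.set_nonblocking(true)?;
    let mut clients: Vec<TcpStream> = Vec::new();

    loop {
        // One pollfd for the listener, one per connected client.
        let mut fds = vec![libc::pollfd {
            fd: listener.as_raw_fd(),
            events: libc::POLLIN,
            revents: 0,
        }];
        fds.extend(clients.iter().map(|c| libc::pollfd {
            fd: c.as_raw_fd(),
            events: libc::POLLIN,
            revents: 0,
        }));

        // Block here, and only here, until *something* is ready.
        unsafe { libc::poll(fds.as_mut_ptr(), fds.len() as libc::nfds_t, -1); }

        if fds[0].revents & libc::POLLIN != 0 {
            // Listener is readable: accept won't block now.
            if let Ok((stream, _)) = listener.accept() {
                stream.set_nonblocking(true)?;
                clients.push(stream);
            }
        }
        for (i, pfd) in fds.iter().enumerate().skip(1) {
            if pfd.revents & libc::POLLIN != 0 {
                // This client has data; a read here won't block either.
                let mut buf = [0u8; 4096];
                let _ = clients[i - 1].read(&mut buf);
                // ...websocket frame parsing would go here...
            }
        }
    }
}
```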
The obvious retort is that you can’t expect the ai to get perfect results like that, but… I kinda can? I can use Linux system calls properly. If the AI is going to replace me, shouldn’t it? If the AI is going to even avoid massively slowing me down, shouldn’t it at least know to ask me nicely before writing code I don’t want?
Maybe I’d get better results from Claude Code. I certainly didn’t get them from Gemini 2.5 Pro, which hung forever, or GPT-5 reasoning, which inexplicably wrote 700 lines of badly organized and error-filled Rust and tried to implement SHA-1 hashing on its own. I am disappointed. If the whole point of this is to make me faster, why is the AI producing worse results that will certainly take me longer to fix than if I’d just built it myself?
I’m not going to deny that it’s impressive that we have a computer that can generate 700 lines of mostly correct Rust, including some fairly complicated software legerdemain. That’s something that nobody thought would exist five years ago. It really is very cool. But this is not cool enough. For it to be cool enough to justify the hype, and the throat-cramming, and the layoffs, the code would have to actually work. Incremental improvements.
Also, code reviews? Seriously? Are you a complete moron? You want an artificial “intelligence” that tried and failed at base64 encoding, to find incredibly subtle bugs and quality issues in your complicated embedded programs? I really hope they weren’t serious about that.
This is yet another reason why I don’t like AI. Any task complex enough for it to be useful is too complex for it to do, and it can’t tell you it’s confused.
Let’s Talk About The Environment
There are some tactics to make AI work pretty well! Someone in a group I take seriously sent this little gem, where Claude was put in a REPL and performed admirably. Everyone I know who’s seen that is skeptical, but I’m less so! It seems like this is something AI excels at. Python and TypeScript+React are both incredibly common in AI datasets, and translating code between them is the kind of simple, boring, hard-to-fuck-up task that no developer actually wants to do. The agents weren’t perfect, but they actually did pretty well, and human developers only had to intervene a little. I would totally put this under the “actual 10x productivity” umbrella.
But what of the cost…
This little query loop cost a whopping $800 to run overnight. That’s at subsidized prices. We’ve already talked about how the token prices are mostly subsidized; let’s ignore that point for now, and consider the environmental impact.
Per this MIT article, AI datacenter construction is likely to add several hundred terawatt-hours of annual demand to worldwide datacenter usage. In other words: datacenters, which will be about 40% AI by power draw in 2026, are going to use more energy than France. And Saudi Arabia. Combined. They will probably overtake Japan. Is this not monumentally stupid? We’re at the point where datacenter usage, a huge amount of which is for AI, is so significant that we have to compare it to entire countries.
And most of this power is nonrenewable.
Personally I would rather not live in a flaming hellscape for the rest of my life. Humans as a species need to start thinking about where our precious, limited energy resources should go: we can’t scale forever. This is probably better suited for another post; regardless, I don’t think “an agent that can fuck up Rust code” is high enough on our needs list to justify this level of consumption.
The hard part is that there are no controls: nobody’s forcing datacenters to report their carbon footprint, let alone do anything to reduce it. This is yet another reason I don’t like AI. The VC-funded “move fast and break things” ideal has real consequences that hurt real people, and yet somehow we’ve all collectively decided that it’s more important to have a ChatGPT that can find us recipes online, or write really awful code.
Developers Aren’t The Target
Or, have some fucking compassion.
We developers are not going to be the ones harmed the most by AI. As a field, we’re pretty much used to gigantic paradigm shifts happening every couple years: consider how quickly we switched from C++ to Java to Python to JavaScript for backend tasks (the right answer is, of course, Rust, or at least Go, but I digress). Consider how quickly we went from “rent hardware” to “rent a VM” to “host a docker container”. Consider how quickly we went from “trust the programmer” to “let the type system do the work”. These are monumental changes, of the sort that literally no other field has ever experienced. Computer programmers have become desensitized to the idea that technology will be completely different next year, and so we’ve gotten really good at handling paradigm shifts. In the end, it doesn’t matter that much if AI turns out to be actually good: we’ll adapt insanely quickly.
But what about the other people?
Art, for instance, has remained comparatively unaffected by technology. Digital art is fast and convenient, but not so much so that traditional art has been replaced. You could show Leonardo da Vinci your iPad with Procreate installed, and he would figure it out pretty quickly. But now we have a tool that actually does change things: artists can now simply ask the AI to generate an image, and it does a pretty good job. As much as I hate AI art, I’ve got to admit: a skilled user can make some stunning pieces. How would da Vinci feel if you told him that there’s no brush: there’s just a little keyboard and a submit button? Mamma mia!
I’m not an artist by any means, but I do have some basic understanding from running a webcomic (no, you don’t get a link) and following some of Drawabox. Many of my friends are traditional artists, and I’ve seen them at work. At a high level it appears to be a fairly simple, portable process: pick a color, pick a tool, make a stroke, repeat. Doing it right is about having lots of practice and being very, very creative. A competent artist can context switch to a different medium fairly quickly; many modern artists use tools like Krita alongside their traditional art, running clever hybrid workflows that produce masterpieces. Ultimately, though, art isn’t defined by its medium. A competent artist can work in MS Paint, or colored pencils, or chalk, just as easily as they can work in oil on canvas. My point is that the process and the skills are mostly agnostic of the tools. But what about now? Artists can type in what they want and it… magically appears? And you don’t even have to be an artist! You don’t even need special hardware! I’m a pretty boring left-brain-ass software guy, but my unartistic personality and a really old Radeon GPU were able to generate this in about 20 seconds:

There is no way in hell I will ever be able to paint anything like that organically. How are human artists supposed to compete with this? Their industry is not used to upheaval, and now it is being upheaved.
Some absolutely sick people think this is good! “Democratization of creativity” and suchlike. These people are assholes. The image gen models were built by training off human work without paying for it – which is known as theft. The companies stole from artists, and now are trying to eliminate them. When you’ve devolved to the point of making a fair-use argument about literally ruining people’s lives, you’ve already lost the debate.
Writing is less fucked. One of the major skills of human authors is being able to switch style on a whim: some of the greatest authors in human history (think Mark Twain or Ernest Hemingway) made style a major part of their practice! And fortunately, AI models are terrible at this. They can work in one, maybe two styles at once: they aren’t good enough at it to sound remotely human. Do you really think ChatGPT can manage to be a sarcastic narrator and a cockneyed Mississippi boy at the same time? No. No it cannot. If you prompt it to, it will fail, and it will fail spectacularly.
The unfortunate thing is that writing is still definitely fucked. Good writing is something I appreciate, and something most of the people I choose to interact with appreciate, but unfortunately the majority of people really don’t think that way. To have a successful career writing, you have to write things that people like enough to keep paying for it, and we don’t live in a society where people care enough about good writing to bother. AI can fool plenty of people (shugGUP you nerds), and “plenty” is enough. Authors like Quan Millz manage to use stylistic parody so excessive that it probably couldn’t have been AI generated, but it’s like putting on way too much makeup: for anyone who has the ability to notice, it ruins the effect. Welcome to the age of enshittification.
Perhaps the people who should be the most scared, and who probably are, given how obviously they’re projecting on LinkedIn and elsewhere, are unskilled middle managers. Not all management is pointless – I’ve had firsthand experience with managers who know the craft well enough to actually do useful work – but plenty of it is, and how exactly are those managers going to survive AI? AI is writing their emails, AI is making their presentations, AI is better than them at writing about Agile/Scrum (sidenote: despite my tone, I actually like agile! Working with other engineers in an agile way is incredibly rewarding. Problems are introduced when nonengineers try to make engineers more agile without regard to the actual project needs). The grifty middle-management paper pushers that smarter tech bloggers than me complain about can be cheaply automated.
Yet another reason why I don’t like AI. Developers are gonna be fine, but developers need to have some fucking compassion for all the people that won’t. Maybe it’s easier to promote the tools when you don’t directly know anyone who’s being replaced with them, but the sort of nerds who act this selfishly really make the rest of us look bad. Everyone who has ever glossed over the harms to fields they aren’t in is part of the problem.
Self-Respect
I’m going to commit some of the hubris that I routinely mock other developers for. I don’t believe the tools can write better code than me. There, I said it, though the gods of humble engineering strike me down yet I shall never relinquish my self-respect. It’s quite possible that I’m much worse at computer programming than I think I am, and that the tools actually are better than me, but you know what? Fuck that, I’m not listening.
It’s pretty well known that developers have imposter syndrome. It is, in fact, considered universal. I would argue that this is because we have no baseline: nobody really knows what good code is until they write it. If it works even as complexity increases, great, it’s good code! A Philosophy of Software Design (freely available at that link, worth purchasing to support good writing) is a great book that you should totally read, but even that cannot teach you what good code is; it just gives you some ways to tell what it isn’t and some guardrails to maximize your chances. It might even be wrong! Who knows! This sort of confusion makes it hard for developers to be confident in our abilities. It feels pretty wrong to say “I am good at code”, or even “I am better at code than ChatGPT”, when you’ve no idea what it means to be good at code. Even writing this is tweaking my don’t-be-too-cavalier reflex.
AI hype promotes “I am not good enough” thinking in developers. We’re being constantly told that we’re so bad at code that a predictive text model can do it ten times faster and better. Smart bloggers we read are saying this. Industry leaders are saying this. Every social media app is full of this crap. And so we start to believe it, and when we try AI and it’s not that good… surely we’re simply not using it right! We need to get a newer, bigger model, or try a new prompt, or install a fancier agent, or attend some grift conference, or switch our project language to something the AI understands better. The problem is never allowed to be the model – the problem is that the developers simply aren’t competent enough at developing.
It’s quite pernicious. I imagine many of the “success stories” are cases of developers being fooled by the marketing and lying to themselves that the AI agent has made their work better, because they must be wrong: they can’t even let themselves notice that the AI isn’t making their work faster, despite the research, because everyone else says AI is going to make their work faster. It’s classic social conformity, a sort of mass gaslighting; it becomes easy to doubt your own experience when everyone else disagrees with you as loudly as possible.
We need to take pride in our individuality, and have at least a little self-respect. We developers are the only people who can actually make informed statements about AI, because we’re the only people who understand it. Marketing executives don’t understand it. Your uncle Jeff who installed new RAM in his computer one time doesn’t understand it. Trust your own analysis, and don’t let the herd drag you along.
I think this reason for hating AI needs little summarization, but here goes anyways. When a tool requires people to have a poor opinion of themselves and doubt their own experience, that tool is bad.
I’m Not Angry Enough
I sound pretty angry. In fact, I am pretty angry. I don’t like what the AI industry is doing to the planet, to the mental health of those I respect, and to the human race at large. I really don’t like it.
And yet, compared to lots of anti-AI guys? I’m really not angry at all.
I don’t want state regulation to kill AI. I don’t want mass boycotts. I don’t want research funding killed and GPUs destroyed. Because, after everything, I still see some potential.
Earlier I mentioned that I’ve used GPT-5 for minor code tasks, like doing structural analysis of libraries: I intend to keep using AI for this where information is scarce, because it’s actually good at it. Searching the internet and performing cost/benefit analyses is one of the things it does incredibly well. I would like to see AI improved in this direction: rather than blindly generating code, using the available information to make serious decisions about structure.
I like AI research. It’s brought us some awesome things that aren’t LLMs, and I don’t think the field should be constrained. Protein folding is a great example of compute actually doing good: Folding@home back in the day, AlphaFold-style structure prediction now. Most drug research was using AI long before ChatGPT, and architectural advances can get us better vaccines sooner. Image recognition can be wasted on translating my shitty French handwriting or used to violate our privacy, but it can also be used to drive an adaptable robot through a disaster area to rescue earthquake victims without putting first responders at risk, or to detect potential hurricanes faster than a human meteorologist.
I also don’t think trying to kill technology through brute force is ever a good idea. Pandora’s box has been opened, and it doesn’t matter how tightly we try to shove the lid back on – nobody’s going to get rid of my downloaded Gemma and Stable Diffusion finetunes. You can’t kill datacenters with regulation, because there is always more VC money: if you go far enough that it could stop this industry, you seriously endanger the civil rights of everyone not in the AI industry. I don’t like the idea of a government that can arbitrarily take ownership of a datacenter and dissolve the company that built it. I especially don’t like the idea of a government that can force me to decrypt my drives and surrender write access to a glowie.
Boycotting AI will not put incentives on companies to make models that don’t damage the environment, it’ll incentivize them to sell to governments and companies outside of the boycott, which hurts everyone involved. Call me laissez-faire, but I really don’t want to be in a bubble looking outwards at all the cool stuff people are doing with improved AI.
So what should we do?
The most obvious thing is to start using open models. Open models are usually just as good as the big ones, and better for some tasks, and they’re way better for the environment. If you have a modern GPU, you can run Qwen or some other open model that can do tasks for you. I am not going to stop you from using AI to code, but I strongly recommend you don’t; regardless, there are lots of cool things you can do with AI that don’t hurt anyone. I have a few agents running small open models locally (or on a Groq free plan… but that needs to change soon) that roleplay as interesting people: for instance, the Carl Rogers bot in Purpose 42. These are nice, legitimate uses! People enjoy talking to Carl and co, and they aren’t being abused by an evil corporation.
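If “run a local model” sounds abstract: most local runners (llama.cpp’s server, Ollama, and friends) expose an OpenAI-compatible HTTP endpoint, so talking to your own GPU is just a POST request. A minimal sketch with reqwest and serde_json, assuming something is already serving an OpenAI-style /v1/chat/completions on localhost; the port and model name below are placeholders for whatever your setup actually uses:

```rust
// Talk to a locally hosted open model over an OpenAI-compatible endpoint.
// Assumes a local server (llama.cpp's server, Ollama, etc.) is already running;
// adjust the URL and model name to whatever you actually serve.
// (reqwest with the "blocking" and "json" features, plus serde_json.)
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let body = json!({
        "model": "qwen2.5:7b", // whatever model your local server loaded
        "messages": [
            { "role": "user", "content": "Summarize RFC 6455 in two sentences." }
        ]
    });

    let resp: serde_json::Value = client
        .post("http://localhost:11434/v1/chat/completions") // Ollama's default port
        .json(&body)
        .send()?
        .json()?;

    // The reply text lives at choices[0].message.content in the OpenAI schema.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```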
We also need to make theft illegal again. AI companies steal people’s work with impunity, paying nary a dime in licensing and not even bothering to give credit. This is plagiarism, pure and simple, but the kind that actually hurts people’s livelihoods, not the kind that gets you a bad grade on a paper. This kind of legislation should be super easy to pass, but we live in the absolute worst timeline: fighting the AI lobby will be hard, but the real final boss is the fucking morons in the Republican party who will do literally anything to make Elon Musk like them again. It won’t be simple, but it is necessary. We as a society need to have at least some respect for people who create things.
The next thing is to stop paying for bad products. The AI bubble needs to pop. We need to get it over with before Trump and his goons kill our economy to the point that AI is the only thing propping it up (we got dangerously close to this in Q1 and Q2 2025 – some estimates suggest that the economy would have actually shrunk without AI investment). If we stop paying them for bad products, they’ll have to either dissolve, or make better products. I would totally pay an external service that hosts whatever model I want on some transparent pricing model (perhaps something like https://vast.ai/), assuming they can give me a guarantee of environmental impact and data privacy.
Finally, stop lying. Lots of people are saying that modern AI is something that it is not – a completely ethical replacement for artists and writers, a way to make developers 10 times more productive, the harbinger of an AGI future – and it is distracting from what AI is: a general solution to the hard problem of natural language. There are so many cool things we could be doing if we stopped bullshitting. Some people are making strides in good directions – see Microsoft’s 1-bit quantization work aimed at embedded CPU inference, Google’s 270-million-parameter Gemma, or the RWKV architecture. These are stepping stones on the path to something that people actually want: efficient, environmentally friendly, local inference. Even Sam Altman, a dunce large enough to make antivaxxers think he’s maybe a little strange, recognizes the value in embedded AI, even if everything about his plan is terrible.
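To make the 1-bit stuff concrete: BitNet-style quantization stores each weight as one of {-1, 0, +1} plus a single per-tensor scale, so the inner loop of a matrix-vector product needs no multiplications at all, which is exactly what a cheap CPU in your pocket wants. Here’s a toy sketch of the idea; it illustrates the trick and is nothing like Microsoft’s actual kernels:

```rust
// Toy BitNet-style ternary quantization: weights in {-1, 0, +1} plus one
// per-tensor scale, so the matvec inner loop is adds/subtracts only.
// Illustration only; real kernels pack weights into bits and vectorize.

fn quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    // Scale = mean absolute value; round each weight to -1, 0, or +1.
    let scale = weights.iter().map(|w| w.abs()).sum::<f32>() / weights.len() as f32;
    let q = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-1.0, 1.0) as i8)
        .collect();
    (q, scale)
}

fn matvec(q: &[i8], scale: f32, x: &[f32], rows: usize, cols: usize) -> Vec<f32> {
    (0..rows)
        .map(|r| {
            let mut acc = 0.0f32;
            for c in 0..cols {
                match q[r * cols + c] {
                    1 => acc += x[c],  // no multiply, just an add
                    -1 => acc -= x[c], // ...or a subtract
                    _ => {}            // zero weights are skipped entirely
                }
            }
            acc * scale // one multiply per output element, not per weight
        })
        .collect()
}

fn main() {
    let w = [0.9f32, -0.1, -1.2, 0.4, 0.0, 0.7]; // 2x3 weight matrix
    let (q, scale) = quantize(&w);
    let y = matvec(&q, scale, &[1.0, 2.0, 3.0], 2, 3);
    println!("{q:?} scale={scale:.3} y={y:?}");
}
```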
There is a beautiful, stable future that we can have with AI. Legislation needs to be passed to make theft illegal, people need to stop lying to themselves and others, and the bubble needs to be popped as soon as possible. But once that happens? Anything. Low-attention or attentionless architectures quantized to 1 bit per parameter running on ARM (or hopefully RISC-V, but that’s a different topic) CPUs in your pocket. An actual drop in inference costs. A world where the AI are defined by our wants and needs, rather than our wants and needs being defined by AI. A world where datacenter deconstruction floods the market with cheap high-end GPUs that we can put in our homelabs.
I don’t hate AI
I don’t. I really don’t.
It’s easy to fall for the trap. If you’re not for us, you’re against us! But the reality of this is, of course, messy. I mostly don’t use AI to code because it’s icky and I have some self-respect. But if you’ve decided that your AI workflow is a good thing that really is improving your code? I shan’t question your superior understanding of your own workflow. Just make sure to be ethical about it.
It’s easy to get stuck in a mindset of “everything with AI on it is bad”. It’s also easy to drink the kool-aid (side note: when people say this, they’re referencing the Jonestown Massacre; executives on LinkedIn thinking it’s a good thing will never stop being funny) and turn into a super ultra 10x developer. Neither of these is a healthy way to interact with AI. There needs to be a degree of balance. I’ve used AI for a lot of things, albeit not as constantly as some, and I’ve paid attention: it has limitations, and it has applications, and it is not a binary situation. If the word “ai” weren’t so heavily marketed, we’d all just be super excited about the general solution to the NLP problem that we can run on embedded devices. It really is cool!
Most of the developers I know have a pretty healthy relationship with AI. I don’t vibe code, but some people I take seriously do, and they get pretty good results. That said, my lil group seems to be strongly in the minority, with most people being either absolutely terrified that AI is going to make them worthless, or guzzling amphetamines like they’re water and 10xing the shit out of their workflow, or haughtily blogging about how everyone who ever touches AI is a stupid whore. Just… just breathe. Don’t pretend that it’s something it isn’t. Be reasonable, and be compassionate. Can we ask for anything else?
Four hours and 5300 words later (apparently I blog at about 0.46 tokens per second), I’ve said what I wanted to say, and discrete math beckons. Maybe I’ll write something about that over on https://deadlyboringmath.us/. See ya soon, and keep coding!