So why do I feel so angry about this whole AI thing?
Some notes on the bullshit maximiser.
What follows originally went to paying subscribers in November. If you’d like to become one of them but can’t currently afford it, just hit reply and ask: I will give you a freebie, no questions asked. If you can afford it, though, I would currently be – for reasons I shan’t bore you with again – exceptionally grateful:
We begin, as we almost certainly have before, with a quote from Douglas Adams. This one’s about age and technology:
1) Everything that’s already in the world when you’re born is just normal;
2) Anything that gets invented between then and before you turn 30 is incredibly exciting and creative and with any luck you can make a career out of it;
3) Anything that gets invented after you’re 30 is against the natural order of things and the beginning of the end of civilisation as we know it
That comes from a 1999 essay for the Sunday Times News Review, in which Adams railed against the way the BBC’s John Humphrys pronounced internet addresses like some strange, incomprehensible alien language – a fact which makes it feel just wrong somehow that Humphrys would continue to be bothering the listeners of the Today programme for another 20 years while Adams would be dead inside two. If it’s ringing any bells, that’s probably because it’s been widely reproduced all over the internet – though often, strangely, with slightly different wording which pushes the cut-off to 35, in a manner that makes it look hilariously like some tech-savvy 33-year-old somewhere felt miffed.
I digress. The reason I start with this is because I want to note that, wherever it is you put the boundary, I am at this stage quite a long way beyond it. It thus feels important to acknowledge that it is at least possible that my instinctive distrust of AI and all its works is an age thing. I’ve always been quite online for my cohort – I started early, nigh on 30 years ago, and have worked primarily on the internet for over a decade – so have rarely indulged in the regular moral panics about new tech. If this one feels different to me, then, I should at least glance at the possibility that it’s less about the technology itself than it is about my own inexorable journey towards the grave.
But I don’t actually believe that for one second, because here are all the real and important reasons that AI makes my teeth itch.
It’s shit. Look, the fact it can’t count fingers is amusing, but it doesn’t seriously matter. The fact it means hallucinations are popping up in previously broadly reliable technologies like search engines really, really does. On a quite fundamental level, AI does not, at present, work.
To be fair, lots of things don’t work at first, and technology tends to get better over time. There’s compelling evidence, in fact, that replacing functional but old-fashioned things with revolutionary ones that aren’t quite ready to ship but are improving fast is one of the main sources of productivity growth – a thing I learned from Cesar Hidalgo’s fascinating book The Infinite Alphabet and the Laws of Knowledge. So the fact AI doesn’t work very well would matter less, were it not for the next two problems.
It’s being forced on us anyway. It’s on Google, even though it is making search worse. It’s on WhatsApp, even though I absolutely do not want “homework help” or “relationship advice” from my messaging app. It’s in seemingly every new device on the market today; the chatbots that stand between you and the company you’re trying to complain to; the booths fast food companies now make you use to order your food. In the year of our lord 2026, it is everywhere.
And it doesn’t work.
Worse:
It’s polluting the information environment. Okay, you probably get by now that AI doesn’t actually know anything: it is just, as tech writer Kate Bevan once explained to me, “spicy autocomplete”. That’s why it hallucinates – “Hey, this seems a plausible next character string! I’ll go with that.”
Fine. The problem is that, firstly, not everyone does get that, which is having all sorts of horrible consequences. Secondly, even those of us who do know it find it hard to remember to chase down every reference and check what the source actually says, rather than simply accepting the information we’re given, because we just wanted to do a quick search for “will this dried fruit kill my dog”1 and that search engine probably knows what it’s talking about, right?
All that means that AI hallucinations are inevitably going to find their way into the dataset, like lead into the water supply. They’re more likely to end up written down somewhere by some hard-pressed content monkey on what looks like a credible source – so that, eventually, even when you do chase down the reference, it might have started out as something made up by a computer because it looked a bit like it could be the answer to the question. Great stuff.
While we’re on this one:
It’s destroying the media. Because it is making people even less likely to click through than they were before. Journalists report and write; tech bros get the payday. Woo and hoo.
This one’s quite tiny violin, though, and also what else is new, so moving swiftly on:
It’s destroying people’s sanity. People are falling in love with AI chatbots. People are destroying marriages for AI chatbots. People may even be killing themselves because of AI chatbots.
If a drug or a musician or a video game franchise was doing these things, the British press would be rampant with hysteria. And yet apparently this time it’s just… happening? And people think it’s fine? And the government – which that press normally needs absolutely no excuse to attack over every little thing – is going around saying this thing is the key to future productivity growth, and everyone’s apparently okay with that?
Never mind the people with the AI chatbots, I feel like *I’m* losing my mind here.
It’s destroying the actual environment. One of the reasons I’m writing this now is because I recently explored the environmental costs in a New World column. I learned all sorts of horrifying things from that: the data centres on which AI relies are now responsible for 1.5% of global electricity use, and 1% of global carbon emissions already; one estimate suggests a single one can consume as much water as a town of 50,000; and so on.2
But perhaps the most horrifying thing I learned is how much more energy intensive AI is than what it’s replacing. A simple factual enquiry burns 23 times as many resources when run through AI as through search. That’s bad enough already – but ask a question where it has to think about the answer from first principles (“How many raisins can you fit in a swimming pool?”, rather than “What is France?”), and that goes up by an order of magnitude. Three seconds of generative AI video takes 15,000 times more energy than a Google search. That one made me feel sick! It’s so obviously not worth it!
And yet it keeps happening, because…
The shoutiest parts of the tech industry keep directing AI towards creative tasks it sucks at, rather than practical ones it can actually do. AI will genuinely do all sorts of clever things that require organising or finding patterns in large data sets – spotting cancers and so forth. I believe this.
But a lot of the ads for it suggest we can use it to make art instead, presumably to leave all the world’s creatives more time to toil away sifting large data sets. I get, from a sales perspective, why they’re doing this – art is just cooler. But the way a lot of online tech bros talk about it suggests that another motive here might be a genuine resentment of the fact they can’t paint or tell jokes, and other people can, and that’s just incredibly icky.
I’m not saying that’s why this happened...
It’s built on the back of work produced by millions of people, including myself, and we probably aren’t gonna see a penny from it. But I’m just not saying that it isn’t.
It might well f*ck up the economy with no thought about how to deal with it. I don’t think it’ll destroy every job in existence: that’s overdone, and anyway, while previous industrial revolutions wiped out jobs, they also created better ones. It might not be fun to live through, but this is how we make progress. Fine.

But it is starting to look like AI will probably destroy a lot of the entry-level jobs that involve donkey work (paralegal roles, the rubbish online journalism ones where you churn out stories) but in which you get to learn how to do the bigger stuff. In the creative sectors, it will take out the gigs people do for money (copywriting, advertising jingles), to give them space to focus on the work they’re actually passionate about.
It feels to me like this might possibly have some consequences some way down the line. Perhaps we should be thinking about that!
Because all that is happening at a point when AI is, as mentioned, shit, it is making everything worse. It’s bad enough that we have AI-generated slop all over our social media feeds. But we’re starting to see AI-generated news stories with hallucinations in them, too, and that’s an order of magnitude worse.
You’d better hope the law firms with AI paralegals have thought this through before you ask one to do your conveyancing.
No really, it might well f*ck up the economy with no plan for how to deal with it. There are, anecdotally, already a fair few cases of employers moving to reduce headcount because AI can pick up the slack – even though, in many cases, it can’t. Some of this is employers blaming cuts they’d have made anyway on the tech revolution du jour, sure – but where jobs really are being replaced by robots, it’s bad for those who lose their jobs, those who don’t, and the shareholders dependent on actually existing profits, all at once. Great work, team!
It’ll make us lazy. Another terrifying stat I got from Hidalgo’s Infinite Alphabet is quite how quickly institutions and nations forget how to do things if they don’t keep doing them. Knowledge atrophies at a rate of around 3-6% a month, with the result that “a shipyard that stopped producing ships would lose half of its knowledge in about a year”. (This explains a fair bit about Britain’s economy and infrastructure.)
I don’t know how to quantify the implications for what will happen if we keep outsourcing more and more stuff to AI – but I don’t think they’re very good.
I can’t stress this enough: it might well f*ck up the economy, and there has been very little serious thought about how to deal with that. Last November, the chief exec of Google gave an interview in which he said that the bursting of the AI bubble could have consequences for the entire economy. It’s hard not to read this as “if we’re going down we’re taking everybody with us”, in roughly the manner of the banking sector back in 2008.
At least with that global financial crisis, though, we’d first had a long boom and a bunch of people had managed to buy houses. What the hell have any of us got out of this one? Also could we perhaps go half a decade without a once-in-a-century level financial crisis? Please?
There are basically no guardrails. One of the most famous thought experiments ever thought, Nick Bostrom’s Paperclip Maximiser, is about the dangers of runaway AI. So is half the science fiction ever produced. The AI we are getting is not the Terminator – it is in many cases just a rebadging of stuff that already existed, and even where it’s new, it is not the holy grail of “artificial general intelligence” (AGI). We don’t need to worry about the rights or desires of robots or the idea they might turn on their masters.
Nonetheless, given the implications of all this – for the economy, environment, society – you’d think that the people involved in making this happen might be giving more attention to making sure that they’re not about to accidentally wipe out civilisation as we know it. And yet, at no point has anyone involved apparently said, “Hey, should we maybe slow down, to make sure we’ve built those guardrails?” The constant competition for VC funding and headlines means, if anything, they’re doing the opposite – that’s why things keep appearing on our devices against our will, even though they don’t work. Even as these tech guys are giving excitable speeches about how this thing they’re building might wipe out humanity (it won’t), they are enthusiastically continuing to build it! It’s as if the Manhattan Project was staffed entirely by Edward Teller!
And so:
I just don’t trust big tech. Look, we know by now who these people are: they’re weird Nazis who have all the money and power in the world, yet remain visibly upset that they didn’t get to sit with the cool kids in high school. Not all of them, sure; but enough of them for it to be a problem how much control they have over our lives. So no, I am not minded to trust what comes from these people.
Even if I were, though:
It’s just really annoying.
Nothing I have said is going to top this post and I know it. So I’m going to stop.
There’s slightly more to that Adams quote I started with, which doesn’t tend to feature in the more viral versions and which I, monster that I am, decided to cut too. Here’s the full version of point three as presented in the Sunday Times:
“Anything that gets invented after you’re 30 is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about 10 years when it gradually turns out to be alright really.”
That is surely true. We will likely reach a point at which we’ve worked out what AI can do well and what it can’t, and everyone below a certain age will have grown up with it as previous generations did the internet or TV or radio or newspapers, and anyone older still holding out will look as silly as Regency-era critics panicking about the moral consequences of the novel (or, come to that, as silly as John Humphrys did talking about the internet in 1999). That almost certainly is going to happen.
What worries me, though, is what we’re going to destroy on the way. In her brilliant book Don’t Burn Anyone at the Stake Today, Naomi Alderman explores the idea of the information crisis – the way changes in the way we transmit and consume information have a tendency to overthrow existing social orders. In the long term, that can be good – the printing press gave us the Enlightenment.
In the short term, though, it gave us the Reformation, which led in turn to a century of ruinous religious conflict, including the single most destructive war in history up to that point. It is not to come out in favour of the eternal authority of the Catholic Church to suggest there might also have been some downsides to that.
Perhaps I am just getting old here, yes. But really: I’d be a lot more sanguine about the risk that this might give us the 21st century’s answer to the Thirty Years War if it were a bit clearer where the new Enlightenment might be coming from.
If you enjoyed that and would like to read more things by me but can’t afford a subscription, just hit reply and ask: I will say yes, no questions asked. If you can afford one, though, then please, my pipes, they are very bad:
Also you may wish to pre-order my new book, 35 Inventions That Built Our World: The Making of Modern Life.3 It’s out in August, pre-orders really help in terms of visibility and later sales, and go on, it’ll be very motivating over the next month as I push to finish the thing.
Fun fact – probably!
I do know that’s not what the Waterstones page says: you would not believe how many times a title can change in the space of five days.