The Sky Is Falling In!
How to survive the coming AI-pocalypse
The Eve Of Destruction
The AI frenzy continues to build. The latest flurry was boosted by a post by Matt Shumer that went viral, titled, with typical understatement, “Something Big Is Happening”. I’m not going to link to it because it’s alarmist nonsense and it’s also hard to read because he used an AI to write it.
“Who’s Matt Shumer?”, I hear you ask. Well, he’s not exactly a disinterested observer. He runs an AI start-up and invests in AI. He is what you might call an ‘AI Booster’.
And his piece says that AI is coming for your job (yes, you, at the back. Your job!). But you’re not helpless: all you need to do is buy all the best AI models now and start learning 24x7 how to use them.
Is that alarmist? Let’s see: he starts his piece with an analogy. And that analogy is… COVID. So, yeah, I’d say that was fucking alarmist, likening AI to the biggest mass psychological terror of the past several decades. Oh, and he has zero evidence to back up his many assertions, just his personal experience and other people’s baseless opinions.
Why does this matter? For lots of reasons, most of which I’ll cover in a future post, but here I want to talk about the misrepresentation of AI and the impact it is having, and might yet have.
We are being fed a story that AI is inevitable and it’s coming for us, so resistance is impossible and we just have to adapt (though what they really mean is submit). And this story is not true.
Much of the repetition of this story is not intentionally malicious (unlike Shumer’s piece) but a false interpretation of what’s happening, bent to fit an erroneous, but heavily promoted, narrative.
I offer up as an example a piece by Charter on their recent ‘Leading with AI’ Workplace Summit. As they themselves say:
“The headlines will focus on Sebastian’s (Siemiatkowski, CEO of Klarna) announcement that Klarna has reduced headcount 50% through attrition while growing revenue per employee from $300,000 to $1.3 million and increasing average compensation by 60%. But the real story is how he’s reversed his strategy twice in two years, and what that evolution reveals about where AI transformation is actually heading.”
So the summit was all about ‘AI’s coming for your jobs’, you might assume. Because that’s the narrative that is being reinforced here.
But you’d be wrong. And actually, I don’t agree with them about what the real story is, because to me the real story is that companies are redesigning work so that they can leverage the benefits of AI.
Microsoft, IBM, Dropbox and Thomson Reuters all had sensible perspectives on how to use AI. It’s not about adoption or individual productivity; it’s about outputs.
In many ways, the points they made are reassuring. They can see that AI is not a silver bullet to replace jobs but a strategic opportunity to redesign the way their organisations function and the way the work gets done.
IBM are getting entry-level hires to design their own work so that it leverages the AI tools. As one leader put it:
“When you involve employees, you get the job redesign right because they’re the ones who really know it.”
OK, it’s a statement of the perfectly bleeding obvious but the fact that they said it is a massive step forward.
But, as Charter point out, the headline is going to be the guy who replaced 600 jobs with AI, because that fits the narrative.
Of course some jobs are going to disappear. AI is a technology of automation and that’s what automation does. But the real story is that jobs are going to change, and quite possibly for the better. For the employees, and for the business.
And if AI is the catalyst for that change, then some good might come from what is lining up to be the mother and father of an economic mess.
Say It Ain’t So
Perhaps what we should be worrying about is what AI is doing to the way that we communicate.
We’ve seen massive changes over the past several years already. At the start of my career, not only did we communicate by written letters and memos, but some were still using the stilted language of ‘business correspondence’, i.e.:
“Dear Sirs,
further to my letter of the 14th inst., and notwithstanding prior agreements, we wish to communicate that we are agreeable to proceeding as laid out in the appendices attached herein. We would appreciate your written confirmation by return.
Your inestimable servant,
Mr. Quentin Ponsoby-Smythe, MA, DSO and bar.”
Today, that would probably be a WhatsApp saying “It’s a GO!🚀”
Communication has become more informal, more frequent, faster, shorter and just, well, more! The gap between business and personal communication has closed and, for some, disappeared (see Pete Hegseth’s Signal messages about Yemen, or the emails between Epstein and Peter Mandelson).
However, in all this, there has still been a space for longer-form communication. Indeed, it has seen something of a resurgence through platforms like this one and even through the growing number of books being published. We see business ‘leaders’ using these platforms to share their views on what is happening and to seek to shape the conversation. The Shumer piece I refer to above is an example of this, and of the incredible reach available when something goes viral.
But the tide of AI slop is about to wash over this. I don’t just mean the obvious rubbish being produced and shoved out over social media, such as Donald Trump dumping crap all over his countrymen from a fighter jet. Again, the Shumer piece is an example of what I am talking about: it is clearly written with AI and is terrible to read.
Using AI to write will make writing more mediocre, more average and just worse. This is what The Register calls ‘semantic ablation’ in its opinion piece ‘Why AI writing is so generic, boring, and dangerous: Semantic ablation’.
It’s not the catchiest of names, is it? But it’s a real problem. AI is a prediction engine: it looks at the distribution of probable next words and picks one out of the middle of that distribution, deliberately discarding any unusual words or syntax. Over time, it trends to the mediocre, strips out any colour and quirkiness, and stops the development of language. (I’m paraphrasing the article here, so don’t @ me if you think I’m wrong; read the original!)
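To make that concrete, here’s a minimal sketch of one common way the tail gets trimmed: so-called nucleus (or ‘top-p’) sampling. The words, the weights and the `sample_next_word` helper are all invented for illustration; real models choose from vocabularies of tens of thousands of tokens, but the principle is the same.

```python
import random

# A toy next-word distribution, as relative weights out of 100.
# The words and numbers are made up purely for illustration.
next_word_weights = {
    "good": 30,
    "great": 25,
    "important": 20,
    "useful": 15,
    "luminous": 6,     # the colourful choices live out in the tail...
    "decrapified": 4,  # ...where the sampler never looks
}

def sample_next_word(weights, top_p=0.9):
    """Nucleus (top-p) sampling: rank the words by probability, keep the
    smallest set whose cumulative probability reaches top_p, and pick
    only from that set. The unusual tail is thrown away before the dice
    are even rolled."""
    total = sum(weights.values())
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0
    for word, w in ranked:
        kept.append((word, w))
        cumulative += w
        if cumulative >= top_p * total:
            break
    words, ws = zip(*kept)
    return random.choices(words, weights=ws)[0]

# Run this as often as you like: 'luminous' and 'decrapified' never appear.
print(sample_next_word(next_word_weights))
```

Stack enough of those middle-of-the-distribution picks end to end and you get prose that is always plausible and never surprising.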
It’s why we find AI writing so hard to read. It’s the written equivalent of the monotone computer voice: you understand what it’s saying, but you can’t listen to it for long because it’s so lifeless.
And no, it’s not going to get better, for the same reason it’s not going to stop confidently bullshitting (what the industry has euphemistically termed ‘hallucinating’): these are features of the way it is designed.
AI takes away the fundamental qualities of some of the very best and most groundbreaking writing. Like coming up with new analogies. Phrasing sentences in ungrammatical ways. Making up new words, like ‘decrapify’ and ‘enshittification’. Deliberately playing around with sentence structure and grammar to grab attention.
All the things that make writing distinct. That give someone’s writing its voice. AI will never sound truly distinctive. It will never write anything like the iconic advertising slogans, because they are, well, original. They didn’t follow precedents, they didn’t follow the rules; quite often, they deliberately broke them.
What an AI writes is beige, repetitive, and derivative. Because that’s what it’s designed to do.
That’s not what we want to read. And that’s not how human thought and language progresses.
On The Border
Another new phrase (to me, at least) that I heard about AI is that it has a ‘jagged frontier’. What that means is that it is stunningly good at some things and unbelievably bad at others. Yes, it can scan a mass of scientific papers, identify the ones that are relevant to your area of enquiry and summarise the findings in a matter of minutes, but it will also tell you to eat a rock a day for vitamins or use glue to get the cheese to stick to your pizza.
The solution? You have to try using it and find out what it’s good at and what it’s bad at. You have to discover that frontier for yourself through trial and error.
This is like telling someone to find out the layout of a room by shutting their eyes and walking around. Sure, you’ll bash into walls and fall over furniture but you will eventually find out where everything is! And, bonus, you won’t need to switch the light on when you go into the room at night time!! Bruised shins? That’s just the cost of learning.
What this actually means is that AI models, by which I mean LLMs, are unreliable. Worse than that, they are inconsistent. You try something and it might or might not work.
This is very bad news for adoption. We humans don’t like to play Russian Roulette every time we use a new bit of technology. We don’t trust something that is a genius one minute and an idiot the next. It’s unnerving.
AI adoption is still at the pioneer and early-adopter stages. It has yet to ‘cross the chasm’ to the early majority. Already the signs are that adoption by corporate users has stayed low and shows little sign of upward movement. That’s in companies that have made it available to everyone, provided training and encouraged its use. That’s despite tech companies stuffing it into every conceivable orifice of their products. And that’s despite prices being unrealistically low, way below cost.
I don’t think it’s going to make it across that chasm. Certainly not in its current form. Better products than AI chatbots have failed.
Messy
I first wrote about AI in May 2023 (Putting The Chimps In Charge), and it’s been popping up more and more ever since. At one point I am pretty sure I said “And this is my last word on AI”, a promise I have very clearly not kept.
Now it seems almost impossible to write about the future of work without mentioning AI. This is not, however, a reflection of how AI is actually impacting the workplace. It’s a reflection of how much we are being told AI is going to affect the workplace. Not just affect it, transform it!
This is a perfect fusion of the most used bullshit word of the year with the most hyped load of bullshit of the year.
We are in a period of frenzy. The AI boosters are boosting harder than ever and everyone is chasing around trying to grab a piece of the action and make sure not to miss out on ‘the next big thing’. The atmosphere of FOMO is so thick you could cut it into bricks and build a castle with it.
But let’s not forget what organisations are. They are communities of people working together for a common set of outcomes. (Not the same outcomes, but overlapping and mutually supportive ones.) An organisation without people is not an organisation; it’s a filing entry.
And, as David Oks argues in his excellent Substack piece ‘Why I’m not worried about AI job loss - We’re not in a February 2020 moment, and ordinary people will be fine’, people are bottlenecks.
“The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.”
We often underrate how inefficient things actually are, and that inefficiency is caused by us in several ways: laws, culture, personal preferences, rivalries, politics, and simple resistance to change, to mention a few. Those inefficiencies come down to us; we are the bottlenecks.
And as Oks points out:
“Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.”
(That’s us, by the way.)
The more useless we are, the more important we become! And AI is making us both more useless and more important. How good is that? What a delightful paradox.
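A back-of-the-envelope sketch of that paradox, with numbers invented purely for illustration (it’s really just Amdahl’s law in work clothes): make the machine’s half of a job ten times faster, and the humans’ unchanged half comes to dominate.

```python
# Invented numbers: a job needing 10 hours of machine work and
# 10 hours of human work (approvals, judgement, arguing).
machine_hours = 10.0
human_hours = 10.0

total_before = machine_hours + human_hours       # 20 hours
human_share_before = human_hours / total_before  # 50%

# AI makes the machine step ten times faster; the humans are unchanged.
machine_hours /= 10                              # now 1 hour

total_after = machine_hours + human_hours        # 11 hours
human_share_after = human_hours / total_after    # ~91%

print(f"Human share of the job: {human_share_before:.0%} -> {human_share_after:.0%}")
```

The faster the machine half gets, the more the job is about the humans; the bottleneck inherits the process.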
This is not an entirely new insight, however. Ever since the first factory, we’ve been the weak link. We’re the sand in the gears of the machine, whether that’s a grimy Victorian mill or a sleek, futuristic office full of hipster coders.
“Ah,” you say, “but the promise of AI is to replace the humans!” Well, good luck with that. Tech boosters have been pushing that promise of full automation for over a century, as I mentioned last week.
As anyone who has launched or sold a product to ‘the public’ knows, the human capacity to utterly fuck things up is a marvel to behold. The variety and novelty of the ways they find to bugger up whatever it is you’ve given them will astonish and bewilder you (and drive you to drink or drugs, or probably both).
It’s our secret superpower. But let’s not tell the machines. Let them find out for themselves what a ‘jagged frontier’ looks like.