Pretending it's not the problem
AI is just the latest excuse to avoid doing the work
Looking For Clues
It’s hard to read anything about the workplace today that doesn’t mention AI, in much the same way that everything mentioned Work from Home in the aftermath of the pandemic.
What do these two seemingly disparate things have in common? Well, they are both external shocks on the workplace that threaten considerable disruption, and possibly transformation. They both challenge long-held assumptions about work and organisations. They both pit the interests of employees and society against those of employers and investors. They are both unavoidable, you can’t ignore them, you can’t wind back time and put the genie back in the bottle (not that it will stop some from trying).
These are all true but I don’t think they are the main point of commonality. What both COVID-enforced Work-from-Home and the introduction of AI into the workplace show is just how little organisations really understand about how the work gets done, by whom, what are the critical conditions and where the value is generated. In short, they expose the ignorance in most organisations (at least at the senior level) of what’s actually going on.
They expose the stories we tell ourselves about work as just that, stories. Comforting tales that allow us to carry on in blissful ignorance instead of doing the work needed to really understand what’s happening. Convenient delusions that allow us to avoid looking at the real issues, and to avoid facing up to them.
(I am generalising here to make a point, of course. I am sure there are some organisations who really have a deep understanding of how they work and what their people do to create value. I just don’t think there are that many of them. I certainly don’t see any evidence of their abundance.)
I started to think about this on reading about how the introduction of AI is reducing the number of entry level roles because it can do the sort of tedious, repetitive ‘grunt’ work that had traditionally been given to entrants. You know the sort of thing, the background research, the data-crunching, the Excel wrangling, the PowerPoint production. Low value work through which the new entrants familiarise themselves with the field, pick up an understanding through a kind of osmosis, sit alongside experienced coworkers and absorb information.
The fear is that if AI soaks up all this work and these roles disappear, there will not be the necessary career paths for the future expert workers.
But I don’t think the problem here is AI. The problem of new entrants not having the opportunity to work alongside their more experienced coworkers and learn is not new, it’s been a problem for the last 5 years. Only it’s been blamed on something different, it’s been blamed on Working from Home and hybrid schedules.
And I don’t think this problem suddenly appeared in the wake of COVID, I think junior workers finding it hard to get time to learn from their experienced coworkers has always been a problem. Getting time with the boss has been a challenge for donkey’s years and it’s been getting worse due to rising workloads and broadening spans of control. COVID just made it more acute, and AI threatens to turn it up a notch or two more.
The real problem is that this is a crap way to bring people through. It’s unstructured, variable in quality and quantity, and likely to get squeezed out by other priorities. For the more experienced employees, it’s an extra, unpaid and unvalued burden.
It may be how it’s always been done but now it’s being exposed as totally inadequate. Sitting people alongside those with experience and hoping they absorb knowledge through some kind of osmosis is unlikely to be efficient and effective, is it? To take the way in which artisans learnt from their master and apply it to knowledge work in a modern organisation is not just daft, it’s almost criminally negligent.
AI is just the final nail in the coffin, giving the clearest example of how leaving people development to chance is a bad strategy because it eventually gets squeezed out by the pursuit of profits and efficiency.
The solution is to rethink and redesign the way we develop people and enable them to develop the skills and experience they need. We need to make it a conscious and deliberate part of an organisation’s operation, not leave it to chance and the good nature of the (already overworked) workforce.
They say that when the tide goes out, you find out who’s not wearing any swimming costumes. Well, AI is just lowering the water level. Just like COVID did. Instead of trying to get the tide to rise again, we need to go shopping for some new swimwear.
Hysteria
We are well and truly into the hysteria phase of the AI hype cycle. Yet another piece of alarmist bullshit came out recently, “2028 Global Intelligence Crisis”, from a self-proclaimed financial analyst company called Citrini Research. I say ‘self-proclaimed’ because this piece seems to lack any real analysis and as for finance - well, it’s got some numbers in it.
I started reading it and got about 6 pages in before I hit my bullshit overload function and had to go into the garden and shout at the crows to calm myself down. It’s basically a piece of fiction based on some utterly groundless assertions, mostly derived from the self-serving utterances of the CEOs of the AI companies that want to raise billions of dollars in investment. Needless to say (but I’ll say it anyway), nowhere is any evidence offered to support these assertions.
So, to be succinct, it’s bollocks. Utterly transparent, weapons-grade bollocks. And yet it caused a sell-off of software company stocks because gamblers (sorry, investors) swallowed the story that AIs like Claude Code or Codex mean companies can build their own software applications (specifically software-as-a-service or SaaS) like CRM systems, HR systems or even massive ERP systems like SAP.
Dear readers, they can’t.
This is hysteria. This response shows clearly that the aforementioned gamblers (sorry, investors) have no idea what they are investing in. They are not being helped by a media that uncritically repeats this nonsense without any of the scrutiny, fact-finding and critical analysis they are supposedly paid to do. Which suggests they have no idea what they are writing about either, or they are too lazy or stupid to do their jobs.
AI is such a multi-layered phenomenon that I am thinking about writing a piece on just that. This story touches on a few of those layers, but it certainly does not touch on what AI can actually do and what products we might actually be able to use in organisations in the future. It’s got nothing to do with the actual technology. Do not lose sight of that, because that’s the only bit that matters.
Most people do not understand the complexity involved in delivering the everyday services they rely on. That’s true whether it’s getting the water to come out of your tap, the electricity to run your home, the networks that make your phone work, or the software systems you use for personal and work use. At their heart, those software systems are just a bunch of code running on a computer somewhere but that is really just a fraction of what is needed for you to be able to use them.
There is a maintenance operation to keep the code running and adapted to changing interfaces and systems around it; the technical infrastructure to keep the system available; the networks to deliver it to you. Then there’s the legal structures; the contracts; the legal conformance; governance and oversight structures; the data protection apparatus; the backup systems; the liability in instances of failure; the customer support; the billing and reporting systems; the financing and financial management.
And that’s all before we get to the daily challenges of just making the bloody thing work reliably, despite the best efforts of the meat-puppets (that’s us) to screw it all up.
But the gamblers (sorry, investors) and the ‘journalists’ aren’t interested in any of that. They are just interested in jumping on the next bandwagon to make money or build profile, which they will then leverage to jump to another opportunity just as this one crumbles to dust. They used to be interested in understanding; it used to matter, people used to care. Today, however, it’s all about appearance, and if you can front it out, you can blag your way to power and success. It’s part of the bigger picture of detachment from reality that is blighting our political, financial and social systems today.
Anyway, the sell-off created a bit of movement in the markets, which means professional investors cleaned up and the rest of us got left with the losses. And it generated a load of views and clicks and opportunities to write more bollocks about AI, so the media got their fix whilst we all got our time wasted and our anxiety ratcheted up a bit more.
‘Cushty’, as Del Boy would put it.
The Tide Is High
You know what they say, it’s difficult to make predictions, especially about the future.
The truth is that nobody knows what’s going to happen with AI and how it will impact the future. But that’s true of many other things as well. We don’t know how the current geopolitical tensions will develop, we don’t know how the economies will perform, we don’t know when and what the next pandemic will be, we don’t know how global warming will impact us (worryingly, it’s looking worse than the already alarming predictions). We don’t know a whole host of things that could turn the world upside down again.
Because I studied Economics at university, I am quite comfortable with this uncertainty. The economy is an impossibly complex thing, what happens is the outcome of billions of unknowable and separate decisions passing through billions of unmappable connections and interrelationships. Actually, it’s probably trillions rather than billions. It’s a lot, OK?
Economics made the mistake of thinking it could understand all this and model it, and so predict the future. Well, that worked out well, didn’t it? Conventional economics missed the 2008 Global Financial Crisis and still can’t explain why it happened. The way I was taught to look at Economics is to consider the forces acting upon the economy and the way they interact and relate to each other in complementary and contradictory ways (sometimes both at the same time).
I find this a useful lens to look at the future, whether generally or of work. It’s what led me to come up with my ‘Forces of Crapification’, namely
Putting profits before people
Valuing efficiency over effectiveness
An obsession with process and measurement
The spread of mobile phones and the ‘always on’ culture
Tech replacing human interaction
Increasing workloads, hours and stress
I’m tempted to come back and revise these from time to time but I find most ‘new’ things fit into them. AI is part of tech replacing human interaction and will also increase workloads and stress (in theory it could reduce hours, but a) studies suggest users of AI agents actually work more hours and b) employers are always going to push for more work if at all possible). The drivers for AI adoption are in the first three of my forces.
These forces have been getting stronger over the past several decades, so the real question is ‘What is going to change the direction of travel?’. As I suggested last week, in a conclusion that surprised me as much as anyone else, AI could be a factor. However, it can only be one factor in the turnaround. I wonder what the others could be?
If you have any thoughts, I’d love to hear them. It’s a line of reasoning I’m going to follow up in the coming weeks.
The Other Side
This week’s post in my ‘Surviving Corporate’ newsletter is titled ‘Uncomfortably Numb’. It’s about why we end up numbing ourselves to cope with corporate life and where that can end up (and how to avoid getting there).
Surviving Corporate is the companion to this newsletter; it’s where I look at the personal experience of being in corporate life and dealing with its aftermath (similarly to politics, all corporate careers end in failure. Well, most of them.) I do that from the viewpoint of my personal experience, both what I went through and what I have learnt during my recovery.
I’d like to create a space for a conversation about how we deal with corporate life, and particularly the harms that many of us suffer. I believe it would be greatly beneficial to people to take part in that conversation, either as a contributor sharing their story or as an observer, where both could have their experience validated and know they are not alone. I hope it would become a place to create solutions and provide support, in time.
This conversation doesn’t happen for a few reasons. Employers discourage it because it draws attention to how potentially harmful the environment they are responsible for can be. Those who have been through these experiences and emerged the other side often want to just forget about them and pretend they never happened. And those who are suffering, either still in corporate or having been ejected, feel isolated and wrongly feel shame and guilt about their situation.
We need to break that silence and get this issue out in the open, both to help those suffering and to create pressure to make the workplace less harmful.
If you are interested in taking part in that conversation, you can register your interest on this form and I’ll get in touch shortly. Or you can subscribe to Surviving Corporate and just listen in for a bit.