Slip Sliding Away
My LinkedIn feed served up two consecutive posts about the same thing, Shopify’s AI policy, but with contrasting viewpoints. Both highlighted the same statement from CEO Tobi Lütke:
"If you're not climbing, you're sliding."
The first post, from Anthony Slumbers, called this ‘one of the clearest, most urgent articulations I’ve seen of what AI-First leadership really looks like.’ He said it’s about institutionalising AI, not just playing around with it.
The second post, from Catherine Stagg-Macey, said it ‘Sounds a lot like a burnout culture rebranded as innovation.’ Whilst also praising the clarity of the communication, she added ‘But from a human lens, there’s a whole lot missing.’
One positive, one negative. So which one is right?
Both.
That’s the duality that we are grappling with here. AI is clearly going to be important to how an organisation functions and how the work gets done, but it also carries considerable risks of degrading the human aspects of work, to the point that it could make some jobs simply unbearable.
Both these posters have their own agendas. Anthony has a course on using AI in Corporate Real Estate and is a techno-optimist, so he’s inclined to see the upsides. Catherine is an executive coach to CEOs and therefore focused on the human aspects of work and leadership.
So maybe it depends on where you're standing as to what your perspective on AI is. (Yes, it does, for everything. That’s literally what perspective is, the view from where you are standing, Colin. Well done. - Ed)
The CEO sees AI as a way to increase productivity, replace headcount, reduce costs and increase profitability (and their remuneration). The employee sees it as a threat to their job, either displacing them or degrading their experience. They are both right but that doesn’t really help us figure out how AI will impact work.
However, it does give some pointers on how it might be adopted. That’s important because that’s the near future and also because adoption is where technology either thrives or fails. It’s where the elephant traps lie.
I know this because I’ve launched products based on new technology and you never know what people will do with it until you put it in their hands. And you always underestimate how weird, creative, perverse and completely unpredictable people are.
Tearing Us Apart
It was with great interest, therefore, that I came across another post talking about the 2025 Writer AI survey ‘Generative AI adoption in the enterprise’.
The results are, er, interesting.
More than 1 in 3 executives feel Gen AI has been a massive disappointment.
The return on investment is poor, with 73% of companies investing $1 million plus but only a third seeing a significant ROI.
Two-thirds of the C-suite say there has been tension between IT teams and other lines of business.
Around 2 out of 3 executives say GenAI adoption has led to tension and division, and 42% say it is (and I really feel this needs to be in CAPS and BOLD)
TEARING THEIR COMPANY APART!!!
I’ve seen a lot of tech introduced into business but I’ve never heard of it doing that!
So, the upbeat assessment of AI may be a bit premature, perchance?
However, what I found really interesting is how employees are reacting.
Those using AI tools say they’ve benefitted and are optimistic about it, and some are even paying for their own Gen AI tools because the company doesn’t provide what they want (which is, in itself, a problem). But these seem to be largely the AI champions and enthusiasts.
What about the rest? Well, 31% of employees admit to sabotaging their company’s AI strategy!!! And it goes up amongst GenZ, to 41%.
Like I said, when people meet technology, anything can happen.
They are mostly doing this by refusing to use the tools, attend AI training or use AI outputs. I’m sure there are several other things going on to sandbag the adoption of AI because, well, people are very creative …
Now look, all surveys have their limitations and are not always representative but they can be indicative and this one seems to suggest that not all is well in the AI paradise we’ve been told is coming.
Note: The company behind the survey describes itself as follows:
‘Writer is the full-stack generative AI platform delivering transformative ROI for the world’s leading enterprises. Its fully integrated solution makes it easy to deploy secure and reliable AI applications and agents that solve mission-critical business challenges.’
So this is a survey done by techno-optimists!
Dizzy
It would be easy to dismiss the ‘AI saboteurs’ as just another bunch of Luddites, ignorant people who are against the inevitable march of progress (this is the common misperception of the Luddites, who were in fact protesting against the unfair distribution of the benefits of new technology, away from the workers and into the pockets of the capitalists. That could work today still, couldn’t it?).
It is not simply an anti-technology backlash. It could be a response to the unfavourable power imbalance that has driven down pay rates for the past 15 years and this is seen as another turn of the screw. Or it could be because AI is actually making work harder due to the cognitive overload it creates.
This latter point is explained fully in the Workforce Futurist Newsletter titled ‘Buried by Bots: How AI is Maxing Out Your Cognitive Bandwidth’. They identify eight different ways that using AI creates additional cognitive load, which can lead to declining performance and to burnout.
Many people feel overloaded at work, faced with a firehose of information and a never-ending stream of messages, trying to conquer an endless task list. All of that takes cognitive energy, of which we have a finite amount. We all need enough challenge in our work to engage us and keep us at our optimal level of productivity. However, if there’s too much and we get overwhelmed, our performance degrades.
AI assistants may reduce the time taken to do tasks but actually increase the cognitive load for the employee being ‘helped’ by the AI assistant. How is this?
So just imagine you’ve got an AI assistant to help you work more ‘efficiently’. You’ve trained it to do some of your tasks but it can only do them with about 90% reliability, so you still have to keep an eye on it and check the output, which means constant low-level vigilance.
And to get the best output, you have to optimise the prompt, something else you’ve had to learn, and sometimes you think it would be easier to do the task yourself than to sod around getting the prompt right.
Even then, because it makes stuff up and has biases, you have to read through the output and correct errors.
And sometimes it just goes wrong for, er, reasons 🤷🏻♂️, and you have to dive in and figure out what’s gone wrong, then work out a way to stop it happening next time.
To do all of this, you’ve had to understand how it works and keep that mental model in your head all the time.
And then it can churn out so much stuff that you’re drowning in bloody output and you end up staring at even more stuff than you had to before the AI came along to ‘help’ you!
Then there’s the actual content, which it sometimes puts into the most inappropriate format so that, even though all the right stuff is in it, it’s practically unintelligible and you have to rewrite it so people can make some sense of it.
That’s just with one tool. God knows how many more you’ll end up having to get on top of to be ‘fully productive’ - on top of the 14 systems you already have to know how to use!!!
Is it any wonder some employees look at AI and think, ‘No thanks’?
Even if they can’t articulate why it’s going to make their job worse, they can sense it.
And is it any wonder that executives are finding Gen AI is massively disappointing, when it was promised to make everything better and easier?
No Future
In his latest Substack, Stowe Boyd expands upon the AI scepticism of the general public and the techno-pessimism of the workforce. Why is this? Well, workers have seen that the past four decades of technology in the workplace have not resulted in broad-based income growth but rather in an increase in inequality, as those at the top have corralled all the gains for themselves.
They also see that the effect of technology is to level up the less skilled worker to be almost as productive as the highly skilled (he refers to the example of chainsaws making average loggers as productive as a skilled axeman previously). This commoditises these roles, which increases supply and reduces wage levels. The highly skilled worker is no longer able to command a premium. Whilst this is ideal for CEOs, it is clearly exactly the opposite for workers.
We’re already seeing AI being used to write code and increase the productivity of moderate coders or simply replace them. The commoditisation of programmers is already in train, and wages will fall accordingly.
So the ‘AI saboteurs’ are acting perfectly rationally. It seems they are actually Luddites, in the true spirit of the originals, fighting for a fair distribution of the benefits of technology.
Upside Down
So where does this leave us on AI? Is it going to be a boon or a disaster?
It’s often said that technology is neither good nor bad; it is how you use it. So it is with AI. Clearly, some organisations are introducing it and seeing a good ROI but many are not. That’s probably got as much to do with how they are run as organisations and the culture they have as it has to do with the technology.
I think what is clear is that the promise of Gen AI is not going to be delivered on. For CEOs, it’s not going to allow you to slash the numbers of employees you have because the introduction of Gen AI also brings a whole new set of issues and requirements. You may replace some types of people with AI but you’ll have to have a bunch of new people to deal with the problems that arise from using it, so the reductions are going to be smaller (and possibly not justified by the cost, which is unlikely to come down in the near future).
For employees, it’s not going to make your job easier because of the new layer of complexity and cognitive load it brings.
My mate Antony Malmo has a theory that businesses are using Gen AI to automate the wrong jobs. They’re focusing on “routine work”, often customer facing, but that work requires reliable precision and dealing with novel situations. AI is not good at this, hence the need for oversight, hence the increased cognitive load.
However, if the job of leadership is to enable better decision making by co-ordinating across functions and integrating silos, that’s work that’s better suited to the ‘fuzzy dot connecting of AI.’ Let the AI do the ‘coordinating, scheduling, prioritising for the betterment of the organisation’.
I think Antony has a point here. It is the organisational overhead of co-ordination that grows far faster than the organisation itself as you add divisions. It is why going from 4 divisions to 5 can practically break an organisation: every new division has to co-ordinate with all the existing ones. Not only could AI take away that burden, it could bust the existing constraints. It could enable entirely new ways of organising the work and the people, and redefine the role of leadership.
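The numbers behind that are worth seeing. Here’s a back-of-envelope sketch (my arithmetic, not Antony’s): if every division has to co-ordinate with every other, the number of pairwise links grows as n(n-1)/2, which is strictly quadratic rather than exponential, but the felt effect is the same: overhead outpaces headcount.

```python
def coordination_links(divisions: int) -> int:
    """Number of pairwise co-ordination links between `divisions` units."""
    return divisions * (divisions - 1) // 2

for n in range(2, 7):
    print(n, "divisions ->", coordination_links(n), "links")
```

Going from 4 divisions (6 links) to 5 (10 links) adds two-thirds more co-ordination overhead for only a 25% increase in size, which is roughly why that fifth division hurts so much.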
So there’s no way those in charge are going to let that happen.
Both Sides Now
There’s been plenty of hype around Gen AI (primarily around ChatGPT) but we are getting to the point where the potential is coming into contact with reality and we should remember - reality always wins.
Anthony Slumbers also posted this week that ‘AI will do more AND less than you think’: it will be smarter than us in a couple of years (well, he is an optimist; I, as you may have gathered, am a sceptic) but will never replace human-centric experiences. It will amaze us but we will get bored with it because it can’t touch our souls. Anthony’s example of the latter was ‘Whistlejacket’, Stubbs’ famous portrait of a horse that hangs in the National Gallery.
Artificial Intelligence may be amazing but it doesn’t beat Real Intelligence, as in living breathing human beings (who are weird, unpredictable, stupid and brilliant, often all at the same time).
Good and bad. Amazing and banal. Liberating and restrictive. More and less.
Duality. You could think of it as more cognitive load. Or as a ‘Buy One, Get One Free’ conceptual offer.
It depends on your perspective.