Could It Be Magic
There’s been another flurry of news around AI.
PM Keir Starmer says that AI can double productivity within a few years.
Musk’s DOGE (a rebrand of the existing US Digital Service) will cut bureaucracy using AI to automate government services and save a trillion dollars.
‘Stargate’ has been announced, promising eye-watering amounts of money for data centres to massively increase ‘compute’ (a rather ungainly way of saying computing power that seems to be à la mode).
These follow the dominant narrative that AI is going to change our world beyond all recognition, starting tomorrow.
Pardon me whilst I put on my sceptical pants and ask to poke my fingers through the holes in the Messiah’s hands.
Some of this smacks of magical thinking. Productivity has been flat-lining since the Global Financial Crisis in 2008, whilst ‘government waste’ is a perennial stick with which to beat governments and an excuse to whinge about paying tax. Oh, but here comes the Business Fairy to wave her wand (AI enhanced, of course) and all will be fixed!!!
Announcing billions of dollars of expenditure is simple. Spending it, not so much. It is easier to assemble the money (it’s just shifting digits around, fundamentally) than it is to build data centres.
And, as I have outlined in my missive of a few weeks ago, there are some very crunchy questions around the business case for AI and whether the promised technical developments are actually achievable.
But setting those issues aside for now, my scepticism is also based on something I have learnt the hard way as a product person - introducing technology and getting people to use it is really hard.
And I’ve seen very little consideration given to this in respect of AI.
So, let’s don our virtual headsets, jump on the blockchain and ride into the metaverse … oh, hang on. They aren’t really here yet, are they? Damn, I’m sure I was told we’d all be using them by now.
Oh well, we’ll have to stick to black squiggles absorbed by our optical input devices instead.
99 Problems
Just to balance my scepticism here, I do believe there are AI applications that will have a massive impact in specific areas. We’re already seeing some of these in medicine and science generally. Greatly improved accuracy in cancer screening, or much faster drugs research. There’s going to be a lot more of these and an acceleration of development, which has to be a good thing.
What I’m talking about is what I’ll refer to as ‘General AI’. These are the tools that we will supposedly all be using to enhance our productivity, or maybe that our employers will be using to replace us. This includes LLMs like ChatGPT, Claude and Gemini, and adaptations of these like Microsoft’s Copilot.
These tools, it is promised, will automate the drudgery of work and free us up to focus on higher level activities. We will have a range of ‘AI assistants’ to support us, our own little team of digital interns to run around doing all the crap jobs, so we can be more productive.
Here we run into our first hurdle. We’ll have to manage these AI agents. So are organisations going to train everyone to do that? When they don’t bother to train people to manage actual humans, or to use the general software tools they already have, it seems unlikely. Let’s be honest, a lot of managers can’t cope with managing a distributed team, which is why their bosses are dragging people back into the office (rather than teaching them how, or changing their ways of working).
This is an adoption problem. No matter how great the technology, there are many barriers to adoption that are hard to navigate. Some are just practical matters, like building the infrastructure and overcoming hard engineering and resource issues. However, the hardest are the ones that seem the softest. They are called ‘people’, along with the systems of organisation they create.
This is probably best summed up in the age-old saw, “You can lead a horse to water but you can’t make it drink”. And, when it comes to technology, trying to make people drink will probably drive you to drink instead.
These are the things that, in my experience, will stop people adopting a technology.
If it requires habit change.
If it requires them to learn a skill or technique.
If it is not like something they already do or know.
If it requires them to change the way they work (or, at a higher level, means changing the workflow).
If it requires them to see the world differently (to change their paradigm).
“Ah, but if they see the benefits outweigh the costs, then they will adopt the technology” you say.
There are two problems with this. The first is that we naturally discount future benefits and exaggerate short-term costs. We bias the equation in favour of sticking with the status quo.
The second is that we are not rational beings, so we don’t do the calculation in the first place. We just go with our gut reaction, and that is mostly that change is bad and not worth the effort.
On top of that, if the benefits don’t accrue to us directly, then we’re not even going to consider them.
Technologies that do get adopted tend to be substitutes for something that we are familiar with. That means we can understand what it does, we don’t have to make a big habit change to use it, we can use our existing skills (even if we have to add to them) and it fits into how we work already. In short, we get it and see how it can help us.
I worked on email when it was a new technology. It took off because it was like sending a letter, but a lot less painful. If you didn’t experience work before computers, you will not really appreciate just how painful it was to communicate in writing with someone else. It was barely any easier if the person was in your organisation.
You had to write your letter or memo out by hand and send it to the typing pool. A day later, maybe longer, you would get a draft, which you would correct and send back. After a day or so, you got another draft where they had corrected the mistakes but probably introduced some new ones as well. After a few drafts, you gave up trying to get it perfect and sent it anyway. It took days, I tell you, days!! Weeks, even!
So when you could get on your Personal Computer and send an email which the other person got in minutes, if not seconds, it was revolutionary. Yes, you had to learn how to type, but even two-finger pecking was vastly superior to the alternative.
The test to apply to AI is: does it replace something we’re familiar with? Because if it’s a very different way of doing things, it’s got a problem.
And if it requires some extra kit, it’s also got a problem. Email took off because Personal Computers and networks were being rolled out anyway. A case of the lower levers of change being ready to be pulled (see last week’s missive on points of leverage).
The Riddle
There’s also the question of whether the tech addresses the underlying problem. As Stephen R. Covey pointed out, “If the ladder is not leaning against the right wall, every step we take just gets us to the wrong place faster”.
We’ve had personal productivity tools for decades, but we’re actually working longer hours. Our productivity may have increased but the tools have just created more work for us to do. Email made it way easier to send a written communication, so the volume that we send and receive has grown exponentially.
So let’s look at the AI tools that are available right now. One that does seem useful is an AI note taker. You may already be using one, or have been in a meeting where one is in use.
It summarises the conversation and produces the notes at the end of the call. This would have been a boon when I was running lots of projects; I spent a huge amount of time writing up the minutes and sending them out. With this, I’d at least have a record to work from. A few quick edits and I’d be done. Maybe over time it would learn which parts of the discussion are important, pick out the action points and write the whole thing in my style. That would be great.
But what do I think will happen? Everyone will get access to these tools, so everyone will have their own AI note taker. If you’ve got one, why not use it in all your meetings? (These will now all have to be on Zoom or Teams, because otherwise your AI agent can’t be present.) Now we’ve got notes for all sorts of meetings that never had them before. You will get notes for every meeting you attend, even the ones that are just conversations. Even when it’s just two of you.
So now you have loads of meeting notes to wade through between all the meetings, which are all virtual even if you’re in the same office, increasing your Zoom fatigue.
Does this solve the problem or create some new ones?
A big problem is that there are too many meetings in the first place, so does this solve that? No. In fact, if you’re using an AI agent to do diary co-ordination, along with one to take the minutes, you’ve removed two sources of friction that make people think twice about calling a meeting in the first place. What do you think will happen? My guess is that people will call even more meetings that probably shouldn’t be happening.
The other problem is that many meetings are badly run and ineffective. This is not because people lack the right tools, it’s because they are not trained to run meetings. Or to take good minutes and identify and allocate actions. Does an AI agent help with these? Er, no. In fact, it could make things worse, because people will think that sending out the AI summary is the same as doing proper minutes with allocated action points, so the quality will decline.
But it will help project managers, right?
Well, it will help them comply with the process. All their documentation will be up to date because the AI agent will do the heavy lifting. That’s going to help, isn't it?
Hmmm. Let me tell you about a day in the life of a project manager. We’ll call him ‘Mark’. He’s checking in with a member of his project team.
Mark: Hey, Paul, I’m just checking up on your progress with your actions from the last project meeting.
Paul: What actions? I didn’t have any actions.
Mark: Yes, don’t you remember? They’re in the meeting minutes I sent you.
Paul: I haven’t had any meeting minutes.
Mark: Er, yeah, you have. I sent them out last Thursday.
Paul (quickly checking through his emails, then gruffly, to cover his embarrassment): Oh yeah, those.
Mark: So?
Paul: I haven’t read it. I haven’t had time.
Mark: Oh. So when do you think you can get around to starting to work on them?
Paul (laughing hysterically): Mate, I won’t even be able to think about looking at these before the next meeting! Do you know how busy I am?
This was a fairly typical exchange, in my experience. A lack of resources, people overworked and overwhelmed with information. Are the AI tools going to help with that? Or are they just going to fill any space they release with more stuff created by people using AI tools?
The way you make things happen in organisations is through relationships, not through processes or minutes or meetings. AI doesn’t really help with that.
So I’d like to see a lot more discussion about how AI technologies will be adopted in the current workplace by actual people doing actual jobs. Because, believe me, people are weird and they will do unbelievable things to anything you give them, things you never imagined would even occur to a sane human being.
And I’d like a lot more discussion about the problems that AI will actually address. The ‘use cases’ for the technology. Where the rubber hits the road, as a US colleague of mine used to like to say. Because, right now, I’m not seeing it.
Chain Of Fools
You may, like me, be slightly mystified by the hype around AI. It sort of comes with the territory with any technology, but it seems to be stratospherically high with AI, and I wasn’t quite sure why.
I came across an article in Inference Magazine titled ‘How much economic growth from AI should we expect, how soon?’, which answered some of my questions. I’ll come back to the growth bit, but first let’s look at the ‘AI fervour’, which it summarises in a way that will make your hair stand on end.
They begin by saying ‘In the dominant intellectual framework at the AI labs, artificial intelligence is the most important technology in the history of our species.’
There’s then an explanation of the ‘intellectual framework’, which is too long to include and too concise to easily summarise! It’s worth reading that part of the piece yourself.
They conclude with the statement ‘This belief structure is much like a religion—the superintelligence has been deified, existential risk is the flood, and the AI labs are our ark.’
The authors don’t analyse the ‘intellectual framework’ or dismiss it; they simply acknowledge it. Their silence speaks volumes.
They don’t comment on it, so I will. It is batshit crazy stuff and these people are overconfident fools. When ideas like this become ‘like a religion’, it tends to lead to disaster.
The logic from the AI fanatics goes like this: once everything is in the AI, you can run millions of instances in parallel, so exponentially accelerating human development. (I’m obviously paraphrasing here, and leaving out the bit about running multiple instances of consciousness, or 100% of human tasks being replaceable leading to a state of complete material abundance and other wacky stuff.)
We can actually see some of this already in how it’s rapidly speeding up drug development, so there’s some logic behind the proposition. The problem is with the bit ‘once everything is in the AI’. It’s just assumed that’s possible, without any explanation of how that will happen or what is meant by ‘everything’.
This is very familiar to me, as an Economics grad. Economists love to assume away difficult challenges or wrinkles to their theories. The result is a bunch of neoliberal fairy tales that can’t predict or explain things like the Global Financial Crisis. Theoretically perfect, practically useless.
The paper goes on to analyse what level of economic growth we could plausibly expect from AI. In the process, the authors identify a number of constraints on growth, unresolved trade-offs (that the AI evangelists either ignore or assume away), and practical details that they say ‘matter a lot’.
They explain that AI research may well be highly automatable, but that this does not necessarily apply to other fields of science, because the capture of data and the reasoning applied are not automatic (as they are in AI research) and are hardly done at all right now.
They talk about lag effects and physical constraints, as well as the costs of AI. Why replace people if they are not much more expensive than their AI equivalent?
They also point out that a productivity leap in one part of the economy doesn’t lift the whole economy nearly as much as you’d think, because it is ‘unbalanced development’ (a well-known economic effect).
The TL;DR is that they think it will increase output by between 3% and 20% over the next 20 years. That’s less than the doubling others have been talking about, and much less than the far wilder claims from the AI heads in Silicon Valley.
A couple of other points. They also say:
‘Superintelligence seems like bad business’, which is another way of saying the business case doesn’t stack up right now.
And ‘Right now, implementing current AI systems requires businesses to reconfigure their processes’, which is a big barrier to adoption as I’ve said above. Hell, some businesses are having a pink fit about people working from home part of the time because they don’t want to change the way they operate and manage. How are they going to cope with AI?
This is a very bitty summary of the article, which is worth reading, or at least skimming through, but I hope I’ve brought you the key points. I’d summarise these as follows:
The people behind AI have a fantastical vision of the future and a quasi-religious belief in AI and their role in developing it (which is itself somewhat Messianic). This does not inspire confidence.
There are several bottlenecks to AI deployment, which are physical, legal, social and economic, that will diminish its impact.
The business case doesn’t add up; it’s not clear how the current levels of expenditure on research can be sustained, or how profit can be generated from usage.
It will have a ‘transformative, but not explosive’ impact on growth of 3-9%.
I’ll finish with one of their concluding statements:
“The fundamental thesis—that AI research output will be automated; that humanity will create ‘superintelligent’ systems; and that AI systems will do science that create greater and faster technological progress than humans could ever have done—will be borne out in the fullness of time. But this vision has to make contact with reality, and reality can act as a weird braking mechanism: Meta wants to build AGI, but they couldn’t use a nuclear power plant for their datacentre, because of some rare bees.
These bits never make it in the sci-fi novels, and so it’s easy to see far into the future, but miss the (frankly bizarre) hurdles along the way.”
I’ve encountered enough ‘bizarre hurdles’ in my time to set great store by their analysis.
In the end, the bees will win.
Two parts leap out at me from this. The first is the ‘Jevons Paradox’ angle: efficiency creating more demand than the efficiency gains can absorb. The second, more fundamental, is the idea that AI will free us up to do more high-level thinking. Where, I ask, is the high-level thinking now? And, when we give the freedom to do higher-level thinking to the paper clip optimisation manager, what will happen?
I think AI has huge potential, but it’s not instant coffee.
We face a bigger challenge: the nature, design, and intent of our organisations.
Let’s start there...