It’s All In The Game
What’s the best way to encourage employees to use AI, to get them to embrace this transformative and paradigm-shifting technology that’s full of unknown opportunities and possibilities?
I know, let’s gamify it! Let’s offer them money if they click on it. That’s bound to work.
This must have been the gist of a conversation at London law firm Shoosmiths, because they are offering employees a share of a £1m bonus pot if the firm collectively racks up one million Microsoft Copilot prompts in the coming year.
There are so many types of wrong here that it’s hard to know where to start.
Let’s start with the question: “Will it work?”
In one sense, probably. The “clear and ambitious” annual target of one million prompts is achievable and no doubt there will be a lot of internal communication about the target to keep it at the forefront of people’s minds. You can see it becoming a topic of conversation and a matter of collective pride and purpose (a pretty shallow purpose but then so is hitting any target number).
Shoosmiths have worked out that if every staff member uses its AI just four times per working day, the target “will be comfortably” surpassed. That, in itself, is quite funny, because that’s not really how usage is likely to work: it’s much more likely to have peaks and troughs and to be unevenly distributed across the workforce. A large amount of usage will be driven by a small number of ‘power’ users. Unless …
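As a back-of-the-envelope sketch (the headcount and working-days figures below are my own illustrative assumptions, not numbers from the announcement), the arithmetic behind that claim looks something like this:

```python
# Rough check of the "four prompts a day" claim.
# Headcount and working days are illustrative assumptions, not Shoosmiths figures.
staff = 1500             # assumed headcount
working_days = 230       # assumed working days per person per year
prompts_per_day = 4      # the figure quoted by the firm

annual_prompts = staff * working_days * prompts_per_day
print(f"{annual_prompts:,} prompts vs a 1,000,000 target")  # 1,380,000 on these assumptions
```

On assumptions like these the target is cleared with room to spare, which only underlines how little a flat per-person average tells you about how usage will actually be distributed.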
What we know about targets is that they have unforeseen consequences. Employees start to focus on actions that hit the target rather than actions that best serve the desired outcome. We also know that people will game the system in order to hit the target, and an LLM like Copilot is absolutely perfect for that. Given the crudeness of the metric, what’s going to stop people putting in a load of trivial or, heaven forbid, frivolous prompts? Like asking it to write parody profiles of the board members who dreamt up this nonsense. Or the best places to go and spend this fabulous bonus.
If we assume the number is hit, we can still ask “Will it work?”. Only we have to make another assumption here, which is that the real objective is to encourage employees to adopt Microsoft Copilot into their daily workflow. That’s a big behavioural change, one that requires the investment of significant cognitive effort. What’s the motivation for employees to do this? Well, Shoosmiths have decided it’s a financial reward, but we know that motivation is much more complex than that and that intrinsic factors are much more important. Simple financial rewards are not going to drive major behavioural change; that’s just not how it happens.
Besides, the ‘reward’, which is dependent on all your colleagues also contributing to the effort, is about 1% of salary. Are people going to put effort into something new for that or are they going to put that effort into what they know and get rewarded for already?
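For a rough sense of scale (the headcount and average salary below are my own assumptions, not figures from Shoosmiths), the per-person payout works out something like this:

```python
# Rough value of the bonus per person, on assumed figures.
pot = 1_000_000          # the £1m bonus pot
staff = 1500             # assumed headcount
avg_salary = 65_000      # assumed average salary (£)

per_head = pot / staff
print(f"£{per_head:,.0f} per person, about {per_head / avg_salary:.1%} of salary")
```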
I Need A Dollar
Is the real objective here to encourage the adoption of AI by employees, or is it to drive cost reductions (which is the other way of saying productivity gains) through AI? Perhaps with the subsidiary objective of justifying the substantial investment in Microsoft Copilot.
Given the facile nature of this scheme, I think we can assume the real objective here is cost reduction. It’s the ongoing obsession of organisations; it’s the reflexive twitch of our dying form of late-stage capitalism. The constant drive for more profit can be satisfied in two ways. One is to innovate: to deliver new products and services and to address new markets. The other is to reduce costs. We know which is easier and better understood (and rewarded) by the financial markets. It’s also the one the hordes of MBAs who have invaded boardrooms and consultancies have been trained to do.
The longer-term objective, then, is to replace lawyers with AI. Using an AI like Copilot is effectively training it to take over more of your job. So why would Shoosmiths employees do this? For one percent of their salary, perhaps? I think maybe not.
We already know there is considerable resistance to AI adoption, for this reason, amongst others, as I mentioned in this recent missive.
People are not stupid; they can see which way the wind is blowing. The motivations behind these initiatives, the reasons why organisations are so keen to push AI into the workplace, are apparent. It’s clear who the benefits will be going to, and it’s not the employees. The consequence might be a little bonus this year, but it’s likely to be a P45 in the future.
Minute By Minute
There’s also a paradox here. Lawyers famously charge for their time: for each phone call or letter, for each five-minute segment. There is a perverse incentive for them to take longer than necessary to complete a task, although they assure us that their professional standards ensure they don’t do this. There are also mechanisms in place to police this and rule against excessive costs.
The fact remains that time is important to them. And yet AI, er, reduces the time it takes to do something. So using AI could actually reduce the billings a lawyer can make (they are expected to charge out a percentage of their time, as much as 80%). The benefits of AI seem to be pulling in the opposite direction to their own interests and existing incentives.
This paradox is especially stark in the legal world, but it is true for all employees. If I use AI to save time, and yet I am being evaluated on presenteeism and the hours I work (or appear to work), why am I going to use it?
If I do save time, who does that benefit? If I get that time for myself, then I will be judged to be ‘slacking’ by my bosses, and ‘cheating’ by my colleagues. (Research has shown this attitude also applies to wellbeing: managers noted that employees who took a break and disengaged for a period - e.g. a weekend or a holiday - came back more refreshed and productive, but they also considered them less committed and less suitable for promotion.)
If it goes to the organisation, I’ll just get a bigger workload, which will probably bring a cognitive load that exceeds my capacity and push me towards burnout. (The tedious tasks that AI replaces can also be welcome downtime in a week that already exceeds our capacity for cognitively taxing work.)
Fever
This also shows a fundamental misunderstanding about how the adoption of new ideas and new behaviours actually works. It may not surprise you that the assumption is that a simplistic mechanism is at work when it is, in fact, a more complex process. (There’s a bit of a theme going on here, isn’t there?)
The mistake is to think that the way that ideas spread through an organisation is the same as the way information spreads through an organisation.
The paradigm for information spread is that of a virus: a person passes it on to those they are in contact with, so one person ‘infects’ multiple people with the information. We know the truth of this; we talk every day of ‘virality’ with regard to social media, and we see memes explode across the internet every day. We spend time with friends showing each other things on our phones that have been shared with us.
Ideas don’t work like that, because the adoption of an idea requires behavioural change and going against prevailing norms. That’s hard work, whereas receiving and passing on information is trivial. Indeed, we revel in it; we love to gossip! But we don’t really like change, and we don’t like going against the herd.
Damon Centola calls this spreading of ideas and behaviour “complex contagion”, as a counterpoint to the simple contagion of information. The way to encourage complex contagion is to find the early adopters and connect them together, creating centres of excellence. This enables them to support each other and resist the social pressure to abandon the new idea and revert to the norm. It also allows them to share information and practice, and so speed up their learning and adoption. (Damon Centola, How Behavior Spreads: The Science of Complex Contagions)
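A toy simulation, entirely my own sketch rather than anything from Centola’s book, makes the distinction concrete; the network shape, adoption threshold and seeding are arbitrary assumptions:

```python
# Toy illustration of simple vs complex contagion on a clustered network.
# All numbers (network size, threshold, seed choice) are arbitrary assumptions.

def ring_lattice(n, k):
    """n people, each connected to their k nearest neighbours on either side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0} for i in range(n)}

def spread(neighbours, seeds, threshold, max_steps=200):
    """A node adopts once `threshold` of its neighbours have adopted.
    threshold=1 behaves like information (simple contagion);
    threshold>=2 behaves like behaviour change (complex contagion)."""
    adopted = set(seeds)
    for _ in range(max_steps):
        new = {node for node, nbrs in neighbours.items()
               if node not in adopted and len(nbrs & adopted) >= threshold}
        if not new:
            break
        adopted |= new
    return adopted

net = ring_lattice(100, 2)
print(len(spread(net, {0, 1}, threshold=1)))           # information: reaches all 100
print(len(spread(net, {0, 25, 50, 75}, threshold=2)))  # scattered early adopters: stalls at 4
print(len(spread(net, {0, 1, 2, 3}, threshold=2)))     # connected early adopters: reaches all 100
```

In this sketch the same number of early adopters either stalls or carries the whole network, depending only on whether they are connected to each other, which is the argument for clustering them into centres of excellence rather than scattering the message thinly.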
The adoption of AI will not happen through simple contagion, which is the underlying logic of Shoosmiths’ approach (and which, to be fair to them, is pretty much the common assumption across business). Simply throwing AI at people and compelling them to use it will not lead to adoption. It certainly won’t unlock the possibilities we cannot foresee at the moment. At best, people will use AI to help them work in the existing way, not to explore completely new ways of working. And that will probably only happen amongst AI enthusiasts.
Whether that will be enough of a win to justify the expense to Shoosmiths of having the AI available (an expense which almost certainly doesn’t cover the actual cost of providing it) remains to be seen.
And even then …
I’m So Bored With The USA
Another piece of research has shown that using AI can increase productivity but lead to reduced engagement. (H/t to Stowe Boyd for this one. I recommend his Substack ‘Work Futures’ to you.)
It seems that professionals can produce high-quality work in less time by collaborating with an AI, but they experience a drop-off in intrinsic motivation and increased feelings of boredom when they turn to tasks that don’t have this technological assistance.
This is rather confounding, isn’t it? Applying the usual caveats that this is only one study, and keeping a healthy scepticism about the widespread applicability of these types of findings, it still raises more questions than it answers. It shows that we know very little about how AI will be adopted (I’ve written before about the problems of technology adoption and how humans can react to it in bizarre ways) and that all we can be certain of is that it’s going to be nuanced and unpredictable.
The logical thing to do would be to progress cautiously, carrying out limited experiments, learning, iterating and refining the approach as we go along. A bit like the way we introduce a new drug.
But no, we’re throwing AI at everything in a feeding frenzy for ‘productivity’ (aka cost reductions aka more profit). Our ‘leaders’ are running up to Pandora’s Box and gleefully ripping off the lid.
And Shoosmiths are but one example.
The Payback
The conversation on the future of work is very focused on AI at the moment, which is not surprising given the forecasts of massive job reductions its introduction will cause.
However, what we see in the way it is being introduced are familiar themes: mistakes we’ve seen before, simplistic approaches to complex issues. Much as COVID and the lockdowns threw into stark relief what was already known, so too does AI. It’s an external shock that makes some harsh realities impossible to ignore:
- Organisations are excessively focused on efficiency and profitability (or an equivalent financial measure for non-commercial organisations).
- People are considered expendable and replaceable resources. The ultimate goal is a profitable organisation without any people.
- It’s all about the numbers. If there isn’t a number for something, then they will create a proxy, regardless of whether it’s a good one or not.
- Business orthodoxy applies simple solutions to complex problems. It’s a reflex.
- Few organisations understand systems thinking. Even fewer actually apply it.
- Cutting is easier than creating.
- There is little understanding of group psychology.
- Organisations say they want engagement but always prioritise productivity, profitability and growth over it.
I could go on. They are locked in a doom loop of chasing ever-increasing goals with tactics and strategies that are of decreasing effectiveness. They are an ouroboros, eating their own tail with increasing speed and hunger. The sooner they disappear up their own balance sheet the better.
Brilliant job. When I talk about this stuff I call everything on your bullet point list “employment thinking” and the antidote … the paradigm that needs to shift to decrapify work “enrolment thinking”. Because exactly as you say the underlying thinking that “zero employees is ideal” and “efficiency” or “max profit” as the goal are directly in opposition to “employee engagement” or human sustainability. So either the people paradigm has to shift or we’ll just keep seeing more of the same disengagement, burnout and hopefully one day a “John Galt” style movement where people just disappear away from the corporate slavelands.
Yeah, that Shoosmiths idea is a doozy. When I worked for the railways, the onboard employees weren't that keen on reporting, mainly because they thought it was a waste of their time, and nothing ever changed despite their reports.
To encourage employees to report when their trains were late, the people with the highest number of reports went into a draw to win money.
You, me and the dog down the road know that this was likely to encourage onboard employees to make the train late on purpose, so they could report it. And that is what happened.
Assuming that the Shoosmiths incentive is to save time, it's like a big red poster telling the lawyers that in a year, a third of them will be laid off. Who is going to buy into that?
All the Copilot hits will probably be about job hunting and matching CVs to job vacancies. Surely it would be better to embrace AI in a more thoughtful manner?