Kodachrome
When you want to use the camera on your phone, you tap the icon with a picture of a camera, right? You may never have actually possessed a camera, but you know what one looks like. Some tourists still walk around with SLR cameras slung around their necks, professional photographers use them; you’ve seen them, so you know what the icon means.
To get to the clock functions, you tap the icon that looks like a clock face. You don’t actually possess a clock that looks like that and, if you’re under 30, you may not be able to read it properly, but you get it.
If you want to look at your files, you tap the icon that has a - wait, what the f*** is that? Is it a profile view of a toilet? A storage box with a lock on it? Whatever it is, it says ‘Files’ underneath, so it must be the right one.
Those of us who have been around for a bit know it is supposed to be a file folder. Offices used to be full of manila folders that were used to collect papers together and to file them away. I know, quaint, isn’t it? I’ve spent many happy hours moving folders around in filing cabinets. Happy days! (No, I’m joking. It was hell.) So why is this incomprehensible symbol still used today?
These are all examples of skeuomorphs, which Wikipedia describes as follows:
‘A skeuomorph is a derivative object that retains ornamental design cues (attributes) from structures that were necessary in the original. Skeuomorphs are typically used to make something new feel familiar in an effort to speed understanding and acclimation. They employ elements that, while essential to the original object, serve no pragmatic purpose in the new system, except for identification. Examples include pottery embellished with imitation rivets reminiscent of similar pots made of metal and a software calendar that imitates the appearance of binding on a paper desk calendar.’
Steve Jobs and Jony Ive used skeuomorphs to make the iPhone easier to understand and to use. They tied it into our existing frames of reference, which made this new technology seem somewhat familiar and less threatening. I think we can agree it was a success.
But skeuomorphs persist beyond their frame of reference and take on a different meaning for newer generations that don’t have those original reference points. They don’t make the connection between the symbol and the manila folder (partly because a subsequent design refresh has confusingly changed it to blue) but they know it’s the ‘Files’ icon. Similarly, they know what the save icon means, even though it is often a graphic of a floppy disk, which I doubt anyone under 50 has ever used.
The point is that we see the future through the lens of the present. We can imagine our phone being a multi-function device by conceptualising it as a bundle of the devices it replaces. We retain the existing boundaries, the existing mental models that we have. We find it confusing when those boundaries get blurred or obliterated.
Younger generations see a different present, so their mental models and conceptual boundaries are different. But they are still the lens through which they try to imagine the future.
And so we come to AI. What’s the icon for AI? What skeuomorph can we come up with to link the future with the past? To make the adoption more palatable? Well, it’s not obvious, is it? What exactly does AI incorporate or replace? Everything? Or nothing specifically?
The choice that many seem to be going with is a magic wand. I wonder if that is wise? Do we really want to surrender our autonomy to magic? Is that why many people are scared of AI, and even actively shunning or sabotaging it?
And suggesting it is magic is, well, setting a high expectation, isn’t it? Arthur C. Clarke famously said “Any sufficiently advanced technology is indistinguishable from magic”, but is today’s AI actually that advanced? It certainly replicates some magical thinking when it hallucinates, but using it can sometimes be a less than magical experience.
Of course, magic is also about sleight of hand and confidence tricks. So maybe the AI icon is on the money after all.
It’s Like That
There was a time when email was a new thing. In fact, back then we called it electronic mail. You know, like letters, but electronic. It does what it says on the tin.
Only that didn’t quite work as an analogy, because people had different understandings of what mail was and not all of those translated into email. Physical mail arrived through your letter box or turned up in your in-tray (oh boy, we’re really getting into office archaeology here, aren’t we?). It came to you. Email, back then, didn’t work like that: you had to go and get your email.
We had to find another way to describe it. So we talked about pigeonholes. A pigeonhole was a place where people could leave things for you (letters, memos, notices, promo pieces), which you collected at your leisure. They were common in some offices, in colleges, in shared accommodation.
We explained, then, that your ‘electronic mailbox’ was like a pigeonhole. People left messages there for you to collect at a time that was convenient to you. People understood that and it overcame a big point of confusion. It created an expectation of how email would work by linking it to an existing concept. It was a simple analogy.
As you have probably noticed, people caught on to email quite quickly and it became rather popular. Now we see email as just another stream of messages.
To use those original analogies now would be a bit daft. I mean, I felt I had to explain what a pigeonhole system is, and I suspect many of you haven’t received a letter through the mail for years.
Now email is the reference point, the mental model that we use for framing. The skeuomorph for email, an envelope, can now be applied to any message, however it is created, distributed or read. It could be created automatically by an AI, sent to our phone and read out to us. It has no connection with the physical mail system, with the GPO, the Penny Black, with red pillar boxes - and yet, it is a future that we can only imagine with reference to our past and our experience.
When we are able to send messages to each other with thoughts alone, will we still think of an envelope? Probably. But you’d never get to telepathy if you started by thinking about what you could do with an envelope.
Which is to say, we can only see the future in terms of how it differs from what we know. We can’t see a whole world of possibilities that are radically different and have no connection to today; we just can’t. Maybe in broad concepts, but not in any meaningful detail.
Makes me wonder why I bother writing about the future of work at all. Maybe what we’re really talking about is the near future.
Right Here, Right Now
Still, that doesn’t stop AI gurus telling us what the future we can’t imagine will be like! ‘AI 2027’ is a paper that modestly seeks to predict the impact of superhuman AI over the next decade. It will not surprise you that they predict it ‘will be enormous, exceeding that of the Industrial Revolution.’
Apparently, one of the authors made some predictions about AI five years ago that have been pretty much on the money and so we are encouraged by some to take these predictions seriously.
I haven’t read the earlier predictions but I am happy to accept they were quite accurate. However, that does not mean we should take these new ones on faith. Five years ago, AI was still in the labs. Don’t forget, ChatGPT has only been out in the wild for two years (and it was arguably released in a pre-beta state). So those earlier predictions would have been largely about the technical development.
We are now in the period where the rubber hits the road. It’s all very well to get something working in the lab, but turning that into a real-world product, a proposition that has traction and a business model that is viable - well, that’s a whole different ball game.
I started to read the paper (reluctantly, because I knew it would annoy me intensely) and in the first couple of pages I found several assumptions that are, at best, optimistic. I would call them fanciful. No, actually, I would call them bollocks.
I studied Economics, so I am used to reading papers in economic theory that explain how they relate to the world as we observe it and how they might impact the future (although Economics is not actually about predicting the future; that’s an overreach that is having dire consequences). What I know is that the assumptions a model is based on are critical. It is not unknown for assumptions to be chosen to make the espoused theory work, even if they bear little relationship to reality or exceed the boundaries of probability (I’m looking at you, Milton Friedman).
One of the more egregious examples of a fantastical assumption is that ‘the biggest datacenter the world has ever seen’ is going to be built, and that this will double capacity in 2026. Do they have any idea how long it takes to build and commission a data centre? If you started now, you’d be lucky to get it up by 2030. Against this backdrop, Microsoft and the others who would be building these data centres (they are known as hyperscalers) are actually cutting back their expansion plans.
As AI interfaces with the real world, physical constraints start to apply and human unpredictability introduces drag.
So I really struggled to get beyond the first couple of pages. I managed to get about halfway through, but then I began to lose the will to live. It’s a cross between a techie wet dream and a cheese-induced anxiety attack. Not my favourite genre.
What really upsets me about this is the uncritical way that this AI nonsense is accepted by commentators and journalists. People I normally have great respect for repeat things like ‘AGI (Artificial General Intelligence) will be here by 2030’ and ‘AI is coming for all our jobs’ without any critical analysis of how we get from here to there. Their usually sharply attuned bullshit filters seem to get completely bypassed by the AI hype.
“Oh, but if you talk to the people in the AI industry, who really know about this stuff and have the inside gen, it’s what they say” is a frequent counter to criticism. To paraphrase what Mandy Rice-Davies famously said: “Well, they would, wouldn’t they?”
You know, a lot of people aren’t worried about losing their jobs to AI, because their jobs are shit. What they are worried about is losing their salaries. Maybe focusing on making jobs less shit would yield better outcomes than chasing AI fever dreams, and improve a lot of lives along the way?
(If you do want a critical analysis of AI, I suggest you follow Ed Zitron. He’s a bit ranty, a bit sweary, very impassioned and bloody angry at the time, money and effort being wasted to make the Tech billionaires richer rather than actually addressing the world’s problems. You might be surprised to know I rather like his stuff.)
One Way Or Another
And finally … a case highlighted by David Allen Green on Bluesky.
It appears that lawyers used ChatGPT or some other LLM to do their legal research and it generated five completely fake cases. The case references looked perfectly real but the cases simply didn’t exist. The judge was not best pleased.
Using an LLM like ChatGPT is going to be cheaper than having a human do proper legal research, so it’s going to be tempting. But clearly, it is not reliable. In this case, no advantage was gained, as there was plenty of existing case law to support the legal points being made. In fact, it was positively disadvantageous: the lawyers had won the case anyway, due to failures on the defendant’s side, but the judge refused to allow costs, which would normally be routinely awarded.
And one more thing…
My wife was complaining about autocorrect. She had been messaging a friend about having coffee on Tuesday or Wednesday, but when she typed ‘We’, autocorrect offered her ‘Weekend’.
“That doesn’t make any sense, does it?” she said.
“Ah, that’s because it’s not intelligent enough to know that when you put in ‘Tuesday’, the most likely next choice is ‘Wednesday’,” I reasoned (somewhat smugly, I have to confess).
“Oh. I didn’t put Tuesday, I put ‘Tues’,” she replied.
She duly typed in ‘Tuesday’, in full, and autocorrect gave her ‘Wednesday’.
On first analysis, she had to type an extra three letters, ‘day’, for autocorrect to save her typing another nine. A net saving of six letters. Efficiency gain. Result.
But now she has to think about what to type in order to get autocorrect to suggest the word she actually wants. Now she’s thinking about how to phrase a message that she would have previously just dashed off, taking out common abbreviations so as not to confuse autocorrect. That’s extra cognitive load, with every message! Big efficiency loss.
This stuff is supposed to help us, isn't it?
And AI (well, LLMs like ChatGPT) is at heart a turbo-charged autocorrect. Can you imagine how ‘helpful’ that might turn out to be?
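In case you’re wondering what ‘turbo-charged autocorrect’ actually means: at its simplest, autocorrect-style prediction just counts which word tends to follow which in a pile of text, then suggests the most frequent follower. Here’s a toy sketch of that idea (my own illustration, with made-up sample text; real keyboards and LLMs are vastly more sophisticated, but the predict-the-next-word principle is the same):

```python
# Toy 'autocorrect': count which word follows which in sample text,
# then suggest the most frequent follower of the word just typed.
# Illustrative sketch only - not how any real keyboard or LLM works.
from collections import Counter, defaultdict

sample_text = (
    "coffee on tuesday or wednesday "
    "are you free on tuesday or wednesday "
    "see you on tuesday or wednesday"
)

# Build bigram counts: for each word, tally the words seen after it.
followers = defaultdict(Counter)
words = sample_text.split()
for current, following in zip(words, words[1:]):
    followers[current][following] += 1

def suggest(word):
    """Return the most common word seen after `word`, or None."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

print(suggest("tuesday"))  # -> 'or'
print(suggest("or"))       # -> 'wednesday'
print(suggest("tues"))     # -> None: 'tues' never appears in the
                           #    sample, so there is no suggestion
```

Which is exactly my wife’s problem: ‘Tues’ wasn’t in the model’s world, so it had nothing to offer. Scale the counting up to a neural network trained on a large chunk of the internet and you get the turbo-charged version.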
Great stuff! You got a lot further in the AI 2027 article than I did. 👍
Great way to start a Monday, thank you. I notice that these conversations happen between individuals, not organisations. Organisations just seem to drink the Kool-Aid, or maybe they’re mainlining it. Makes you wonder: when a small network of individuals can access AI to go round organisations, what are most organisations for now?