
We've been here before

by

Noel Braganza

On chatbots, calculators and why humans always take longer than expected to realise what they're holding

April 3, 2026

When calculators became widely available in the 1970s, a significant number of people used them primarily to check arithmetic they had already done by hand. Not to go further, not to attempt calculations that would have been impossible before, just to verify the answer they already trusted more. The machine was faster, certainly. But the mental model hadn't moved. The calculator was a tool for arithmetic, and arithmetic was something humans did, so the calculator slotted in as a slightly more reliable human, not as something that changed what was possible.

This is not a story about people being slow or unimaginative. It is a story about how humans actually adopt new things.

We tend to understand new technology through the lens of whatever it most resembles. The first cars were horseless carriages. The first television programmes were radio with pictures. The first digital cameras were film cameras that happened to use a memory card instead of a roll of film. The new thing gets understood as a faster, cleaner, more convenient version of the old thing. And for a while, that's more or less what it is. The infrastructure, the habits and the expectations all converge around the familiar use case, and the less familiar possibilities sit quietly on the periphery, waiting.

AI, right now, is mostly being used as a very good search engine. A chatbot. Something you ask a question, get an answer, and close the tab. This is useful, genuinely useful, in the same way that using a calculator to check your arithmetic is useful. It saves time. It reduces friction. It is a real improvement over what came before it. But it is also, in historical terms, the horseless carriage moment. The thing being optimised for is still a version of what already existed.

I notice this not with frustration exactly, more with a kind of interested recognition. Because the pattern is so consistent across technology history that it starts to feel less like a failure of vision and more like a feature of how change actually works. You need the familiar use case to get adoption. You need adoption to get the infrastructure. You need the infrastructure to make the unfamiliar use cases possible. The chatbot era of AI might be a necessary precondition for whatever comes after it, in the same way that people using the internet mostly for email was a necessary precondition for everything the internet eventually became.

But there is something underneath the pattern that I find more interesting than the pattern itself.

Every time a genuinely new capability arrives, there is a window, sometimes years long, where almost everyone is holding something extraordinary and treating it as ordinary. The people who close that gap faster aren't necessarily smarter or more technical. They're people who are willing to let go of the familiar frame before it's strictly necessary. Who are comfortable asking not "what can this do that the old thing did" but "what does this make possible that had no equivalent before."

That is a harder question. It requires sitting with uncertainty and not resolving it too quickly into something comfortable. It requires being genuinely curious about a capability rather than immediately assigning it to an existing mental slot. Most people don't do this naturally, not because they lack the ability, but because the familiar frame is easier, faster and usually good enough for the immediate task.

The calculator-checkers weren't wrong to use the machine that way. They just didn't need it to be more than that yet. And most of the people using AI as a slightly faster search engine aren't wrong either. They're getting real value from it. The gap between what the technology is and what they're using it for doesn't cost them anything visible today.

What it costs is harder to measure. It's the compounding that doesn't happen. The problems that don't get redefined. The assumptions that don't get questioned because the tool that might have questioned them got used to confirm them instead.

History suggests that this window closes eventually. The infrastructure improves, the use cases multiply, the early adopters demonstrate enough value that the frame starts to shift for everyone else. The calculator stops being a faster way to check arithmetic and becomes the thing that makes new kinds of mathematics accessible to new kinds of people. The internet stops being a better postal service and becomes something that has no postal equivalent at all.

We are somewhere in that window right now with AI. It's not uncomfortable exactly, but it is recognisable if you've seen it before. The extraordinary thing sitting in an ordinary frame, waiting for enough people to notice the gap.

It has always taken longer than expected. And it has always, eventually, closed.

About the author

Noel Braganza is a designer and founder based in Gothenburg, Sweden. He co-founded MuchSkills, a bootstrapped SaaS platform for skills intelligence, and Up Strategy Lab, a strategy and design consultancy. His background is in Interaction Design, with research experience at the MIT Design Lab.

Most of his work starts from the same instinct: that the inherited assumptions underneath a problem are usually more interesting than the problem itself. That's true of how organisations think about skills, how founders talk about growth, and how people are starting to make sense of AI.

MuchSkills is profitable and growing without external funding. That shapes how he thinks about building, what's worth optimising for, and what isn't.

He writes occasionally at noelbraganza.com.