Basil Jarrett | AI is doubling down on us
I REMEMBER the first time I was introduced to Moore’s Law.
It was the late 1990s. I was a fresh-off-the-boat graduate student in Brooklyn, NY, with a Windows 95 desktop, a thirst for knowledge, and a dial-up modem. My proudest possession at the time was a hand-me-down 486 computer, a clunky, beige dinosaur that booted up with the urgency of a pensioner at a crowded ATM. I remember saving my assignments on a floppy disk and praying to the tech gods that the disk wouldn’t corrupt before I reached the library printer.
Then came the Pentium processor. It felt like magic. It was faster, sharper, and didn’t freeze every time you opened more than two windows. We used to joke that the new Pentium chip could finish your assignment before you even wrote it. At least until the next latest-and-greatest version came out.
That’s when someone introduced me to Moore’s Law, the observation that the number of transistors on a chip – and with it, computing power – would double roughly every two years, even as costs fell. It sounded like science fiction. But year after year, we watched computers shrink from desktops to laptops to PDAs to phones to watches … all while their performance and capabilities skyrocketed.
The best way to understand Moore’s Law is this: if it were applied to automobiles, every two years your car would go twice as fast, use half as much fuel, and cost half as much to buy. Crazy, right?
But Moore’s Law wasn’t physics. It was a prediction by Intel’s co-founder Gordon Moore, and for about 50 years, it was shockingly accurate. Until now.
Because today, Moore’s Law is being outpaced, not by a faster chip, but by something infinitely more disruptive: artificial intelligence (AI).
METR’S TERRIFYING LITTLE GRAPH
Just last week, researchers at the Model Evaluation and Threat Research (METR) lab in Berkeley, California, dropped what should have been front-page news. You could be forgiven for missing it amid the other disruptive news items coming out of the US almost daily. But according to METR’s data, the length of tasks that AI agents can complete autonomously has been doubling every seven months since 2019.
Let that sink in. Every seven months, AI systems are doubling in the complexity and length of the tasks they can complete on their own, without human intervention!
That’s Moore’s Law on steroids – or 98 octane if you prefer.
If this sounds like the beginning of Terminator 2, you’re not alone. But at least in James Cameron’s movie, the robots had to come back from the future. Today’s AI agents are doing the job right here, right now, and we’re the ones handing them the keys.
FASTER, SMARTER…AND EMPLOYED
Let’s break it down:
Five years ago, an AI system could write you a passable paragraph. Today? It can write an entire university thesis, build a website, manage your calendar, generate an ad campaign, and fire your assistant – all before lunch.
The tasks themselves aren’t just getting longer, they’re getting more complex, more creative, and more connected to the real world.
And unlike the chip revolution of the ‘90s and 2000s, where improvements were mostly under the hood, this AI wave is in your face, coming for your job, your lifestyle, your industries, and in some cases, even your sense of reality.
We used to laugh at the idea that machines would one day replace humans. But now we’re getting rejection letters for jobs written by an algorithm, while watching AI-generated influencers rack up millions of followers and brand deals.
The real problem, though, is that we’re not ready. Now, don’t get me wrong. I’ve written extensively about the transformative potential of AI in medicine, education, disaster management, and even governance. But while we marvel at its possibilities, we must also stare squarely into its eyes and see the dangers.
And the greatest danger, I believe, is complacency.
You see, AI isn’t following the polite rhythm of Moore’s Law any more. It’s sprinting ahead, leaving institutions, legislation, and education systems in the dust. While we’re still debating whether ChatGPT is cheating or a study tool, AI systems are already hiring, firing, diagnosing diseases, writing laws, designing weapons, and even cashing our pay cheques.
And if AI is doubling in capability every seven months, we’re six doublings away – roughly three and a half years – from AI systems being 64 times more capable than they are today.
Sixty-four times. That’s crazy.
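For readers who like to check the maths, here is a minimal back-of-the-envelope sketch in Python. It simply assumes a clean seven-month doubling period, which is an idealisation; real-world progress is messier.

```python
# Back-of-the-envelope sketch: how capability compounds if it doubles
# every seven months (an assumed, idealised rate; reality is noisier).

DOUBLING_PERIOD_MONTHS = 7

def capability_multiplier(months: float) -> float:
    """Return the growth factor after `months` of steady doubling."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

if __name__ == "__main__":
    for doublings in range(1, 7):
        months = doublings * DOUBLING_PERIOD_MONTHS
        print(f"{months:2d} months ({months / 12:.1f} years): "
              f"{capability_multiplier(months):.0f}x today's capability")
    # Six doublings = 42 months, about three and a half years = 64x.
```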
FROM STEAM TO CIRCUITS TO SENTIENCE?
‘Sentience’ is a word I discovered only recently. It means ‘the capacity to experience feelings and sensations, encompassing awareness and emotional reactions’. You know, the things they say AI cannot do. But how often have ‘they’ been wrong? When the steam engine emerged, it changed the world. When electricity spread, it changed it again. Then the Internet redefined communication, knowledge, and commerce. But none of those inventions, not even the Internet, learned how to think for itself.
AI doesn’t just compute. It learns, reasons, evolves. And the more we rely on it, the more we risk outsourcing our own intelligence, our memory, our judgement, and our imagination.
A recent Carnegie Mellon and Microsoft study confirmed this: the more people rely on AI for tasks, the less critical thinking they do. We are literally training ourselves out of the skills we need most to survive the AI age. Unless we do three things, and fast.
THE PATH FORWARD (IF WE’RE BRAVE ENOUGH)
First, we need to educate for AI literacy, not just computer literacy. Every student, from age eight to 80, needs to understand how AI works, and what it can and cannot do. Second, we need to legislate with urgency. AI cannot be the Wild West the way the early days of the Internet were; the stakes are far higher, so we need rules, ethics and guard rails in place now. Finally, we need to double down on what makes us human: empathy, ethics, creativity, and judgement – at least for now.
Moore’s Law once gave us faster processors and more capable hardware. But today’s AI revolution is giving us faster everything – including the timeline for irreversible change. So no, Skynet, the robots aren’t coming.
They’re already here and they got here in half the time.
Major Basil Jarrett is the director of communications at the Major Organised Crime and Anti-Corruption Agency and a crisis communications consultant. Follow him on Twitter, Instagram, Threads @IamBasilJarrett and linkedin.com/in/basiljarrett and send feedback to columns@gleanerjm.com.


