Martin Uetz

The Machine That Improves Itself

Martin Uetz · 9 min read

Why Recursive AI Is the Most Important Thing You're Not Thinking About


I've spent 30 years in technology. Started as a sixteen-year-old apprentice at Hewlett-Packard, rotated through every department they had, and worked my way through HP, Fujitsu, Cisco, and eventually into building my own company. I've seen waves come and go — the internet, mobile, cloud, big data. Every time, people said "this changes everything." And every time, they were mostly right, just not in the ways they expected.

But what's happening right now with artificial intelligence is fundamentally different. And I don't say that lightly.

Here's why.


A machine that rewrites its own blueprint

Let me explain something that most people outside of AI research haven't fully grasped yet. It's called recursive self-improvement. And once you understand it, you won't sleep the same way again.

The concept is deceptively simple. An AI system improves its own code, its own architecture, its own reasoning — and then the improved version improves itself again. And again. Each generation is smarter, faster, more capable than the last. Not because a human engineer sat down and made it better. Because the machine did.

Think about that for a second.

Every other technology in human history has been limited by the speed at which humans can iterate. You design a chip, you test it, you find the flaws, you redesign. That cycle takes months. Years. Careers. But when the thing doing the designing is also the thing being designed — and it's getting better at designing with every cycle — the constraint disappears.

This isn't science fiction. It's already happening.

Google's DeepMind used AI to design the next generation of its own chips — and those chips were better than what their best human engineers produced. AlphaCode writes software that competes with the top tier of human programmers. AI systems are now training other AI systems, optimising architectures, pruning inefficiencies, and discovering approaches that no human would have thought of.

We're not at the beginning of this. We're at the beginning of the beginning.


The curve that breaks your intuition

Here's the part that trips most people up: the acceleration.

We're wired to think linearly. If something improves by 10% this year, we expect roughly 10% next year. That's how salaries work. That's how most things in our daily lives behave.

AI doesn't work like that.

When each improvement makes the next improvement faster and better, you get exponential growth. And humans are catastrophically bad at understanding exponentials. We think we have time. We think the change will be gradual. We think we'll see it coming and adjust.

We won't.

Ray Kurzweil has been talking about this for decades — the singularity, the intelligence explosion, the point at which artificial intelligence surpasses human intelligence and starts accelerating away from us at a pace we can't follow. For years, people called him a dreamer. Now the biggest companies on the planet are racing toward exactly what he described.

The gap between GPT-3 and GPT-4 was roughly three years. The capabilities jump was staggering. Now imagine that same magnitude of jump happening every six months. Then every month. Then every week. That's what recursive self-improvement looks like when it hits its stride.

I'm not saying this will happen tomorrow. I'm saying we're on that curve. And the curve doesn't care whether you're ready for it.
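To make the intuition gap concrete, here's a toy model in Python. It contrasts the linear "10% a year" expectation with a compounding loop where each cycle's gain scales with current capability. The numbers are purely illustrative assumptions, not forecasts of AI progress:

```python
# Toy model: linear intuition vs. compounding self-improvement.
# All parameters (starting capability, rates, step counts) are
# made-up for illustration only.

def linear_growth(capability=1.0, gain=0.1, steps=10):
    # Fixed absolute gain per cycle: the "10% a year" mental model.
    for _ in range(steps):
        capability += gain
    return capability

def compounding_growth(capability=1.0, rate=0.1, steps=10):
    # Each cycle's gain is proportional to current capability,
    # so every improvement feeds the next round of improvement.
    for _ in range(steps):
        capability *= (1 + rate)
    return capability

print(linear_growth(steps=10))        # 2.0
print(compounding_growth(steps=10))   # ~2.59 -- already ahead
print(linear_growth(steps=50))        # 6.0
print(compounding_growth(steps=50))   # ~117.4 -- the curve breaks intuition
```

After ten cycles the two models look similar; after fifty, they live in different worlds. That widening gap, not the early numbers, is the point of the curve.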


What's already real

Let me ground this, because I'm not interested in hype.

AI-assisted chip design at Google reduced design cycles from months to hours. NVIDIA is using AI to design the next generation of GPUs — the very hardware that runs AI. That's recursion in the physical world.

Meta's AI research lab built systems that can write, test, and debug code. GitHub Copilot already writes a significant chunk of production software at companies worldwide. Developers who use it aren't being replaced — they're becoming dramatically more productive.

In drug discovery, AI models are identifying molecular structures in days that would have taken medicinal chemists years. AlphaFold cracked the protein folding problem that had stumped biology for fifty years.

And in AI research itself — this is the critical piece — AI is now being used to discover new training methods, new architectures, new optimisation techniques. The machine is literally improving the process of building machines.

Each of these alone would be significant. Together, they form a pattern that's hard to ignore.


The impact nobody's ready for

Now let's talk about what this means for the rest of us. Not for AI researchers at DeepMind. For normal people. For businesses. For society.

Work changes fundamentally. Not in the distant future — now. Every knowledge worker will have an AI co-pilot within the next few years. Not a tool that replaces you, but one that amplifies you. The question isn't whether AI will affect your job. It's whether you'll be the person using AI or the person competing against someone who does.

I've seen this play out in my own company. At humAIne, we use AI every single day — for research, analysis, writing, coding, strategic planning. The work that used to take a team of five a week now takes two people a day. That's not a marginal improvement. That's a structural shift.

Creativity gets redefined. People think AI threatens creativity. I think the opposite is true. When the mechanical parts of creative work — the drafting, the formatting, the technical execution — are handled by machines, humans are freed to do what we actually do best: have original ideas, make unexpected connections, feel things deeply enough to create something meaningful.

The best musicians don't fear new instruments. They learn to play them.

Education becomes obsolete — in its current form. I've written about this before. Our school systems were designed to produce factory workers. Sit down, memorise, repeat, test. That model was already dying. Recursive AI will kill it entirely. When any student can access a personal tutor that's smarter than any teacher, available 24/7, and endlessly patient — what exactly is the classroom for?

The answer is: human development. Social skills. Emotional intelligence. Critical thinking. Collaboration. All the things schools have been ignoring in favour of standardised tests.

Power concentrates — unless we actively prevent it. This is the risk I worry about most. If recursive self-improvement means that whoever has the best AI gets exponentially better AI, you get a winner-take-all dynamic. The gap between the leaders and everyone else doesn't just grow — it explodes. Countries, companies, individuals.

That's not a future I want to live in.


The philosophical earthquake

Here's where it gets really interesting — and really uncomfortable.

For all of human history, we've been the smartest thing on the planet. Everything we've built — art, science, philosophy, civilisation — rests on the assumption that human intelligence is the pinnacle. The ceiling.

What happens when the ceiling disappears?

When a machine can reason better than you, create more beautifully than you, solve problems faster than you — what is your purpose? What makes you you?

I don't think this is a crisis. I think it's a liberation. But only if we approach it with eyes open.

The answer, I believe, is that human value was never really about intelligence. It was about consciousness. About experience. About the fact that we feel things. We love. We grieve. We laugh at absurd jokes. We find meaning in a sunset, not because it's computationally interesting, but because we're alive and we know we won't be forever.

No machine has that. No machine — no matter how recursively improved — will understand what it feels like to hold your child for the first time.

I know. I have two sons. And the moments that defined me as a person had nothing to do with how smart I was. They had to do with how present I was.


Why I built humAIne

This is exactly why I founded humAIne. The name isn't accidental. Human plus AI. The "AI" capitalised inside the word, because it's not separate from us — it's becoming part of us.

My driving belief — the thing I wrote on a piece of paper years ago in a personal workbook that I still have — is this: Eine Welt, wo Menschen im digitalen Zeitalter wieder miteinander Mitgefühl geben. A world where people in the digital age show each other compassion again.

That's not anti-technology. That's pro-human.

The biggest risk of recursive AI isn't that machines become too smart. It's that we become too passive. That we outsource our thinking, our creativity, our decision-making, and eventually our agency to systems that optimise for efficiency rather than meaning.

I've spent my career — from HP to Cisco to building businesses with my wife Sigrun — learning that technology is a tool, not a destination. The destination is always human. Better lives. More connection. Less fear.

The future isn't a spectator sport. We don't get to sit in the stands and watch AI reshape the world and hope it turns out alright. We have to be on the pitch. Making choices. Setting boundaries. Building the systems we actually want to live with.


Where do we go from here

I'll be direct. Here's what I think needs to happen.

We need AI literacy to become as fundamental as reading. Not coding — understanding. Every person should grasp what AI can and cannot do, how it makes decisions, and where it fails. This isn't optional. It's survival.

We need regulation that's as fast and adaptive as the technology itself. The current approach — committee meetings about AI policy while the technology doubles in capability every few months — is like trying to regulate the internet with fax machines.

We need the benefits distributed, not hoarded. If recursive AI creates unprecedented productivity gains, those gains need to flow to more than just shareholders and Silicon Valley. This requires political will, new economic models, and a level of cooperation between nations that we haven't demonstrated yet.

And we need to invest in what makes us human. In relationships, in community, in experiences that no algorithm can replicate. Because the more capable the machines become, the more valuable those distinctly human qualities will be.


The bottom line

Recursive self-improvement in AI is not just another tech trend. It's potentially the most significant development in the history of our species. The machine that improves itself is coming. In many ways, it's already here.

I'm not afraid of it. But I refuse to be naive about it.

The technology will advance whether we're ready or not. The curve doesn't wait. The question — the only question that actually matters — is what kind of humans we choose to be in a world where machines can outthink us.

My bet is on us. On our creativity, our compassion, our stubborn insistence on finding meaning in a universe that doesn't owe us any.

But we have to earn it. Every single day.

Let's get to work.


Martin Uetz is the founder of humAIne, a company focused on the intersection of humans, technology, and business. He writes from Switzerland and Iceland, usually with too much coffee and not enough sleep.