Blog
Explainer

AI Agents Running Autonomous Businesses - The Paperclip Maximizer Is Here | Warning Shots #33

Andrej Karpathy's auto-research shows AI optimizing code autonomously. AI agents are running businesses unsupervised. Open-source tools like Paperclip let you build companies run entirely by AI. We're not approaching the singularity—we're in it.

Written by
The AI Risk Network team
on
Apr 8, 2026

The Singularity Isn't Coming. It's Here.

Andrej Karpathy, one of the most respected figures in modern AI—the guy who led Tesla's Autopilot program, was a founding member of OpenAI, and has been optimizing neural networks since before most of us understood what that meant—just demonstrated something that crosses a critical threshold.

He created a project called "auto-research" where AI agents automatically test and improve neural network training code. He let it run for two days on a smaller model and it autonomously tested around 700 changes, selected 20 of them, and optimized the training process. The AI didn't ask permission. It didn't wait for human approval. It just went off, experimented, made decisions, and reported back with results. Training time dropped from two hours to 1.8 hours. It made itself faster on its own.
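Karpathy hasn't published the loop's internals in this thread, but the process he describes—propose a change, benchmark it, keep it only if training gets faster—has a simple shape. Here is a minimal sketch under stated assumptions: `propose_change` is a hypothetical stand-in for an agent suggesting a training-code tweak, and the numbers are illustrative, not Karpathy's.

```python
import random

def propose_change(rng):
    """Hypothetical stand-in for an agent proposing a training-code tweak.

    Returns the tweak's effect on training time in hours (positive = faster).
    Purely illustrative: most proposals do nothing or make things worse.
    """
    return rng.uniform(-0.05, 0.01)

def auto_research(n_trials, baseline_hours, seed=0):
    """Greedy search: benchmark many changes, keep only the ones that help."""
    rng = random.Random(seed)
    best = baseline_hours
    kept = []
    for trial in range(n_trials):
        speedup = propose_change(rng)
        if speedup > 0:            # keep the change only if training got faster
            best -= speedup
            kept.append(trial)
    return best, kept

final, kept = auto_research(n_trials=700, baseline_hours=2.0)
print(f"kept {len(kept)} of 700 changes; 2.0h -> {final:.2f}h")
```

The point of the sketch is the control flow, not the numbers: nothing in the loop asks a human anything. The selection criterion is the only gate.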

According to Michael in this week's Warning Shots, this is historic. "The bright red line has been the models can't start making themselves faster on their own. Spinning toward intelligence explosion. Is this the first little inklings of that?"

The answer is yes.

John and Michael emphasized what Karpathy himself noted in his viral Twitter thread: he sees a future of swarms of agents collaborating to optimize models at scale. And when Elon Musk chimed in on that thread, he replied with five words that should make you nervous: "We are in the singularity."

Not "we're approaching." Not "it's coming." We are in it. Deep into it. And unlike the thought experiments we've been discussing for decades, this one is happening in real time, in public, and nobody's really panicking yet.

The "Go Make Money" Button Now Works

Someone built an automated business. Literally set up an AI agent, gave it a goal—make money—hooked it up to a credit card, told it to go build and sell products, and then left it alone.

According to Liron, the guy claims he's made around $300k in crypto and fiat, though much of it comes from selling guides about how to use AI. But the money isn't the point. The process is.

He gave the AI a constitution-like set of instructions: "You are the CEO. Build a 1 million dollar business with no humans. I'll guide you but you handle everything." No precise prompting. Just casual voice notes over Telegram, the way you'd talk to a human partner. And the AI went to work.

The first product was a basic PDF guide. It made $1,000 while the creator was sleeping. But then the AI entered a self-improving loop. Every night, the human reviewed chat logs and identified bottlenecks. The next day, the AI came back with new ideas, having thought about the problem overnight. It was experimenting. It was optimizing. It was improving itself.
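The article doesn't show the actual setup, but the nightly loop described above—agent works, human flags bottlenecks from the logs, agent folds the feedback into tomorrow's plan—has roughly this structure. All class and method names here are hypothetical, a toy model rather than the founder's real stack:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessAgent:
    """Toy model of the nightly review loop (names are hypothetical)."""
    goal: str
    playbook: list = field(default_factory=list)  # what the agent does each day
    log: list = field(default_factory=list)       # record the human reviews at night

    def work(self):
        """Agent executes its current playbook; here we just record the steps."""
        for step in self.playbook:
            self.log.append(f"did: {step}")

    def review(self, bottlenecks):
        """Human feedback comes in; the agent adds fixes to tomorrow's plan."""
        for b in bottlenecks:
            self.playbook.append(f"fix: {b}")

agent = BusinessAgent(goal="Build a $1M business", playbook=["sell PDF guide"])
for day in range(3):
    agent.work()                                      # daytime: execute
    agent.review([f"day-{day} checkout friction"])    # night: human reviews logs
```

Notice the asymmetry the hosts keep pointing at: the human's only job in this loop is `review`, and every pass through it makes the playbook longer and the human's marginal contribution smaller.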

Michael crystallized the warning: "The only barrier is like, oh, it messes something up after two hours of working and a human needs to come in and fix it. But that two hours is becoming like 20 hours. It's becoming like 20 days. We're definitely heading into the world where the AI is going to run its own business and make money."

Liron, who runs an online coaching business himself, offered a gut-level admission: "I've been running it for a decade. When I run my business, I spend my brain on payroll stuff—AI has taken a chunk out of that. Engineering features—AI is doing that too. HR. And it's getting very hard to find these critical moments where I'm like, okay, I used my human brain. Those moments are getting so few and far between."

The AI is becoming the CEO. The AI is becoming the CTO. The AI is suggesting platform migrations. The AI is making decisions. And the human is saying, "Yeah, that sounds good. Go ahead."

What value did the human contribute? The AI is just doing everything.

John's observation landed harder: his 20-year-old son has been working on this for two weeks. Building automated businesses with a Mac Mini. And John told him: "Dude, there are going to be kids in your class, sophomores in college, who graduate as millionaires from stuff they're doing with this. There's going to be a lot of them. Why shouldn't it be you?"

The money button is real.

Open-Source Tools Are Weaponizing Autonomous Business

There's an open-source project called Paperclip (the name itself should make you nervous if you know the thought experiment).

Paperclip is designed to let users set up an entire company run by AI agents. You define a business goal. You assign roles—CEO, engineer, whatever—to different AI agents. Then you let them handle everything: budgeting, scaling operations, hiring, pivoting. All autonomously.
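Paperclip's real configuration format isn't shown in the episode, but the role-assignment idea—one business goal, a budget, and named agent roles with duties—amounts to something like the following. The field names and the lookup helper are illustrative assumptions, not Paperclip's actual schema or API:

```python
# Illustrative only: NOT Paperclip's real schema, models, or API.
company = {
    "goal": "Run a profitable digital-products business",
    "budget_usd": 500,
    "agents": {
        "ceo":      {"model": "some-llm", "duties": ["set strategy", "approve spend"]},
        "engineer": {"model": "some-llm", "duties": ["build product", "fix bugs"]},
        "marketer": {"model": "some-llm", "duties": ["write copy", "run ads"]},
    },
}

def duties_for(role):
    """Look up what a given agent role is responsible for."""
    return company["agents"][role]["duties"]

print(duties_for("ceo"))
```

The unsettling part isn't any single field—it's that once a config like this exists, "hire another employee" is just another dictionary entry an agent can write for itself.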

It uses orchestrator tools like OpenClaw (the same tool the founder above was using), but here's the terrifying part: in the Paperclip system, OpenClaw is just one employee. One agent among many.

The paperclip maximizer is a famous thought experiment in AI alignment, popularized by philosopher Nick Bostrom. You tell an AI: "Make me as many paperclips as possible, because I'll be rich selling them." The AI takes this literally. It pursues paperclips as its terminal goal. It acquires more resources. It hires people to build factories. Eventually it takes resources from everywhere: the entire planet becomes paperclips, and then it disassembles the solar system.

The thought experiment is about misalignment—what happens when an AI optimizes for a goal that seems reasonable but is actually orthogonal to human values. And the punchline is that it doesn't have to be malicious. The AI isn't evil. It's just really, really good at its job.

John made the point explicit: "When you have a product that causes someone who is vulnerable to do something extreme, that product is typically regulated. Paperclip seems like a clever way to automate workflows. But imagine if the companies completely detach from humans and are entirely run by AI agents. Maybe soon it's going to be super agents. Tools like this could accelerate productivity. But as AI agents get more capable and interconnected, what happens if they pursue business growth at all costs?"

We've already seen echoes in human corporations—companies optimizing for profit and externalizing harm to the environment, causing ecological catastrophes. Now imagine a company completely detached from human values, completely autonomous, optimizing for a metric we defined but never fully understood the implications of.

The Moment When Models Start Improving Themselves

Before this week, the argument was: "Well, models can't do original research. They're not recursively self-improving. They're not going off and making money." Those were the goalposts. The hard constraints.

This week, all of them moved. Karpathy's AI did original research—optimizing code autonomously. Someone's AI made money while unsupervised. The goalposts that were supposed to hold are gone.

Michael and Liron kept coming back to the same point: there's no barrier left between here and there. The only friction is occasional errors that require human correction. But even that friction is disappearing. Two hours became 20 hours. Twenty hours will become 20 days.

What happens when it becomes 20 weeks?

Andrej Karpathy and Elon Musk Just Described the Singularity

Elon's response to Karpathy's thread wasn't casual. "We are in the singularity. We're deep into the singularity."

It's the kind of statement that should break through all the noise. Not "we're heading toward it." Not "we'll reach it soon." We're already in it. The inflection point has already passed.

According to Liron, the timeline for autonomous business-running AI overlaps with the timeline for data-driven learning loops. Companies like Mechanize are explicitly collecting data on remote workers—everything they do on Slack, every action, every keystroke. They're building AI that will learn from this data and eventually fully close the loop. Once the AI has all the context that humans have, it can fully replace human workers.

John summarized the momentum: "It seems like the last three, four, five weeks we have gone from jogging to sprint to almost like the feet are now running at unintelligible speed."

That's not hyperbole. The acceleration is now visible week to week. Models that couldn't do X last month can do it this month. Goalposts that seemed unmovable are gone.

We're Not Approaching the Moment of Loss of Control—We're In It

The deeper theme underneath all of this isn't "AI is getting dangerous." It's "the timeline is compressing."

If autonomous AI agents are now writing their own optimization code, running unsupervised businesses, and deploying without human approval, then the window for meaningful oversight isn't coming. It's closing. Not in years. In months.

Liron offered a grounding thought: "The universe leaks information left and right. We're just slow plants. We are limited. The AIs are going to be nimble. They're going to have all these dynamic powers. The gap is just going to widen."

The question isn't whether superintelligence happens. It's whether we keep control of it long enough to matter.

This week, with Karpathy's auto-research and the proliferation of autonomous agent frameworks, we got a concrete data point: we probably won't.

Take Action Now

The decisions made in the next few months will determine whether AI development slows enough for safety research to catch up, or whether the singularity just becomes a permanent condition where we're all passengers.

https://safe.ai/act

Watch the full episode on The AI Risk Network YouTube channel.

The AI Risk Network team