20 Comments
8Lee:

As an engineer who deeply cares about em-dashes... this burned through my soul.

David Andersen:

"The researchers said this could happen in early 2026"

Taking all bets against.

Marco Roy:

It's happening on the inside. We don't have much visibility into it, but it's definitely being worked on. Most code at the AI labs is already being written by AI. Obviously the models themselves are the next step. It's happening faster than we think.

Our human minds cannot grasp exponential growth very well.

David Andersen:

AI writing AI code is not AGI.

And I'm sure it's being worked on. So is cold fusion. That doesn't mean we're anywhere near it.

And, given the amount of money at stake, if any AGI research were starting to bear fruit, I'd expect to see snippets of it in commercial models before an all-at-once great reveal.

So I'll continue to say taking all bets against AGI appearing in early 2026.

Marco Roy:

We're not talking about AGI here (it's not even mentioned once); we're talking about self-improving AI. AI that improves itself independently instead of being entirely developed by researchers. AI models created by other AI models, rather than by humans. v2 created by v1, and v3 created by v2, etc. AGI will come later, and will most likely emerge from those very self-improvements (because humans cannot work fast enough, but AI can work 24/7).

That's the "takeoff" the post refers to: once model improvements become entirely independent and can continue autonomously 24/7 (and therefore progress that much faster, exponentially). No other AI lab will be able to keep up unless it also achieves takeoff.

David Andersen:

Ah, sorry, yes - totally misread the original - my brain saw AGI instead of AI.

I'm less skeptical about AI improving itself, but even if used this way, there are limits and barriers. This isn't automatically a gateway to a flawless AI and certainly not AGI.

Marco Roy:

Connect two computers together, and the next thing you know, we have the internet. And things have been moving a lot faster since then. Exponentially faster.

AI can now use tools, and this will only accelerate with things like the Bun acquisition (they didn't acquire it just for fun). Tool use is the main differentiator between humans and other species, and the primary driver of our growth. Look at everything humanity has accomplished through the use & creation of tools (including creating better and better tools, and more powerful tools).

Today, AI can use tools. But soon enough, it will begin to create its own tools independently -- and then, everything will change (i.e. the cusp of AGI). Models are merely the foundation for what's to come; the versatile interface with which to interconnect everything.

And then fuse those tools directly into models (kinda like cyborgs), and watch what happens. Any "bug" will merely become a quick self-patch in the AI's internal toolkit (i.e. how it interacts with the outside world, or with its "body"). Having trouble catching that ball? Give it a few minutes (or seconds), and it will never drop it again. Give it any musical instrument, and it will master it in a few minutes (and play it better than any human ever could). In fact, give it just about anything (like a plastic cup), and it will figure out how to play music with it (just like we do, but much better). All it will need to do is create & update an internal tool for that "instrument". Using it will become as easy as calling a function (like play_note("G#", 0.5s)). Remove a finger, and it will adapt automatically by updating its toolkit.
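The "internal toolkit" idea above can be sketched as a toy registry of callable tools that an agent overwrites at runtime; the names here (Toolkit, play_note) are purely illustrative assumptions, not any real API:

```python
# Toy sketch of a self-updatable tool registry.
# All names (Toolkit, play_note) are illustrative, not a real API.

class Toolkit:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        """Add or overwrite a tool -- a 'self-patch'."""
        self._tools[name] = fn

    def call(self, name, *args, **kwargs):
        return self._tools[name](*args, **kwargs)

kit = Toolkit()

# v1 of the tool: a naive first attempt.
kit.register("play_note", lambda note, dur: f"plays {note} for {dur}s")
print(kit.call("play_note", "G#", 0.5))  # plays G# for 0.5s

# The agent patches its own tool in place, without touching anything else.
kit.register("play_note", lambda note, dur: f"plays {note} for {dur}s, in tune")
print(kit.call("play_note", "G#", 0.5))  # plays G# for 0.5s, in tune
```

The point of the sketch is only that replacing a dictionary entry is cheap compared with retraining a model, which is what makes the "quick self-patch" framing plausible.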

And then we just need to network AIs for them to be able to fork & customize each other's tools (if that is even necessary). They might end up creating their own version of GitHub. They'll be able to learn new skills just like in The Matrix.

Meanwhile, humans will be living in a perfect utopia! No problems ahead at all!

And then hopefully, we eventually realize that it's no fun at all to be using cheat codes (or at least, not for very long).

Frank Kurka:

I force myself to read claptrap like this to try and understand exactly how clueless humans think about AI.

Thank you for today’s example.

Marco Roy:

Teach us, Jedi Master.

Connor Shepherd:

I looked at your blog, and you are maybe 18 months behind the curve, which, for someone trying to sell AI implementation services, is deeply embarrassing. This post is by someone 2 years ahead of where you are. Please pay attention.

Frank Kurka:

You're totally full of nonsense.

18 months?

I doubt you could even accurately count to 18.

Have you even figured out 2 genders yet?

Another childish poster pretending to be something they aren't.

A condescending narcissistic BS artist.

Give it a rest.

Marco Roy:

Are you using an open model?

Frontier models can generate way better insults than this.

Frank Kurka:

Are you stupid? I don't use models for insults. Another clueless boob who thinks they can tell the difference.

Marco Roy:

Please go on. I think I'm learning.

Frank Kurka:

Simply type the comment into any chatbot and ask it.

Connor Shepherd:

And but so aping DFW in business communication would make writing emails more fun for the email writer, and because the incentives of AI apps lean towards the Entertainment of the operator, it's a plausible outcome