Everyone is trying to predict the arrival of AGI. The predictions range from “it will be here next year” to “this will never happen.”
As you might expect, I land somewhere between the two. I think we are about ten years away.
We will not get there on LLMs alone. If you wanted to make some kind of comparison, the easiest thing to do would be to compare AGI to a human brain. After all, AGI means approximating the output of a human brain closely enough that it is virtually indistinguishable from the real thing.
LLM stands for Large Language Model, and it represents the ability to infer the next symbol that makes sense in a chunk of output, based on what has come before and what the user ultimately asked about.
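If you want to see the core trick in miniature, here is a toy sketch of next-token prediction. The vocabulary and the scores are invented stand-ins, not any real model’s internals; a real LLM produces scores over tens of thousands of tokens from billions of learned weights.

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution that sums to 1.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens,
# given the context "The cat sat on the". Purely illustrative numbers.
vocab = ["mat", "moon", "quarterly", "roof"]
logits = [4.2, 1.1, -3.0, 2.5]

probs = softmax(logits)

# Greedy decoding: pick the single most probable next token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # mat
```

Run that loop over and over, appending each chosen token to the context, and you have the whole generation game.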
I decided to Google which part of the brain an LLM is most often compared to, and if you are interested in that kind of thing, you can go do that yourself. For our conversation, let’s say it represents some percentage of our ability to process language. Really excited techbros might say sixty percent. Skeptical naysayers might peg it at twenty percent.
In my opinion, we will hit the maximum possible value coming out of LLM optimization within a few years. I don’t have some kind of law or some math words behind this. I am just trusting my own instinct here after watching four waves of technological evolution since the early nineties.
We are already getting pretty good outputs from these models. Where they fall down is in context management. We are constrained by the size of the context the model can process relative to the power and compute available to feed it. How big does your context need to be? And how much juice do you have to pour into the magic box to have it emit the things? Those numbers may be getting marginally better, but as I understand it, capability scales at best linearly with the supply of each.
As we converge on the best possible output for LLMs, we will start to see people building simulations of other parts of the human brain. And this is what we need to get to true AGI.
In about a decade, we will have four or five different systems that have the complexity of an LLM. Each of these will represent an abstract version of a part of the human brain. More importantly, we will have a governing system that decides which system or systems are needed at any given moment.
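To make the governing-system idea concrete, here is a toy sketch of a router dispatching a request to specialized subsystems. Every subsystem name and the keyword-based routing rule are invented for illustration; a real governor would be a learned model, not a handful of string checks.

```python
from typing import Callable

# Stand-ins for future brain-part simulations. Each of these would be
# a system with roughly the complexity of today's LLMs.
def language_system(request: str) -> str:
    return f"[language] drafting a response to: {request!r}"

def spatial_system(request: str) -> str:
    return f"[spatial] reasoning about geometry in: {request!r}"

def memory_system(request: str) -> str:
    return f"[memory] recalling prior context for: {request!r}"

SUBSYSTEMS: dict[str, Callable[[str], str]] = {
    "language": language_system,
    "spatial": spatial_system,
    "memory": memory_system,
}

def governor(request: str) -> list[str]:
    # Decide which subsystems this request needs, then run each one.
    chosen = ["language"]  # language stays in the loop for everything
    lowered = request.lower()
    if any(word in lowered for word in ("where", "map", "shape")):
        chosen.append("spatial")
    if any(word in lowered for word in ("remember", "last time")):
        chosen.append("memory")
    return [SUBSYSTEMS[name](request) for name in chosen]

for result in governor("Where did we leave the map last time?"):
    print(result)
```

The interesting engineering problem is not the subsystems themselves; it is that routing layer, which has to decide, moment to moment, which parts of the brain the task actually needs.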
That combined set of systems will be reasonably deep, and I think it will be functionally indistinguishable from a human brain. It will have fight-or-flight instincts, the ability to be pleased with itself, and a vast store of contextual knowledge about just about everything anyone could know, plus some extra machinery to handle brain functions above and beyond hallucination reduction.
This is awfully handwavy, but I think it describes the minimum of what we need to accomplish to get to AGI. We will not get there with just LLMs, even if we have experts writing prompts that make people wonder if their computer has the feels.
That’s it. That is the whole post.
Unrelated to that, Friday’s presentation, “Tips and Tricks on Hiring in 2026,” also known as “something something ai,” was great. We had a small group of people having a highly intelligent and valuable conversation about things we have all seen in hiring, and I think more than one person took away some new tools and tactics to experiment with when interviewing. I also learned that I should be checking the battery level on my Bluetooth headset. It gave up the ghost halfway through the show, and I had to put my giant air traffic control headset back on. I am going to re-record some of it and put it back into my Leadership Lighthouse site as personal coursework instead of a webinar. I will update you all on how those experiments are going.
See you all next week!