World Changing: Cognitive Hyperabundance

With all the talk about AI, we are seeing the first wave of change arrive. Much of this is being put forth by technology leaders, those working most deeply in the AI field.

It is easy to dismiss their predictions. After all, forecasting is a difficult game that few get right. For years we heard how AI was going to change the world, yet, in many ways, the world looks much the same.

Or is it?

Often, the early changes are overlooked. An exponential chart shows why this happens: the moves at the left of the chart (early on) look tiny in comparison to what comes later, so the impact seems minimal relative to the entire shift.

So far, we see some impact in the digital world. Chatbots are extremely popular and AI is being integrated into many applications.

The key is the next wave being forecast. In this article we will take a look at that.

World Changing: Cognitive Hyperabundance

Dario Amodei was speaking at Davos. He said something that I found to be very interesting.

His prediction is that recursive self-improvement for AI models will happen within roughly six months. This means we are looking at a shift in cognitive expansion.

Generative AI is really nothing more than a generator of units of cognition. As the systems improve, that output naturally increases.

If we want to isolate this, we can think of it in terms of intelligence per token. To illustrate, we will use simple IQ numbers.

Let us start with a system that has an IQ of 100. At this level, the system is not capable of advanced physics; few people at that intellectual level contribute to the field. It is generally accepted that most who do fall somewhere in the 130-180 range.

By increasing the IQ of the system, through learning, to 140, we have increased the intelligence per token. The same number of tokens yields a greater cognitive output.
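
To make the intelligence-per-token idea concrete, here is a minimal sketch in Python. The cognitive_output function and the token budget are purely illustrative assumptions, not a real metric; the point is only that a fixed token budget produces more when each token carries more "intelligence."

```python
# Toy illustration of "intelligence per token" using the IQ figures above.
# All numbers and the function itself are hypothetical.

def cognitive_output(iq_per_token: float, tokens: int) -> float:
    """Toy measure: total 'units of cognition' for a given token budget."""
    return iq_per_token * tokens

TOKENS = 1_000_000  # same token budget in both cases

baseline = cognitive_output(iq_per_token=100, tokens=TOKENS)
improved = cognitive_output(iq_per_token=140, tokens=TOKENS)

print(f"Baseline output: {baseline:,.0f}")
print(f"Improved output: {improved:,.0f}")
print(f"Gain from the same tokens: {improved / baseline - 1:.0%}")  # 40%
```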

Then we add in the cost component. Over time, the cost per token keeps declining, dropping by a significant margin each year. What was too costly just a couple of years ago is now within reach of the average person.

Models are also less costly to train for the same capability level. We should not be misled by the fact that each new frontier model costs more to create; that is not a like-for-like comparison.

What I am referring to is the cost of training a GPT-3 level model. Using this solely as an example, the money required to train one today is a fraction of what OpenAI spent the first time around. Actually, it is pennies on the dollar.
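
A quick sketch shows how a steady annual cost decline compounds down to "pennies on the dollar." The 75% yearly decline and the normalized starting cost are assumed figures for illustration only, not published numbers for any specific model.

```python
# Hedged sketch: a fixed yearly decline in training cost compounds quickly.
# The decline rate and baseline are assumptions, not real data.

annual_decline = 0.75   # assumed: cost falls 75% each year
cost = 1.00             # normalized cost at the model's original release

for year in range(1, 5):
    cost *= (1 - annual_decline)
    print(f"Year {year}: {cost:.4f} of the original cost "
          f"({cost * 100:.2f} cents on the dollar)")
```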

The same is true for GPT-4 now that we are operating at a GPT-5 level.

Knowledge Work Is Going To Be Cooked

Here we get back into the debate.

That said, I think the debate is starting to wane. More people are coming around to this idea: AI is going to destroy jobs.

I stand by my assertion that knowledge work is cooked. We are going to see a massive expansion in the ability of these AI models over the next year. If recursive self-improvement arrives in 2026, that is world-changing: the models will begin improving at rates much faster than we presently see, and the process compounds as algorithms improve and compute speeds accelerate.
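
Here is a toy model of what that compounding could look like. Both the starting growth rate and the feedback coefficient are made-up numbers; the point is only that growth whose rate itself improves each cycle pulls away from fixed-rate growth.

```python
# Toy model of recursive self-improvement: the improvement rate also improves.
# The starting rate and feedback factor are assumptions for illustration.

capability = 1.0
growth_rate = 0.15   # assumed improvement per cycle at the start
feedback = 0.10      # assumed: each cycle also raises the growth rate itself

for cycle in range(1, 11):
    capability *= (1 + growth_rate)
    growth_rate *= (1 + feedback)   # the improvement process improves too
    print(f"Cycle {cycle:2d}: capability x{capability:.2f}, "
          f"growth rate {growth_rate:.1%}")
```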

Anything that can be done with a mouse and keyboard while looking at a computer screen will be done by AI. That covers a lot of jobs. Anything related to robotics is still further down the road, so we will leave that for another article.

For the moment, AI is a supplement to human workers. There was a time when machines in factories were also there to assist human workers. Yet today's automotive factories employ a fraction of the people they did 40 years ago while producing greater output.

This is what automation does.

What happens when we have cognitive hyperabundance? How will the world look when units of cognition are improving by large percentages each year?

Going back to the IQ concept, how much does human intelligence improve? We gain intelligence throughout life, but the gains slow as we age.

Certainly, some will point to IQ as a lousy metric, failing to encompass most of what "intelligence" is. I agree. However, it does capture, in numerical terms, what we are discussing.

If AI intelligence increases at a rate of 15% per year (a low number compared to what we are seeing), how do humans compete? Humanity is not seeing that same ascent. In fact, one could argue we mostly flatline in this area.
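
The arithmetic behind that comparison is simple compounding. The sketch below uses the article's 15% annual figure and its assumption of a flat human baseline; the shared starting level is just an illustrative number.

```python
# Compounding 15%/year AI gains against a flat human baseline.
# The starting level of 100 is arbitrary; 15% and "flat" come from the text above.

ai = 100.0
human = 100.0
rate = 0.15

for year in range(1, 11):
    ai *= (1 + rate)
    print(f"Year {year:2d}: AI {ai:6.1f} vs human {human:.1f} "
          f"({ai / human:.2f}x gap)")
```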

Technology tends to sneak up on people. It is amazing to think that something which gets so much publicity could do that, but it is exactly what will happen.

This is why we have to be alert to the world changing almost overnight. It is going to happen much faster than the smartphone did, and look at the impact that had.

Posted Using INLEO