Are We At AGI?

When will we reach Artificial General Intelligence (AGI)?

What does this even mean? One of the challenges in talking about AI is that our terms carry varying definitions. There are a multitude of ways people define AGI, so the metrics keep shifting.

If we look at the average chatbot, it is safe to say it knows more than any single human across all fields combined. Taken in its totality, its knowledge tops anyone's. Naturally, there are still people, on an individual basis, who are more advanced in particular fields than the AI models.

For example, no AI model is smarter than the top physicist. That title still goes to humans. We could likely say the same thing in fields such as medicine, especially when real-world factors such as surgery are incorporated.

Elon Musk defines AGI as a model that is smarter than ALL humans in EVERY field. Based upon that criterion, we are not there yet. Musk does believe we will get there this year, but we have to remember his track record with predictions.

Others have a lower bar, citing a model that is smarter than MOST humans. By this metric, we have reached it.

In this article, we will dig into these definitions and discuss how the answer might affect things.

Are We At AGI?

A bigger question is whether it really matters.

To be honest, my view is that this is mostly an academic exercise. Whether or not we have AGI really has no impact. To me, it is only a matter of scale.

While AI models might not be smarter than every human, they do give the majority superpowers in fields where they had none before.

Let us take coding.

This is an area of dispute, as the models are not up to par with the top coders. A case could be made that they are not even equivalent to the average programmer.

That said, look at how few people really know how to code. It is not a skill the masses possess the way reading or writing is. Coding is still specialized.

In this regard, AI models advance the capabilities of those individuals. While they might not be able to code a highly complex system, games such as Pong are possible. Obviously, this might not seem revolutionary since that game is roughly 50 years old.

But we have to focus upon the fact that we are dealing with today's technology. By the end of the year, perhaps Pac-Man is possible.
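
To make the Pong point concrete, here is a minimal sketch, in Python, of the kind of small program a current chatbot can typically produce from a single prompt. It assumes the third-party pygame package is installed; the window size, paddle speeds, and the naive ball-following opponent are arbitrary illustrative choices, not any specific model's output.

```python
# A bare-bones Pong-style game: illustrative of what chatbots can generate on request.
# Requires: pip install pygame
import pygame

WIDTH, HEIGHT = 640, 480
PADDLE_W, PADDLE_H = 10, 80
BALL_SIZE = 10

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Pong sketch")
clock = pygame.time.Clock()

# Player paddle on the left, simple computer-controlled paddle on the right
player = pygame.Rect(20, HEIGHT // 2 - PADDLE_H // 2, PADDLE_W, PADDLE_H)
cpu = pygame.Rect(WIDTH - 30, HEIGHT // 2 - PADDLE_H // 2, PADDLE_W, PADDLE_H)
ball = pygame.Rect(WIDTH // 2, HEIGHT // 2, BALL_SIZE, BALL_SIZE)
ball_vx, ball_vy = 4, 3

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Move the player paddle with the arrow keys
    keys = pygame.key.get_pressed()
    if keys[pygame.K_UP]:
        player.y -= 6
    if keys[pygame.K_DOWN]:
        player.y += 6
    player.clamp_ip(screen.get_rect())

    # Naive opponent: simply follow the ball
    if cpu.centery < ball.centery:
        cpu.y += 4
    elif cpu.centery > ball.centery:
        cpu.y -= 4
    cpu.clamp_ip(screen.get_rect())

    # Move the ball and bounce it off walls and paddles
    ball.x += ball_vx
    ball.y += ball_vy
    if ball.top <= 0 or ball.bottom >= HEIGHT:
        ball_vy = -ball_vy
    if ball.colliderect(player) or ball.colliderect(cpu):
        ball_vx = -ball_vx
    if ball.left <= 0 or ball.right >= WIDTH:
        # Point scored: reset the ball to the center
        ball.center = (WIDTH // 2, HEIGHT // 2)

    # Draw the frame
    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 255, 255), player)
    pygame.draw.rect(screen, (255, 255, 255), cpu)
    pygame.draw.ellipse(screen, (255, 255, 255), ball)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```

Nothing here is sophisticated, which is exactly the point: a task that once required a specialist is now within reach of anyone who can describe it.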

The point here is that people now have the ability to do what was impossible for them before.

Another case study could be image generators. Do these rival the top graphic artists? Most likely not.

What we do see is the ability to generate images that are "good enough" for most purposes, especially online. Will Fortune 500 companies still hire firms to design logos and images for marketing campaigns? Sure. Yet how many images fit into this category?

Lacking Utility

One of the keys to the debate over AI is the perceived lack of utility. So far, most people are not impressed.

There are two reasons for this:

  1. The applications derived from AI capabilities are still lacking. In other words, the benefit is not yet visible to the user.

  2. The technology is overlooked because it is being integrated into everything.

The second is worthy of note. Here we have a general purpose technology (GPT) being incorporated into our daily lives. Consider how often we think about electricity. While we use it, few think about it (at least until the electric bill comes).

Daily, we think about turning on the television, computer, or charging our phones. What does this use? Electricity.

The point is we are dealing with a derivative. Electricity is in the background, and AI will be the same. Right now, it is the derivatives that are lacking, i.e. the AI equivalents of the products that utilize electricity.

When those derivatives arrive, AI usage will explode. Whether it is called AGI or not will be immaterial. People only care about what things do, not what they are called.

Advancement in AI models is taking place. This is the crucial component to the discussion.

Posted Using INLEO

As someone who has been using both chatbots and code assistants on a daily basis, I am strongly convinced that chatbots are still much, much more stupid than the average human. But they are stupid in reasoning. They know a lot of shit, sure, and in that they are useful, but they hallucinate, regurgitate, and make errors in even the simplest calculations or logical reasoning. Importantly, though, that is LLMs, not AI in general: only generative AI broadly and LLMs specifically. AGI, I think, isn't about the "level" of intelligence; it's about how intelligence is applied. It's about combining language and generative skills with world models and causal reasoning. It is about working from first principles to explore fields the AI wasn't trained on. That is the "General" part of AGI. AI companies are trying to convince investors that LLMs can lead to AGI, and benchmarks seem to agree, but when you realize what LLMs are (amazing overfitting machines), you realize LLMs are simply overfitting on the benchmarks.

I think 'eventually' AGI might be reachable, but NOT through the LLM path. We need causal AI at the wheel, maybe with LLMs for the user interaction, but the central core should be a causal AI that talks both to the user, with an LLM as mediator, and to an advanced world model. Building that world model is going to take massive amounts of input from thousands of experts: not just LLM-style training on scientific publications, but careful manual world model building. I think most AI companies are on the wrong track, putting too much focus on LLMs and making them too central.