AGI stands for Artificial General Intelligence, meaning that the intelligence the AI gains is general (i.e. it generalizes) and it can produce novel, never-before-seen outputs. Since GPT-3.5 we have seen sparks of AGI inside these models (even though they were very small sparks), and AI is now getting closer to AGI, as we can see in newer models like GPT-5 Pro and Grok 4 Heavy (currently the most general AIs), which are able to create novel information as long as they are given enough context and time to reason. To reach AGI, we now mostly need to build scaffolding on top of already existing AIs, like improving memory (which is what GPT-6 aims for) or letting the AI form its own opinions about certain topics (which is what Grok 5 aims for by letting the AI rewrite the whole internet's data, or at least its training data).
Letting AI self-learn falls more into the territory of ASI (Artificial Super Intelligence), since it is no longer about generalizing but about continuous self-improvement and refinement, which can't really be done without something extremely close to AGI (due to hallucinations). Nor is AGI defined as something that can do every task a human can: that alone would not help the AI generalize, it would only make the AI better at specific tasks (albeit across a large spectrum of tasks) without generalizing beyond them, creating only a helpful AI system that can automate daily work, but not AGI or ASI (being able to do any task as well as a human is a benefit of AGI or ASI, not its definition). We are at most 6 years away from AGI or beyond, best case 1-2 years.