RE: LeoThread 2026-01-09 03-51

A bit over three years ago, ChatGPT was introduced. It was something special, and its launch is now called the ChatGPT moment.

Today, on InLeo, we have Rafiki, which is far more advanced than GPT-3 was at its introduction. Anyone can access it simply through the inleo.io frontend.

We are awaiting Rafiki 2.0, which will provide more features to the userbase. This will only keep growing, with the gap between Rafiki and the large models closing. Rafiki will always trail by six months or so, since the large LLMs get the heavy training.

Just consider how things are changing.


Rafiki sounds like the ultimate mindset-upgrade tool: leveraging AI to build better habits and routines without the excuses. Excited for 2.0 to push personal growth even further in this web3 space.

It is true. Billions are dumped into the training of large models and, eventually, the smaller models benefit.

Now there are models training other models. I am running a local DeepSeek-R1 distilled version of Llama 70B. This means that Llama has been trained to think and reason like DeepSeek. These kinds of things are changing the game already.
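For readers unfamiliar with distillation, the core idea is simple: the student model is trained to match the teacher's output distribution rather than just hard labels. The sketch below is a minimal, self-contained illustration of that loss in plain Python; it is not DeepSeek's or Meta's actual training code, and the numbers are made up for demonstration.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # minimizing this pushes the student to mimic the teacher's outputs.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher has zero loss.
print(distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
# A mismatched student gets a positive loss to minimize.
print(distill_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # True
```

In practice this runs over every token position with a large vocabulary, but the principle is the same: the teacher's "reasoning style" is transferred through its probability distributions.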

That is what most people overlook. There is a convergence of technologies, which produces exponential effects.

The advancement of Llama is compounded by Deepseek, which also has its own advancement.

Yep, and people using the corporate frontier models are the ones that are going to get left behind in the end. If you aren't building your own infrastructure, you will be controlled in the future. So it's best to get ahead of it now.

Exactly: those billions fuel the innovation we all tap into for free. Smaller models like Rafiki democratize the gains, letting us focus on real mindset shifts instead of the tech grind.

Rafiki 2.0 will be the best, and I'm sure it will surpass many AIs and many people will use it.

Are you using Rafiki?

Very little.

People have not priced Rafiki into LEO