RE: LeoThread 2024-10-16 04:34

in LeoFinance · 4 days ago

Impressive Benchmark Performance

The host shares benchmark results showing the strong performance of these Liquid Foundation Models against other prominent language models such as LLaMA and Chinchilla. The 1.3 billion parameter model outperforms LLaMA 3.2 on the MMLU-Pro benchmark, while the 40 billion parameter "Mixture of Experts" model beats even the larger 57 billion parameter Chinchilla model.