RE: LeoThread 2024-08-28 09:06

The failure of large language models to understand the concepts of letters and syllables is indicative of a larger truth that we often forget: These things don’t have brains. They do not think like we do. They are not human, nor even particularly humanlike.

Most LLMs are built on transformers, a kind of deep learning architecture. Transformer models break text into tokens, which can be full words, syllables, or letters, depending on the model.
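
To make this concrete, here is a minimal sketch using the open-source tiktoken library (the library and the cl100k_base encoding are assumptions for illustration; each model family ships its own tokenizer, so the exact splits will vary):

```python
# Minimal sketch: how a BPE tokenizer splits text into tokens.
# Assumes the open-source tiktoken library; other models use
# different tokenizers and will produce different splits.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models

for word in ["the", "strawberry"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> ids {ids} -> pieces {pieces}")
```

A short common word like "the" typically maps to a single token, while a longer word may be split into several subword pieces; in neither case does the model ever see individual characters.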

“LLMs are based on this transformer architecture, which notably is not actually reading text. What happens when you input a prompt is that it’s translated into an encoding,” Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch. “When it sees the word ‘the,’ it has this one encoding of what ‘the’ means, but it does not know about ‘T,’ ‘H,’ ‘E.’”
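
In other words, once text is tokenized, each token id is just an index into an embedding table, and the vector the model actually works with carries no built-in information about the letters inside the word. A toy sketch of that lookup (the sizes and the token id are hypothetical):

```python
# Sketch: a token id becomes one opaque vector via an embedding
# lookup; nothing in that vector encodes the letters T, H, E.
import torch

vocab_size, d_model = 50_000, 8           # toy sizes, not a real model's
embedding = torch.nn.Embedding(vocab_size, d_model)

the_id = 1169                             # hypothetical id for the token "the"
vector = embedding(torch.tensor([the_id]))
print(vector)                             # one learned vector, not three letters
```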