Testing on a Raspberry Pi
- Dave installs LLaMA on a Raspberry Pi 4 with 8 GB of RAM running Raspbian.
- The model runs slowly, at roughly one word per second, because the Pi has no GPU and only limited CPU power.
- The test shows that while the Pi can run the model, it's too slow for practical real-time use.
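To put that throughput in perspective, a quick back-of-the-envelope calculation shows how long typical replies would take at Pi speeds. The ~1 word/s rate is from the video; the reply lengths are illustrative assumptions:

```python
# Rough timing at Raspberry Pi generation speeds.
# The ~1 word/s figure comes from the video; reply lengths are illustrative.

def reply_seconds(words: int, words_per_second: float) -> float:
    """Time in seconds to generate `words` at a given throughput."""
    return words / words_per_second

pi_rate = 1.0  # ~1 word per second observed on the Pi 4
for words in (20, 100, 250):
    minutes = reply_seconds(words, pi_rate) / 60
    print(f"{words:4d}-word reply: ~{minutes:.1f} min")
```

Even a modest 100-word answer takes on the order of a couple of minutes, which is why the Pi result reads as a proof of concept rather than a usable assistant.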
Testing on a Mini PC
- Dave then tests LLaMA on a Herk mini PC, which starts at $388 and features a Ryzen 9 7940HS CPU with a Radeon 780M integrated GPU.
- He installs LLaMA directly on Windows and runs the 3.1 model; it performs well but doesn't use the GPU, owing to the GPU's limited memory.
- A smaller 3.2 model runs faster but still doesn't use the GPU, likely due to compatibility issues.
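The "limited memory" problem comes down to simple arithmetic: model weights must fit in the GPU's available memory, and an integrated GPU like the 780M borrows from system RAM. A minimal sketch of the estimate, using illustrative parameter counts and quantization levels (these specific numbers are assumptions, not figures from the video):

```python
# Rough size of a model's weights at a given quantization level.
# Parameter counts and bits-per-weight below are illustrative assumptions;
# real memory use is higher once the KV cache and overhead are included.

def weight_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight size in GiB (weights only)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

print(f"8B model @ fp16 : {weight_gib(8, 16):.1f} GiB")   # ~14.9 GiB
print(f"8B model @ 4-bit: {weight_gib(8, 4.5):.1f} GiB")  # ~4.2 GiB
print(f"3B model @ 4-bit: {weight_gib(3, 4.5):.1f} GiB")  # ~1.6 GiB
```

This is why a smaller or more aggressively quantized model is the usual first remedy when a GPU won't take the load, though, as the 3.2 test here shows, driver or runtime compatibility can keep the GPU idle even when the weights would fit.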