RE: Run even larger AI models locally with LM Studio
apshamilton (72) in LeoFinance • 9 months ago
I'm getting 23 tokens per second using the 5-bit Mixtral model.
Macs have a big edge for this.
I would recommend the 4-bit quant; the 5-bit isn't much better and takes a lot more RAM. I'd stick with 4-bit, or jump to something like 8-bit if you have the memory for it.
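
For a rough sense of why the quant level matters so much for RAM, here is a minimal back-of-the-envelope sketch. The parameter count (~46.7B, a common figure quoted for Mixtral 8x7B) and the per-weight metadata overhead are assumptions, not numbers from the comments above, so treat the output as ballpark only:

```python
# Rough RAM estimate for quantized model weights.
# Assumptions: ~46.7B parameters (Mixtral 8x7B) and ~0.5 bits/weight
# of overhead for quantization scales/metadata. Both are assumed values.

PARAMS_BILLION = 46.7   # assumed total parameter count
OVERHEAD_BITS = 0.5     # assumed per-weight quantization overhead

def approx_ram_gib(bits_per_weight: float) -> float:
    """Approximate size in GiB of the weights alone (no KV cache, no runtime)."""
    total_bits = PARAMS_BILLION * 1e9 * (bits_per_weight + OVERHEAD_BITS)
    return total_bits / 8 / (1024 ** 3)

for bits in (4, 5, 8):
    print(f"{bits}-bit quant: ~{approx_ram_gib(bits):.1f} GiB of weights")
```

Under those assumptions the 5-bit quant needs several GiB more than the 4-bit one, and 8-bit nearly doubles it, which is the trade-off being described here.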