Hacker News

I'm using Qwen 3.5 27B on my 4090, and let me tell you: this is the first time I've been seriously blown away by the coding performance of a local model. They're almost always unusable. Not this time, though...


The 122B is probably better, especially on a Mac with 128 GB of memory.

LocalLLaMA thread on this: https://www.reddit.com/r/LocalLLaMA/comments/1rk01ea/qwen351... (see the comments for actual real-world usage rather than benchmarks)

But for Nvidia GPUs, 27B on a 3090 or similar is where it's at for sure.
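A rough sanity check on why a 27B model fits on a 24 GB card like a 3090 or 4090: at a typical 4-bit-class quantization (assuming roughly 4.5 bits per weight, as in common Q4_K_M-style GGUF builds), the weights alone take about 15 GB, leaving headroom for the KV cache and activations. The numbers below are illustrative assumptions, not measurements.

```python
# Back-of-envelope VRAM estimate for a 27B-parameter model on a 24 GB GPU.
params = 27e9
bits_per_weight = 4.5              # assumed ~Q4_K_M quantization density
weight_gb = params * bits_per_weight / 8 / 1e9   # bits -> bytes -> GB

overhead_gb = 3.0                  # assumed KV cache + activation headroom
total_gb = weight_gb + overhead_gb

print(f"weights ~= {weight_gb:.1f} GB, total ~= {total_gb:.1f} GB")
# weights land around 15 GB, total around 18 GB: comfortably under 24 GB
assert total_gb < 24
```

By the same arithmetic, an unquantized FP16 copy (2 bytes per weight) would need about 54 GB just for the weights, which is why the 122B is only practical on something like a 128 GB Mac.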


The 27B dense model is probably the best of the 3.5 lot, not in absolute terms but for performance relative to size. It's also pretty good at prose, which is a rarity for a Qwen.


You don't need a coding version of the model from Qwen? The base 3.5 works?




