#lmstudio

3 posts · Last used 2d

@nelov@social.linux.pizza · 2d ago
I've been playing with #LMStudio for #localLLM with mediocre results. However #gemma4 really changed that. It's faster and is more capable then the other models I could try on my hardware. It has recent data and is able to use a fetch tool(among others) to get info on stuff it doesn't know! So I installed #ollama and now it runs even faster, to the point where delay(waiting) is not that noticeable. Since I am a lightweight user, I can see myself using it as mainly source.
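Once a model is pulled into Ollama, you can query it programmatically through the daemon's local HTTP API (by default at `http://localhost:11434`, per the Ollama project's docs). A minimal sketch, assuming a model named `gemma4` has already been pulled (the model name is an assumption, not from the post):

```python
# Minimal sketch of querying a locally running Ollama server via its
# /api/generate endpoint. Requires only the Python standard library.
import json
import urllib.request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build the POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(model, prompt):
    """Send the prompt and return the model's text response."""
    req = build_generate_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (needs a running Ollama daemon with the model pulled):
# print(ask("gemma4", "What is ZFS send/receive used for?"))
```

With `"stream": False` the server returns one JSON object containing the whole response; set it to `True` to receive tokens incrementally, which is what makes the waiting feel less noticeable in interactive use.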
@Larvitz@burningboard.net · Apr 11, 2026
I tried out the Gemma AI models from Google, running locally on my AMD APU (Ryzen 7 Pro 7840U with Radeon 780M) and asked it some questions about ZFS send / receive. gemma-4-26B-A4B-Q4_K_M: 14.29 tok/sec . The information, it generated, was factually correct and well laid out. Not the fasted, and surprisingly good. gemma-4-E4B-Q4-K_M: 26 tok/sec. The information was completely wrong BS with made up parameters. The presentation was confident and well laid out. But it generated it quickly 😂 Bottom line: confidently incorrect at high speeds. #ai #gemma #lmstudio #llm #generativeAI #amd #radeon
@bluszcz@mastodon.com.pl · Feb 26, 2026
New #lmstudio and new #qwen - and LM Studio has improved tool support for Qwen. Downloading it now; we'll be testing. #ai #genai #agentic #opencode #qwen #llm
