#glm

2 posts · Last used 24d

@thbley@phpc.social · Apr 03, 2026
New update for the slides of my talk "Run LLMs Locally": WebGPU Now models can run completely inside the browser using Transformer.js, Vulkan and WebGPU (slower than llama.cpp, but already usable). https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf #ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #webgpu
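The in-browser setup the post describes can be sketched roughly as follows. This is a minimal sketch assuming the `@huggingface/transformers` npm package (Transformers.js v3, which added WebGPU support); the model id is an illustrative placeholder chosen for size, not something named in the post.

```javascript
// Minimal sketch: text generation entirely in the browser via
// Transformers.js with the WebGPU backend. Assumes a bundler or an
// ES-module-capable browser context and network access to download
// the model weights on first use.
import { pipeline } from '@huggingface/transformers';

// Load a small ONNX model (placeholder id) onto the WebGPU device.
const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-0.5B-Instruct',
  { device: 'webgpu' }
);

// Generate a short completion; all inference runs client-side.
const output = await generator('Running LLMs in the browser is', {
  max_new_tokens: 32,
});
console.log(output[0].generated_text);
```

As the post notes, this path is slower than a native runtime like llama.cpp, but it needs no installation beyond opening a page in a WebGPU-capable browser.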
@bluszcz@mastodon.com.pl · Feb 12, 2026
