
According to Ollama, the new version processes prompts around 1.6 times faster (prefill speed) and nearly doubles the speed at which it generates responses (decode speed). Macs with M5-series chips are said to see the largest improvements, thanks to Apple’s new GPU Neural Accelerators.
The update also includes smarter memory management, which should make AI-powered coding tools and chat assistants feel noticeably more responsive during extended use.
Ollama says the new performance boost should especially benefit macOS users who run personal assistants like OpenClaw or coding agents like Claude Code, OpenCode, or Codex.
The preview is available to download as Ollama 0.19, but note that running it requires a Mac with more than 32GB of unified memory. Support is currently limited to Alibaba's Qwen3.5, though Ollama says support for more AI models is planned.
This article, “Ollama Now Runs Faster on Macs Thanks to Apple’s MLX Framework,” first appeared on MacRumors.com.