The ollama open-source software that makes it easy to run Llama 3, DeepSeek-R1, Gemma 3, and other large language models is out with its newest release. Ollama provides a convenient wrapper around the llama.cpp back-end for running a variety of LLMs along with easy integration into other desktop software.
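For those unfamiliar with ollama, chatting with a locally-running model takes only a few lines, e.g. via the official ollama Python client. A minimal sketch, assuming the ollama server is running locally and the model tag shown (purely illustrative) has already been pulled:

```python
# Minimal sketch using the official ollama Python client (pip install ollama).
# Assumes the ollama server is running and the model was fetched beforehand,
# e.g. with `ollama pull gemma3`; the model tag and prompt are illustrative.
import ollama

response = ollama.chat(
    model="gemma3",
    messages=[{"role": "user", "content": "Summarize what llama.cpp is."}],
)
print(response["message"]["content"])
```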
With today’s release of ollama 0.6.2 there is now support for AMD Strix Halo GPUs, a.k.a. the Ryzen AI Max+ laptop / SFF desktop SoCs. The Ryzen AI Max+ appears quite impressive, though unfortunately we haven’t yet had the opportunity to see how well it works under Linux. In any event, it’s good seeing ollama 0.6.2 provide timely support for the Ryzen AI Max+ “Strix Halo” hardware.
The other focus of ollama 0.6.2 is a number of fixes to its Gemma 3 LLM support, including the ability to pass multiple images in a single Gemma 3 prompt.
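As a quick illustration of the multi-image change, a request along these lines should now work against a Gemma 3 model. This is a hedged sketch using the ollama Python client; the image paths are placeholders:

```python
# Sketch of a multi-image Gemma 3 prompt via the ollama Python client.
# The image file paths below are placeholders for illustration.
import ollama

response = ollama.chat(
    model="gemma3",
    messages=[{
        "role": "user",
        "content": "What differs between these two screenshots?",
        # Supplying more than one image per message is what the
        # Gemma 3 fix in ollama 0.6.2 is meant to handle.
        "images": ["before.png", "after.png"],
    }],
)
print(response["message"]["content"])
```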
More details and downloads for the ollama 0.6.2 release via GitHub.