Mozilla Builders' LocalScore: An Interesting Local AI LLM Benchmark

Out of Mozilla's Mozilla Builders initiative for fostering open-source AI projects comes LocalScore, an interesting local AI large language model (LLM) benchmark for Windows and Linux systems. LocalScore has a lot of potential and builds off the Mozilla Ocho Llamafile project, an easy-to-distribute LLM framework. LocalScore is still in its early stages but is already working well and will be used in future hardware reviews on Phoronix.

Llamafile 0.9.2 was released this past week and brings the LocalScore benchmarking utility into the codebase. LocalScore facilitates large language model (LLM) benchmarks on both CPUs and GPUs. It's a simple and portable way to evaluate LLM system performance.

LocalScore benchmark

LocalScore can be triggered from Llamafile packages, or there is an independent LocalScore binary for Windows and Linux to facilitate easy AI benchmarking.

LocalScore.ai

As part of this addition to Llamafile, there is now LocalScore.ai as an opt-in repository for the CPU/GPU results from LocalScore with the official tiny / small / medium models based on Meta Llama 3.1.

LocalScore.ai outlines the simple steps for running the CPU and/or GPU AI LLM benchmarks with the official models. The benchmarking can easily be done on Windows and Linux systems.
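As a rough sketch of what running the standalone Linux binary can look like: the binary name, model file name, and the `-m` model flag below are assumptions based on common llama.cpp / Llamafile conventions, so consult LocalScore.ai for the exact download link and commands.

```shell
# Hypothetical example -- file names and the -m flag are assumptions;
# see LocalScore.ai for the official download and invocation.

# Mark the downloaded standalone LocalScore binary as executable (Linux)
chmod +x ./localscore

# Run the benchmark against a locally downloaded GGUF model file;
# results can optionally be submitted to the LocalScore.ai database.
./localscore -m Llama-3.1-model.Q4_K_M.gguf
```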

LocalScore is open-source and meets all of my standards/needs for benchmarking, so it will be incorporated into future Phoronix benchmarks for the Linux hardware reviews, etc. (Well, there are just a few tweaks needed that should get folded into the next release of LocalScore / Llamafile, but in any event look for LocalScore usage in the coming weeks.)

LocalScore is a nice addition to the Mozilla Builders initiative and I'm all for seeing more open-source AI/LLM benchmarks that are easy-to-use / quick-to-deploy and cross-platform. Hopefully others will check out LocalScore as well.