Contents:
GitHub: ggml-org/llama.cpp/examples/llama-bench/README.md
AMD GPU support for llama.cpp via Vulkan on Raspberry Pi 5
How to benchmark 'llama.cpp' builds for specific hardware?
Performance of llama.cpp on Apple Silicon M-series #4167 (provides an example of how to run comparative benchmarks)
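
The llama-bench README listed above documents the tool's command-line flags for measuring prompt-processing and token-generation speed. As a minimal sketch (not taken from any of the sources above), the following Python snippet shows one way to invoke llama-bench and summarize its JSON output; the binary path, model file, flag values, and JSON field names are assumptions that may need adjusting for a specific build.

```python
#!/usr/bin/env python3
"""Minimal sketch: run llama-bench and summarize its JSON output.

Assumptions (not from the sources listed above): the llama-bench binary has
already been built from ggml-org/llama.cpp, the model path is an example,
and the JSON field names below match the current output format.
"""
import json
import subprocess

LLAMA_BENCH = "./llama-bench"          # assumed path to the built binary
MODEL = "models/llama-7b-q4_0.gguf"    # example model file

# -m/-p/-n/-t/-ngl/-r/-o are llama-bench flags; the values are examples.
cmd = [
    LLAMA_BENCH,
    "-m", MODEL,
    "-p", "512",     # prompt-processing benchmark over 512 tokens
    "-n", "128",     # token-generation benchmark over 128 tokens
    "-t", "4",       # CPU threads
    "-ngl", "99",    # offload as many layers as possible to the GPU
    "-r", "5",       # repetitions per test
    "-o", "json",    # machine-readable output
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
for test in json.loads(result.stdout):
    # Field names (model_filename, n_prompt, n_gen, avg_ts, stddev_ts)
    # are assumed from the JSON output and may differ between versions.
    print(f"{test.get('model_filename', MODEL)}: "
          f"p{test.get('n_prompt', 0)}/n{test.get('n_gen', 0)} -> "
          f"{test.get('avg_ts', 0.0):.2f} ± {test.get('stddev_ts', 0.0):.2f} t/s")
```

Running the same command on different builds (e.g. CPU-only vs. Vulkan on the Raspberry Pi 5 setup above) and comparing the reported tokens-per-second figures is the basic comparison pattern these sources describe.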