Hello community,

Here is the log from the commit of package llamacpp for openSUSE:Factory, checked in at 2026-03-05 17:14:49.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/llamacpp (Old)
 and      /work/SRC/openSUSE:Factory/.llamacpp.new.561 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "llamacpp"

Thu Mar  5 17:14:49 2026 rev:26 rq:1336614 version:8189

Changes:
--------
--- /work/SRC/openSUSE:Factory/llamacpp/llamacpp.changes        2026-01-29 17:48:48.315161962 +0100
+++ /work/SRC/openSUSE:Factory/.llamacpp.new.561/llamacpp.changes       2026-03-05 17:18:11.546212897 +0100
@@ -1,0 +2,35 @@
+Mon Mar  2 22:29:33 UTC 2026 - Robert Munteanu <[email protected]>
+
+- Update to version 8189:
+  * CUDA: CDNA3 MFMA support for FA MMA kernel, improved CUDA graph
+    capture, dequantization optimizations, and grid.y cap fix in
+    non-contiguous kernels.
+  * Vulkan: Intel/AMD tuning, FA scalar and coopmat1 refactors,
+    new ops (L2_NORM, GGML_OP_SET), overlap check before fusion,
+    and correctness fixes for rope, fp16 FA, and mul_mat_id.
+  * CPU and GGML: mxfp4 repack, SVE/RVV kernel additions, q5_K and
+    q6_K repack with dotprod and i8mm, tiled FA for prompt
+    processing, s390x optimizations, and extended bin bcast.
+  * Accelerator backends: new ops and optimizations across OpenCL,
+    Metal, WebGPU, SYCL, CANN, HIP/ROCm, Hexagon, ZenDNN, and a
+    new VirtGPU backend for Virglrenderer API remoting.
+  * Models and conversion: added Kimi-K2.5, Kimi Linear (MLA KV
+    cache), Jina Embeddings v5 Nano, Kanana-2, JAIS-2, PaddleOCR-VL,
+    GLM-OCR, GLM MoE DSA, Tiny Aya, full modern BERT, Step3.5-Flash,
+    Qwen3.5 series, and Devstral-2; graph deduplication and
+    conversion fixes across multiple architectures.
+  * Server: /v1/responses mirroring, multi-model alias support,
+    multi-modal prompt caching and context checkpoints,
+    max_completion_tokens property, and various stability fixes.
+  * KV cache, MTMD, and Jinja: M-RoPE shift fix, V-less cache
+    support, hybrid model size fix, multimodal tiling and padding
+    fixes, and multiple Jinja correctness and feature additions.
+  * WebUI: full-height code blocks, raw LLM output switcher, system
+    message injection, router mode fixes, and Svelte update.
+  * Misc: self-speculative decoding without a draft model, NetBSD
+    support, ggml bumped to 0.9.7, gguf-py to 0.18.0, updated
+    miniaudio, cpp-httplib, and BoringSSL.
+  * Full commit log:
+    https://github.com/ggml-org/llama.cpp/compare/b7789...b8189
+
+-------------------------------------------------------------------

Old:
----
  llamacpp-7789.tar.gz

New:
----
  llamacpp-8189.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ llamacpp.spec ++++++
--- /var/tmp/diff_new_pack.LEufmB/_old  2026-03-05 17:18:12.742262395 +0100
+++ /var/tmp/diff_new_pack.LEufmB/_new  2026-03-05 17:18:12.746262560 +0100
@@ -25,11 +25,11 @@
 %global mtmd_sover         0.0.%{version}
 %global mtmd_sover_suffix  0
 
-%global ggml_sover         0.9.5
+%global ggml_sover         0.9.7
 %global ggml_sover_suffix  0
 
 Name:           llamacpp
-Version:        7789
+Version:        8189
 Release:        0
 Summary:        Inference of Meta's LLaMA model (and others) in pure C/C++
 License:        MIT

++++++ llamacpp-7789.tar.gz -> llamacpp-8189.tar.gz ++++++
/work/SRC/openSUSE:Factory/llamacpp/llamacpp-7789.tar.gz /work/SRC/openSUSE:Factory/.llamacpp.new.561/llamacpp-8189.tar.gz differ: char 28, line 1
