Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package llamacpp for openSUSE:Factory 
checked in at 2026-01-29 17:45:46
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/llamacpp (Old)
 and      /work/SRC/openSUSE:Factory/.llamacpp.new.1995 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "llamacpp"

Thu Jan 29 17:45:46 2026 rev:25 rq:1329718 version:7789

Changes:
--------
--- /work/SRC/openSUSE:Factory/llamacpp/llamacpp.changes        2025-12-26 14:38:20.148487236 +0100
+++ /work/SRC/openSUSE:Factory/.llamacpp.new.1995/llamacpp.changes      2026-01-29 17:48:48.315161962 +0100
@@ -1,0 +2,26 @@
+Wed Jan 21 18:51:02 UTC 2026 - Eyad Issa <[email protected]>
+
+- Update to version 7789:
+  * CUDA: multiple fixes and cleanups, improved Blackwell and RDNA
+    support, FA/MMA correctness fixes, reduced debug output, and
+    improved compatibility with older toolchains.
+  * Vulkan: performance optimizations, new and extended operators,
+    improved handling of large tensors, and driver-specific
+    workarounds for AMD and Intel.
+  * GGML: new operators, backend sampling support, allocator and
+    graph improvements, WebGPU enhancements, and better error
+    handling.
+  * CPU and accelerator backends: fixes and optimizations across
+    OpenCL, HIP, SYCL, CANN, Hexagon, Metal, and Power
+    architectures.
+  * Models and conversion: added and fixed support for multiple
+    models including GLM-4 variants, EXAONE MoE, Qwen3 Next,
+    Gemma 3, Granite, LFM2, Youtu-VL, and others.
+  * Server and CLI: scheduling, memory, and KV-cache fixes,
+    improved defaults, new options, and stability improvements.
+  * Jinja templates: correctness fixes, feature completion, and
+    improved test coverage.
+  * Full commit log:
+    https://github.com/ggml-org/llama.cpp/compare/b7540...b7789
+
+-------------------------------------------------------------------

Old:
----
  llamacpp-7540.tar.gz

New:
----
  llamacpp-7789.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ llamacpp.spec ++++++
--- /var/tmp/diff_new_pack.0OYLfs/_old  2026-01-29 17:48:50.587259292 +0100
+++ /var/tmp/diff_new_pack.0OYLfs/_new  2026-01-29 17:48:50.599259806 +0100
@@ -1,7 +1,7 @@
 #
 # spec file for package llamacpp
 #
-# Copyright (c) 2025 SUSE LLC and contributors
+# Copyright (c) 2026 SUSE LLC and contributors
 # Copyright (c) 2025 Eyad Issa <[email protected]>
 #
 # All modifications and additions to the file contributed by third parties
@@ -25,11 +25,11 @@
 %global mtmd_sover         0.0.%{version}
 %global mtmd_sover_suffix  0
 
-%global ggml_sover         0.9.4
+%global ggml_sover         0.9.5
 %global ggml_sover_suffix  0
 
 Name:           llamacpp
-Version:        7540
+Version:        7789
 Release:        0
 Summary:        Inference of Meta's LLaMA model (and others) in pure C/C++
 License:        MIT

++++++ llamacpp-7540.tar.gz -> llamacpp-7789.tar.gz ++++++
/work/SRC/openSUSE:Factory/llamacpp/llamacpp-7540.tar.gz /work/SRC/openSUSE:Factory/.llamacpp.new.1995/llamacpp-7789.tar.gz differ: char 17, line 1
