Hello community,

here is the log from the commit of package llamacpp for openSUSE:Factory checked in at 2025-09-16 18:20:03
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/llamacpp (Old)
 and      /work/SRC/openSUSE:Factory/.llamacpp.new.1977 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "llamacpp"

Tue Sep 16 18:20:03 2025 rev:19 rq:1305191 version:6428

Changes:
--------
--- /work/SRC/openSUSE:Factory/llamacpp/llamacpp.changes        2025-09-02 17:58:46.163624899 +0200
+++ /work/SRC/openSUSE:Factory/.llamacpp.new.1977/llamacpp.changes      2025-09-16 18:21:04.449854585 +0200
@@ -1,0 +2,43 @@
+Tue Sep  9 12:00:15 UTC 2025 - Eyad Issa <[email protected]>
+
+- Update to version 6428
+  * Added support for DeepSeek V3.1, Nemotron, and Seed OSS
+    thinking & tool calling.
+  * Fixed crashes and tool_call parsing issues.
+  * CLI improvements: better warnings and enhanced bash completion.
+  * Improved context handling (n_outputs, graph stats, reserve fixes).
+  * KV-cache optimizations and fixes for slot handling, SWA checks,
+    and batching.
+  * New support for EmbeddingGemma 300M and fixes for Gemma 270M.
+  * General stability and correctness fixes across evaluation,
+    initialization, and buffer management.
+  * Major updates for aarch64 (SVE F16), RVV support, and
+    s390x cleanup.
+  * New ops: WAN video model, WebGPU transpose/reshape, Vulkan
+    im2col_3d, pad_ext, integer dot products.
+  * Optimizations for RVV kernels, OpenCL fused ops, and Vulkan
+    matmul paths.
+  * Expanded casting, exponential functions, and memory
+    improvements.
+  * Upgraded kleidiai to v1.13.0.
+  * Refactored gguf_writer, improved byte-swapping, and fixed
+    metadata entries.
+  * Python bindings cleanup and fixes.
+  * Added flags, templates, debugging tools, QAT-Q4 quantization,
+    and mmproj targets.
+  * Fixed errors, added missing scripts, and removed hardcoded
+    shebangs.
+  * New support for jina-embeddings-v3, MiniCPM-V 4.5, Kimi VL, and
+    extended embedding options.
+  * Improved defaults for GPU usage and attention.
+  * Added documentation and new request parameters (parallel_tool_calls,
+    exceed_context_size_error); see the sketch after this entry.
+  * Security improvements (/slots enabled by default).
+  * Optimized sampling strategies.
+  * New logging/coloring options.
+  * JSON schema improvements (enum handling); sketch at the end of this mail.
+  * Multiple bug fixes across graph, presets, mtmd, and thinking models.
+  * Full commit log:
+    https://github.com/ggml-org/llama.cpp/compare/b6269...b6428
+
+-------------------------------------------------------------------
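As a quick illustration of the new parallel_tool_calls request parameter
mentioned above: a minimal sketch, assuming a llama-server instance
listening on its default port 8080 and speaking its OpenAI-compatible
chat API. The get_weather tool definition is a hypothetical placeholder,
not something shipped by this package.

    import json
    import urllib.request

    payload = {
        "messages": [
            {"role": "user",
             "content": "What is the weather in Berlin and in Paris?"}
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool definition
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        # New per this changelog: let the model emit several tool calls
        # in a single assistant turn instead of one per round trip.
        "parallel_tool_calls": True,
    }

    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]
        # Each tool call the model requested, if any:
        for call in reply.get("tool_calls") or []:
            print(call["function"]["name"], call["function"]["arguments"])

If the model decides to query both cities at once, the response carries
multiple entries in tool_calls rather than forcing one call per request.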

Old:
----
  llamacpp-6269.tar.gz

New:
----
  llamacpp-6428.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ llamacpp.spec ++++++
--- /var/tmp/diff_new_pack.XXdhTM/_old  2025-09-16 18:21:05.117882697 +0200
+++ /var/tmp/diff_new_pack.XXdhTM/_new  2025-09-16 18:21:05.121882865 +0200
@@ -20,7 +20,7 @@
 %global backend_dir %{_libdir}/ggml
 
 Name:           llamacpp
-Version:        6269
+Version:        6428
 Release:        0
 Summary:        Inference of Meta's LLaMA model (and others) in pure C/C++
 License:        MIT

++++++ llamacpp-6269.tar.gz -> llamacpp-6428.tar.gz ++++++
/work/SRC/openSUSE:Factory/llamacpp/llamacpp-6269.tar.gz /work/SRC/openSUSE:Factory/.llamacpp.new.1977/llamacpp-6428.tar.gz differ: char 28, line 1
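And a companion sketch for the JSON schema enum handling item in the
changelog above: it uses llama-server's native /completion endpoint,
whose json_schema field compiles the schema into a sampling grammar.
Server address, prompt, and schema are illustrative assumptions, not
part of the packaged defaults.

    import json
    import urllib.request

    payload = {
        "prompt": "Classify the sentiment of: 'This release is great!'\nAnswer:",
        "n_predict": 32,
        # The enum constrains generation to exactly these values,
        # exercising the improved enum handling.
        "json_schema": {
            "type": "object",
            "properties": {
                "sentiment": {"enum": ["positive", "neutral", "negative"]}
            },
            "required": ["sentiment"],
        },
    }

    req = urllib.request.Request(
        "http://localhost:8080/completion",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["content"])  # e.g. {"sentiment": "positive"}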
