Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package llamacpp for openSUSE:Factory 
checked in at 2025-09-29 16:32:13
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/llamacpp (Old)
 and      /work/SRC/openSUSE:Factory/.llamacpp.new.11973 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "llamacpp"

Mon Sep 29 16:32:13 2025 rev:20 rq:1307499 version:6605

Changes:
--------
--- /work/SRC/openSUSE:Factory/llamacpp/llamacpp.changes        2025-09-16 18:21:04.449854585 +0200
+++ /work/SRC/openSUSE:Factory/.llamacpp.new.11973/llamacpp.changes     2025-09-29 16:34:27.672039739 +0200
@@ -1,0 +2,30 @@
+Sat Sep 27 16:54:06 UTC 2025 - Eyad Issa <[email protected]>
+
+- Update to b6605:
+  * Added docker protocol support and resumable downloads for
+    llama-server (resumable-fetch sketch after this changelog entry)
+  * New models: LLaDA-7b-MoE, Grok-2, GroveMoE, OLMo3, LiquidAI
+    LFM2-2.6B
+  * Added conversion support for GraniteHybrid (non-hybrid attn)
+    and Llama4ForCausalLM
+  * llama: support for qwen3 reranker (client sketch below), T5
+    unequal encoder-decoder layers, seq limit bumped 64 → 256
+  * Bench improvements: list devices, multiple devices, n-cpu-moe
+  * Vulkan: conv_transpose_2d, GET_ROWS, iGPU device selection,
+    buffer optimizations, shader fixes, OOM handling
+  * ggml: semantic versioning (runtime query sketch further down),
+    backend/device extensions, optimizations, fixes for embedding,
+    quantization, padding
+  * ggml-cpu: SIMD support (MXFP4 for s390x), cpumask respect,
+    ARM INT8 checks
+  * Common: fixes for memory corruption, offline mode without curl,
+    switch to cpp-httplib
+  * Server: SSE/OpenAI error handling, usage stats opt-in (streaming
+    client sketch near the end of this mail), external test server,
+    removed LLAMA_SERVER_SSL
+  * WebUI: migrated to SvelteKit, hash-based routing, chunk
+    handling fixes
+  * Fixes across model-conversion, rpc, media, devops, embedding
+    docs, typos
+  * Full commit log:
+    https://github.com/ggml-org/llama.cpp/compare/b6428...b6605
+
+-------------------------------------------------------------------
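
For context on the resumable-download item in the changelog above:
the mechanism is plain HTTP range resumption of a partially fetched
model file. A minimal Python sketch of that idea follows; the URL and
file name are placeholders, and this illustrates the technique rather
than llama.cpp's actual downloader code.

    import os
    import requests

    def resume_download(url: str, dest: str, chunk: int = 1 << 20) -> None:
        # Resume from however many bytes are already on disk.
        offset = os.path.getsize(dest) if os.path.exists(dest) else 0
        headers = {"Range": f"bytes={offset}-"} if offset else {}
        with requests.get(url, headers=headers, stream=True, timeout=60) as r:
            if offset and r.status_code != 206:
                offset = 0  # server ignored the Range header; restart
            r.raise_for_status()
            with open(dest, "ab" if offset else "wb") as f:
                for piece in r.iter_content(chunk):
                    f.write(piece)

    resume_download("https://example.com/model.gguf", "model.gguf")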

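The qwen3 reranker support pairs with llama-server's rerank endpoint.
A hedged client sketch, assuming a local server started with a
reranker model and reranking enabled; the model file name and exact
flags are placeholders, and the field names follow the server's
Jina-style rerank API and may differ by build:

    import requests

    # Assumes something like: llama-server -m qwen3-reranker.gguf --reranking
    resp = requests.post(
        "http://127.0.0.1:8080/v1/rerank",
        json={
            "query": "What is the capital of France?",
            "documents": [
                "Paris is the capital of France.",
                "Berlin is the capital of Germany.",
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Each result carries a document index and a relevance score.
    for hit in sorted(resp.json()["results"],
                      key=lambda r: r["relevance_score"], reverse=True):
        print(hit["index"], hit["relevance_score"])
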
Old:
----
  llamacpp-6428.tar.gz

New:
----
  llamacpp-6605.tar.gz

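On the ggml semantic-versioning item: downstream code can query the
library version at runtime. A sketch via Python's ctypes, assuming a
ggml_version() accessor is exported and that the library is installed
as libggml.so; both are assumptions, not verified against this
package:

    import ctypes

    # Library name and the ggml_version() symbol are assumptions here.
    ggml = ctypes.CDLL("libggml.so")
    ggml.ggml_version.restype = ctypes.c_char_p
    print("ggml version:", ggml.ggml_version().decode())
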
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ llamacpp.spec ++++++
--- /var/tmp/diff_new_pack.NPvYny/_old  2025-09-29 16:34:28.192061632 +0200
+++ /var/tmp/diff_new_pack.NPvYny/_new  2025-09-29 16:34:28.192061632 +0200
@@ -20,7 +20,7 @@
 %global backend_dir %{_libdir}/ggml
 
 Name:           llamacpp
-Version:        6428
+Version:        6605
 Release:        0
 Summary:        Inference of Meta's LLaMA model (and others) in pure C/C++
 License:        MIT

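The server's usage-stats opt-in follows the OpenAI streaming
convention: the client sets stream_options.include_usage and the
final SSE chunk carries the token counts. A hedged client sketch
against a default local llama-server; the port, payload fields, and
the shape of mid-stream error objects are assumptions based on the
OpenAI-compatible API:

    import json
    import requests

    payload = {
        "messages": [{"role": "user", "content": "Say hello."}],
        "stream": True,
        # Opt in to token-usage stats on the final streamed chunk.
        "stream_options": {"include_usage": True},
    }
    with requests.post("http://127.0.0.1:8080/v1/chat/completions",
                       json=payload, stream=True, timeout=60) as r:
        r.raise_for_status()
        for line in r.iter_lines(decode_unicode=True):
            if not line or not line.startswith("data: "):
                continue
            data = line[len("data: "):]
            if data == "[DONE]":
                break
            event = json.loads(data)
            if "error" in event:    # mid-stream error object (shape assumed)
                raise RuntimeError(event["error"])
            if event.get("usage"):  # final chunk carries the token counts
                print("\nusage:", event["usage"])
            for choice in event.get("choices", []):
                print(choice["delta"].get("content", ""), end="", flush=True)
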
++++++ llamacpp-6428.tar.gz -> llamacpp-6605.tar.gz ++++++
/work/SRC/openSUSE:Factory/llamacpp/llamacpp-6428.tar.gz /work/SRC/openSUSE:Factory/.llamacpp.new.11973/llamacpp-6605.tar.gz differ: char 12, line 1
