Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package llamacpp for openSUSE:Factory checked in at 2025-05-30 14:32:23
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/llamacpp (Old)
 and      /work/SRC/openSUSE:Factory/.llamacpp.new.25440 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "llamacpp"

Fri May 30 14:32:23 2025 rev:9 rq:1280718 version:5516

Changes:
--------
--- /work/SRC/openSUSE:Factory/llamacpp/llamacpp.changes        2025-05-20 12:19:58.581773369 +0200
+++ /work/SRC/openSUSE:Factory/.llamacpp.new.25440/llamacpp.changes     2025-05-30 17:23:42.995254374 +0200
@@ -1,0 +2,21 @@
+Tue May 27 22:51:38 UTC 2025 - Eyad Issa <eyadlore...@gmail.com>
+
+- Update to 5516:
+  * llama : remove llama_kv_cache_view API
+  * model : disable SWA for Phi models
+  * kv-cache : simplify the interface
+  * server : Add the endpoints /api/tags and /api/chat
+  * ggml : add ggml_gelu_erf()
+  * hparams : support models for which all layers use SWA
+  * opencl: fix a couple of crashes
+  * opencl: Add support for multiple devices
+  * mtmd : add ultravox audio input
+  * server : support audio input
+  * server: streaming of tool calls and thoughts when jinja is on
+  * mtmd : support Qwen 2.5 Omni
+  * ggml : riscv: add xtheadvector support
+  * opencl : various optimizations
+  * Full changelog:
+    https://github.com/ggml-org/llama.cpp/compare/b5426...b5516
+
+-------------------------------------------------------------------
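
About the new /api/tags and /api/chat server endpoints mentioned in the
changelog above: these appear to mirror the Ollama HTTP API. Below is a
minimal libcurl sketch of how a client might call them, assuming a
llama-server on the default http://localhost:8080 and an Ollama-style JSON
chat body; the exact request/response schema is not shown in this log, so
treat the payload as illustrative.

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* GET /api/tags: list the models the server exposes.
           libcurl's default write callback prints the response to stdout. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/api/tags");
        CURLcode rc = curl_easy_perform(curl);

        if (rc == CURLE_OK) {
            /* POST /api/chat with an (assumed) Ollama-style JSON body. */
            struct curl_slist *hdrs =
                curl_slist_append(NULL, "Content-Type: application/json");
            curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/api/chat");
            curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                "{\"model\":\"default\","
                "\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"}]}");
            rc = curl_easy_perform(curl);
            curl_slist_free_all(hdrs);
        }

        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }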
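
A note on the ggml_gelu_erf() item: this presumably adds the exact ("erf")
form of GELU, as opposed to the tanh approximation ggml uses elsewhere. A
minimal C sketch of that formula (illustrative only, not the ggml source;
the name gelu_erf_ref is made up here):

    #include <math.h>

    /* Exact GELU via the Gauss error function:
       GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2))) */
    static float gelu_erf_ref(float x) {
        return 0.5f * x * (1.0f + erff(x / sqrtf(2.0f)));
    }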

Old:
----
  llamacpp-5426.tar.gz

New:
----
  llamacpp-5516.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ llamacpp.spec ++++++
--- /var/tmp/diff_new_pack.wlC0AV/_old  2025-05-30 17:23:43.527276470 +0200
+++ /var/tmp/diff_new_pack.wlC0AV/_new  2025-05-30 17:23:43.527276470 +0200
@@ -17,9 +17,9 @@
 
 
 Name:           llamacpp
-Version:        5426
+Version:        5516
 Release:        0
-Summary:        llama-cli tool to run inference using the llama.cpp library
+Summary:        Inference of Meta's LLaMA model (and others) in pure C/C++
 License:        MIT
 URL:            https://github.com/ggml-org/llama.cpp
Source:         https://github.com/ggml-org/llama.cpp/archive/b%{version}/%{name}-%{version}.tar.gz

++++++ llamacpp-5426.tar.gz -> llamacpp-5516.tar.gz ++++++
/work/SRC/openSUSE:Factory/llamacpp/llamacpp-5426.tar.gz /work/SRC/openSUSE:Factory/.llamacpp.new.25440/llamacpp-5516.tar.gz differ: char 29, line 2
