Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package llamacpp for openSUSE:Factory 
checked in at 2025-12-05 16:56:38
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/llamacpp (Old)
 and      /work/SRC/openSUSE:Factory/.llamacpp.new.1939 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "llamacpp"

Fri Dec  5 16:56:38 2025 rev:23 rq:1321203 version:7266

Changes:
--------
--- /work/SRC/openSUSE:Factory/llamacpp/llamacpp.changes        2025-11-06 18:13:47.276894816 +0100
+++ /work/SRC/openSUSE:Factory/.llamacpp.new.1939/llamacpp.changes      2025-12-05 16:58:03.155959603 +0100
@@ -1,0 +2,42 @@
+Thu Dec  4 12:15:40 UTC 2025 - Eyad Issa <[email protected]>
+
+- Switch to .so versioning, following upstream
+
+- Update to version 7266:
+  * Added support for several new and updated models including
+    Ministral3, Qwen3 Next, RND1 Diffusion LM, AfmoeForCausalLM,
+    openPangu-Embedded, and improved detection for
+    GigaChat3-10-A1.8B.
+  * Server improvements: multi-model API, Anthropic Messages API,
+    task generator API, HTTP interface split, jinja enabled by
+    default.
+  * Chat and parsing improvements: generalized XML-style tool-call
+    parsing, composable PEG parser combinators.
+  * WebUI enhancements: restored HTML in Markdown tables, rehype
+    plugin improvements, attachment-handling UX improvements,
+    Harmony tool-call visualization, new keyboard shortcuts,
+    clickability fixes, autoscroll toggle, and new “Continue”
+    action.
+  * CUDA backend improvements: FP16 restrictions, memory bandwidth
+    improvements, stream-based concurrency, MMQ and fusion fixes,
+    rope fusion corrections, improved handling of nb00/nb02, and
+    various stability fixes.
+  * Vulkan backend improvements: new operators, improved FA and
+    MMVQ support, async graph_compute, conv2d spec constants, i32 copy
+    support.
+  * GGML and CPU backend updates: expanded RVV, ARM64, RISC-V
+    feature detection; new CPU intrinsic implementations; improved
+    GEMM/GEMV repack kernels; ops additions.
+  * OpenCL, SYCL, HIP, MUSA, and Hexagon improvements: expanded
+    operator support, new kernels, fallback logic for older SoCs,
+    buffer handling fixes.
+  * MTMD (multimodal) improvements: warmup toggles, CLI log-noise
+    reduction, image embedding size fixes, and audio model patch
+    fixes.
+  * General performance, stability, and correctness improvements
+    across CPU, GPU, schedulers, memory management, kv-cache,
+    async behavior, thread safety, and operator fusion.
+  * Full commit log:
+    https://github.com/ggml-org/llama.cpp/compare/b6937...b7266
+
+-------------------------------------------------------------------

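A note on the ".so versioning" entry above: the spec now ships the
versioned shared objects as upstream's CMake build produces them, and
the runtime packages are named after the soversion suffix (libllama0,
libggml0, libggml-base0, libmtmd0), following the openSUSE shared
library packaging policy. A minimal sketch of the resulting layout for
libllama, assuming upstream sets SOVERSION 0 as the .so.0 symlinks in
the spec diff below suggest, and assuming the usual CMake namelink
chain (/usr/lib64 stands in for %{_libdir}):

    # runtime package libllama0:
    /usr/lib64/libllama.so.0.0.7266          # the real shared object

    # llamacpp-devel (symlinks, per the %files lists below):
    /usr/lib64/libllama.so.0 -> libllama.so.0.0.7266
    /usr/lib64/libllama.so   -> libllama.so.0

libmtmd follows the same 0.0.7266 scheme, while libggml and
libggml-base track upstream ggml's own 0.9.4 version.
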
Old:
----
  llamacpp-6937.tar.gz

New:
----
  llamacpp-7266.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ llamacpp.spec ++++++
--- /var/tmp/diff_new_pack.ioQ0Cj/_old  2025-12-05 16:58:04.344010063 +0100
+++ /var/tmp/diff_new_pack.ioQ0Cj/_new  2025-12-05 16:58:04.344010063 +0100
@@ -19,8 +19,17 @@
 
 %global backend_dir %{_libdir}/ggml
 
+%global llama_sover        0.0.%{version}
+%global llama_sover_suffix 0
+
+%global mtmd_sover         0.0.%{version}
+%global mtmd_sover_suffix  0
+
+%global ggml_sover         0.9.4
+%global ggml_sover_suffix  0
+
 Name:           llamacpp
-Version:        6937
+Version:        7266
 Release:        0
 Summary:        Inference of Meta's LLaMA model (and others) in pure C/C++
 License:        MIT
@@ -47,14 +56,16 @@
 
 %package devel
 Summary:        Development files for llama.cpp
+Obsoletes:      libllama < 7266
+Obsoletes:      libmtmd < 7266
 
 %description devel
 Development files for llama.cpp
 
-%package -n libllama
+%package -n libllama%{llama_sover_suffix}
 Summary:        A C++ interface for running inference with large language models
 
-%description -n libllama
+%description -n libllama%{llama_sover_suffix}
 The llama.cpp library provides a C++ interface for running inference
 with large language models (LLMs). Initially designed to support Meta's
 LLaMA model, it has since been extended to work with a variety of other models.
@@ -62,20 +73,20 @@
 This package includes the shared libraries necessary for running applications
 that depend on libllama.so.
 
-%package -n libggml
+%package -n libggml%{ggml_sover_suffix}
 Summary:        A tensor library for C++
 Requires:       libggml-cpu
 Recommends:     libggml-opencl
 Recommends:     libggml-vulkan
 
-%description -n libggml
+%description -n libggml%{ggml_sover_suffix}
 A tensor library for C++. It was originally created to support the
 llama.cpp and WhisperCpp projects.
 
-%package -n libggml-base
+%package -n libggml-base%{ggml_sover_suffix}
 Summary:        A tensor library for C++ (base)
 
-%description -n libggml-base
+%description -n libggml-base%{ggml_sover_suffix}
 A tensor library for C++. It was originally created to support the
 llama.cpp and WhisperCpp projects.
 
@@ -110,6 +121,8 @@
 
 %package -n ggml-devel
 Summary:        Development files for ggml
+Obsoletes:      libggml < 7266
+Obsoletes:      libggml-base < 7266
 
 %description -n ggml-devel
 A tensor library for C++. It was originally created to support the llama.cpp
@@ -118,10 +131,10 @@
 This package includes the development files necessary for building applications
 that depend on ggml.
 
-%package -n libmtmd
+%package -n libmtmd%{mtmd_sover_suffix}
 Summary:        Library to run multimodal inference models
 
-%description -n libmtmd
+%description -n libmtmd%{mtmd_sover_suffix}
 As outlined in the history, libmtmd is the modern library designed to
 replace the original llava.cpp implementation for handling multimodal inputs.
 
@@ -138,6 +151,11 @@
 %description -n libllava
 Library to handle multimodal inputs for llama.cpp.
 
+%ldconfig_scriptlets -n libllama%{llama_sover_suffix}
+%ldconfig_scriptlets -n libggml%{ggml_sover_suffix}
+%ldconfig_scriptlets -n libggml-base%{ggml_sover_suffix}
+%ldconfig_scriptlets -n libmtmd%{mtmd_sover_suffix}
+
 %prep
 %autosetup -p1 -n llama.cpp-b%{version}
 
@@ -184,18 +202,24 @@
 %{_includedir}/mtmd*
 %{_libdir}/cmake/llama
 %{_libdir}/pkgconfig/llama.pc
+# libllama symlinks
+%{_libdir}/libllama.so
+%{_libdir}/libllama.so.0
+# libmtmd symlinks
+%{_libdir}/libmtmd.so
+%{_libdir}/libmtmd.so.0
 
-%files -n libllama
+%files -n libllama%{llama_sover_suffix}
 %license LICENSE
-%{_libdir}/libllama.so
+%{_libdir}/libllama.so.%{llama_sover}
 
-%files -n libggml
+%files -n libggml%{ggml_sover_suffix}
 %license LICENSE
-%{_libdir}/libggml.so
+%{_libdir}/libggml.so.%{ggml_sover}
 
-%files -n libggml-base
+%files -n libggml-base%{ggml_sover_suffix}
 %license LICENSE
-%{_libdir}/libggml-base.so
+%{_libdir}/libggml-base.so.%{ggml_sover}
 
 %files -n libggml-cpu
 %license LICENSE
@@ -217,8 +241,12 @@
 %{_includedir}/ggml*.h
 %{_includedir}/gguf.h
 %{_libdir}/cmake/ggml
+%{_libdir}/libggml.so
+%{_libdir}/libggml.so.0
+%{_libdir}/libggml-base.so
+%{_libdir}/libggml-base.so.0
 
-%files -n libmtmd
+%files -n libmtmd%{mtmd_sover_suffix}
 %license LICENSE
-%{_libdir}/libmtmd.so
+%{_libdir}/libmtmd.so.%{mtmd_sover}
 

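Two packaging details in the spec diff above are worth spelling out.
The Obsoletes added to llamacpp-devel and ggml-devel retire the old
unversioned package names (libllama, libmtmd, libggml, libggml-base),
whose unversioned .so links now live in the devel packages, so
upgrades resolve cleanly. The %ldconfig_scriptlets calls generate the
scriptlets (or, on current rpm, the file triggers) that refresh the
runtime linker cache when the versioned libraries are installed or
removed; on older rpm the macro expands to roughly the classic
hand-written form, sketched here for one subpackage:

    %post   -n libllama0 -p /sbin/ldconfig
    %postun -n libllama0 -p /sbin/ldconfig
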
++++++ llamacpp-6937.tar.gz -> llamacpp-7266.tar.gz ++++++
/work/SRC/openSUSE:Factory/llamacpp/llamacpp-6937.tar.gz /work/SRC/openSUSE:Factory/.llamacpp.new.1939/llamacpp-7266.tar.gz differ: char 13, line 1
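
As a quick sanity check after a rebuild, the soname embedded in the
shipped library should match the .so.0 symlink scheme above; readelf
shows it directly (hypothetical path, assuming the layout sketched
earlier in this mail):

    $ readelf -d /usr/lib64/libllama.so.0.0.7266 | grep SONAME
     0x000000000000000e (SONAME)             Library soname: [libllama.so.0]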
