Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package ollama for openSUSE:Factory checked in at 2024-04-17 14:45:50
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/ollama (Old)
 and      /work/SRC/openSUSE:Factory/.ollama.new.26366 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "ollama"

Wed Apr 17 14:45:50 2024 rev:2 rq:1168439 version:0.1.31

Changes:
--------
--- /work/SRC/openSUSE:Factory/ollama/ollama.changes    2024-02-27 22:50:05.726316486 +0100
+++ /work/SRC/openSUSE:Factory/.ollama.new.26366/ollama.changes 2024-04-17 14:46:28.972496139 +0200
@@ -1,0 +2,138 @@
+Tue Apr 16 10:52:25 UTC 2024 - bwiedem...@suse.com
+
+- Update to version 0.1.31:
+  * Backport MacOS SDK fix from main
+  * Apply 01-cache.diff
+  * fix: workflows
+  * stub stub
+  * mangle arch
+  * only generate on changes to llm subdirectory
+  * only generate cuda/rocm when changes to llm detected
+  * Detect arrow keys on windows (#3363)
+  * add license in file header for vendored llama.cpp code (#3351)
+  * remove need for `$VSINSTALLDIR` since build will fail if `ninja` cannot be found (#3350)
+  * change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347)
+  * malformed markdown link (#3358)
+  * Switch runner for final release job
+  * Use Rocky Linux Vault to get GCC 10.2 installed
+  * Revert "Switch arm cuda base image to centos 7"
+  * Switch arm cuda base image to centos 7
+  * Bump llama.cpp to b2527
+  * Fix ROCm link in `development.md`
+  * adds ooo to community integrations (#1623)
+  * Add cliobot to ollama supported list (#1873)
+  * Add Dify.AI to community integrations (#1944)
+  * enh: add ollero.nvim to community applications (#1905)
+  * Add typechat-cli to Terminal apps (#2428)
+  * add new Web & Desktop link in readme for alpaca webui (#2881)
+  * Add LibreChat to Web & Desktop Apps (#2918)
+  * Add Community Integration: OllamaGUI (#2927)
+  * Add Community Integration: OpenAOE (#2946)
+  * Add Saddle (#3178)
+  * tlm added to README.md terminal section. (#3274)
+  * Update README.md (#3288)
+  * Update README.md (#3338)
+  * Integration tests conditionally pull
+  * add support for libcudart.so for CUDA devices (adds Jetson support)
+  * llm: prevent race appending to slice (#3320)
+  * Bump llama.cpp to b2510
+  * Add Testcontainers into Libraries section (#3291)
+  * Revamp go based integration tests
+  * rename `.gitattributes`
+  * Bump llama.cpp to b2474
+  * Add docs for GPU selection and nvidia uvm workaround
+  * doc: faq gpu compatibility (#3142)
+  * Update faq.md
+  * Better tmpdir cleanup
+  * Update faq.md
+  * update `faq.md`
+  * dyn global
+  * llama: remove server static assets (#3174)
+  * add `llm/ext_server` directory to `linguist-vendored` (#3173)
+  * Add Radeon gfx940-942 GPU support
+  * Wire up more complete CI for releases
+  * llm,readline: use errors.Is instead of simple == check (#3161)
+  * server: replace blob prefix separator from ':' to '-' (#3146)
+  * Add ROCm support to linux install script (#2966)
+  * .github: fix model and feature request yml (#3155)
+  * .github: add issue templates (#3143)
+  * fix: clip memory leak
+  * Update README.md
+  * add `OLLAMA_KEEP_ALIVE` to environment variable docs for `ollama serve` (#3127)
+  * Default Keep Alive environment variable (#3094)
+  * Use stdin for term discovery on windows
+  * Update ollama.iss
+  * restore locale patch (#3091)
+  * token repeat limit for prediction requests (#3080)
+  * Fix iGPU detection for linux
+  * add more docs for the modelfile message command (#3087)
+  * warn when json format is expected but not mentioned in prompt (#3081)
+  * Adapt our build for imported server.cpp
+  * Import server.cpp as of b2356
+  * refactor readseeker
+  * Add docs explaining GPU selection env vars
+  * chore: fix typo (#3073)
+  * fix gpu_info_cuda.c compile warning (#3077)
+  * use `-trimpath` when building releases (#3069)
+  * relay load model errors to the client (#3065)
+  * Update troubleshooting.md
+  * update llama.cpp submodule to `ceca1ae` (#3064)
+  * convert: fix shape
+  * Avoid rocm runner and dependency clash
+  * fix `03-locale.diff`
+  * Harden for deps file being empty (or short)
+  * Add ollama executable peer dir for rocm
+  * patch: use default locale in wpm tokenizer (#3034)
+  * only copy deps for `amd64` in `build_linux.sh`
+  * Rename ROCm deps file to avoid confusion (#3025)
+  * add `macapp` to `.dockerignore`
+  * add `bundle_metal` and `cleanup_metal` functions to `gen_darwin.sh`
+  * tidy cleanup logs
+  * update llama.cpp submodule to `77d1ac7` (#3030)
+  * disable gpu for certain model architectures and fix divide-by-zero on memory estimation
+  * Doc how to set up ROCm builds on windows
+  * Finish unwinding idempotent payload logic
+  * update llama.cpp submodule to `c2101a2` (#3020)
+  * separate out `isLocalIP`
+  * simplify host checks
+  * add additional allowed hosts
+  * Update docs `README.md` and table of contents
+  * add allowed host middleware and remove `workDir` middleware (#3018)
+  * decode ggla
+  * convert: fix default shape
+  * fix: allow importing a model from name reference (#3005)
+  * update llama.cpp submodule to `6cdabe6` (#2999)
+  * Update api.md
+  * Revert "adjust download and upload concurrency based on available bandwidth" (#2995)
+  * cmd: tighten up env var usage sections (#2962)
+  * default terminal width, height
+  * Refined ROCm troubleshooting docs
+  * Revamp ROCm support
+  * update go to 1.22 in other places (#2975)
+  * docs: Add LLM-X to Web Integration section (#2759)
+  * fix some typos (#2973)
+  * Convert Safetensors to an Ollama model (#2824)
+  * Allow setting max vram for workarounds
+  * cmd: document environment variables for serve command
+  * Add Odin Runes, a Feature-Rich Java UI for Ollama, to README (#2440)
+  * Update api.md
+  * Add NotesOllama to Community Integrations (#2909)
+  * Added community link for Ollama Copilot (#2582)
+  * use LimitGroup for uploads
+  * adjust group limit based on download speed
+  * add new LimitGroup for dynamic concurrency
+  * refactor download run
+
+-------------------------------------------------------------------
+Wed Mar 06 23:51:28 UTC 2024 - computersemiexp...@outlook.com
+
+- Update to version 0.1.28:
+  * Fix embeddings load model behavior (#2848)
+  * Add Community Integration: NextChat (#2780)
+  * prepend image tags (#2789)
+  * fix: print usedMemory size right (#2827)
+  * bump submodule to `87c91c07663b707e831c59ec373b5e665ff9d64a` (#2828)
+  * Add ollama user to video group
+  * Add env var so podman will map cuda GPUs
+
+-------------------------------------------------------------------

Old:
----
  ollama-0.1.27.tar.gz

New:
----
  _servicedata
  ollama-0.1.31.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ ollama.spec ++++++
--- /var/tmp/diff_new_pack.1TmR5q/_old  2024-04-17 14:46:34.852712052 +0200
+++ /var/tmp/diff_new_pack.1TmR5q/_new  2024-04-17 14:46:34.856712199 +0200
@@ -15,8 +15,9 @@
 # Please submit bugfixes or comments via https://bugs.opensuse.org/
 #
 
+
 Name:           ollama
-Version:        0.1.27
+Version:        0.1.31
 Release:        0
 Summary:        Tool for running AI models on-premise
 License:        MIT
@@ -30,7 +31,7 @@
 BuildRequires:  gcc-c++ >= 11.4.0
 BuildRequires:  git
 BuildRequires:  sysuser-tools
-BuildRequires:  golang(API) >= 1.21
+BuildRequires:  golang(API) >= 1.22
 
 %{sysusers_requires}
 

++++++ _service ++++++
--- /var/tmp/diff_new_pack.1TmR5q/_old  2024-04-17 14:46:34.892713521 +0200
+++ /var/tmp/diff_new_pack.1TmR5q/_new  2024-04-17 14:46:34.896713667 +0200
@@ -4,7 +4,7 @@
   <service name="tar_scm" mode="manual">
     <param name="url">https://github.com/ollama/ollama.git</param>
     <param name="scm">git</param>
-    <param name="revision">v0.1.27</param>
+    <param name="revision">v0.1.31</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="versionrewrite-pattern">v(.*)</param>
     <param name="changesgenerate">enable</param>

++++++ _servicedata ++++++
<servicedata>
<service name="tar_scm">
                <param name="url">https://github.com/ollama/ollama.git</param>
<param name="changesrevision">dc011d16b9ff160c0be3829fc39a43054f0315d0</param></service></servicedata>
(No newline at EOF)

++++++ ollama-0.1.27.tar.gz -> ollama-0.1.31.tar.gz ++++++
/work/SRC/openSUSE:Factory/ollama/ollama-0.1.27.tar.gz
/work/SRC/openSUSE:Factory/.ollama.new.26366/ollama-0.1.31.tar.gz differ: char 12, line 1

++++++ vendor.tar.xz ++++++
++++ 239453 lines of diff (skipped)
