Public bug reported:
Right now installing llama.cpp from resolute-proposed (8064+dfsg-1) will
not pull in any libggml backends.
This is because all backends are Suggests for libggml0.
# apt depends libggml0
libggml0
  Depends: libc6 (>= 2.38)
  Depends: libgcc-s1 (>= 3.3.1)
  Depends: libgomp1 (>= 4.9)
  Depends: libstdc++6 (>= 11)
  Breaks: libggml
  Breaks: <libggml0-backend-cpu>
  Suggests: libggml0-backend-blas
  Suggests: libggml0-backend-cuda
  Suggests: libggml0-backend-hip
  Suggests: libggml0-backend-vulkan
  Replaces: libggml
  Replaces: <libggml0-backend-cpu>
This means that someone who runs `apt install llama.cpp` will get the CPU
backend by default and might not even know there is a libggml0 backend
that supports their hardware.
There are a few ways I can think of to solve this:
1) Add llama.cpp metapackages (e.g. llama.cpp-rocm, llama.cpp-cuda, etc.)
2) Have hardware metapackages do it (e.g. `apt install rocm` has a
Recommends on libggml0-backend-hip)
3) Add Recommends for all backends to llama.cpp.
I personally like #3 the most: when a user runs `apt install llama.cpp`,
they will get maximum compatibility across whatever hardware they have
available. If a user doesn't want all the backends, they can remove
them, since they are only Recommends.
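For option #3, the change would be a sketch along these lines in
llama.cpp's debian/control (backend package names are taken from the
Suggests list above; the rest of the stanza is illustrative, not the
actual packaging):

```
Package: llama.cpp
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Recommends: libggml0-backend-blas,
            libggml0-backend-cuda,
            libggml0-backend-hip,
            libggml0-backend-vulkan
```

Since apt installs Recommends by default, `apt install llama.cpp` would
then pull in all backends, while users who don't want them could still
opt out with `--no-install-recommends` or remove individual backend
packages afterwards.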
** Affects: llama.cpp (Ubuntu)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2141980
Title:
llama.cpp doesn't pull in backends by default
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/llama.cpp/+bug/2141980/+subscriptions
--
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs