On 2/3/26 6:56 PM, Kirill A. Korinsky wrote:
> On Tue, 03 Feb 2026 18:45:00 +0100,
> Volker Schlecht <[email protected]> wrote:
>> While it *is* based on libggml, the sd-cpp ggml is built with
>> GGML_MAX_NAME=128, so we can't use devel/libggml from ports.
>> Likewise, we can't dynamically select the backend as in llama.cpp,
>> hence the -vulkan FLAVOR.
> Two remarks:
> 1. I've tried to rebuild llama.cpp and whisper.cpp against ggml with
> GGML_MAX_NAME=128 and it works, so at least this isn't a blocker.
> 2. We can still use the global libggml, but we'd have to link the
> backend manually, i.e. link either Vulkan or the CPU backend.
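Side note for the archives: the reason GGML_MAX_NAME is ABI-relevant at all
is that ggml stores the tensor name as a fixed-size array inside
struct ggml_tensor, so the constant is baked into the struct layout and
everything linking against libggml has to be built with the same value.
A simplified stand-in sketch (not the real ggml structs):

    /* The name buffer lives inline in the struct, so GGML_MAX_NAME
     * changes its size and every field offset after it. */
    #include <stdio.h>
    #include <stddef.h>

    struct tensor_64  { int type; char name[64];  void *data; };
    struct tensor_128 { int type; char name[128]; void *data; };

    int main(void) {
        printf("GGML_MAX_NAME=64 : sizeof=%zu, data at %zu\n",
               sizeof(struct tensor_64),  offsetof(struct tensor_64, data));
        printf("GGML_MAX_NAME=128: sizeof=%zu, data at %zu\n",
               sizeof(struct tensor_128), offsetof(struct tensor_128, data));
        /* Mix a consumer built with one value and a libggml built with
         * the other, and every access past `name` lands in the wrong
         * place -- hence rebuilding llama.cpp/whisper.cpp too. */
        return 0;
    }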
Cool. I still wonder whether that isn't more trouble than it's worth to
save, what, ~2 MB of binary size for something that requires a
multi-gigabyte diffusion model to be useful (for very small values of
'useful' :-))
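
For the record, here's roughly what "manually link the backend" would look
like on the consumer side; a sketch assuming the ggml_backend_cpu_init() /
ggml_backend_vk_init() entry points (header locations vary between ggml
versions), with the choice fixed at build time per FLAVOR instead of
resolved at runtime the way llama.cpp does it:

    /* Sketch: pick the backend at build time and link it directly,
     * rather than loading it dynamically. USE_VULKAN would be set by
     * the -vulkan FLAVOR. */
    #include <stdio.h>
    #include "ggml-backend.h"
    #ifdef USE_VULKAN
    #include "ggml-vulkan.h"
    #endif

    static ggml_backend_t init_backend(void) {
    #ifdef USE_VULKAN
        return ggml_backend_vk_init(0);   /* Vulkan device 0 */
    #else
        return ggml_backend_cpu_init();   /* plain CPU backend */
    #endif
    }

    int main(void) {
        ggml_backend_t be = init_backend();
        if (be == NULL) {
            fprintf(stderr, "backend init failed\n");
            return 1;
        }
        printf("using backend: %s\n", ggml_backend_name(be));
        ggml_backend_free(be);
        return 0;
    }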