On Wed, 21 Jan 2026 18:18:29 +0100,
Chris Cappuccio <[email protected]> wrote:
> 
> Kirill A. Korinsky [[email protected]] wrote:
> > 
> > Here is a version where I:
> > 1. added runtime dependency on textproc/ripgrep
> > 2. patched required current_exe() usage
> > 3. added a README with a sample of how it can be used against a llama.cpp server
> > 
> 
> I like this. I had done a similar README. In model_providers you should add:
> 
> headers = {}
> query_params = {}
> 
> They might not break anything today, but they are of no use to llama.cpp,
> and their future behavior is less certain. They are designed for OpenAI's
> interface.
> 

Do you mean something like this?

        model_provider = "local"
        model = "local"

        [model_providers.local]
        name = "llama-server"
        base_url = "http://127.0.0.1:8080/v1"
        wire_api = "chat"
        headers = {}
        query_params = {}
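
With the above in ~/.codex/config.toml, a quick end-to-end check could look
like this (an untested sketch; the model path and prompt are placeholders,
and the port matches the base_url above):

        $ llama-server -m /path/to/model.gguf --port 8080
        $ codex "say hello"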


-- 
wbr, Kirill
