branch: elpa/gptel
commit 525ab4b7fa0108d837734771d8148eac130ff57d
Author: Michael Werner <michaelwer...@mailbox.org>
Commit: karthink <karthikchikmaga...@gmail.com>
    gptel-privategpt: Add usage instructions to README

    README: Add instructions for using PrivateGPT.
---
 README.org | 71 +++++++++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 54 insertions(+), 17 deletions(-)

diff --git a/README.org b/README.org
index d57bcbc170..5329fad04f 100644
--- a/README.org
+++ b/README.org
@@ -4,23 +4,24 @@

 gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.

-| LLM Backend | Supports | Requires |
-|--------------------+----------+---------------------------|
-| ChatGPT | ✓ | [[https://platform.openai.com/account/api-keys][API key]] |
-| Azure | ✓ | Deployment and API key |
-| Ollama | ✓ | [[https://ollama.ai/][Ollama running locally]] |
-| GPT4All | ✓ | [[https://gpt4all.io/index.html][GPT4All running locally]] |
-| Gemini | ✓ | [[https://makersuite.google.com/app/apikey][API key]] |
-| Llama.cpp | ✓ | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
-| Llamafile | ✓ | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
-| Kagi FastGPT | ✓ | [[https://kagi.com/settings?p=api][API key]] |
-| Kagi Summarizer | ✓ | [[https://kagi.com/settings?p=api][API key]] |
-| together.ai | ✓ | [[https://api.together.xyz/settings/api-keys][API key]] |
-| Anyscale | ✓ | [[https://docs.endpoints.anyscale.com/][API key]] |
-| Perplexity | ✓ | [[https://docs.perplexity.ai/docs/getting-started][API key]] |
-| Anthropic (Claude) | ✓ | [[https://www.anthropic.com/api][API key]] |
-| Groq | ✓ | [[https://console.groq.com/keys][API key]] |
-| OpenRouter | ✓ | [[https://openrouter.ai/keys][API key]] |
+| LLM Backend | Supports | Requires |
+|--------------------+----------+----------------------------|
+| ChatGPT | ✓ | [[https://platform.openai.com/account/api-keys][API key]] |
+| Azure | ✓ | Deployment and API key |
+| Ollama | ✓ | [[https://ollama.ai/][Ollama running locally]] |
+| GPT4All | ✓ | [[https://gpt4all.io/index.html][GPT4All running locally]] |
+| Gemini | ✓ | [[https://makersuite.google.com/app/apikey][API key]] |
+| Llama.cpp | ✓ | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| Llamafile | ✓ | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
+| Kagi FastGPT | ✓ | [[https://kagi.com/settings?p=api][API key]] |
+| Kagi Summarizer | ✓ | [[https://kagi.com/settings?p=api][API key]] |
+| together.ai | ✓ | [[https://api.together.xyz/settings/api-keys][API key]] |
+| Anyscale | ✓ | [[https://docs.endpoints.anyscale.com/][API key]] |
+| Perplexity | ✓ | [[https://docs.perplexity.ai/docs/getting-started][API key]] |
+| Anthropic (Claude) | ✓ | [[https://www.anthropic.com/api][API key]] |
+| Groq | ✓ | [[https://console.groq.com/keys][API key]] |
+| OpenRouter | ✓ | [[https://openrouter.ai/keys][API key]] |
+| PrivateGPT | ✓ | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |

 *General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])

@@ -63,6 +64,7 @@ gptel uses Curl if available, but falls back to url-retrieve to work without ext
   - [[#anthropic-claude][Anthropic (Claude)]]
   - [[#groq][Groq]]
   - [[#openrouter][OpenRouter]]
+  - [[#privategpt][PrivateGPT]]
 - [[#usage][Usage]]
   - [[#in-any-buffer][In any buffer:]]
   - [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]

@@ -556,6 +558,41 @@ The above code makes the backend available to select. If you want it to be the
 #+end_src

 #+html: </details>
+**** PrivateGPT
+#+html: </summary>
+
+Register a backend with
+#+begin_src emacs-lisp
+(gptel-make-privategpt "privateGPT"  ;Any name you want
+  :protocol "http"
+  :host "localhost:8001"
+  :stream t
+  :context t   ;Use context provided by embeddings
+  :sources t   ;Return information about source documents
+  :models '("private-gpt"))
+
+#+end_src
+
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
+
+***** (Optional) Set as the default gptel backend
+
+The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
+#+begin_src emacs-lisp
+;; OPTIONAL configuration
+(setq gptel-model "private-gpt"
+      gptel-backend
+      (gptel-make-privategpt "privateGPT"  ;Any name you want
+        :protocol "http"
+        :host "localhost:8001"
+        :stream t
+        :context t   ;Use context provided by embeddings
+        :sources t   ;Return information about source documents
+        :models '("private-gpt")))
+
+#+end_src
+
+#+html: </details>
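For context on what the configuration in this patch produces at the wire level: PrivateGPT exposes an OpenAI-compatible chat API, and a backend registered with =:protocol "http"= and =:host "localhost:8001"= would talk to it there. The sketch below (Python, not part of gptel) constructs such a request payload; the =/v1/chat/completions= path and the =use_context= / =include_sources= fields follow PrivateGPT's documented API but should be treated as assumptions here, since this commit does not show gptel's request-building code.

```python
import json

# Hypothetical illustration of the request implied by the configuration in
# the patch above. The extra fields mirror the elisp options:
#   use_context     <-> :context t  (answer using ingested documents)
#   include_sources <-> :sources t  (report which source documents were used)
url = "http://localhost:8001/v1/chat/completions"

payload = {
    "model": "private-gpt",      # matches :models '("private-gpt")
    "stream": True,              # matches :stream t
    "use_context": True,
    "include_sources": True,
    "messages": [
        {"role": "user", "content": "Summarize the ingested documents."},
    ],
}

print(url)
print(json.dumps(payload, indent=2))
```

Sending this with any HTTP client against a locally running PrivateGPT instance should stream back a chat completion; the exact response shape is defined by PrivateGPT, not by this patch.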