branch: elpa/gptel
commit ca6086888ced9d01aec5ec3d7bbc69d308734b4f
Author: Karthik Chikmagalur <karthikchikmaga...@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmaga...@gmail.com>

    gptel: Documentation and formatting changes
    
    * README.org: Spacing and formatting changes to make it easier to
    integrate parts of the README into the manual.  TeXinfo complains
    about heading level 1 -> 3 transitions without a level 2 in
    between, and requires a blank line between text and source blocks.
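
    For instance, the spacing TeXinfo expects looks like this in Org (a
    minimal sketch reusing the README's own Claude snippet; nothing here
    is new README content):

        As an example, registering a backend typically looks like the following:

        #+begin_src emacs-lisp
        (gptel-make-anthropic "Claude" :stream t :key gptel-api-key)
        #+end_src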
    
    * gptel-transient.el (gptel--preset): Clarify messaging.
    
    * gptel.el (gptel--transform-apply-preset): Clarify exactly how
    @preset is applied.  Presets can be used for interesting
    transformations of the prompt, but this requires careful
    consideration of where point is when the preset is applied, etc.
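
    For instance, with a preset defined roughly as follows (the name and
    keywords here are only illustrative; see `gptel-make-preset' for the
    supported options):

        (gptel-make-preset 'explain          ;hypothetical preset
          :description "Explain code in plain language"
          :system "You are a patient programming tutor."
          :model 'gpt-4o-mini)

    a prompt like "@explain what does this function do?" applies the
    explain preset to that request: the "@explain" text is deleted and
    point is left at its position, so the preset can further transform
    the surrounding prompt.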
---
 README.org         | 92 +++++++++++++++++++++++++++++++++++++++++++-----------
 gptel-transient.el |  2 +-
 gptel.el           |  6 +++-
 3 files changed, 80 insertions(+), 20 deletions(-)

diff --git a/README.org b/README.org
index 7430d343e63..3898b599da3 100644
--- a/README.org
+++ b/README.org
@@ -94,13 +94,14 @@ gptel uses Curl if available, but falls back to the built-in url-retrieve to wor
 
 ** Contents :toc:
   - [[#installation][Installation]]
-      - [[#straight][Straight]]
-      - [[#manual][Manual]]
-      - [[#doom-emacs][Doom Emacs]]
-      - [[#spacemacs][Spacemacs]]
+    - [[#straight][Straight]]
+    - [[#manual][Manual]]
+    - [[#doom-emacs][Doom Emacs]]
+    - [[#spacemacs][Spacemacs]]
   - [[#setup][Setup]]
     - [[#chatgpt][ChatGPT]]
     - [[#other-llm-backends][Other LLM backends]]
+      - [[#optional-securing-api-keys-with-authinfo][(Optional) Securing API keys with =authinfo=]]
       - [[#azure][Azure]]
       - [[#gpt4all][GPT4All]]
       - [[#ollama][Ollama]]
@@ -176,40 +177,44 @@ Note: gptel requires Transient 0.7.4 or higher.  Transient is a built-in package
 - *Optional:* Install =markdown-mode=.
 
 #+html: <details><summary>
-**** Straight
+*** Straight
 #+html: </summary>
 #+begin_src emacs-lisp
   (straight-use-package 'gptel)
 #+end_src
 #+html: </details>
 #+html: <details><summary>
-**** Manual
+*** Manual
 #+html: </summary>
 Note: gptel requires Transient 0.7.4 or higher.  Transient is a built-in package and Emacs does not update it by default.  Ensure that =package-install-upgrade-built-in= is true, or update Transient manually.
 
 Clone or download this repository and run =M-x package-install-file⏎= on the repository directory.
 #+html: </details>
 #+html: <details><summary>
-**** Doom Emacs
+*** Doom Emacs
 #+html: </summary>
 In =packages.el=
+
 #+begin_src emacs-lisp
 (package! gptel :recipe (:nonrecursive t))
 #+end_src
 
 In =config.el=
+
 #+begin_src emacs-lisp
 (use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
 #+end_src
+
 "your key" can be the API key itself, or (safer) a function that returns the 
key.  Setting =gptel-api-key= is optional, you will be asked for a key if it's 
not found.
 
 #+html: </details>
 #+html: <details><summary>
-**** Spacemacs
+*** Spacemacs
 #+html: </summary>
 In your =.spacemacs= file, add =llm-client= to =dotspacemacs-configuration-layers=.
+
 #+begin_src emacs-lisp
 (llm-client :variables
             llm-client-enable-gptel t)
@@ -229,18 +234,21 @@ Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more s
 ChatGPT is configured out of the box.  If you want to use other LLM backends (like Ollama, Claude/Anthropic or Gemini) you need to register and configure them first.
 
 As an example, registering a backend typically looks like the following:
+
 #+begin_src emacs-lisp
 (gptel-make-anthropic "Claude" :stream t :key gptel-api-key)
 #+end_src
+
 Once this backend is registered, you'll see model names prefixed by "Claude:" appear in gptel's menu.
 
 See below for details on your preferred LLM provider, including local LLMs.
 
 #+html: <details><summary>
-***** (Optional) Securing API keys with =authinfo=
+**** (Optional) Securing API keys with =authinfo=
 #+html: </summary>
 
 You can use Emacs' built-in support for =authinfo= to store API keys required by gptel.  Add your API keys to =~/.authinfo=, and leave =gptel-api-key= set to its default.  By default, the API endpoint DNS name (e.g. "api.openai.com") is used as HOST and "apikey" as USER.
+
 #+begin_src authinfo
 machine api.openai.com login apikey password sk-secret-openai-api-key-goes-here
machine api.anthropic.com login apikey password sk-secret-anthropic-api-key-goes-here
@@ -252,6 +260,7 @@ machine api.anthropic.com login apikey password sk-secret-anthropic-api-key-goes
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-azure "Azure-1"             ;Name, whatever you'd like
   :protocol "https"                     ;Optional -- https is the default
@@ -261,6 +270,7 @@ Register a backend with
   :key #'gptel-api-key
   :models '(gpt-3.5-turbo gpt-4))
 #+end_src
+
 Refer to the documentation of =gptel-make-azure= to set more parameters.
 
 You can pick this backend from the menu when using gptel. (see [[#usage][Usage]]).
@@ -268,6 +278,7 @@ You can pick this backend from the menu when using gptel. (see [[#usage][Usage]]
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -280,19 +291,21 @@ The above code makes the backend available to select.  If you want it to be the
                  :key #'gptel-api-key
                  :models '(gpt-3.5-turbo gpt-4)))
 #+end_src
-#+html: </details>
 
+#+html: </details>
 #+html: <details><summary>
 **** GPT4All
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-gpt4all "GPT4All"           ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                 ;Where it's running
  :models '(mistral-7b-openorca.Q4_0.gguf)) ;Available models
 #+end_src
+
 These are the required parameters, refer to the documentation of =gptel-make-gpt4all= for more.
 
 You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
@@ -300,6 +313,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.  Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -312,7 +326,6 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 **** Ollama
 #+html: </summary>
@@ -324,6 +337,7 @@ Register a backend with
   :stream t                             ;Stream responses
   :models '(mistral:latest))          ;List of models
 #+end_src
+
 These are the required parameters, refer to the documentation of =gptel-make-ollama= for more.
 
 You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
@@ -331,6 +345,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -384,6 +399,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -404,10 +420,12 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; :key can be a function that returns the API key.
 (gptel-make-gemini "Gemini" :key "YOUR_GEMINI_API_KEY" :stream t)
 #+end_src
+
 These are the required parameters, refer to the documentation of =gptel-make-gemini= for more.
 
 You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
@@ -415,6 +433,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -434,6 +453,7 @@ The above code makes the backend available to select.  If you want it to be the
 (If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] instead of a "command-line llamafile", and a model that supports text generation.)
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Llama.cpp offers an OpenAI compatible API
 (gptel-make-openai "llama-cpp"          ;Any name
@@ -442,6 +462,7 @@ Register a backend with
   :host "localhost:8000"                ;Llama.cpp server location
   :models '(test))                    ;Any names, doesn't matter for Llama
 #+end_src
+
 These are the required parameters, refer to the documentation of =gptel-make-openai= for more.
 
 You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
@@ -449,6 +470,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -472,10 +494,12 @@ Kagi's FastGPT model and the Universal Summarizer are both supported.  A couple
 2. Kagi models do not support multi-turn conversations, interactions are "one-shot".  They also do not support streaming responses.
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-kagi "Kagi"                    ;any name
   :key "YOUR_KAGI_API_KEY")                ;can be a function that returns the 
key
 #+end_src
+
 These are the required parameters, refer to the documentation of =gptel-make-kagi= for more.
 
 You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.
@@ -483,6 +507,7 @@ You can pick this backend and the model (fastgpt/summarizer) from the transient
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -499,6 +524,7 @@ The alternatives to =fastgpt= include =summarize:cecil=, =summarize:agnes=, =sum
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Together.ai offers an OpenAI compatible API
 (gptel-make-openai "TogetherAI"         ;Any name you want
@@ -516,6 +542,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -537,6 +564,7 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Anyscale offers an OpenAI compatible API
 (gptel-make-openai "Anyscale"           ;Any name you want
@@ -551,6 +579,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -569,6 +598,7 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-perplexity "Perplexity"     ;Any name you want
   :key "your-api-key"                   ;can be a function that returns the key
@@ -580,6 +610,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -593,6 +624,7 @@ The above code makes the backend available to select.  If you want it to be the
 **** Anthropic (Claude)
 #+html: </summary>
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-anthropic "Claude"          ;Any name you want
   :stream t                             ;Streaming responses
@@ -605,6 +637,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -636,6 +669,7 @@ You can control whether/how the reasoning output is shown via gptel's menu or =g
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Groq offers an OpenAI compatible API
 (gptel-make-openai "Groq"               ;Any name you want
@@ -656,6 +690,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'mixtral-8x7b-32768
@@ -674,12 +709,12 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 **** Mistral Le Chat
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Mistral offers an OpenAI compatible API
 (gptel-make-openai "MistralLeChat"  ;Any name you want
@@ -695,6 +730,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'mistral-small
@@ -708,13 +744,13 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 
 **** OpenRouter
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; OpenRouter offers an OpenAI compatible API
 (gptel-make-openai "OpenRouter"               ;Any name you want
@@ -736,6 +772,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'mixtral-8x7b-32768
@@ -760,6 +797,7 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-privategpt "privateGPT"               ;Any name you want
   :protocol "http"
@@ -776,6 +814,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'private-gpt
@@ -796,6 +835,7 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-deepseek "DeepSeek"       ;Any name you want
   :stream t                           ;for streaming responses
@@ -807,6 +847,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'deepseek-reasoner
@@ -823,6 +864,7 @@ The above code makes the backend available to select.  If you want it to be the
 Sambanova offers various LLMs through their Samba Nova Cloud offering, with Deepseek-R1 being one of them. The token speed for Deepseek R1 via Sambanova is about 6 times faster than when accessed through deepseek.com
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-openai "Sambanova"        ;Any name you want
   :host "api.sambanova.ai"
@@ -836,11 +878,13 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 
 ***** (Optional) Set as the default gptel backend
 The code above makes the backend available for selection.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Add these two lines to your configuration:
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
   (setq gptel-model 'DeepSeek-R1)
   (setq gptel-backend (gptel-get-backend "Sambanova"))
 #+end_src
+
 #+html: </details>
 #+html: <details><summary>
 
@@ -848,6 +892,7 @@ The code aboves makes the backend available for selection.  If you want it to be
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Cerebras offers an instant OpenAI compatible API
 (gptel-make-openai "Cerebras"
@@ -864,6 +909,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'llama3.1-8b
@@ -885,6 +931,7 @@ The above code makes the backend available to select.  If you want it to be the
 NOTE:  [[https://docs.github.com/en/github-models/about-github-models][GitHub Models]] is /not/ GitHub Copilot!  If you want to use GitHub Copilot chat via gptel, look at the instructions for GitHub CopilotChat below instead.
 
 Register a backend with
+
 #+begin_src emacs-lisp
   ;; Github Models offers an OpenAI compatible API
   (gptel-make-openai "Github Models" ;Any name you want
@@ -904,6 +951,7 @@ You can pick this backend from the menu when using (see [[#usage][Usage]]).
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
   ;; OPTIONAL configuration
   (setq gptel-model  'gpt-4o
@@ -922,6 +970,7 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; Novita AI offers an OpenAI compatible API
 (gptel-make-openai "NovitaAI"         ;Any name you want
@@ -940,6 +989,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq
@@ -957,12 +1007,12 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 **** xAI
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-xai "xAI"                   ; Any name you want
   :stream t
@@ -974,6 +1024,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 (setq gptel-model 'grok-3-latest
       gptel-backend
@@ -983,7 +1034,6 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 **** AI/ML API
 #+html: </summary>
@@ -991,6 +1041,7 @@ The above code makes the backend available to select.  If you want it to be the
 AI/ML API provides 300+ AI models including Deepseek, Gemini, ChatGPT. The models run at enterprise-grade rate limits and uptimes.
 
 Register a backend with
+
 #+begin_src emacs-lisp
 ;; AI/ML API offers an OpenAI compatible API
 (gptel-make-openai "AI/ML API"        ;Any name you want
@@ -1006,6 +1057,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model 'gpt-4o
@@ -1024,6 +1076,7 @@ The above code makes the backend available to select.  If you want it to be the
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-gh-copilot "Copilot")
 #+end_src
@@ -1034,6 +1087,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model 'claude-3.7-sonnet
@@ -1041,12 +1095,12 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 **** AWS Bedrock
 #+html: </summary>
 
 Register a backend with
+
 #+begin_src emacs-lisp
 (gptel-make-bedrock "AWS"
   ;; optionally enable streaming
@@ -1073,6 +1127,7 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
 ***** (Optional) Set as the default gptel backend
 
 The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Use this instead of the above.
+
 #+begin_src emacs-lisp
 ;; OPTIONAL configuration
 (setq gptel-model   'claude-sonnet-4-20250514
@@ -1090,7 +1145,6 @@ The above code makes the backend available to select.  If you want it to be the
 #+end_src
 
 #+html: </details>
-
 #+html: <details><summary>
 **** Moonshot (Kimi)
 #+html: </summary>
@@ -1137,7 +1191,6 @@ Then you also need to add the tool declaration via =:request-params= because it
 Now the chat should be able to automatically use search. Try "what's new today" and you should expect the up-to-date news in response.
 
 #+html: </details>
-
 ** Usage
 
 gptel provides a few powerful, general purpose and flexible commands.  You can dynamically tweak their behavior to the needs of your task with /directives/, redirection options and more.  There is a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel -- but =gptel-send= might be all you need.
@@ -1310,6 +1363,9 @@ Most gptel options can be set from gptel's transient menu, available by calling
 Selecting a model and backend can be done interactively via the =-m= command of =gptel-menu=.  Available registered models are prefixed by the name of their backend with a string like =ChatGPT:gpt-4o-mini=, where =ChatGPT= is the backend name you used to register it and =gpt-4o-mini= is the name of the model.
 
 *** Include more context with requests
+:PROPERTIES:
+:CUSTOM_ID: include-context
+:END:
 
 By default, gptel will query the LLM with the active region or the buffer contents up to the cursor.  Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you're in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.
 
diff --git a/gptel-transient.el b/gptel-transient.el
index 1919ef07061..48194f8e9d2 100644
--- a/gptel-transient.el
+++ b/gptel-transient.el
@@ -971,7 +971,7 @@ together.  See `gptel-make-preset' for details."
   :transient-suffix #'transient--do-return
   [:description "Save or apply a preset collection of gptel options"
    [:pad-keys t
-    ("C-s" "Save current settings to preset" gptel--save-preset)]]
+    ("C-s" "Save current settings as new preset" gptel--save-preset)]]
   [:if (lambda () gptel--known-presets)
    :class transient-column
    :setup-children
diff --git a/gptel.el b/gptel.el
index 94b56f98d50..0492e965a12 100644
--- a/gptel.el
+++ b/gptel.el
@@ -3909,7 +3909,9 @@ NAME is the name of a preset, or a spec (plist) of the form
 (defun gptel--transform-apply-preset (_fsm)
   "Apply a gptel preset to the buffer depending on the prompt.
 
-If the user prompt begins with @foo, the preset foo is applied."
+If the last user prompt includes @foo, the preset foo is applied.
+Before applying the preset, \"@foo\" is removed from the prompt and
+point is placed at its position."
   (when gptel--known-presets
     (text-property-search-backward 'gptel nil t)
     (while (re-search-forward "@\\([^[:blank:]]+\\)\\_>" nil t)
@@ -3921,6 +3923,8 @@ If the user prompt begins with @foo, the preset foo is applied."
                     (preset (or (gptel-get-preset (intern-soft name))
                                 (gptel-get-preset name))))
           (delete-region (match-beginning 0) (match-end 0))
+          ;; Point must be after @foo when the preset is applied to allow for
+          ;; more advanced transformations.
           (gptel--apply-preset preset
                                (lambda (sym val)
                                  (set (make-local-variable sym) val))))))))
