branch: elpa/gptel
commit daa2b8205183a25326d1d121dd85f4927454af0b
Author: Karthik Chikmagalur <karthikchikmaga...@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmaga...@gmail.com>
gptel: Update docstring for gptel-track-media (#991)

* gptel.el (gptel-track-media): This user option is
backend-agnostic, and it should work with any model that supports
media (text, images, video etc).  Update documentation to clarify
this.
---
 gptel.el | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/gptel.el b/gptel.el
index 7c4f8c4cd6..1f370d74d9 100644
--- a/gptel.el
+++ b/gptel.el
@@ -817,17 +817,16 @@ always handled separately."
 (defcustom gptel-track-media nil
   "Whether supported media in chat buffers should be sent.
 
-When the active `gptel-model' supports it, gptel can send images
-or other media from links in chat buffers to the LLM.  To use
-this, the following steps are required.
+When the active `gptel-model' supports it, gptel can send text, images
+or other media from links in chat buffers to the LLM.  To use this, the
+following steps are required.
 
 1. `gptel-track-media' (this variable) should be non-nil
 
-2. The LLM should provide vision or document support.  Currently,
-only the OpenAI, Anthropic and Ollama APIs are supported.  See
-the documentation of `gptel-make-openai', `gptel-make-anthropic'
-and `gptel-make-ollama' resp. for details on how to specify media
-support for models.
+2. The LLM should provide vision or document support.  (See
+`gptel-make-openai', `gptel-make-anthropic', `gptel-make-ollama' or
+`gptel-make-gemini' for details on how to specify media support for
+models.)
 
 3. Only \"standalone\" links in chat buffers are considered.  These
 are links on their own line with no surrounding text.
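
For illustration, a minimal configuration sketch covering the three
steps from the docstring above.  The backend, model name, host and
mime-types are example values in the style of gptel's README, not
part of this commit:

;; Step 1: enable sending of supported media from chat-buffer links.
(setq gptel-track-media t)

;; Step 2: declare media support for a model when defining a backend.
;; The host, model name and mime-types below are illustrative.
(gptel-make-ollama "Ollama"
  :host "localhost:11434"
  :stream t
  :models '((llava:13b
             :capabilities (media)
             :mime-types ("image/jpeg" "image/png"))))

;; Step 3: in the chat buffer, only a "standalone" link is sent as
;; media, i.e. a link on its own line with no surrounding text:
;;
;;   [[file:~/pictures/screenshot.png]]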