Version 0.9.8.5 of package Gptel has just been released in NonGNU ELPA. You can now find it in M-x list-packages RET.
Gptel describes itself as:

  ===================================
  Interact with ChatGPT or other LLMs
  ===================================

More at https://elpa.nongnu.org/nongnu/gptel.html

## Summary:

gptel is a simple Large Language Model chat client, with support for
multiple models and backends.  It works in the spirit of Emacs, available
at any time and in any buffer.

gptel supports:

- The services ChatGPT, Azure, Gemini, Anthropic AI, Together.ai,
  Perplexity, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras,
  GitHub Models, GitHub Copilot Chat, AWS Bedrock, Novita AI, xAI,
  SambaNova, Mistral Le Chat and Kagi (FastGPT & Summarizer).
- Local models via Ollama, Llama.cpp, Llamafiles or GPT4All.

Additionally, any LLM service (local or remote) that provides an
OpenAI-compatible API is supported.

Features:

## Recent NEWS:

# -*- mode: org; -*-

* 0.9.8.5 2025-06-11

** Breaking changes

- ~gptel-org-branching-context~ is now a global variable.  It was
  buffer-local by default in past releases.

- The following models have been removed from the default ChatGPT backend:
  - ~o1-preview~: use ~o1~ instead.
  - ~gpt-4-turbo-preview~: use ~gpt-4o~ or ~gpt-4-turbo~ instead.
  - ~gpt-4-32k~, ~gpt-4-0125-preview~ and ~gpt-4-1106-preview~: use
    ~gpt-4o~ or ~gpt-4~ instead.

  Alternatively, you can add these models back to the backend in your
  personal configuration:

  #+begin_src emacs-lisp
    (push 'gpt-4-turbo-preview
          (gptel-backend-models (gptel-get-backend "ChatGPT")))
  #+end_src

- Only relevant if you use ~gptel-request~ in your elisp code
  (/interactive gptel usage is unaffected/): ~gptel-request~ now takes a
  new, optional =:transforms= argument.  Any prompt modifications (like
  adding context to requests) must now be specified via this argument.
  See the definition of ~gptel-send~ for an example.

** New models and backends

- Add support for ~gpt-4.1~, ~gpt-4.1-mini~, ~gpt-4.1-nano~, ~o3~ and
  ~o4-mini~.
- Add support for ~gemini-2.5-pro-exp-03-25~,
  ~gemini-2.5-flash-preview-04-17~, ~gemini-2.5-pro-preview-05-06~ and
  ~gemini-2.5-pro-preview-06-05~.

- Add support for ~claude-sonnet-4-20250514~ and ~claude-opus-4-20250514~.

- Add support for AWS Bedrock models.  You can create an AWS Bedrock
  gptel backend with ~gptel-make-bedrock~, which see.  Please note: AWS
  Bedrock support requires Curl 8.5.0 or higher.

- You can now create an xAI backend with ~gptel-make-xai~, which see.
  (xAI was supported before, but this function now handles the model
  configuration for you.)

- Add support for GitHub Copilot Chat.  See the README and
  ~gptel-make-gh-copilot~.  Please note: this is only the chat component
  of GitHub Copilot.  Copilot's ~completion-at-point~ (tab-completion)
  functionality is not supported by gptel.

- Add support for SambaNova.  This is an OpenAI-compatible API, so you
  can create a backend with ~gptel-make-openai~; see the README for
  details.

- Add support for Mistral Le Chat.  This is an OpenAI-compatible API, so
  you can create a backend with ~gptel-make-openai~; see the README for
  details.

** New features and UI changes

- gptel now supports handling reasoning/thinking blocks in responses
  from Gemini models.  This is controlled by ~gptel-include-reasoning~,
  in the same way as for other APIs.

- The new option ~gptel-curl-extra-args~ can be used to specify extra
  arguments to the Curl command used for the request.  This is the
  global version of the backend-specific ~:curl-args~ slot, which
  specifies Curl arguments to use with a particular backend.

- Tools now run in the buffer from which the request originates.  This
  can be significant when tools read or manipulate Emacs' state.

- gptel can access MCP server tools by integrating with the mcp.el
  package (https://github.com/lizqwerscott/mcp.el, available on MELPA).
  To help with the integration, two new commands are provided:
  ~gptel-mcp-connect~ and ~gptel-mcp-disconnect~.
  You can use these to start MCP servers selectively and add their tools
  to gptel.  These commands are also available from gptel's tools menu.
  They are not currently autoloaded by gptel; to access them, require
  the ~gptel-integrations~ feature.

- You can now define "presets", bundles of gptel options such as the
  backend, model, system message, included tools, temperature and so on.
  A preset's options can be applied together, making it easy to switch
  between different tasks using gptel.  From gptel's transient menu, you
  can save the current configuration as a preset or apply another one.
  Presets can be applied globally, buffer-locally, or for the next
  request only.  To persist presets across Emacs sessions, define them
  in your configuration using ~gptel-make-preset~.

...
...
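Put together, the items above might look like the following sketch of an
init-file fragment.  The proxy argument, API key, host, model names and
preset keys here are illustrative placeholders, not values from this
release; see gptel's README for the exact options each function accepts.

#+begin_src emacs-lisp
  ;; Pass extra arguments to Curl for every request
  ;; (the proxy shown is a hypothetical example):
  (setq gptel-curl-extra-args '("--proxy" "http://localhost:3128"))

  ;; An OpenAI-compatible backend, e.g. for SambaNova or Mistral Le
  ;; Chat.  Host, key and model below are placeholders.
  (gptel-make-openai "SambaNova"
    :host "api.sambanova.ai"
    :stream t
    :key "your-api-key"
    :models '(some-model-name))

  ;; gptel-mcp-connect and gptel-mcp-disconnect are not autoloaded:
  (require 'gptel-integrations)

  ;; A preset that persists across sessions; the keys shown here are
  ;; illustrative.
  (gptel-make-preset 'coding
    :description "A preset for programming tasks"
    :backend "ChatGPT"
    :model 'gpt-4.1
    :system "You are an expert coding assistant.")
#+end_src

A preset defined this way can then be applied from gptel's transient
menu globally, buffer-locally, or for the next request only.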