Version 0.9.9.4 of package Gptel has just been released in NonGNU ELPA.
You can now find it in M-x list-packages RET.

Gptel describes itself as:

  ===================================
  Interact with ChatGPT or other LLMs
  ===================================

More at https://elpa.nongnu.org/nongnu/gptel.html

## Summary:

  gptel is a simple Large Language Model chat client, with support for multiple
  models and backends.

  It works in the spirit of Emacs, available at any time and in any buffer.

  gptel supports:

  - The services ChatGPT, Azure, Gemini, Anthropic AI, Together.ai, Perplexity,
    AI/ML API, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras,
    GitHub Models, GitHub Copilot chat, AWS Bedrock, Novita AI, xAI, Sambanova,
    Mistral Le Chat and Kagi (FastGPT & Summarizer).
  - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All

  Additionally, any LLM service (local or remote) that provides an
  OpenAI-compatible API is supported.
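
  Such a backend can be registered with gptel-make-openai.  A minimal
  sketch -- the service name, host, key and model below are
  placeholders, to be replaced with your service's actual values:

    (gptel-make-openai "MyService"        ;arbitrary display name
      :host "api.example.com"             ;placeholder host
      :endpoint "/v1/chat/completions"
      :stream t
      :key "sk-..."                       ;or a function returning the key
      :models '(my-model-name))           ;placeholder model name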

  Features:

## Recent NEWS:

# -*- mode: org; -*-

* 0.9.9.4

** Breaking changes

- The models =gpt-5-codex=, =o3=, =o3-mini=, =o4-mini=,
  =claude-3.5-sonnet=, =claude-3.7-sonnet=, =claude-3.7-sonnet-thought=,
  =claude-opus-4= and =gemini-2.0-flash-001= have been removed from the
  default list of GitHub Copilot models.  These models are no longer
  available in the GitHub Copilot API.

- =gptel-track-media= now controls whether links to media files are
  tracked /only/ in chat buffers.  Previously it also controlled whether
  media files added to the context explicitly via =gptel-add-file= were
  sent.  This is considered a bug and has now been fixed.
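
  With the new behavior, an explicitly added media file is always sent,
  while link tracking remains a separate, chat-buffer-only option.  A
  sketch (the file path here is a placeholder):

  #+begin_src emacs-lisp
  ;; Sent with the next request regardless of `gptel-track-media':
  (gptel-add-file "~/images/diagram.png")

  ;; Affects only links typed into chat buffers:
  (setq gptel-track-media t)
  #+end_src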

** New models and backends

- GitHub Copilot backend: Add support for =gpt-5.2=, =gpt-5.2-codex=,
  =gpt-41-copilot=, =claude-opus-4.5=, =claude-opus-4.6=,
  =gemini-3-pro-preview= and =gemini-3-flash-preview=.

- Anthropic backend: Add support for =claude-opus-4-6= and
  =claude-sonnet-4-6=.

- Bedrock backend: Add support for =claude-opus-4-5=,
  =claude-opus-4-6=, =claude-sonnet-4-6= and =nova-2-lite=.

- Add support for =gemini-3.1-pro-preview=, =gemini-3-pro-preview= and
  =gemini-3-flash-preview=.

- Add support for =gpt-5.1=.

** New features and UI changes

- Running ~gptel-add~ in Ibuffer now adds marked buffers or the buffer
  at point to gptel's context, and running ~gptel-add~ with a negative
  prefix argument removes them.  This is similar to its behavior in
  Dired.  To add the literal text of the Ibuffer buffer itself to the
  context, select a region first.

- When redirecting LLM responses to the kill ring or echo area, gptel
  now omits tool call results, as these tend to be very noisy.  Kill
  ring redirection now correctly captures the full response from the
  LLM, including pre- and post-tool-call text.

- =gptel-rewrite= now supports tool calling.  If =gptel-tools= is
  non-nil the LLM can, for instance, read files to fetch more context
  for the rewrite action.
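
  For instance, a file-reading tool can be registered with
  ~gptel-make-tool~.  A sketch -- the tool's name, description and
  implementation below are illustrative, not part of gptel:

  #+begin_src emacs-lisp
  (gptel-make-tool
   :name "read_file"                     ;illustrative tool name
   :function (lambda (path)              ;returns the file's contents
               (with-temp-buffer
                 (insert-file-contents (expand-file-name path))
                 (buffer-string)))
   :description "Read the contents of a file"
   :args (list '(:name "path"
                 :type string
                 :description "Path to the file to read"))
   :category "filesystem")
  #+end_src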

- If a preset has been applied in a gptel chat buffer, saving the
  buffer to a file now records the preset along with the other
  metadata (model, backend, tools etc.).  This makes it possible to
  associate any collection of gptel settings/preferences with the chat
  file, not just the few properties that gptel otherwise writes to the
  file.  Note that resuming the chat with the preset settings applied
  requires that the preset be defined, so the chat file is less
  self-contained.
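
  Presets are defined with ~gptel-make-preset~, so resuming such a chat
  requires the same preset in your configuration.  A sketch -- the
  preset name and settings below are illustrative:

  #+begin_src emacs-lisp
  (gptel-make-preset 'coding             ;illustrative preset name
    :description "Settings for coding chats"
    :system "You are a careful programmer.")
  #+end_src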

- =gptel-send= now works in Vterm buffers in a limited way.  Responses
  will be inserted into Vterm buffers, but without streaming.  The
  respond-in-place option to overwrite queries with responses in Vterm
  buffers is supported as well, but might be buggy if your shell
  prompt is "rich" and has many dynamic elements.

  Support for =gptel-send= in Term/Ansi-Term and Eat buffers is not
  yet available but planned.

** Notable bug fixes

- Function-valued system messages/directives are now evaluated in the
  buffer from which the gptel request is sent, so they can use the
  context of the current buffer correctly.  (Previously they were
  evaluated in a temporary buffer used to construct the query, leading
  to unexpected behavior.)

- When using OpenAI-compatible APIs (such as DeepSeek), models that
  call tools within their "reasoning" phase are now correctly handled
  by gptel.

* 0.9.9.3

** Breaking changes

- The models =gpt-4-copilot= and =o1= have been removed from the default
  list of GitHub Copilot models.  These models are no longer available
  in the GitHub Copilot API.

- Link handling in gptel chat buffers has changed, hopefully for the
  better.  When ~gptel-track-media~ is non-nil, gptel follows links in
  the prompt and includes their contents with queries.  Previously,
  links to files had to be placed "standalone", surrounded by blank
  lines, for the files to be included in the prompt.  This limitation
  has been removed -- all supported links in the prompt will be followed
  now.

  The "standalone" limitation was imposed to make included links stand
...
...
