Version 0.9.0 of package Llm has just been released in GNU ELPA.
You can now find it in M-x list-packages RET.

Llm describes itself as:

  ===================================
  Interface to pluggable llm backends
  ===================================

More at https://elpa.gnu.org/packages/llm.html

## Summary:

                          ━━━━━━━━━━━━━━━━━━━━━━━
                           LLM PACKAGE FOR EMACS
                          ━━━━━━━━━━━━━━━━━━━━━━━


  1 Introduction
  ══════════════

    This library provides an interface for interacting with Large Language
    Models (LLMs). It lets Emacs Lisp code use LLMs while giving end-users
    the choice of their preferred LLM. This matters because many
    high-quality models exist, some behind paid APIs and others installed
    locally for free but with more modest quality. Applications built on
    this library work the same way whether the user runs a local LLM or
    pays for API access.
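
    As a sketch of this abstraction: an application asks a generic
    provider object for a chat response, and swapping in any other
    provider leaves the call unchanged. This assumes an Ollama server on
    its default local port; the model name is a placeholder:

      (require 'llm)
      (require 'llm-ollama)

      ;; Any other provider object could be substituted here.
      (let ((provider (make-llm-ollama :chat-model "mistral")))
        (llm-chat provider
                  (llm-make-simple-chat-prompt "What is an LLM?")))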

## Recent NEWS:

1 Version 0.9
═════════════

  • Add `llm-chat-token-limit' to find the token limit based on the
    model.
  • Add request timeout customization.
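
  For example, a client can size its context to the model. A minimal
  sketch (the key is a placeholder):

    (require 'llm)
    (require 'llm-openai)

    ;; Returns the model's context window size, in tokens.
    (let ((provider (make-llm-openai :key "my-api-key")))
      (llm-chat-token-limit provider))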


2 Version 0.8
═════════════

  • Allow users to change the OpenAI URL, to support proxies and other
    services that reuse the API.
  • Add `llm-name' and `llm-cancel-request' to the API (both appear in
    the sketch below).
  • Standardize handling of how context, examples and history are folded
    into `llm-chat-prompt-interactions'.
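
  A sketch of the new calls; the `:url' slot name, key, and proxy URL
  are assumptions for illustration:

    (require 'llm)
    (require 'llm-openai)

    (let* ((provider (make-llm-openai :key "my-api-key"
                                      :url "https://proxy.example.com/v1/"))
           (request (llm-chat-async
                     provider
                     (llm-make-simple-chat-prompt "Hi")
                     (lambda (response) (message "%s" response))
                     (lambda (_type msg) (message "Error: %s" msg)))))
      (message "Asked %s" (llm-name provider))
      ;; Async calls return a request object that can be stopped mid-flight.
      (llm-cancel-request request))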


3 Version 0.7
═════════════

  • Upgrade Google Cloud Vertex to Gemini; the previous models are no
    longer available.
  • Add the `gemini' provider, an alternate endpoint with simpler
    authentication and setup than Cloud Vertex (sketched below).
  • Provide a default for `llm-chat-async' that falls back to streaming
    when a provider does not define it.
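
  A sketch of the new provider; unlike Cloud Vertex's project- and
  gcloud-based setup, it needs only an API key (placeholder below):

    (require 'llm)
    (require 'llm-gemini)

    (let ((provider (make-llm-gemini :key "my-api-key")))
      (llm-chat provider (llm-make-simple-chat-prompt "Hello, Gemini!")))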


4 Version 0.6
═════════════

  • Add provider `llm-llamacpp'.
  • Fix issue with Google Cloud Vertex not responding to messages with a
    system interaction.
  • Fix use of `(pos-eol)' which is not compatible with Emacs 28.1.
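
  A minimal sketch of the new provider, assuming a llama.cpp server
  already running on the provider's default local address:

    (require 'llm)
    (require 'llm-llamacpp)

    ;; Pass host/port slots explicitly if your server runs elsewhere.
    (let ((provider (make-llm-llamacpp)))
      (llm-chat provider (llm-make-simple-chat-prompt "Hello!")))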


5 Version 0.5.2
═══════════════

  • Fix incompatibility with older Emacs introduced in Version 0.5.1.
  • Add support for Google Cloud Vertex model `text-bison' and variants.
  • `llm-ollama' can now be configured with a scheme (http vs https).
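
  For example, pointing the provider at a TLS-fronted Ollama instance
  (host and model name are placeholders):

    (require 'llm)
    (require 'llm-ollama)

    ;; The scheme defaults to "http"; set it for an https endpoint.
    (let ((provider (make-llm-ollama :scheme "https"
                                     :host "ollama.example.com"
                                     :chat-model "mistral")))
      (llm-chat provider (llm-make-simple-chat-prompt "Hello!")))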


6 Version 0.5.1
═══════════════

  • Implement token counting for Google Cloud Vertex via their API.
  • Fix issue with Google Cloud Vertex erroring on multibyte strings.
  • Fix issue with small bits of missing text in OpenAI and Ollama
    streaming chat.
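
  Token counting goes through the generic `llm-count-tokens'; with a
  Vertex provider it now asks the service's API, while other providers
  fall back to an estimate (the project name is a placeholder):

    (require 'llm)
    (require 'llm-vertex)

    (let ((provider (make-llm-vertex :project "my-gcp-project")))
      (llm-count-tokens provider "How many tokens is this?"))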


7 Version 0.5
═════════════

  • Fixes for conversation context storage, requiring clients to handle
    ongoing conversations slightly differently.
  • Fixes for proper sync request http error code handling.
  • `llm-ollama' can now be configured with a different hostname.
  • Callbacks now always attempt to run in the client's original buffer.
  • Add provider `llm-gpt4all'.
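
  On the conversation change: a prompt object now carries the ongoing
  exchange, so clients keep one prompt and reuse it across calls. A
  sketch, using the library's `llm-chat-prompt-append-response' helper
  (the model name is a placeholder):

    (require 'llm)
    (require 'llm-ollama)

    (let ((provider (make-llm-ollama :chat-model "mistral"))
          (prompt (llm-make-simple-chat-prompt "Hi, who are you?")))
      ;; The reply is recorded inside PROMPT as conversation history.
      (llm-chat provider prompt)
      ;; Append the next user turn and ask again with the same object.
      (llm-chat-prompt-append-response prompt "Tell me more.")
      (llm-chat provider prompt))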


8 Version 0.4
═════════════

  • Add helper function `llm-chat-streaming-to-point'.
  • Add provider `llm-ollama'.
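
  A sketch of the helper, which inserts the reply into a buffer as it
  streams in (the model name is a placeholder):

    (require 'llm)
    (require 'llm-ollama)

    (let ((provider (make-llm-ollama :chat-model "mistral")))
      (llm-chat-streaming-to-point
       provider
       (llm-make-simple-chat-prompt "Write a haiku about Emacs.")
       (current-buffer) (point)
       ;; Called once the full response has been inserted.
       (lambda () (message "Streaming finished"))))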


9 Version 0.3
═════════════

  • Streaming support in the API, implemented for the OpenAI and Vertex
    models.
  • Properly encode and decode in UTF-8 so double-width and other
    character sizes don't cause problems.
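
  The streaming entry point takes three callbacks: one for each partial
  update, one for the complete text, and one for errors. A sketch (the
  key is a placeholder):

    (require 'llm)
    (require 'llm-openai)

    (llm-chat-streaming
     (make-llm-openai :key "my-api-key")
     (llm-make-simple-chat-prompt "Hello!")
     (lambda (partial) (message "So far: %s" partial))
     (lambda (response) (message "Done: %s" response))
     (lambda (_type msg) (message "Error: %s" msg)))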


10 Version 0.2.1
════════════════

  • Changes in how we make and listen to requests, in preparation for
    streaming functionality.
  • Fix overzealous change hook creation when using async llm requests.


11 Version 0.2
══════════════

  • Remove the dependency on the non-GNU `request' library.
