branch: externals/llm
commit 2b1bccba10f0db017a42ce68e6707d1c034bcf35
Author: Andrew Hyatt <[email protected]>
Commit: GitHub <[email protected]>

    Improve the tool calling docs in the README (#236)
---
 NEWS.org   |  1 +
 README.org | 21 +++++++++++++--------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/NEWS.org b/NEWS.org
index 185b652697..e2e13b8281 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,4 +1,5 @@
 * Version 0.28.5
+- Improved the tool calling docs
 - Fix for running tools in the original buffer with streaming
 * Version 0.28.4
 - Removed bad interactions made in Ollama tool calls
diff --git a/README.org b/README.org
index 0a2fb4ee12..c52c9f4156 100644
--- a/README.org
+++ b/README.org
@@ -235,7 +235,7 @@ Tool use is a way to give the LLM a list of functions it 
can call, and have it c
 4. The LLM will return with a text response based on the initial prompt and 
the results of the tool use.
 5. The client can now continue the conversation.
 
-This basic structure is useful because it can guarantee a well-structured 
output (if the LLM does decide to use the tool). *Not every LLM can handle tool 
use, and those that do not will ignore the tools entirely*. The function 
=llm-capabilities= will return a list with =tool-use= in it if the LLM supports 
tool use.  Because not all providers support tool use when streaming, 
=streaming-tool-use= indicates the ability to use tool uses in 
~llm-chat-streaming~. Right now only Gemini, Vertex, [...]
+This basic structure is useful because it can guarantee a well-structured 
output (if the LLM does decide to use the tool). *Not every LLM can handle tool 
use, and those that do not will ignore the tools entirely*. The function 
=llm-capabilities= will return a list with =tool-use= in it if the LLM supports 
tool use.  Because not all providers support tool use when streaming, 
=streaming-tool-use= indicates the ability to use tool uses in 
~llm-chat-streaming~. However, even for LLMs that ha [...]
 
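+For example, a client can check capabilities before attaching tools (a
+minimal sketch; =my-provider= stands in for whatever provider you have
+configured):
+
+#+begin_src emacs-lisp
+(when (member 'tool-use (llm-capabilities my-provider))
+  ;; This provider advertises tool use, so tools can be attached.
+  (message "Tool use supported"))
+#+end_src
+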
 The way to call tools is to attach a list of tools to the =tools= slot in the 
prompt. This is a list of =llm-tool= structs, which is a tool that is an elisp 
function, with a name, a description, and a list of arguments. The docstrings 
give an explanation of the format.  An example is:
 
@@ -271,15 +271,20 @@ After the tool is called, the client could use the 
result, but if you want to pr
 
 Tools will be called with vectors for array results, =nil= for false boolean 
results, and plists for objects.
 
-Be aware that there is no gaurantee that the tool will be called correctly.  
While the LLMs mostly get this right, they are trained on Javascript functions, 
so imitating Javascript names is recommended. So, "write_email" is a better 
name for a function than "write-email".
+When tools are called in =multi-output= mode, the result will include 
output like the following:
+
+#+begin_src emacs-lisp
+(:tool-uses ((:name "capital_of_country" :args (("country" . "France"))))
+            :tool-results (("capital_of_country" . "Paris")))
+#+end_src
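+
+The tool producing this result could be defined along these lines (a
+sketch only; =llm-make-tool= and the argument plist format follow the
+package docstrings, which should be consulted for the exact details):
+
+#+begin_src emacs-lisp
+(llm-make-tool
+ :name "capital_of_country"
+ :description "Get the capital city of a country."
+ :args '((:name "country"
+          :description "The name of the country."
+          :type "string"))
+ :function (lambda (country)
+             ;; A toy implementation for illustration only.
+             (if (equal country "France") "Paris" "Unknown")))
+#+end_src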
 
-Examples can be found in =llm-tester=. There is also a function call to 
generate function calls from existing elisp functions in 
=utilities/elisp-to-tool.el=.
+The tool uses here come from the LLM, whereas the tool results are the 
result of the elisp function that is executed as part of the tool use.
+
+Without =multi-output=, the result will be just the tool results.
+
+Be aware that there is no guarantee that the tool will be called correctly.  
While the LLMs mostly get this right, they are trained on JavaScript functions, 
so imitating JavaScript names is recommended. So, "write_email" is a better 
name for a function than "write-email".
 
-Tool use can be controlled by the =:tool-options= param in 
=llm-make-chat-prompt=
-that takes a =llm-tool-options= struct.  This can be set to force or forbid 
tool
-calling, or to force a specific tool to be called.  This is useful when a
-converastion with tools happens and the tools remain constant but how they are
-used may need to change.  Ollama does not support currently support this.
+Examples can be found in =llm-tester=. There is also a utility to generate 
tool definitions from existing elisp functions in =utilities/elisp-to-tool.el=. 
Tool use can be controlled by the =:tool-options= param in =llm-make-chat-prompt= 
that takes a =llm-tool-options= struct.  This can be set to force or forbid 
tool calling, or to force a specific tool to be called.  This is useful when a 
conversation with tools happens and the tools remain constant but how they are 
used may need to change.  [...]
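+
+For instance, forcing tool use might look like the following (a sketch
+only; the =llm-tool-options= constructor and slot shown here are
+illustrative guesses, not verified against the struct definition):
+
+#+begin_src emacs-lisp
+;; Hypothetical slot name; consult the llm-tool-options docstring.
+(llm-make-chat-prompt
+ "What is the capital of France?"
+ :tools (list my-capital-tool)
+ :tool-options (make-llm-tool-options :use 'force))
+#+end_src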
 ** Media input
 *Note:  media input functionality is currently alpha quality.  If you want to 
use it, please watch the =llm= 
[[https://github.com/ahyatt/llm/discussions][discussions]] for any 
announcements about changes.*
 
