GitHub user alnzng added a comment to the discussion: Agent Skills Integration 
Design

@wenjin272 Yep, that part is a little challenging with the current design / 
implementation. Below are my thoughts on this:

To make skill-tool binding work, the tool list we send to LLM needs to be able 
to **change between turns** of LLM calls. I think this is the fundamental 
requirement, and we are aligned on this.

So basically, we need to change the current **static tool list into a dynamic 
tool list**. This means there are two things we need to solve:

  **1. Let each LLM call load the latest active tools.** Currently 
`BaseChatModelSetup.chat()` reads tools from `self.tools`, which is fixed at 
init time. Instead, it should read from a shared mutable place, for example, 
`RunnerContext`. This way, every time `chat()` is called, it picks up whatever 
tools are currently active.
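To make point 1 concrete, here is a minimal sketch of what reading from a shared mutable registry could look like. Note that `RunnerContext.active_tools` and the `BaseChatModelSetup` shape below are illustrative assumptions, not the actual flink-agents API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical shared context; `active_tools` stands in for whatever
# mutable registry RunnerContext would actually expose.
@dataclass
class RunnerContext:
    active_tools: Dict[str, Callable] = field(default_factory=dict)

class BaseChatModelSetup:
    def __init__(self, context: RunnerContext):
        # Hold a reference to the shared context instead of a fixed
        # `self.tools` list captured at init time.
        self.context = context

    def chat(self, messages: List[str]) -> str:
        # Read whatever tools are active *now*, on every call.
        tools = list(self.context.active_tools.values())
        return f"LLM called with {len(tools)} tools"
```

With this, any component that mutates `context.active_tools` between turns is automatically reflected in the next `chat()` call.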

  **2. Dynamically update the tool list when a skill is loaded.** We need a 
place to do the actual update. A natural candidate is `ToolCallAction`: when it 
executes `load_skill()`, it can register that skill's tools into the same 
shared registry (e.g., `RunnerContext`). This way the tool list gets updated 
right at the moment the skill is loaded.
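A rough sketch of the write side, for point 2. The skill-catalog shape, the example tools, and the `ToolCallAction` method name are all assumptions for illustration, not the real flink-agents implementation:

```python
# Example skill tools (hypothetical).
def query_api(q: str) -> str:
    return f"result for {q}"

def generate_report(d: str) -> str:
    return f"report: {d}"

# Assumed catalog shape: each skill declares the tools it brings.
SKILL_CATALOG = {
    "data_analysis": {
        "tools": {"query_api": query_api,
                  "generate_report": generate_report},
    },
}

class ToolCallAction:
    def __init__(self, context_tools: dict):
        # `context_tools` stands in for the mutable tool registry
        # that would live on RunnerContext.
        self.context_tools = context_tools

    def execute_load_skill(self, skill_name: str) -> str:
        # Register the skill's declared tools at the moment it loads,
        # so the next LLM turn can see them.
        self.context_tools.update(SKILL_CATALOG[skill_name]["tools"])
        return f"loaded skill '{skill_name}'"
```

Because the action mutates the same dict the chat model reads from, no extra synchronization step is needed between the write and the next read.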

Together these two pieces form a read-write pattern: `ToolCallAction` writes 
new tools when a skill loads, and `BaseChatModelSetup.chat()` reads the latest 
tools on every LLM call. The gap in between is bridged by `RunnerContext` as 
the shared state.

Basically, below is the flow I am thinking of:

1. Turn 1: LLM only sees base built-in tools (`load_skill`, 
`execute_shell_command`, `load_skill_resource`) + skill catalog in system 
prompt. LLM decides to call `load_skill("data_analysis")`.
2. Tool execution: `TOOL_CALL_ACTION` executes `load_skill()`. During this 
execution, the skill's declared tools (like `query_api`, `generate_report`) get 
registered into a mutable tool registry on `RunnerContext`.
3. Turn 2: `ChatModelAction` calls `chatModel.chat()` again. This time, 
`chat()` reads the current tool list from the registry, which now includes 
`query_api` and `generate_report`. LLM can see and call these new tools.
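The three steps above can be simulated end to end with a plain dict as the registry. Everything here (the tool names beyond the three built-ins mentioned above, and the `chat` helper) is an illustrative assumption, not the actual flink-agents code:

```python
from typing import Dict, List, Optional

# Mutable registry standing in for RunnerContext's tool registry,
# seeded with the base built-in tools.
registry: Dict[str, Optional[object]] = {
    "load_skill": None,
    "execute_shell_command": None,
    "load_skill_resource": None,
}

def chat(registry: Dict[str, Optional[object]]) -> List[str]:
    # Each turn reads the *current* registry contents.
    return sorted(registry)

# Turn 1: the LLM only sees the base tools, and (per the skill catalog
# in the system prompt) decides to call load_skill("data_analysis").
turn1_tools = chat(registry)

# Tool execution: load_skill() registers the skill's declared tools.
registry.update({"query_api": None, "generate_report": None})

# Turn 2: the same chat() call now also sees the skill's tools.
turn2_tools = chat(registry)
```

The key property is that `chat()` takes no snapshot: both turns read the one shared registry, so the tool list visible to the LLM changes between turns without any change to the chat call site.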



GitHub link: 
https://github.com/apache/flink-agents/discussions/565#discussioncomment-16318528
