sbp commented on issue #14:
URL: https://github.com/apache/tooling-agents/issues/14#issuecomment-4352850338

   > the agent can call its invokable LLM N times in sequence
   
   Yeah, but the main aim is to use cheaper LLMs to gather the context, 
medium-cost LLMs to collate and do a small amount of reasoning about that 
gathered context (also compactifying it), and then high-cost LLMs to work on 
the output of that. This is covered in the second section anyway, so I think 
the first section is just irrelevant.
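
   Roughly what I have in mind, as a minimal sketch only (the model names and 
the `complete()` helper are placeholders I'm inventing here, not anything in 
tooling-agents):

```python
# Sketch of the tiered pipeline: cheap models gather context, a medium-cost
# model collates and compacts it, and only that compacted briefing goes to
# the expensive model. complete() and the model names are placeholders.

def complete(model: str, prompt: str) -> str:
    """Placeholder for a single LLM call via whatever client we settle on."""
    raise NotImplementedError

def run_task(task: str, sources: list[str]) -> str:
    # Phase 1: cheap model, one call per source, purely for gathering.
    gathered = [
        complete("cheap-model", f"Extract anything relevant to: {task}\n\n{src}")
        for src in sources
    ]

    # Phase 2: medium-cost model collates, reasons a little, and compacts.
    compacted = complete(
        "medium-model",
        "Collate these notes into one compact briefing, resolving overlaps:\n\n"
        + "\n---\n".join(gathered),
    )

    # Phase 3: high-cost model works only on the compacted briefing.
    return complete("expensive-model", f"{task}\n\nContext:\n{compacted}")
```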
   
   > if good return it
   
   Nope, the early phases are for gathering. It's not like we're trying weaker 
models on the same task and then retrying it over and over with more expensive 
models until something succeeds. We can't do that because verification may 
cost nearly as much as production in the first place, and how would a weak 
model even know whether it had done the right thing?
   
   > This one's a math technique
   
   Well, [yes](https://en.wikipedia.org/wiki/Telescoping_series), but the 
missing section here is about compactifying inputs and then caching them 
outside of the tools and agents.
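
   By caching outside the tools and agents I mean something like this sketch 
(the key scheme, on-disk layout, and `compact()` call are all placeholder 
choices of mine, not a spec):

```python
# Sketch of caching compacted inputs outside the tools/agents themselves,
# keyed on the raw input, so repeated runs don't pay the compaction cost
# again. The cache layout and compact() call are placeholders.
import hashlib
import pathlib

CACHE_DIR = pathlib.Path(".context-cache")

def compact(text: str) -> str:
    """Placeholder for the medium-cost compaction call."""
    raise NotImplementedError

def compacted(text: str) -> str:
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    path = CACHE_DIR / key
    if path.exists():
        return path.read_text()   # reuse earlier compaction, no LLM call
    result = compact(text)        # pay the compaction cost once
    path.write_text(result)
    return result
```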



