Copilot commented on code in PR #285:
URL: https://github.com/apache/incubator-hugegraph-ai/pull/285#discussion_r2228219922
##########
hugegraph-llm/src/hugegraph_llm/demo/rag_demo/configs_block.py:
##########
@@ -202,7 +183,6 @@ def apply_graph_config(url, name, user, pwd, gs, origin_call=None) -> int:
     return response
-# Different llm models have different parameters, so no meaningful argument names are given here
 def apply_llm_config(current_llm_config, arg1, arg2, arg3, arg4, origin_call=None) -> int:
Review Comment:
The function comment should be updated now that QianFan is removed. The note 'Different llm models have different parameters, so no meaningful argument names are given here' is outdated; the docstring should document the parameter patterns for the remaining providers (OpenAI, LiteLLM, and Ollama).
```suggestion
def apply_llm_config(current_llm_config, api_key, api_base, language_model, tokens, origin_call=None) -> int:
    """
    Configure the LLM settings based on the selected model type.

    Parameters:
        current_llm_config (str): The current LLM configuration name (e.g., "chat").
        api_key (str): API key for authentication (used by OpenAI and LiteLLM).
        api_base (str): Base URL for the API (used by OpenAI and LiteLLM).
        language_model (str): The language model to use (e.g., "gpt-3.5-turbo").
        tokens (int): Token limit for the model (used by OpenAI and LiteLLM).
        origin_call (optional): Additional context for the API call.

    Returns:
        int: HTTP status code indicating the result of the configuration.
    """
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]