Baoyuantop commented on code in PR #13049:
URL: https://github.com/apache/apisix/pull/13049#discussion_r2871701459
##########
apisix/plugins/ai-drivers/openai-base.lua:
##########
@@ -131,10 +131,12 @@ local function read_response(conf, ctx, res, response_filter)
core.log.info("got token usage from ai service: ", core.json.delay_encode(data.usage))
ctx.llm_raw_usage = data.usage
+ local pt = data.usage.prompt_tokens or data.usage.input_tokens or 0
Review Comment:
These changes appear to be unrelated to this PR; please split them into
separate PRs.
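
For context, the quoted change falls back across provider-specific usage field names when counting prompt tokens. A minimal sketch of that fallback pattern in plain Lua (the `usage` tables below are hypothetical sample payloads, not taken from this PR):

```lua
-- Hypothetical usage payloads: OpenAI-style responses report
-- prompt_tokens, while some providers report input_tokens instead.
local function prompt_token_count(usage)
    -- `or` short-circuits on the first non-nil field, defaulting to 0
    return usage.prompt_tokens or usage.input_tokens or 0
end

print(prompt_token_count({ prompt_tokens = 12 }))  -- 12
print(prompt_token_count({ input_tokens = 7 }))    -- 7
print(prompt_token_count({}))                      -- 0
```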
##########
apisix/plugins/ai-rate-limiting.lua:
##########
@@ -65,6 +65,10 @@ local schema = {
default = "total_tokens",
description = "The strategy to limit the tokens"
},
+ -- 使用 OpenRouter/OpenAI 兼容的标准头名,IDE 插件(Cursor/Continue)可直接识别 [translation: "Use the OpenRouter/OpenAI-compatible standard header names, which IDE plugins (Cursor/Continue) can recognize directly"]
Review Comment:
Please use English comments throughout.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]