This is an automated email from the ASF dual-hosted git repository.

traky pushed a commit to branch 3.13-docs-patch
in repository https://gitbox.apache.org/repos/asf/apisix.git


The following commit(s) were added to refs/heads/3.13-docs-patch by this push:
     new ffa1d8d1e remove example in ai-proxy-multi.md
ffa1d8d1e is described below

commit ffa1d8d1ed66d13865ce417a4b6857f024c00724
Author: Traky Deng <trakyd...@gmail.com>
AuthorDate: Fri Aug 8 14:00:06 2025 +0800

    remove example in ai-proxy-multi.md
---
 docs/en/latest/plugins/ai-proxy-multi.md | 68 --------------------------------
 1 file changed, 68 deletions(-)

diff --git a/docs/en/latest/plugins/ai-proxy-multi.md b/docs/en/latest/plugins/ai-proxy-multi.md
index 6418599d9..ed3421bb8 100644
--- a/docs/en/latest/plugins/ai-proxy-multi.md
+++ b/docs/en/latest/plugins/ai-proxy-multi.md
@@ -935,71 +935,3 @@ curl "http://127.0.0.1:9180/apisix/admin/routes" -X PUT \
 ```
 
 For verification, the behaviours should be consistent with the verification in [active health checks](../tutorials/health-check.md).
-
-### Include LLM Information in Access Log
-
-The following example demonstrates how you can log LLM request-related information in the gateway's access log for analytics and auditing. The following variables are available:
-
-* `request_type`: Type of request, where the value could be `traditional_http`, `ai_chat`, or `ai_stream`.
-* `llm_time_to_first_token`: Duration from sending the request to receiving the first token from the LLM service, in milliseconds.
-* `llm_model`: LLM model.
-* `llm_prompt_tokens`: Number of tokens in the prompt.
-* `llm_completion_tokens`: Number of chat completion tokens returned by the LLM service.
-
-:::note
-
-The variables used in this example become available in APISIX 3.13.0.
-
-:::
-
-Update the access log format in your configuration file to include additional LLM-related variables:
-
-```yaml title="conf/config.yaml"
-nginx_config:
-  http:
-    access_log_format: "$remote_addr - $remote_user [$time_local] $http_host \"$request_line\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\" \"$apisix_request_id\" \"$request_type\" \"$llm_time_to_first_token\" \"$llm_model\" \"$llm_prompt_tokens\" \"$llm_completion_tokens\""
-```
-
-Reload APISIX for configuration changes to take effect.
-
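For reference, a Route for this step could look like the following. This is a minimal sketch, not taken from this commit: the instance name, model, and API key placeholder are illustrative, and the exact `ai-proxy-multi` schema should be confirmed against the plugin reference earlier in this document.

```json
{
  "uri": "/anything",
  "plugins": {
    "ai-proxy-multi": {
      "instances": [
        {
          "name": "openai-gpt4",
          "provider": "openai",
          "weight": 1,
          "auth": {
            "header": {
              "Authorization": "Bearer <your-openai-api-key>"
            }
          },
          "options": {
            "model": "gpt-4"
          }
        }
      ]
    }
  }
}
```

This body would be sent to the Admin API with a `PUT` to `/apisix/admin/routes`, as in the earlier examples.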
-Next, create a Route with the `ai-proxy-multi` Plugin and send a request. For instance, if the request is forwarded to OpenAI and you receive the following response:
-
-```json
-{
-  ...,
-  "model": "gpt-4-0613",
-  "choices": [
-    {
-      "index": 0,
-      "message": {
-        "role": "assistant",
-        "content": "1+1 equals 2.",
-        "refusal": null,
-        "annotations": []
-      },
-      "logprobs": null,
-      "finish_reason": "stop"
-    }
-  ],
-  "usage": {
-    "prompt_tokens": 23,
-    "completion_tokens": 8,
-    "total_tokens": 31,
-    "prompt_tokens_details": {
-      "cached_tokens": 0,
-      "audio_tokens": 0
-    },
-    ...
-  },
-  "service_tier": "default",
-  "system_fingerprint": null
-}
-```
-
-In the gateway's access log, you should see a log entry similar to the following:
-
-```text
-192.168.215.1 - - [21/Mar/2025:04:28:03 +0000] api.openai.com "POST /anything HTTP/1.1" 200 804 2.858 "-" "curl/8.6.0" - - - "http://api.openai.com" "5c5e0b95f8d303cb81e4dc456a4b12d9" "ai_chat" "2858" "gpt-4" "23" "8"
-```
-
-The access log entry shows the request type is `ai_chat`, time to first token is `2858` milliseconds, LLM model is `gpt-4`, prompt token usage is `23`, and completion token usage is `8`.
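The trailing quoted fields of this log format can be split out mechanically, which is handy when feeding the log into analytics. The snippet below is an illustrative sketch, not part of the commit: it parses the sample entry above by splitting on double quotes, where the last five quoted values are `request_type`, `llm_time_to_first_token`, `llm_model`, `llm_prompt_tokens`, and `llm_completion_tokens`.

```shell
# Sample access log entry from the example above.
log='192.168.215.1 - - [21/Mar/2025:04:28:03 +0000] api.openai.com "POST /anything HTTP/1.1" 200 804 2.858 "-" "curl/8.6.0" - - - "http://api.openai.com" "5c5e0b95f8d303cb81e4dc456a4b12d9" "ai_chat" "2858" "gpt-4" "23" "8"'

# Split on double quotes: quoted values land in even-numbered awk fields,
# so the five LLM-related values are the last five quoted fields.
echo "$log" | awk -F'"' '{
  printf "type=%s ttft_ms=%s model=%s prompt=%s completion=%s\n",
         $(NF-9), $(NF-7), $(NF-5), $(NF-3), $(NF-1)
}'
```

The same field positions apply to any entry produced by the `access_log_format` configured earlier, as long as the trailing variables stay in that order.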