This is an automated email from the ASF dual-hosted git repository.
navendu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/apisix.git
The following commit(s) were added to refs/heads/master by this push:
new 131f20a58 docs: Add default log format for each logger plugin (#10764)
131f20a58 is described below
commit 131f20a58c9fd096c0329906e9b894fc57a8f8c6
Author: baiyun <[email protected]>
AuthorDate: Wed Jan 10 13:44:53 2024 +0800
docs: Add default log format for each logger plugin (#10764)
---
docs/en/latest/plugins/clickhouse-logger.md | 43 +++++++++++++++++++
docs/en/latest/plugins/elasticsearch-logger.md | 40 ++++++++++++++++++
docs/en/latest/plugins/error-log-logger.md | 6 +++
docs/en/latest/plugins/file-logger.md | 44 ++++++++++++++++++++
docs/en/latest/plugins/google-cloud-logging.md | 30 ++++++++++++++
docs/en/latest/plugins/http-logger.md | 44 ++++++++++++++++++++
docs/en/latest/plugins/loggly.md | 6 +++
docs/en/latest/plugins/loki-logger.md | 42 +++++++++++++++++++
docs/en/latest/plugins/rocketmq-logger.md | 55 +++++++++++++++++++++++++
docs/en/latest/plugins/skywalking-logger.md | 57 ++++++++++++++++++++++++++
docs/en/latest/plugins/sls-logger.md | 27 ++++++++++++
docs/en/latest/plugins/splunk-hec-logging.md | 31 ++++++++++++++
docs/en/latest/plugins/syslog.md | 6 +++
docs/en/latest/plugins/tcp-logger.md | 40 ++++++++++++++++++
docs/en/latest/plugins/tencent-cloud-cls.md | 40 ++++++++++++++++++
docs/en/latest/plugins/udp-logger.md | 40 ++++++++++++++++++
docs/zh/latest/plugins/clickhouse-logger.md | 43 +++++++++++++++++++
docs/zh/latest/plugins/elasticsearch-logger.md | 40 ++++++++++++++++++
docs/zh/latest/plugins/error-log-logger.md | 6 +++
docs/zh/latest/plugins/file-logger.md | 44 ++++++++++++++++++++
docs/zh/latest/plugins/google-cloud-logging.md | 30 ++++++++++++++
docs/zh/latest/plugins/http-logger.md | 44 ++++++++++++++++++++
docs/zh/latest/plugins/loggly.md | 6 +++
docs/zh/latest/plugins/loki-logger.md | 42 +++++++++++++++++++
docs/zh/latest/plugins/rocketmq-logger.md | 1 -
docs/zh/latest/plugins/skywalking-logger.md | 57 ++++++++++++++++++++++++++
docs/zh/latest/plugins/sls-logger.md | 27 ++++++++++++
docs/zh/latest/plugins/splunk-hec-logging.md | 31 ++++++++++++++
docs/zh/latest/plugins/syslog.md | 6 +++
docs/zh/latest/plugins/tcp-logger.md | 40 ++++++++++++++++++
docs/zh/latest/plugins/tencent-cloud-cls.md | 40 ++++++++++++++++++
docs/zh/latest/plugins/udp-logger.md | 40 ++++++++++++++++++
32 files changed, 1047 insertions(+), 1 deletion(-)
diff --git a/docs/en/latest/plugins/clickhouse-logger.md b/docs/en/latest/plugins/clickhouse-logger.md
index feb6dd8a7..f41a7aaec 100644
--- a/docs/en/latest/plugins/clickhouse-logger.md
+++ b/docs/en/latest/plugins/clickhouse-logger.md
@@ -54,6 +54,49 @@ NOTE: `encrypt_fields = {"password"}` is also defined in the
schema, which means
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
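For reference, those batch processor settings can be overridden per plugin instance. A minimal sketch of a `clickhouse-logger` configuration with explicit batch settings, assuming the shared batch processor attributes (`batch_max_size`, `inactive_timeout`, `max_retry_count`, `retry_delay`) described in the Batch Processor docs; treat the exact connection attribute names as illustrative:

```json
{
  "clickhouse-logger": {
    "user": "default",
    "password": "",
    "database": "default",
    "logtable": "test",
    "endpoint_addr": "http://127.0.0.1:8123",
    "batch_max_size": 1000,
    "inactive_timeout": 5,
    "max_retry_count": 0
  }
}
```

With these values, entries are flushed after 5 seconds of inactivity or as soon as 1000 entries are queued, matching the defaults described above.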
+### Example of default log format
+
+```json
+{
+ "response": {
+ "status": 200,
+ "size": 118,
+ "headers": {
+ "content-type": "text/plain",
+ "connection": "close",
+ "server": "APISIX/3.7.0",
+ "content-length": "12"
+ }
+ },
+ "client_ip": "127.0.0.1",
+ "upstream_latency": 3,
+ "apisix_latency": 98.999998092651,
+ "upstream": "127.0.0.1:1982",
+ "latency": 101.99999809265,
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "route_id": "1",
+ "start_time": 1704507612177,
+ "service_id": "",
+ "request": {
+ "method": "POST",
+ "querystring": {
+ "foo": "unknown"
+ },
+ "headers": {
+ "host": "localhost",
+ "connection": "close",
+ "content-length": "18"
+ },
+ "size": 110,
+ "uri": "/hello?foo=unknown",
+ "url": "http://localhost:1984/hello?foo=unknown"
+ }
+}
+```
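In the default format above, all three latency fields are in milliseconds, and `latency` is the total request time while `apisix_latency` is that total minus `upstream_latency`. A quick sanity check of the relationship, using the values from the example (a sketch, not part of the plugin):

```python
import json

# Latency fields trimmed from the default log entry shown above.
entry = json.loads("""
{
  "upstream_latency": 3,
  "apisix_latency": 98.999998092651,
  "latency": 101.99999809265
}
""")

# apisix_latency + upstream_latency should reproduce the total latency
# (up to the precision at which the values are printed).
total = entry["apisix_latency"] + entry["upstream_latency"]
assert abs(total - entry["latency"]) < 1e-3
print("latency components are consistent")
```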
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/elasticsearch-logger.md b/docs/en/latest/plugins/elasticsearch-logger.md
index 89a7a826f..06f70354f 100644
--- a/docs/en/latest/plugins/elasticsearch-logger.md
+++ b/docs/en/latest/plugins/elasticsearch-logger.md
@@ -53,6 +53,46 @@ NOTE: `encrypt_fields = {"auth.password"}` is also defined
in the schema, which
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "upstream_latency": 2,
+ "apisix_latency": 100.9999256134,
+ "request": {
+ "size": 59,
+ "url": "http://localhost:1984/hello",
+ "method": "GET",
+ "querystring": {},
+ "headers": {
+ "host": "localhost",
+ "connection": "close"
+ },
+ "uri": "/hello"
+ },
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "client_ip": "127.0.0.1",
+ "upstream": "127.0.0.1:1980",
+ "response": {
+ "status": 200,
+ "headers": {
+ "content-length": "12",
+ "connection": "close",
+ "content-type": "text/plain",
+ "server": "APISIX/3.7.0"
+ },
+ "size": 118
+ },
+ "start_time": 1704524807607,
+ "route_id": "1",
+ "service_id": "",
+ "latency": 102.9999256134
+}
+```
+
## Enable Plugin
### Full configuration
diff --git a/docs/en/latest/plugins/error-log-logger.md b/docs/en/latest/plugins/error-log-logger.md
index 889ea7f25..f63e89a9b 100644
--- a/docs/en/latest/plugins/error-log-logger.md
+++ b/docs/en/latest/plugins/error-log-logger.md
@@ -69,6 +69,12 @@ NOTE: `encrypt_fields = {"clickhouse.password"}` is also
defined in the schema,
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```text
+["2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:205: load():
new plugins: {"error-log-logger":true}, context:
init_worker_by_lua*","\n","2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua]
plugin.lua:255: load_stream(): new plugins:
{"limit-conn":true,"ip-restriction":true,"syslog":true,"mqtt-proxy":true},
context: init_worker_by_lua*","\n"]
+```
+
## Enable Plugin
To enable the Plugin, you can add it in your configuration file
(`conf/config.yaml`):
diff --git a/docs/en/latest/plugins/file-logger.md b/docs/en/latest/plugins/file-logger.md
index f46b5a68c..61df441ee 100644
--- a/docs/en/latest/plugins/file-logger.md
+++ b/docs/en/latest/plugins/file-logger.md
@@ -53,6 +53,50 @@ The `file-logger` Plugin is used to push log streams to a
specific location.
| include_resp_body_expr | array | False | When the `include_resp_body`
attribute is set to `true`, use this to filter based on
[lua-resty-expr](https://github.com/api7/lua-resty-expr). If present, only logs
the response into file if the expression evaluates to `true`. |
| match | array[] | False | Logs will be recorded when the rule
matching is successful if the option is set. See
[lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) for a
list of available expressions. |
+### Example of default log format
+
+ ```json
+ {
+ "service_id": "",
+ "apisix_latency": 100.99999809265,
+ "start_time": 1703907485819,
+ "latency": 101.99999809265,
+ "upstream_latency": 1,
+ "client_ip": "127.0.0.1",
+ "route_id": "1",
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "request": {
+ "headers": {
+ "host": "127.0.0.1:1984",
+ "content-type": "application/x-www-form-urlencoded",
+ "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025",
+ "content-length": "12"
+ },
+ "method": "POST",
+ "size": 194,
+ "url": "http://127.0.0.1:1984/hello?log_body=no",
+ "uri": "/hello?log_body=no",
+ "querystring": {
+ "log_body": "no"
+ }
+ },
+ "response": {
+ "headers": {
+ "content-type": "text/plain",
+ "connection": "close",
+ "content-length": "12",
+ "server": "APISIX/3.7.0"
+ },
+ "status": 200,
+ "size": 123
+ },
+ "upstream": "127.0.0.1:1982"
+ }
+ ```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/google-cloud-logging.md b/docs/en/latest/plugins/google-cloud-logging.md
index 459ee96f9..3ea60f4db 100644
--- a/docs/en/latest/plugins/google-cloud-logging.md
+++ b/docs/en/latest/plugins/google-cloud-logging.md
@@ -53,6 +53,36 @@ NOTE: `encrypt_fields = {"auth_config.private_key"}` is also
defined in the sche
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "insertId": "0013a6afc9c281ce2e7f413c01892bdc",
+ "labels": {
+ "source": "apache-apisix-google-cloud-logging"
+ },
+ "logName": "projects/apisix/logs/apisix.apache.org%2Flogs",
+ "httpRequest": {
+ "requestMethod": "GET",
+ "requestUrl": "http://localhost:1984/hello",
+ "requestSize": 59,
+ "responseSize": 118,
+ "status": 200,
+ "remoteIp": "127.0.0.1",
+ "serverIp": "127.0.0.1:1980",
+ "latency": "0.103s"
+ },
+ "resource": {
+ "type": "global"
+ },
+ "jsonPayload": {
+ "service_id": "",
+ "route_id": "1"
+ },
+ "timestamp": "2024-01-06T03:34:45.065Z"
+}
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/http-logger.md b/docs/en/latest/plugins/http-logger.md
index aef965f0a..4ad87acd0 100644
--- a/docs/en/latest/plugins/http-logger.md
+++ b/docs/en/latest/plugins/http-logger.md
@@ -54,6 +54,50 @@ This Plugin supports using batch processors to aggregate and
process entries (lo
:::
+### Example of default log format
+
+ ```json
+ {
+ "service_id": "",
+ "apisix_latency": 100.99999809265,
+ "start_time": 1703907485819,
+ "latency": 101.99999809265,
+ "upstream_latency": 1,
+ "client_ip": "127.0.0.1",
+ "route_id": "1",
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "request": {
+ "headers": {
+ "host": "127.0.0.1:1984",
+ "content-type": "application/x-www-form-urlencoded",
+ "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025",
+ "content-length": "12"
+ },
+ "method": "POST",
+ "size": 194,
+ "url": "http://127.0.0.1:1984/hello?log_body=no",
+ "uri": "/hello?log_body=no",
+ "querystring": {
+ "log_body": "no"
+ }
+ },
+ "response": {
+ "headers": {
+ "content-type": "text/plain",
+ "connection": "close",
+ "content-length": "12",
+ "server": "APISIX/3.7.0"
+ },
+ "status": 200,
+ "size": 123
+ },
+ "upstream": "127.0.0.1:1982"
+ }
+ ```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/loggly.md b/docs/en/latest/plugins/loggly.md
index 663ba0574..c7318ce76 100644
--- a/docs/en/latest/plugins/loggly.md
+++ b/docs/en/latest/plugins/loggly.md
@@ -53,6 +53,12 @@ This Plugin supports using batch processors to aggregate and
process entries (lo
To generate a Customer token, go to `<your assigned
subdomain>.loggly.com/tokens` or navigate to Logs > Source setup > Customer
tokens.
+### Example of default log format
+
+```text
+<10>1 2024-01-06T06:50:51.739Z 127.0.0.1 apisix 58525 - [token-1@41058
tag="apisix"]
{"service_id":"","server":{"version":"3.7.0","hostname":"localhost"},"apisix_latency":100.99985313416,"request":{"url":"http://127.0.0.1:1984/opentracing","headers":{"content-type":"application/x-www-form-urlencoded","user-agent":"lua-resty-http/0.16.1
(Lua)
ngx_lua/10025","host":"127.0.0.1:1984"},"querystring":{},"uri":"/opentracing","size":155,"method":"GET"},"response":{"headers":{"content-type":"text
[...]
+```
+
## Metadata
You can also configure the Plugin through Plugin metadata. The following
configurations are available:
diff --git a/docs/en/latest/plugins/loki-logger.md b/docs/en/latest/plugins/loki-logger.md
index e79e5396e..2a9e160b9 100644
--- a/docs/en/latest/plugins/loki-logger.md
+++ b/docs/en/latest/plugins/loki-logger.md
@@ -55,6 +55,48 @@ When the Plugin is enabled, APISIX will serialize the
request context informatio
This plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "request": {
+ "headers": {
+ "connection": "close",
+ "host": "localhost",
+ "test-header": "only-for-test#1"
+ },
+ "method": "GET",
+ "uri": "/hello",
+ "url": "http://localhost:1984/hello",
+ "size": 89,
+ "querystring": {}
+ },
+ "client_ip": "127.0.0.1",
+ "start_time": 1704525701293,
+ "apisix_latency": 100.99994659424,
+ "response": {
+ "headers": {
+ "content-type": "text/plain",
+ "server": "APISIX/3.7.0",
+ "content-length": "12",
+ "connection": "close"
+ },
+ "status": 200,
+ "size": 118
+ },
+ "route_id": "1",
+ "loki_log_time": "1704525701293000000",
+ "upstream_latency": 5,
+ "latency": 105.99994659424,
+ "upstream": "127.0.0.1:1980",
+ "server": {
+ "hostname": "localhost",
+ "version": "3.7.0"
+ },
+ "service_id": ""
+}
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/rocketmq-logger.md b/docs/en/latest/plugins/rocketmq-logger.md
index 3f50b0786..324ecfe51 100644
--- a/docs/en/latest/plugins/rocketmq-logger.md
+++ b/docs/en/latest/plugins/rocketmq-logger.md
@@ -66,6 +66,61 @@ If the process is successful, it will return `true` and if
it fails, returns `ni
### meta_format example
- `default`:
```json
diff --git a/docs/en/latest/plugins/skywalking-logger.md b/docs/en/latest/plugins/skywalking-logger.md
index df4c786fd..b72ec5577 100644
--- a/docs/en/latest/plugins/skywalking-logger.md
+++ b/docs/en/latest/plugins/skywalking-logger.md
@@ -47,6 +47,63 @@ If there is an existing tracing context, it sets up the
trace-log correlation au
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+ ```json
+ {
+ "serviceInstance": "APISIX Instance Name",
+ "body": {
+ "json": {
+ "json": "body-json"
+ }
+ },
+ "endpoint": "/opentracing",
+ "service": "APISIX"
+ }
+ ```
+
+The `body-json` value above is an escaped JSON string; unescaped, it looks like this:
+
+ ```json
+ {
+ "response": {
+ "status": 200,
+ "headers": {
+ "server": "APISIX/3.7.0",
+ "content-type": "text/plain",
+ "transfer-encoding": "chunked",
+ "connection": "close"
+ },
+ "size": 136
+ },
+ "route_id": "1",
+ "upstream": "127.0.0.1:1982",
+ "upstream_latency": 8,
+ "apisix_latency": 101.00020599365,
+ "client_ip": "127.0.0.1",
+ "service_id": "",
+ "server": {
+ "hostname": "localhost",
+ "version": "3.7.0"
+ },
+ "start_time": 1704429712768,
+ "latency": 109.00020599365,
+ "request": {
+ "headers": {
+ "content-length": "9",
+ "host": "localhost",
+ "connection": "close"
+ },
+ "method": "POST",
+ "body": "body-data",
+ "size": 94,
+ "querystring": {},
+ "url": "http://localhost:1984/opentracing",
+ "uri": "/opentracing"
+ }
+ }
+ ```
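Because the log payload travels as an escaped JSON string inside the SkyWalking record, a consumer has to decode twice: once for the outer record and once for the `body-json` field. A minimal sketch with a hypothetical outer record:

```python
import json

# Hypothetical inner log entry, serialized into a string field the way
# the skywalking-logger embeds body-json in the outer record.
inner = {"route_id": "1", "upstream": "127.0.0.1:1982"}
outer = {"service": "APISIX", "body": {"json": {"json": json.dumps(inner)}}}

record = json.loads(json.dumps(outer))                 # first decode: the record
payload = json.loads(record["body"]["json"]["json"])   # second decode: body-json

assert payload["route_id"] == "1"
print(payload["upstream"])  # → 127.0.0.1:1982
```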
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/sls-logger.md b/docs/en/latest/plugins/sls-logger.md
index 26808b8cb..47dc9449b 100644
--- a/docs/en/latest/plugins/sls-logger.md
+++ b/docs/en/latest/plugins/sls-logger.md
@@ -52,6 +52,33 @@ NOTE: `encrypt_fields = {"access_key_secret"}` is also
defined in the schema, wh
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "route_conf": {
+ "host": "100.100.99.135",
+ "buffer_duration": 60,
+ "timeout": 30000,
+ "include_req_body": false,
+ "logstore": "your_logstore",
+ "log_format": {
+ "vip": "$remote_addr"
+ },
+ "project": "your_project",
+ "inactive_timeout": 5,
+ "access_key_id": "your_access_key_id",
+ "access_key_secret": "your_access_key_secret",
+ "batch_max_size": 1000,
+ "max_retry_count": 0,
+ "retry_delay": 1,
+ "port": 10009,
+ "name": "sls-logger"
+ },
+ "data": "<46>1 2024-01-06T03:29:56.457Z localhost apisix 28063 -
[logservice project=\"your_project\" logstore=\"your_logstore\"
access-key-id=\"your_access_key_id\"
access-key-secret=\"your_access_key_secret\"]
{\"vip\":\"127.0.0.1\",\"route_id\":\"1\"}\n"
+}
+```
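The `data` field above is an RFC 5424 syslog frame; the leading `<46>` is the PRI value, which packs facility and severity as `facility * 8 + severity`. Decoding it is standard syslog arithmetic, not APISIX-specific:

```python
import re

# Header of the "data" field from the example above (truncated).
frame = "<46>1 2024-01-06T03:29:56.457Z localhost apisix 28063 -"

# RFC 5424: PRI = facility * 8 + severity, carried in the leading <N>.
pri = int(re.match(r"<(\d+)>", frame).group(1))
facility, severity = divmod(pri, 8)
print(facility, severity)  # → 5 6  (facility "syslog", severity "informational")
```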
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/splunk-hec-logging.md b/docs/en/latest/plugins/splunk-hec-logging.md
index bdddfd7fa..acfa77468 100644
--- a/docs/en/latest/plugins/splunk-hec-logging.md
+++ b/docs/en/latest/plugins/splunk-hec-logging.md
@@ -48,6 +48,37 @@ When the Plugin is enabled, APISIX will serialize the
request context informatio
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "sourcetype": "_json",
+ "time": 1704513555.392,
+ "event": {
+ "upstream": "127.0.0.1:1980",
+ "request_url": "http://localhost:1984/hello",
+ "request_query": {},
+ "request_size": 59,
+ "response_headers": {
+ "content-length": "12",
+ "server": "APISIX/3.7.0",
+ "content-type": "text/plain",
+ "connection": "close"
+ },
+ "response_status": 200,
+ "response_size": 118,
+ "latency": 108.00004005432,
+ "request_method": "GET",
+ "request_headers": {
+ "connection": "close",
+ "host": "localhost"
+ }
+ },
+ "source": "apache-apisix-splunk-hec-logging",
+ "host": "localhost"
+}
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/syslog.md b/docs/en/latest/plugins/syslog.md
index d8f107f36..1a7e5e4a8 100644
--- a/docs/en/latest/plugins/syslog.md
+++ b/docs/en/latest/plugins/syslog.md
@@ -50,6 +50,12 @@ Logs can be set as JSON objects.
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### meta_format example
+
+```text
+"<46>1 2024-01-06T02:30:59.145Z 127.0.0.1 apisix 82324 - -
{\"response\":{\"status\":200,\"size\":141,\"headers\":{\"content-type\":\"text/plain\",\"server\":\"APISIX/3.7.0\",\"transfer-encoding\":\"chunked\",\"connection\":\"close\"}},\"route_id\":\"1\",\"server\":{\"hostname\":\"baiyundeMacBook-Pro.local\",\"version\":\"3.7.0\"},\"request\":{\"uri\":\"/opentracing\",\"url\":\"http://127.0.0.1:1984/opentracing\",\"querystring\":{},\"method\":\"GET\",\"size\":155,\"headers\":{\"content-t
[...]
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/tcp-logger.md b/docs/en/latest/plugins/tcp-logger.md
index d1af5b11b..e5bffac35 100644
--- a/docs/en/latest/plugins/tcp-logger.md
+++ b/docs/en/latest/plugins/tcp-logger.md
@@ -50,6 +50,46 @@ This plugin also allows to push logs as a batch to your
external TCP server. It
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "response": {
+ "status": 200,
+ "headers": {
+ "server": "APISIX/3.7.0",
+ "content-type": "text/plain",
+ "content-length": "12",
+ "connection": "close"
+ },
+ "size": 118
+ },
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "start_time": 1704527628474,
+ "client_ip": "127.0.0.1",
+ "service_id": "",
+ "latency": 102.9999256134,
+ "apisix_latency": 100.9999256134,
+ "upstream_latency": 2,
+ "request": {
+ "headers": {
+ "connection": "close",
+ "host": "localhost"
+ },
+ "size": 59,
+ "method": "GET",
+ "uri": "/hello",
+ "url": "http://localhost:1984/hello",
+ "querystring": {}
+ },
+ "upstream": "127.0.0.1:1980",
+ "route_id": "1"
+}
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/tencent-cloud-cls.md b/docs/en/latest/plugins/tencent-cloud-cls.md
index 559f13e2d..9895dc564 100644
--- a/docs/en/latest/plugins/tencent-cloud-cls.md
+++ b/docs/en/latest/plugins/tencent-cloud-cls.md
@@ -52,6 +52,46 @@ NOTE: `encrypt_fields = {"secret_key"}` is also defined in
the schema, which mea
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "response": {
+ "headers": {
+ "content-type": "text/plain",
+ "connection": "close",
+ "server": "APISIX/3.7.0",
+ "transfer-encoding": "chunked"
+ },
+ "size": 136,
+ "status": 200
+ },
+ "route_id": "1",
+ "upstream": "127.0.0.1:1982",
+ "client_ip": "127.0.0.1",
+ "apisix_latency": 100.99985313416,
+ "service_id": "",
+ "latency": 103.99985313416,
+ "start_time": 1704525145772,
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "upstream_latency": 3,
+ "request": {
+ "headers": {
+ "connection": "close",
+ "host": "localhost"
+ },
+ "url": "http://localhost:1984/opentracing",
+ "querystring": {},
+ "method": "GET",
+ "size": 65,
+ "uri": "/opentracing"
+ }
+}
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/en/latest/plugins/udp-logger.md b/docs/en/latest/plugins/udp-logger.md
index 57d52b594..e3acd0030 100644
--- a/docs/en/latest/plugins/udp-logger.md
+++ b/docs/en/latest/plugins/udp-logger.md
@@ -48,6 +48,46 @@ This plugin also allows to push logs as a batch to your
external UDP server. It
This Plugin supports using batch processors to aggregate and process entries
(logs/data) in a batch. This avoids the need for frequently submitting the
data. The batch processor submits data every `5` seconds or when the data in
the queue reaches `1000`. See [Batch
Processor](../batch-processor.md#configuration) for more information or setting
your custom configuration.
+### Example of default log format
+
+```json
+{
+ "apisix_latency": 99.999988555908,
+ "service_id": "",
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "request": {
+ "method": "GET",
+ "headers": {
+ "connection": "close",
+ "host": "localhost"
+ },
+ "url": "http://localhost:1984/opentracing",
+ "size": 65,
+ "querystring": {},
+ "uri": "/opentracing"
+ },
+ "start_time": 1704527399740,
+ "client_ip": "127.0.0.1",
+ "response": {
+ "status": 200,
+ "size": 136,
+ "headers": {
+ "server": "APISIX/3.7.0",
+ "content-type": "text/plain",
+ "transfer-encoding": "chunked",
+ "connection": "close"
+ }
+ },
+ "upstream": "127.0.0.1:1982",
+ "route_id": "1",
+ "upstream_latency": 12,
+ "latency": 111.99998855591
+}
+```
+
## Metadata
You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
diff --git a/docs/zh/latest/plugins/clickhouse-logger.md b/docs/zh/latest/plugins/clickhouse-logger.md
index 09d4c512f..f719a40e7 100644
--- a/docs/zh/latest/plugins/clickhouse-logger.md
+++ b/docs/zh/latest/plugins/clickhouse-logger.md
@@ -54,6 +54,49 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 clickhouse-lo
该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000`
条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+ "response": {
+ "status": 200,
+ "size": 118,
+ "headers": {
+ "content-type": "text/plain",
+ "connection": "close",
+ "server": "APISIX/3.7.0",
+ "content-length": "12"
+ }
+ },
+ "client_ip": "127.0.0.1",
+ "upstream_latency": 3,
+ "apisix_latency": 98.999998092651,
+ "upstream": "127.0.0.1:1982",
+ "latency": 101.99999809265,
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "route_id": "1",
+ "start_time": 1704507612177,
+ "service_id": "",
+ "request": {
+ "method": "POST",
+ "querystring": {
+ "foo": "unknown"
+ },
+ "headers": {
+ "host": "localhost",
+ "connection": "close",
+ "content-length": "18"
+ },
+ "size": 110,
+ "uri": "/hello?foo=unknown",
+ "url": "http://localhost:1984/hello?foo=unknown"
+ }
+}
+```
+
## 配置插件元数据
`clickhouse-logger` 也支持自定义日志格式,与 [http-logger](./http-logger.md) 插件类似。
diff --git a/docs/zh/latest/plugins/elasticsearch-logger.md b/docs/zh/latest/plugins/elasticsearch-logger.md
index e15e84783..d97311b17 100644
--- a/docs/zh/latest/plugins/elasticsearch-logger.md
+++ b/docs/zh/latest/plugins/elasticsearch-logger.md
@@ -54,6 +54,46 @@ description: 本文介绍了 API 网关 Apache APISIX 的
elasticsearch-logger
本插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到
`1000` 条时提交数据,如需了解或自定义批处理器相关参数设置,请参考
[Batch-Processor](../batch-processor.md#配置) 配置部分。
+### 默认日志格式示例
+
+```json
+{
+ "upstream_latency": 2,
+ "apisix_latency": 100.9999256134,
+ "request": {
+ "size": 59,
+ "url": "http://localhost:1984/hello",
+ "method": "GET",
+ "querystring": {},
+ "headers": {
+ "host": "localhost",
+ "connection": "close"
+ },
+ "uri": "/hello"
+ },
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "client_ip": "127.0.0.1",
+ "upstream": "127.0.0.1:1980",
+ "response": {
+ "status": 200,
+ "headers": {
+ "content-length": "12",
+ "connection": "close",
+ "content-type": "text/plain",
+ "server": "APISIX/3.7.0"
+ },
+ "size": 118
+ },
+ "start_time": 1704524807607,
+ "route_id": "1",
+ "service_id": "",
+ "latency": 102.9999256134
+}
+```
+
## 启用插件
你可以通过如下命令在指定路由上启用 `elasticsearch-logger` 插件:
diff --git a/docs/zh/latest/plugins/error-log-logger.md b/docs/zh/latest/plugins/error-log-logger.md
index cc3a34b41..f8fab5006 100644
--- a/docs/zh/latest/plugins/error-log-logger.md
+++ b/docs/zh/latest/plugins/error-log-logger.md
@@ -68,6 +68,12 @@ description: API 网关 Apache APISIX error-log-logger 插件用于将
APISIX
本插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到
`1000` 条时提交数据,如需了解或自定义批处理器相关参数设置,请参考
[Batch-Processor](../batch-processor.md#配置) 配置部分。
+### 默认日志格式示例
+
+```text
+["2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:205: load():
new plugins: {"error-log-logger":true}, context:
init_worker_by_lua*","\n","2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua]
plugin.lua:255: load_stream(): new plugins:
{"limit-conn":true,"ip-restriction":true,"syslog":true,"mqtt-proxy":true},
context: init_worker_by_lua*","\n"]
+```
+
## 启用插件
该插件默认为禁用状态,你可以在 `./conf/config.yaml` 中启用 `error-log-logger` 插件。你可以参考如下示例启用插件:
diff --git a/docs/zh/latest/plugins/file-logger.md b/docs/zh/latest/plugins/file-logger.md
index ddb9b646b..87cd6e6ae 100644
--- a/docs/zh/latest/plugins/file-logger.md
+++ b/docs/zh/latest/plugins/file-logger.md
@@ -55,6 +55,50 @@ description: API 网关 Apache APISIX file-logger 插件可用于将日志数据
| include_resp_body_expr | array | 否 | 当 `include_resp_body` 属性设置为 `true`
时,使用该属性并基于 [lua-resty-expr](https://github.com/api7/lua-resty-expr)
进行过滤。如果存在,则仅在表达式计算结果为 `true` 时记录响应。 |
| match | array[] | 否 | 当设置了这个选项后,只有匹配规则的日志才会被记录。`match`
是一个表达式列表,具体请参考
[lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list)。 |
+### 默认日志格式示例
+
+ ```json
+ {
+ "service_id": "",
+ "apisix_latency": 100.99999809265,
+ "start_time": 1703907485819,
+ "latency": 101.99999809265,
+ "upstream_latency": 1,
+ "client_ip": "127.0.0.1",
+ "route_id": "1",
+ "server": {
+ "version": "3.7.0",
+ "hostname": "localhost"
+ },
+ "request": {
+ "headers": {
+ "host": "127.0.0.1:1984",
+ "content-type": "application/x-www-form-urlencoded",
+ "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025",
+ "content-length": "12"
+ },
+ "method": "POST",
+ "size": 194,
+ "url": "http://127.0.0.1:1984/hello?log_body=no",
+ "uri": "/hello?log_body=no",
+ "querystring": {
+ "log_body": "no"
+ }
+ },
+ "response": {
+ "headers": {
+ "content-type": "text/plain",
+ "connection": "close",
+ "content-length": "12",
+ "server": "APISIX/3.7.0"
+ },
+ "status": 200,
+ "size": 123
+ },
+ "upstream": "127.0.0.1:1982"
+ }
+ ```
+
## 插件元数据设置
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述
|
diff --git a/docs/zh/latest/plugins/google-cloud-logging.md b/docs/zh/latest/plugins/google-cloud-logging.md
index 693ae2f7c..a0bf33a9f 100644
--- a/docs/zh/latest/plugins/google-cloud-logging.md
+++ b/docs/zh/latest/plugins/google-cloud-logging.md
@@ -53,6 +53,36 @@ description: API 网关 Apache APISIX 的 google-cloud-logging
插件可用于
该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免该插件频繁地提交数据。默认情况下每 `5` 秒钟或队列中的数据达到 `1000`
条时,批处理器会自动提交数据,如需了解更多信息或自定义配置,请参考 [Batch Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "insertId": "0013a6afc9c281ce2e7f413c01892bdc",
+  "labels": {
+    "source": "apache-apisix-google-cloud-logging"
+  },
+  "logName": "projects/apisix/logs/apisix.apache.org%2Flogs",
+  "httpRequest": {
+    "requestMethod": "GET",
+    "requestUrl": "http://localhost:1984/hello",
+    "requestSize": 59,
+    "responseSize": 118,
+    "status": 200,
+    "remoteIp": "127.0.0.1",
+    "serverIp": "127.0.0.1:1980",
+    "latency": "0.103s"
+  },
+  "resource": {
+    "type": "global"
+  },
+  "jsonPayload": {
+    "service_id": "",
+    "route_id": "1"
+  },
+  "timestamp": "2024-01-06T03:34:45.065Z"
+}
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
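The batch-processor behavior these docs keep referencing (flush every `5` seconds or once `1000` entries are queued) can be sketched as below. This is an illustrative model, not APISIX's actual Lua implementation; `batch_max_size` and `inactive_timeout` are named after the documented options, and `send` stands in for the plugin-specific delivery callback:

```python
import time

class BatchProcessor:
    """Minimal sketch of the documented batching policy:
    flush when batch_max_size entries are buffered or
    inactive_timeout seconds have passed since the last flush."""

    def __init__(self, send, batch_max_size=1000, inactive_timeout=5):
        self.send = send                  # callback receiving a list of entries
        self.batch_max_size = batch_max_size
        self.inactive_timeout = inactive_timeout
        self.buffer = []
        self.last_flush = time.monotonic()

    def push(self, entry):
        self.buffer.append(entry)
        if (len(self.buffer) >= self.batch_max_size
                or time.monotonic() - self.last_flush >= self.inactive_timeout):
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []
        self.last_flush = time.monotonic()

sent = []
bp = BatchProcessor(sent.append, batch_max_size=3, inactive_timeout=5)
for i in range(7):
    bp.push({"route_id": str(i)})
bp.flush()  # drain the remainder on shutdown
print([len(batch) for batch in sent])  # two full batches, then the tail
```
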
diff --git a/docs/zh/latest/plugins/http-logger.md b/docs/zh/latest/plugins/http-logger.md
index 06fc12030..7ea6e1242 100644
--- a/docs/zh/latest/plugins/http-logger.md
+++ b/docs/zh/latest/plugins/http-logger.md
@@ -50,6 +50,50 @@ description: 本文介绍了 API 网关 Apache APISIX 的 http-logger 插件。
该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免该插件频繁地提交数据。默认情况下每 `5` 秒钟或队列中的数据达到 `1000` 条时,批处理器会自动提交数据,如需了解更多信息或自定义配置,请参考 [Batch Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "service_id": "",
+  "apisix_latency": 100.99999809265,
+  "start_time": 1703907485819,
+  "latency": 101.99999809265,
+  "upstream_latency": 1,
+  "client_ip": "127.0.0.1",
+  "route_id": "1",
+  "server": {
+    "version": "3.7.0",
+    "hostname": "localhost"
+  },
+  "request": {
+    "headers": {
+      "host": "127.0.0.1:1984",
+      "content-type": "application/x-www-form-urlencoded",
+      "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025",
+      "content-length": "12"
+    },
+    "method": "POST",
+    "size": 194,
+    "url": "http://127.0.0.1:1984/hello?log_body=no",
+    "uri": "/hello?log_body=no",
+    "querystring": {
+      "log_body": "no"
+    }
+  },
+  "response": {
+    "headers": {
+      "content-type": "text/plain",
+      "connection": "close",
+      "content-length": "12",
+      "server": "APISIX/3.7.0"
+    },
+    "status": 200,
+    "size": 123
+  },
+  "upstream": "127.0.0.1:1982"
+}
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/loggly.md b/docs/zh/latest/plugins/loggly.md
index 9c5b74010..27d813c4a 100644
--- a/docs/zh/latest/plugins/loggly.md
+++ b/docs/zh/latest/plugins/loggly.md
@@ -50,6 +50,12 @@ description: API 网关 Apache APISIX loggly 插件可用于将日志转发到 S
如果要生成用户令牌,请在 Loggly 系统中的 `<your assigned subdomain>/loggly.com/tokens` 设置,或者在系统中单击 `Logs > Source setup > Customer tokens`。
+### 默认日志格式示例
+
+```text
+<10>1 2024-01-06T06:50:51.739Z 127.0.0.1 apisix 58525 - [token-1@41058 tag="apisix"] {"service_id":"","server":{"version":"3.7.0","hostname":"localhost"},"apisix_latency":100.99985313416,"request":{"url":"http://127.0.0.1:1984/opentracing","headers":{"content-type":"application/x-www-form-urlencoded","user-agent":"lua-resty-http/0.16.1 (Lua) ngx_lua/10025","host":"127.0.0.1:1984"},"querystring":{},"uri":"/opentracing","size":155,"method":"GET"},"response":{"headers":{"content-type":"text [...]
+```
+
## 插件元数据设置
你还可以通过插件元数据配置插件。详细配置如下:
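The loggly payload above is an RFC 5424-style syslog header with a structured-data block, followed by the default-format JSON. A minimal sketch of splitting such a line (the shortened sample line and its token are hypothetical, shaped after the truncated example above):

```python
import json

# A shortened line in the same shape as the loggly example above; in the real
# output the JSON part carries the full default log format.
line = ('<10>1 2024-01-06T06:50:51.739Z 127.0.0.1 apisix 58525 - '
        '[token-1@41058 tag="apisix"] {"service_id":"","route_id":"1"}')

def split_syslog_json(line):
    """Split an RFC 5424-style line into the header (up to and including the
    [token ...] structured-data block) and the JSON payload that follows."""
    sd_end = line.index(']')  # first ']' closes the structured-data block
    header, payload = line[:sd_end + 1], line[sd_end + 1:].strip()
    return header, json.loads(payload)

header, payload = split_syslog_json(line)
print(payload["route_id"])
```
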
diff --git a/docs/zh/latest/plugins/loki-logger.md b/docs/zh/latest/plugins/loki-logger.md
index 37ce2398a..c39be2cba 100644
--- a/docs/zh/latest/plugins/loki-logger.md
+++ b/docs/zh/latest/plugins/loki-logger.md
@@ -55,6 +55,48 @@ description: 本文件包含关于 Apache APISIX loki-logger 插件的信息。
该插件支持使用批处理器对条目(日志/数据)进行批量聚合和处理,避免了频繁提交数据的需求。批处理器每隔 `5` 秒或当队列中的数据达到 `1000` 时提交数据。有关更多信息或设置自定义配置,请参阅 [批处理器](../batch-processor.md#configuration)。
+### 默认日志格式示例
+
+```json
+{
+  "request": {
+    "headers": {
+      "connection": "close",
+      "host": "localhost",
+      "test-header": "only-for-test#1"
+    },
+    "method": "GET",
+    "uri": "/hello",
+    "url": "http://localhost:1984/hello",
+    "size": 89,
+    "querystring": {}
+  },
+  "client_ip": "127.0.0.1",
+  "start_time": 1704525701293,
+  "apisix_latency": 100.99994659424,
+  "response": {
+    "headers": {
+      "content-type": "text/plain",
+      "server": "APISIX/3.7.0",
+      "content-length": "12",
+      "connection": "close"
+    },
+    "status": 200,
+    "size": 118
+  },
+  "route_id": "1",
+  "loki_log_time": "1704525701293000000",
+  "upstream_latency": 5,
+  "latency": 105.99994659424,
+  "upstream": "127.0.0.1:1980",
+  "server": {
+    "hostname": "localhost",
+    "version": "3.7.0"
+  },
+  "service_id": ""
+}
+```
+
## 元数据
您还可以通过配置插件元数据来设置日志的格式。以下配置项可供选择:
diff --git a/docs/zh/latest/plugins/rocketmq-logger.md b/docs/zh/latest/plugins/rocketmq-logger.md
index 6bef7a6b4..faec6c48d 100644
--- a/docs/zh/latest/plugins/rocketmq-logger.md
+++ b/docs/zh/latest/plugins/rocketmq-logger.md
@@ -86,7 +86,6 @@ description: API 网关 Apache APISIX 的 rocketmq-logger 插件用于将日志
"content-length": "6",
"connection": "close"
},
- "body": "abcdef",
"method": "GET"
},
"response": {
diff --git a/docs/zh/latest/plugins/skywalking-logger.md b/docs/zh/latest/plugins/skywalking-logger.md
index 3eb837e9f..87cabb3b4 100644
--- a/docs/zh/latest/plugins/skywalking-logger.md
+++ b/docs/zh/latest/plugins/skywalking-logger.md
@@ -49,6 +49,63 @@ description: 本文将介绍 API 网关 Apache APISIX 如何通过 skywalking-lo
该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "serviceInstance": "APISIX Instance Name",
+  "body": {
+    "json": {
+      "json": "body-json"
+    }
+  },
+  "endpoint": "/opentracing",
+  "service": "APISIX"
+}
+```
+
+对于 body-json 数据,它是一个转义后的 JSON 字符串,格式化后如下:
+
+```json
+{
+  "response": {
+    "status": 200,
+    "headers": {
+      "server": "APISIX/3.7.0",
+      "content-type": "text/plain",
+      "transfer-encoding": "chunked",
+      "connection": "close"
+    },
+    "size": 136
+  },
+  "route_id": "1",
+  "upstream": "127.0.0.1:1982",
+  "upstream_latency": 8,
+  "apisix_latency": 101.00020599365,
+  "client_ip": "127.0.0.1",
+  "service_id": "",
+  "server": {
+    "hostname": "localhost",
+    "version": "3.7.0"
+  },
+  "start_time": 1704429712768,
+  "latency": 109.00020599365,
+  "request": {
+    "headers": {
+      "content-length": "9",
+      "host": "localhost",
+      "connection": "close"
+    },
+    "method": "POST",
+    "body": "body-data",
+    "size": 94,
+    "querystring": {},
+    "url": "http://localhost:1984/opentracing",
+    "uri": "/opentracing"
+  }
+}
+```
+
## 配置插件元数据
`skywalking-logger` 也支持自定义日志格式,与 [http-logger](./http-logger.md) 插件类似。
diff --git a/docs/zh/latest/plugins/sls-logger.md b/docs/zh/latest/plugins/sls-logger.md
index d5b57a21b..7bd85c81b 100644
--- a/docs/zh/latest/plugins/sls-logger.md
+++ b/docs/zh/latest/plugins/sls-logger.md
@@ -49,6 +49,33 @@ title: sls-logger
本插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解或自定义批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置) 配置部分。
+### 默认日志格式示例
+
+```json
+{
+  "route_conf": {
+    "host": "100.100.99.135",
+    "buffer_duration": 60,
+    "timeout": 30000,
+    "include_req_body": false,
+    "logstore": "your_logstore",
+    "log_format": {
+      "vip": "$remote_addr"
+    },
+    "project": "your_project",
+    "inactive_timeout": 5,
+    "access_key_id": "your_access_key_id",
+    "access_key_secret": "your_access_key_secret",
+    "batch_max_size": 1000,
+    "max_retry_count": 0,
+    "retry_delay": 1,
+    "port": 10009,
+    "name": "sls-logger"
+  },
+  "data": "<46>1 2024-01-06T03:29:56.457Z localhost apisix 28063 - [logservice project=\"your_project\" logstore=\"your_logstore\" access-key-id=\"your_access_key_id\" access-key-secret=\"your_access_key_secret\"] {\"vip\":\"127.0.0.1\",\"route_id\":\"1\"}\n"
+}
+```
+
## 插件元数据设置
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/splunk-hec-logging.md b/docs/zh/latest/plugins/splunk-hec-logging.md
index 112e8fedc..abaa12406 100644
--- a/docs/zh/latest/plugins/splunk-hec-logging.md
+++ b/docs/zh/latest/plugins/splunk-hec-logging.md
@@ -48,6 +48,37 @@ description: API 网关 Apache APISIX 的 splunk-hec-logging 插件可用于将
本插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免该插件频繁地提交数据。默认情况下每 `5` 秒钟或队列中的数据达到 `1000` 条时,批处理器会自动提交数据,如需了解更多信息或自定义配置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "sourcetype": "_json",
+  "time": 1704513555.392,
+  "event": {
+    "upstream": "127.0.0.1:1980",
+    "request_url": "http://localhost:1984/hello",
+    "request_query": {},
+    "request_size": 59,
+    "response_headers": {
+      "content-length": "12",
+      "server": "APISIX/3.7.0",
+      "content-type": "text/plain",
+      "connection": "close"
+    },
+    "response_status": 200,
+    "response_size": 118,
+    "latency": 108.00004005432,
+    "request_method": "GET",
+    "request_headers": {
+      "connection": "close",
+      "host": "localhost"
+    }
+  },
+  "source": "apache-apisix-splunk-hec-logging",
+  "host": "localhost"
+}
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/syslog.md b/docs/zh/latest/plugins/syslog.md
index d32d8cddb..4707bba8b 100644
--- a/docs/zh/latest/plugins/syslog.md
+++ b/docs/zh/latest/plugins/syslog.md
@@ -53,6 +53,12 @@ description: API 网关 Apache APISIX syslog 插件可用于将日志推送到 S
该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```text
+"<46>1 2024-01-06T02:30:59.145Z 127.0.0.1 apisix 82324 - - {\"response\":{\"status\":200,\"size\":141,\"headers\":{\"content-type\":\"text/plain\",\"server\":\"APISIX/3.7.0\",\"transfer-encoding\":\"chunked\",\"connection\":\"close\"}},\"route_id\":\"1\",\"server\":{\"hostname\":\"baiyundeMacBook-Pro.local\",\"version\":\"3.7.0\"},\"request\":{\"uri\":\"/opentracing\",\"url\":\"http://127.0.0.1:1984/opentracing\",\"querystring\":{},\"method\":\"GET\",\"size\":155,\"headers\":{\"content-t [...]
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 描述 |
diff --git a/docs/zh/latest/plugins/tcp-logger.md b/docs/zh/latest/plugins/tcp-logger.md
index 6a950784d..3984fb1d4 100644
--- a/docs/zh/latest/plugins/tcp-logger.md
+++ b/docs/zh/latest/plugins/tcp-logger.md
@@ -47,6 +47,46 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 tcp-logger
该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "response": {
+    "status": 200,
+    "headers": {
+      "server": "APISIX/3.7.0",
+      "content-type": "text/plain",
+      "content-length": "12",
+      "connection": "close"
+    },
+    "size": 118
+  },
+  "server": {
+    "version": "3.7.0",
+    "hostname": "localhost"
+  },
+  "start_time": 1704527628474,
+  "client_ip": "127.0.0.1",
+  "service_id": "",
+  "latency": 102.9999256134,
+  "apisix_latency": 100.9999256134,
+  "upstream_latency": 2,
+  "request": {
+    "headers": {
+      "connection": "close",
+      "host": "localhost"
+    },
+    "size": 59,
+    "method": "GET",
+    "uri": "/hello",
+    "url": "http://localhost:1984/hello",
+    "querystring": {}
+  },
+  "upstream": "127.0.0.1:1980",
+  "route_id": "1"
+}
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/tencent-cloud-cls.md b/docs/zh/latest/plugins/tencent-cloud-cls.md
index 2d567f8c1..88bff5b06 100644
--- a/docs/zh/latest/plugins/tencent-cloud-cls.md
+++ b/docs/zh/latest/plugins/tencent-cloud-cls.md
@@ -50,6 +50,46 @@ description: API 网关 Apache APISIX tencent-cloud-cls 插件可用于将日志
该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "response": {
+    "headers": {
+      "content-type": "text/plain",
+      "connection": "close",
+      "server": "APISIX/3.7.0",
+      "transfer-encoding": "chunked"
+    },
+    "size": 136,
+    "status": 200
+  },
+  "route_id": "1",
+  "upstream": "127.0.0.1:1982",
+  "client_ip": "127.0.0.1",
+  "apisix_latency": 100.99985313416,
+  "service_id": "",
+  "latency": 103.99985313416,
+  "start_time": 1704525145772,
+  "server": {
+    "version": "3.7.0",
+    "hostname": "localhost"
+  },
+  "upstream_latency": 3,
+  "request": {
+    "headers": {
+      "connection": "close",
+      "host": "localhost"
+    },
+    "url": "http://localhost:1984/opentracing",
+    "querystring": {},
+    "method": "GET",
+    "size": 65,
+    "uri": "/opentracing"
+  }
+}
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/udp-logger.md b/docs/zh/latest/plugins/udp-logger.md
index 0966aaadf..00f00d641 100644
--- a/docs/zh/latest/plugins/udp-logger.md
+++ b/docs/zh/latest/plugins/udp-logger.md
@@ -46,6 +46,46 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 udp-logger
该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
+### 默认日志格式示例
+
+```json
+{
+  "apisix_latency": 99.999988555908,
+  "service_id": "",
+  "server": {
+    "version": "3.7.0",
+    "hostname": "localhost"
+  },
+  "request": {
+    "method": "GET",
+    "headers": {
+      "connection": "close",
+      "host": "localhost"
+    },
+    "url": "http://localhost:1984/opentracing",
+    "size": 65,
+    "querystring": {},
+    "uri": "/opentracing"
+  },
+  "start_time": 1704527399740,
+  "client_ip": "127.0.0.1",
+  "response": {
+    "status": 200,
+    "size": 136,
+    "headers": {
+      "server": "APISIX/3.7.0",
+      "content-type": "text/plain",
+      "transfer-encoding": "chunked",
+      "connection": "close"
+    }
+  },
+  "upstream": "127.0.0.1:1982",
+  "route_id": "1",
+  "upstream_latency": 12,
+  "latency": 111.99998855591
+}
+```
+
## 插件元数据
| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
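Across these default-format examples, `latency` appears to be the sum of `apisix_latency` and `upstream_latency`. A quick check using the udp-logger numbers above (assuming that relationship holds; the values are floats, so compare with a tolerance):

```python
# Values taken from the udp-logger default log format example above.
apisix_latency = 99.999988555908
upstream_latency = 12
latency = 111.99998855591

# latency should equal the gateway-side time plus the upstream time.
assert abs((apisix_latency + upstream_latency) - latency) < 1e-9
print("latency components add up")
```
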