This is an automated email from the ASF dual-hosted git repository.
juzhiyuan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/apisix.git
The following commit(s) were added to refs/heads/master by this push:
new cec618104 docs: update "Loggers" Plugins 2/n (#7246)
cec618104 is described below
commit cec618104c86c154a584a9c09b230323c4f8b374
Author: Navendu Pottekkat <[email protected]>
AuthorDate: Thu Jun 23 18:50:38 2022 +0530
docs: update "Loggers" Plugins 2/n (#7246)
---
docs/en/latest/plugins/kafka-logger.md | 258 +++++++++++++++---------------
docs/en/latest/plugins/rocketmq-logger.md | 247 ++++++++++++++--------------
docs/en/latest/plugins/udp-logger.md | 49 +++---
3 files changed, 278 insertions(+), 276 deletions(-)
diff --git a/docs/en/latest/plugins/kafka-logger.md b/docs/en/latest/plugins/kafka-logger.md
index c2f50b9bb..d492857e0 100644
--- a/docs/en/latest/plugins/kafka-logger.md
+++ b/docs/en/latest/plugins/kafka-logger.md
@@ -1,5 +1,11 @@
---
title: kafka-logger
+keywords:
+ - APISIX
+ - API Gateway
+ - Plugin
+ - Kafka Logger
+description: This document contains information about the Apache APISIX kafka-logger Plugin.
---
<!--
@@ -23,117 +29,135 @@ title: kafka-logger
## Description
-`kafka-logger` is a plugin which works as a Kafka client driver for the
ngx_lua nginx module.
+The `kafka-logger` Plugin is used to push logs as JSON objects to Apache Kafka
clusters. It works as a Kafka client driver for the ngx_lua Nginx module.
-This plugin provides the ability to push requests log data as JSON objects to
your external Kafka clusters. In case if you did not receive the log data don't
worry give it some time it will automatically send the logs after the timer
function expires in our Batch Processor.
-
-For more info on Batch-Processor in Apache APISIX please refer.
-[Batch-Processor](../batch-processor.md)
+It might take some time to receive the log data. It will be automatically sent
after the timer function in the [batch processor](../batch-processor.md)
expires.
## Attributes
-| Name             | Type    | Requirement | Default        | Valid   | Description |
-| ---------------- | ------- | ----------- | -------------- | ------- | ----------- |
-| broker_list      | object  | required    |                |         | An array of Kafka brokers. |
-| kafka_topic      | string  | required    |                |         | Target topic to push data. |
-| producer_type    | string  | optional    | async          | ["async", "sync"] | Producer's mode of sending messages. |
-| required_acks    | integer | optional    | 1              | [0, 1, -1] | The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. Semantics is the same as kafka producer acks(If set `acks=0` then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. `acks=1` This wil [...]
-| key              | string  | optional    |                |         | Used for partition allocation of messages. |
-| timeout          | integer | optional    | 3              | [1,...] | Timeout for the upstream to send data. |
-| name             | string  | optional    | "kafka logger" |         | A unique identifier to identity the batch processor. |
-| meta_format      | enum    | optional    | "default"      | ["default","origin"] | `default`: collect the request information with default JSON way. `origin`: collect the request information with original HTTP request. [example](#examples-of-meta_format)|
-| include_req_body | boolean | optional    | false          | [false, true] | Whether to include the request body. false: indicates that the requested body is not included; true: indicates that the requested body is included. Note: if the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitation. |
-| include_req_body_expr | array | optional |                |         | When `include_req_body` is true, control the behavior based on the result of the [lua-resty-expr](https://github.com/api7/lua-resty-expr) expression. If present, only log the request body when the result is true. |
-| include_resp_body| boolean | optional    | false          | [false, true] | Whether to include the response body. The response body is included if and only if it is `true`. |
-| include_resp_body_expr | array | optional |                |         | When `include_resp_body` is true, control the behavior based on the result of the [lua-resty-expr](https://github.com/api7/lua-resty-expr) expression. If present, only log the response body when the result is true. |
-| cluster_name     | integer | optional    | 1              | [0,...] | the name of the cluster. When there are two or more kafka clusters, you can specify different names. And this only works with async producer_type.|
-| producer_batch_num | integer | optional  | 200            | [1,...] | `batch_num` param in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka), merge message and batch send to server, unit is message count |
-| producer_batch_size | integer | optional | 1048576        | [0,...] | `batch_size` param in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka), unit is byte |
-| producer_max_buffering | integer | optional | 50000       | [1,...] | `max_buffering` param in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka), max buffer size, unit is message count |
-| producer_time_linger | integer | optional | 1             | [1,...] | `flush_time` param in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka), unit is second |
-
-The plugin supports the use of batch processors to aggregate and process
entries(logs/data) in a batch. This avoids frequent data submissions by the
plugin, which by default the batch processor submits data every `5` seconds or
when the data in the queue reaches `1000`. For information or custom batch
processor parameter settings, see
[Batch-Processor](../batch-processor.md#configuration) configuration section.
-
-### examples of meta_format
-
-- **default**:
-
- ```json
- {
- "upstream": "127.0.0.1:1980",
- "start_time": 1619414294760,
- "client_ip": "127.0.0.1",
- "service_id": "",
- "route_id": "1",
- "request": {
- "querystring": {
- "ab": "cd"
- },
- "size": 90,
- "uri": "/hello?ab=cd",
- "url": "http://localhost:1984/hello?ab=cd",
- "headers": {
- "host": "localhost",
- "content-length": "6",
- "connection": "close"
- },
- "body": "abcdef",
- "method": "GET"
- },
- "response": {
- "headers": {
- "connection": "close",
- "content-type": "text/plain; charset=utf-8",
- "date": "Mon, 26 Apr 2021 05:18:14 GMT",
- "server": "APISIX/2.5",
- "transfer-encoding": "chunked"
- },
- "size": 190,
- "status": 200
- },
- "server": {
- "hostname": "localhost",
- "version": "2.5"
- },
- "latency": 0
- }
- ```
+| Name                   | Type    | Required | Default        | Valid values          | Description |
+| ---------------------- | ------- | -------- | -------------- | --------------------- | ----------- |
+| broker_list            | object  | True     |                |                       | List of Kafka brokers (nodes). |
+| kafka_topic            | string  | True     |                |                       | Target topic to push the logs to. |
+| producer_type          | string  | False    | async          | ["async", "sync"]     | Message sending mode of the producer. |
+| required_acks          | integer | False    | 1              | [0, 1, -1]            | Number of acknowledgements the leader needs to receive for the producer to consider the request complete. This controls the durability of the sent records. The attribute follows the same configuration as the Kafka `acks` attribute. See [Apache Kafka documentation](https://kafka.apache.org/documentation/#producerconfigs_acks) for more. |
+| key                    | string  | False    |                |                       | Key used for allocating partitions for messages. |
+| timeout                | integer | False    | 3              | [1,...]               | Timeout for the upstream to send data. |
+| name                   | string  | False    | "kafka logger" |                       | Unique identifier for the batch processor. |
+| meta_format            | enum    | False    | "default"      | ["default","origin"]  | Format to collect the request information. Setting to `default` collects the information in JSON format and `origin` collects the information with the original HTTP request. See [examples](#meta_format-example) below. |
+| include_req_body       | boolean | False    | false          | [false, true]         | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
+| include_req_body_expr  | array   | False    |                |                       | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. A sketch follows this table. |
+| include_resp_body      | boolean | False    | false          | [false, true]         | When set to `true` includes the response body in the log. |
+| include_resp_body_expr | array   | False    |                |                       | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
+| cluster_name           | integer | False    | 1              | [0,...]               | Name of the cluster. Used when there are two or more Kafka clusters. Only works if the `producer_type` attribute is set to `async`. |
+| producer_batch_num     | integer | False    | 200            | [1,...]               | `batch_num` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka). Messages are merged and sent to the server in batches. Unit is message count. [...]
+| producer_batch_size    | integer | False    | 1048576        | [0,...]               | `batch_size` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) in bytes. [...]
+| producer_max_buffering | integer | False    | 50000          | [1,...]               | `max_buffering` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) representing maximum buffer size. Unit is message count. [...]
+| producer_time_linger   | integer | False    | 1              | [1,...]               | `flush_time` parameter in [lua-resty-kafka](https://github.com/doujiang24/lua-resty-kafka) in seconds. [...]
+
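For illustration, the `include_req_body_expr` filter above could be set with a [lua-resty-expr](https://github.com/api7/lua-resty-expr) expression like the sketch below. The `arg_log_body` variable (i.e. a `log_body` query argument) is hypothetical and used only for this example:

```json
"include_req_body_expr": [
    ["arg_log_body", "==", "yes"]
]
```

With this filter, the request body would only be logged for requests carrying `?log_body=yes`.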
+This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or to set a custom configuration, as sketched below.
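As a sketch, overriding the batch processor defaults inside the Plugin configuration might look like this. The broker address and topic are placeholders, and the attribute names are the ones documented for the batch processor:

```json
"kafka-logger": {
    "broker_list": { "127.0.0.1": 9092 },
    "kafka_topic": "test2",
    "batch_max_size": 1000,
    "inactive_timeout": 5,
    "buffer_duration": 60,
    "max_retry_count": 2
}
```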
+
+:::info IMPORTANT
+
+The data is first written to a buffer. When the buffer exceeds the
`batch_max_size` or `buffer_duration` attribute, the data is sent to the Kafka
server and the buffer is flushed.
+
+If the process is successful, it will return `true`. If it fails, it returns `nil` along with a string describing the "buffer overflow" error.
+
+:::
+
+### meta_format example
+
+- `default`:
+
+ ```json
+ {
+ "upstream": "127.0.0.1:1980",
+ "start_time": 1619414294760,
+ "client_ip": "127.0.0.1",
+ "service_id": "",
+ "route_id": "1",
+ "request": {
+ "querystring": {
+ "ab": "cd"
+ },
+ "size": 90,
+ "uri": "/hello?ab=cd",
+ "url": "http://localhost:1984/hello?ab=cd",
+ "headers": {
+ "host": "localhost",
+ "content-length": "6",
+ "connection": "close"
+ },
+ "body": "abcdef",
+ "method": "GET"
+ },
+ "response": {
+ "headers": {
+ "connection": "close",
+ "content-type": "text/plain; charset=utf-8",
+ "date": "Mon, 26 Apr 2021 05:18:14 GMT",
+ "server": "APISIX/2.5",
+ "transfer-encoding": "chunked"
+ },
+ "size": 190,
+ "status": 200
+ },
+ "server": {
+ "hostname": "localhost",
+ "version": "2.5"
+ },
+ "latency": 0
+ }
+ ```
-- **origin**:
+- `origin`:
- ```http
- GET /hello?ab=cd HTTP/1.1
- host: localhost
- content-length: 6
- connection: close
+ ```http
+ GET /hello?ab=cd HTTP/1.1
+ host: localhost
+ content-length: 6
+ connection: close
- abcdef
- ```
+ abcdef
+ ```
-## Info
+## Metadata
-The `message` will write to the buffer first.
-It will send to the kafka server when the buffer exceed the `batch_max_size`,
-or every `buffer_duration` flush the buffer.
+You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
-In case of success, returns `true`.
-In case of errors, returns `nil` with a string describing the error (`buffer
overflow`).
+| Name       | Type   | Required | Default | Description |
+| ---------- | ------ | -------- | ------- | ----------- |
+| log_format | object | False    | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
-### Sample broker list
+:::info IMPORTANT
-This plugin supports to push in to more than one broker at a time. Specify the
brokers of the external kafka servers as below
-sample to take effect of this functionality.
+Configuring the Plugin metadata is global in scope. This means that it will
take effect on all Routes and Services which use the `kafka-logger` Plugin.
-```json
+:::
+
+The example below shows how you can configure it through the Admin API:
+
+```shell
+curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/kafka-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
- "127.0.0.1":9092,
- "127.0.0.1":9093
-}
+ "log_format": {
+ "host": "$host",
+ "@timestamp": "$time_iso8601",
+ "client_ip": "$remote_addr"
+ }
+}'
+```
+
+With this configuration, your logs would be formatted as shown below:
+
+```shell
+{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
+{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
-## How To Enable
+## Enabling the Plugin
-The following is an example on how to enable the kafka-logger for a specific
route.
+The example below shows how you can enable the `kafka-logger` Plugin on a
specific Route:
```shell
curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
@@ -160,52 +184,30 @@ curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f13
}'
```
-## Test Plugin
-
-success:
+This Plugin also supports pushing to more than one broker at a time. You can
specify multiple brokers in the Plugin configuration as shown below:
-```shell
-$ curl -i http://127.0.0.1:9080/hello
-HTTP/1.1 200 OK
-...
-hello, world
+```json
+"broker_list" : {
+    "127.0.0.1": 9092,
+    "127.0.0.2": 9093
+}
```
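For completeness, a full Route definition with two brokers might look like the sketch below; the topic, upstream node, and Route ID are placeholders mirroring the enabling example above, not a canonical configuration:

```shell
curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "plugins": {
        "kafka-logger": {
            "broker_list": {
                "127.0.0.1": 9092,
                "127.0.0.2": 9093
            },
            "kafka_topic": "test2"
        }
    },
    "upstream": {
        "nodes": {
            "127.0.0.1:1980": 1
        },
        "type": "roundrobin"
    },
    "uri": "/hello"
}'
```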
-## Metadata
-
-| Name             | Type    | Requirement | Default       | Valid   | Description |
-| ---------------- | ------- | ----------- | ------------- | ------- | ----------- |
-| log_format       | object  | optional    | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} |         | Log format declared as key value pair in JSON format. Only string is supported in the `value` part. If the value starts with `$`, it means to get [APISIX variable](../apisix-variable.md) or [Nginx variable](http://nginx.org/en/docs/varindex.html). |
+## Example usage
- Note that **the metadata configuration is applied in global scope**, which
means it will take effect on all Route or Service which use kafka-logger plugin.
-
-### Example
+Now, if you make a request to APISIX, it will be logged in your Kafka server:
```shell
-curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/kafka-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
-{
- "log_format": {
- "host": "$host",
- "@timestamp": "$time_iso8601",
- "client_ip": "$remote_addr"
- }
-}'
-```
-
-It is expected to see some logs like that:
-
-```shell
-{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
-{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
+curl -i http://127.0.0.1:9080/hello
```
## Disable Plugin
-Remove the corresponding json configuration in the plugin configuration to
disable the `kafka-logger`.
-APISIX plugins are hot-reloaded, therefore no need to restart APISIX.
+To disable the `kafka-logger` Plugin, you can delete the corresponding JSON
configuration from the Plugin configuration. APISIX will automatically reload
and you do not have to restart for this to take effect.
```shell
-$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
diff --git a/docs/en/latest/plugins/rocketmq-logger.md b/docs/en/latest/plugins/rocketmq-logger.md
index 60137bc6a..f08397001 100644
--- a/docs/en/latest/plugins/rocketmq-logger.md
+++ b/docs/en/latest/plugins/rocketmq-logger.md
@@ -1,7 +1,12 @@
---
title: rocketmq-logger
+keywords:
+ - APISIX
+ - API Gateway
+ - Plugin
+ - RocketMQ Logger
+description: This document contains information about the Apache APISIX rocketmq-logger Plugin.
---
-
<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
@@ -23,113 +28,132 @@ title: rocketmq-logger
## Description
-`rocketmq-logger` is a plugin which provides the ability to push requests log
data as JSON objects to your external rocketmq clusters.
-
- In case if you did not receive the log data don't worry give it some time it
will automatically send the logs after the timer function expires in our Batch
Processor.
+The `rocketmq-logger` Plugin provides the ability to push logs as JSON objects
to your RocketMQ clusters.
-For more info on Batch-Processor in Apache APISIX please refer.
-[Batch-Processor](../batch-processor.md)
+It might take some time to receive the log data. It will be automatically sent
after the timer function in the [batch processor](../batch-processor.md)
expires.
## Attributes
-| Name             | Type    | Requirement | Default           | Valid   | Description |
-| ---------------- | ------- | ----------- | ----------------- | ------- | ----------- |
-| nameserver_list  | object  | required    |                   |         | An array of rocketmq nameservers. |
-| topic            | string  | required    |                   |         | Target topic to push data. |
-| key              | string  | optional    |                   |         | Keys of messages to send. |
-| tag              | string  | optional    |                   |         | Tags of messages to send. |
-| timeout          | integer | optional    | 3                 | [1,...] | Timeout for the upstream to send data. |
-| use_tls          | boolean | optional    | false             |         | Whether to open TLS |
-| access_key       | string  | optional    | ""                |         | access key for ACL, empty string means disable ACL. |
-| secret_key       | string  | optional    | ""                |         | secret key for ACL. |
-| name             | string  | optional    | "rocketmq logger" |         | A unique identifier to identity the batch processor. |
-| meta_format      | enum    | optional    | "default"         | ["default","origin"] | `default`: collect the request information with default JSON way. `origin`: collect the request information with original HTTP request. [example](#examples-of-meta_format)|
-| include_req_body | boolean | optional    | false             | [false, true] | Whether to include the request body. false: indicates that the requested body is not included; true: indicates that the requested body is included. Note: if the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitation. |
-| include_req_body_expr | array | optional |                   |         | When `include_req_body` is true, control the behavior based on the result of the [lua-resty-expr](https://github.com/api7/lua-resty-expr) expression. If present, only log the request body when the result is true. |
-| include_resp_body| boolean | optional    | false             | [false, true] | Whether to include the response body. The response body is included if and only if it is `true`. |
-| include_resp_body_expr | array | optional |                   |         | When `include_resp_body` is true, control the behavior based on the result of the [lua-resty-expr](https://github.com/api7/lua-resty-expr) expression. If present, only log the response body when the result is true. |
-
-The plugin supports the use of batch processors to aggregate and process
entries(logs/data) in a batch. This avoids frequent data submissions by the
plugin, which by default the batch processor submits data every `5` seconds or
when the data in the queue reaches `1000`. For information or custom batch
processor parameter settings, see
[Batch-Processor](../batch-processor.md#configuration) configuration section.
-
-### examples of meta_format
-
-- **default**:
-
-```json
- {
- "upstream": "127.0.0.1:1980",
- "start_time": 1619414294760,
- "client_ip": "127.0.0.1",
- "service_id": "",
- "route_id": "1",
- "request": {
- "querystring": {
- "ab": "cd"
+| Name                   | Type    | Required | Default           | Valid values          | Description |
+|------------------------|---------|----------|-------------------|-----------------------|-------------|
+| nameserver_list        | object  | True     |                   |                       | List of RocketMQ nameservers. |
+| topic                  | string  | True     |                   |                       | Target topic to push the data to. |
+| key                    | string  | False    |                   |                       | Key of the messages. |
+| tag                    | string  | False    |                   |                       | Tag of the messages. |
+| timeout                | integer | False    | 3                 | [1,...]               | Timeout for the upstream to send data. |
+| use_tls                | boolean | False    | false             |                       | When set to `true`, uses TLS. See the sketch after this table. |
+| access_key             | string  | False    | ""                |                       | Access key for ACL. Setting to an empty string will disable the ACL. |
+| secret_key             | string  | False    | ""                |                       | Secret key for ACL. |
+| name                   | string  | False    | "rocketmq logger" |                       | Unique identifier for the batch processor. |
+| meta_format            | enum    | False    | "default"         | ["default","origin"]  | Format to collect the request information. Setting to `default` collects the information in JSON format and `origin` collects the information with the original HTTP request. See [examples](#meta_format-example) below. |
+| include_req_body       | boolean | False    | false             | [false, true]         | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
+| include_req_body_expr  | array   | False    |                   |                       | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
+| include_resp_body      | boolean | False    | false             | [false, true]         | When set to `true` includes the response body in the log. |
+| include_resp_body_expr | array   | False    |                   |                       | Filter for when the `include_resp_body` attribute is set to `true`. Response body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
+This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or to set a custom configuration.
+
+:::info IMPORTANT
+
+The data is first written to a buffer. When the buffer exceeds the
`batch_max_size` or `buffer_duration` attribute, the data is sent to the
RocketMQ server and the buffer is flushed.
+
+If the process is successful, it will return `true`. If it fails, it returns `nil` along with a string describing the "buffer overflow" error.
+
+:::
+
+### meta_format example
+
+- `default`:
+
+ ```json
+ {
+ "upstream": "127.0.0.1:1980",
+ "start_time": 1619414294760,
+ "client_ip": "127.0.0.1",
+ "service_id": "",
+ "route_id": "1",
+ "request": {
+ "querystring": {
+ "ab": "cd"
+ },
+ "size": 90,
+ "uri": "/hello?ab=cd",
+ "url": "http://localhost:1984/hello?ab=cd",
+ "headers": {
+ "host": "localhost",
+ "content-length": "6",
+ "connection": "close"
+ },
+ "body": "abcdef",
+ "method": "GET"
},
- "size": 90,
- "uri": "/hello?ab=cd",
- "url": "http://localhost:1984/hello?ab=cd",
- "headers": {
- "host": "localhost",
- "content-length": "6",
- "connection": "close"
+ "response": {
+ "headers": {
+ "connection": "close",
+ "content-type": "text/plain; charset=utf-8",
+ "date": "Mon, 26 Apr 2021 05:18:14 GMT",
+ "server": "APISIX/2.5",
+ "transfer-encoding": "chunked"
+ },
+ "size": 190,
+ "status": 200
},
- "body": "abcdef",
- "method": "GET"
- },
- "response": {
- "headers": {
- "connection": "close",
- "content-type": "text/plain; charset=utf-8",
- "date": "Mon, 26 Apr 2021 05:18:14 GMT",
- "server": "APISIX/2.5",
- "transfer-encoding": "chunked"
+ "server": {
+ "hostname": "localhost",
+ "version": "2.5"
},
- "size": 190,
- "status": 200
- },
- "server": {
- "hostname": "localhost",
- "version": "2.5"
- },
- "latency": 0
- }
-```
+ "latency": 0
+ }
+ ```
-- **origin**:
+- `origin`:
-```http
- GET /hello?ab=cd HTTP/1.1
- host: localhost
- content-length: 6
- connection: close
+ ```http
+ GET /hello?ab=cd HTTP/1.1
+ host: localhost
+ content-length: 6
+ connection: close
- abcdef
-```
+ abcdef
+ ```
-## Info
+## Metadata
-The `message` will write to the buffer first.
-It will send to the rocketmq server when the buffer exceed the
`batch_max_size`,
-or every `buffer_duration` flush the buffer.
+You can also set the format of the logs by configuring the Plugin metadata.
The following configurations are available:
-In case of success, returns `true`.
-In case of errors, returns `nil` with a string describing the error (`buffer
overflow`).
+| Name       | Type   | Required | Default | Description |
+| ---------- | ------ | -------- | ------- | ----------- |
+| log_format | object | False    | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
-### Sample Nameserver list
+:::info IMPORTANT
-Specify the nameservers of the external rocketmq servers as below sample.
+Configuring the Plugin metadata is global in scope. This means that it will
take effect on all Routes and Services which use the `rocketmq-logger` Plugin.
-```json
-[
- "127.0.0.1:9876",
- "127.0.0.2:9876"
-]
+:::
+
+The example below shows how you can configure it through the Admin API:
+
+```shell
+curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/rocketmq-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "log_format": {
+ "host": "$host",
+ "@timestamp": "$time_iso8601",
+ "client_ip": "$remote_addr"
+ }
+}'
+```
+
+With this configuration, your logs would be formatted as shown below:
+
+```shell
+{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
+{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```
-## How To Enable
+## Enabling the Plugin
-The following is an example on how to enable the rocketmq-logger for a
specific route.
+The example below shows how you can enable the `rocketmq-logger` Plugin on a
specific Route:
```shell
curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
@@ -152,52 +176,29 @@ curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f13
}'
```
-## Test Plugin
-
-success:
+This Plugin also supports pushing to more than one nameserver at a time. You can specify multiple nameservers in the Plugin configuration as shown below:
-```shell
-$ curl -i http://127.0.0.1:9080/hello
-HTTP/1.1 200 OK
-...
-hello, world
+```json
+"nameserver_list" : [
+ "127.0.0.1:9876",
+ "127.0.0.2:9876"
+]
```
-## Metadata
-
-| Name             | Type    | Requirement | Default       | Valid   | Description |
-| ---------------- | ------- | ----------- | ------------- | ------- | ----------- |
-| log_format       | object  | optional    | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} |         | Log format declared as key value pair in JSON format. Only string is supported in the `value` part. If the value starts with `$`, it means to get [APISIX variables](../apisix-variable.md) or [Nginx variable](http://nginx.org/en/docs/varindex.html). |
-
- Note that **the metadata configuration is applied in global scope**, which
means it will take effect on all Route or Service which use rocketmq-logger
plugin.
-
-### Example
+## Example usage
-```shell
-curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/rocketmq-logger -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
-{
- "log_format": {
- "host": "$host",
- "@timestamp": "$time_iso8601",
- "client_ip": "$remote_addr"
- }
-}'
-```
-
-It is expected to see some logs like that:
+Now, if you make a request to APISIX, it will be logged in your RocketMQ
server:
```shell
-{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
-{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
+curl -i http://127.0.0.1:9080/hello
```
## Disable Plugin
-Remove the corresponding json configuration in the plugin configuration to
disable the `rocketmq-logger`.
-APISIX plugins are hot-reloaded, therefore no need to restart APISIX.
+To disable the `rocketmq-logger` Plugin, you can delete the corresponding JSON
configuration from the Plugin configuration. APISIX will automatically reload
and you do not have to restart for this to take effect.
```shell
-$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
diff --git a/docs/en/latest/plugins/udp-logger.md b/docs/en/latest/plugins/udp-logger.md
index 8e6b09a7e..b94231f17 100644
--- a/docs/en/latest/plugins/udp-logger.md
+++ b/docs/en/latest/plugins/udp-logger.md
@@ -1,5 +1,11 @@
---
title: udp-logger
+keywords:
+ - APISIX
+ - API Gateway
+ - Plugin
+ - UDP Logger
+description: This document contains information about the Apache APISIX udp-logger Plugin.
---
<!--
@@ -23,30 +29,27 @@ title: udp-logger
## Description
-`udp-logger` is a plugin which push Log data requests to UDP servers.
+The `udp-logger` Plugin can be used to push log data requests to UDP servers.
-This will provide the ability to send Log data requests as JSON objects to
Monitoring tools and other UDP servers.
+This provides the ability to send log data requests as JSON objects to
monitoring tools and other UDP servers.
-This plugin provides the ability to push Log data as a batch to you're
external UDP servers. In case if you did not receive the log data don't worry
give it some time it will automatically send the logs after the timer function
expires in our Batch Processor.
-
-For more info on Batch-Processor in Apache APISIX please refer.
-[Batch-Processor](../batch-processor.md)
+This Plugin also allows you to push logs as a batch to your external UDP server. It might take some time to receive the log data. It will be automatically sent after the timer function in the [batch processor](../batch-processor.md) expires.
## Attributes
-| Name             | Type    | Requirement | Default      | Valid   | Description |
-| ---------------- | ------- | ----------- | ------------ | ------- | ----------- |
-| host             | string  | required    |              |         | IP address or the Hostname of the UDP server. |
-| port             | integer | required    |              | [0,...] | Target upstream port. |
-| timeout          | integer | optional    | 3            | [1,...] | Timeout for the upstream to send data. |
-| name             | string  | optional    | "udp logger" |         | A unique identifier to identity the batch processor |
-| include_req_body | boolean | optional    | false        |         | Whether to include the request body |
+| Name             | Type    | Required | Default      | Valid values | Description |
+|------------------|---------|----------|--------------|--------------|----------------------------------------------------------|
+| host             | string  | True     |              |              | IP address or the hostname of the UDP server. |
+| port             | integer | True     |              | [0,...]      | Target upstream port. |
+| timeout          | integer | False    | 3            | [1,...]      | Timeout for the upstream to send data. |
+| name             | string  | False    | "udp logger" |              | Unique identifier for the batch processor. |
+| include_req_body | boolean | False    | false        |              | When set to `true` includes the request body in the log. |
-The plugin supports the use of batch processors to aggregate and process
entries(logs/data) in a batch. This avoids frequent data submissions by the
plugin, which by default the batch processor submits data every `5` seconds or
when the data in the queue reaches `1000`. For information or custom batch
processor parameter settings, see
[Batch-Processor](../batch-processor.md#configuration) configuration section.
+This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or to set a custom configuration.
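For illustration, a minimal `udp-logger` configuration with a batch processor override might look like this sketch; the host, port, and `batch_max_size` values are placeholders:

```json
"udp-logger": {
    "host": "127.0.0.1",
    "port": 3000,
    "name": "udp logger",
    "batch_max_size": 1
}
```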
-## How To Enable
+## Enabling the Plugin
-The following is an example on how to enable the udp-logger for a specific
route.
+The example below shows how you can enable the Plugin on a specific Route:
```shell
curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
@@ -69,24 +72,20 @@ curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f13
}'
```
-## Test Plugin
+## Example usage
-* success:
+Now, if you make a request to APISIX, it will be logged in your UDP server:
```shell
-$ curl -i http://127.0.0.1:9080/hello
-HTTP/1.1 200 OK
-...
-hello, world
+curl -i http://127.0.0.1:9080/hello
```
## Disable Plugin
-Remove the corresponding json configuration in the plugin configuration to
disable the `udp-logger`.
-APISIX plugins are hot-reloaded, therefore no need to restart APISIX.
+To disable the `udp-logger` Plugin, you can delete the corresponding JSON
configuration from the Plugin configuration. APISIX will automatically reload
and you do not have to restart for this to take effect.
```shell
-$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",