ccxhwmy commented on code in PR #7643:
URL: https://github.com/apache/apisix/pull/7643#discussion_r951371097
##########
docs/en/latest/plugins/elasticsearch-logger.md:
##########
@@ -0,0 +1,206 @@

---
title: elasticsearch-logger
keywords:
  - APISIX
  - API Gateway
  - Plugin
  - Elasticsearch-logger
description: This document contains information about the Apache APISIX elasticsearch-logger Plugin.
---

<!--
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
-->

## Description

The `elasticsearch-logger` Plugin is used to forward logs to [Elasticsearch](https://www.elastic.co/guide/en/welcome-to-elastic/current/getting-started-general-purpose.html) for analysis and storage.

When the Plugin is enabled, APISIX will serialize the request context information to [Elasticsearch Bulk format](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#docs-bulk) and submit it to the batch queue. When the maximum batch size is exceeded, the data in the queue is pushed to Elasticsearch. See [batch processor](../batch-processor.md) for more details.
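To give a sense of what the Bulk format looks like, the sketch below shows a roughly equivalent request made by hand with `curl`. It is only an illustration, not what the Plugin literally executes: the `services` index and the document fields are placeholders borrowed from the configuration examples later in this document, and authentication is omitted for brevity.

```shell
# Each log entry becomes one action line plus one document line (newline-delimited JSON).
# Elasticsearch requires the Bulk body to end with a trailing newline.
curl http://127.0.0.1:9200/_bulk \
-H 'Content-Type: application/x-ndjson' \
-d '{"index": {"_index": "services"}}
{"host": "localhost", "@timestamp": "2020-09-23T19:05:05-04:00", "client_ip": "127.0.0.1", "route_id": "1"}
'
```

The actual document fields depend on the log format described in the Metadata section below.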
## Attributes

| Name          | Type    | Required | Default                     | Description |
| ------------- | ------- | -------- | --------------------------- | ----------- |
| endpoint_addr | string  | True     |                             | Elasticsearch API endpoint address, for example `http://127.0.0.1:9200`. |
| field         | array   | True     |                             | Elasticsearch `field` configuration. |
| field.index   | string  | True     |                             | Elasticsearch [_index field](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index-field.html#mapping-index-field). |
| field.type    | string  | False    | Elasticsearch default value | Elasticsearch [_type field](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/mapping-type-field.html#mapping-type-field). |
| auth          | array   | False    |                             | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) configuration. |
| auth.username | string  | True     |                             | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) username. |
| auth.password | string  | True     |                             | Elasticsearch [authentication](https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html) password. |
| ssl_verify    | boolean | False    | true                        | When set to `true`, enables SSL verification as per [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake). |
| timeout       | integer | False    | 10                          | Timeout in seconds for sending data to Elasticsearch. |

This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need to submit data frequently. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or to set your custom configuration.

## Enabling the Plugin

### Full configuration

The example below shows a complete configuration of the Plugin on a specific Route:

```shell
curl http://127.0.0.1:9080/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "plugins":{
        "elasticsearch-logger":{
            "endpoint_addr":"http://127.0.0.1:9200",
            "field":{
                "index":"services",
                "type":"collector"
            },
            "auth":{
                "username":"elastic",
                "password":"123456"
            },
            "ssl_verify":false,
            "timeout": 60,
            "retry_delay":1,
            "buffer_duration":60,
            "max_retry_count":0,
            "batch_max_size":1000,
            "inactive_timeout":5,
            "name":"elasticsearch-logger"
        }
    },
    "upstream":{
        "type":"roundrobin",
        "nodes":{
            "127.0.0.1:1980":1
        }
    },
    "uri":"/elasticsearch.do"
}'
```

### Minimal configuration example

The example below shows a bare minimum configuration of the Plugin on a Route:

```shell
curl http://127.0.0.1:9080/apisix/admin/routes/1 \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "plugins":{
        "elasticsearch-logger":{
            "endpoint_addr":"http://127.0.0.1:9200",
            "field":{
                "index":"services"
            }
        }
    },
    "upstream":{
        "type":"roundrobin",
        "nodes":{
            "127.0.0.1:1980":1
        }
    },
    "uri":"/elasticsearch.do"
}'
```

## Example usage

Once you have configured the Route to use the Plugin, requests made to APISIX will be logged to your Elasticsearch server:

```shell
curl -i http://127.0.0.1:9080/elasticsearch.do?q=hello
HTTP/1.1 200 OK
...
hello, world
```

You should be able to log in to Kibana and search these logs from the Discover view.
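If you would rather verify from the command line than through Kibana, you could also query the index directly. This is just a quick sanity check; it assumes the `services` index and the `elastic`/`123456` credentials from the full configuration example above, which may differ in your setup:

```shell
# Search the index the Plugin writes to and pretty-print the matching log documents.
curl "http://127.0.0.1:9200/services/_search?pretty" -u elastic:123456
```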
## Metadata

You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:

| Name       | Type   | Required | Default                                                                       | Description |
| ---------- | ------ | -------- | ----------------------------------------------------------------------------- | ----------- |
| log_format | object | False    | {"host": "$host", "@timestamp": "$time_iso8601", "client_ip": "$remote_addr"} | Log format declared as key-value pairs in JSON format. Values only support strings. [APISIX](https://github.com/apache/apisix/blob/master/docs/en/latest/apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |

:::info IMPORTANT

Configuring the Plugin metadata is global in scope. This means that it will take effect on all Routes and Services which use the `elasticsearch-logger` Plugin.

:::

The example below shows how you can configure through the Admin API:

```shell
curl http://127.0.0.1:9080/apisix/admin/plugin_metadata/elasticsearch-logger \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
    "log_format": {
        "host": "$host",
        "@timestamp": "$time_iso8601",
        "client_ip": "$remote_addr"
    }
}'
```

With this configuration, your logs would be formatted as shown below:

```shell
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T19:05:05-04:00","client_ip":"127.0.0.1","route_id":"1"}
```

Make a request to APISIX again:

```shell
curl -i http://127.0.0.1:9080/elasticsearch.do?q=hello
```

Review Comment:
   I get it, thank you.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
