This is an automated email from the ASF dual-hosted git repository.
xiaoyu pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/shenyu-website.git
The following commit(s) were added to refs/heads/main by this push:
new 18ea9b2d1e Logging Kafka plugin documentation (#684)
18ea9b2d1e is described below
commit 18ea9b2d1ef675852e5b5ae52ad40e9869e00d4d
Author: qifanyyy <[email protected]>
AuthorDate: Mon Aug 8 09:31:19 2022 -0400
Logging Kafka plugin documentation (#684)
* Logging Kafka plugin documentation
* update default group id
* Add three images
* file name change from jpg to png
* Update logging-kafka.md
* update config image
* file name changed
* Update logging-kafka.md
---
docs/plugin-center/observability/logging-kafka.md | 162 +++++++++++++++++++++
.../plugin-center/observability/logging-kafka.md | 156 ++++++++++++++++++++
.../plugin/logging/logging-kafka/log-rule-en.png | Bin 0 -> 456847 bytes
.../plugin/logging/logging-kafka/log-rule-zh.png | Bin 0 -> 469201 bytes
.../logging/logging-kafka/logging-config-cn.png | Bin 0 -> 437392 bytes
.../logging/logging-kafka/logging-config.png | Bin 0 -> 425535 bytes
.../logging/logging-kafka/logging-kafka-arch.jpg | Bin 0 -> 90279 bytes
.../logging/logging-kafka/logging-kafka-config.jpg | Bin 0 -> 73773 bytes
.../logging/logging-kafka/logging-option-topic.png | Bin 0 -> 488477 bytes
9 files changed, 318 insertions(+)
diff --git a/docs/plugin-center/observability/logging-kafka.md
b/docs/plugin-center/observability/logging-kafka.md
new file mode 100644
index 0000000000..9395eecf45
--- /dev/null
+++ b/docs/plugin-center/observability/logging-kafka.md
@@ -0,0 +1,162 @@
+---
+title: Logging-Kafka Plugin
+keywords: ["Logging", "kafka"]
+description: Logging-Kafka Plugin
+---
+
+# 1. Overview
+
+## 1.1 Plugin Name
+
+* Logging-Kafka Plugin
+
+## 1.2 Appropriate Scenario
+
+* Collect HTTP request logs into Kafka, consume the Kafka messages from another application, and analyze the logs.
+
+## 1.3 Plugin functionality
+
+>The `Apache ShenYu` gateway receives requests from clients, forwards them to servers, and returns the server results to the clients. The gateway can record the details of each request,
+> including: request time, request parameters, request path, response result, response status code, time consumed, upstream IP, exception information, and so on.
+> The Logging-Kafka plugin records these access logs and sends them to a Kafka cluster.
+
+## 1.4 Plugin code
+
+* Core Module `shenyu-plugin-logging-kafka`.
+
+* Core Class `org.apache.shenyu.plugin.logging.kafka.LoggingKafkaPlugin`
+* Core Class `org.apache.shenyu.plugin.logging.kafka.client.KafkaLogCollectClient`
+
+## 1.5 Added Since Which shenyu version
+
+* Since ShenYu 2.5.0
+
+## 1.6 Technical Solutions
+
+* Architecture Diagram
+
+
+
+* Fully asynchronous collection and delivery of `Logging` inside the `Apache ShenYu` gateway
+
+* The logging platform consumes the logs from the `Kafka` cluster for storage, then uses `Grafana`, `Kibana`, or another visualization platform for display
+
+
+# 2. How to Use the Plugin
+
+## 2.1 Plugin-use procedure chart
+
+
+
+## 2.2 Import pom
+
+* Add the `Logging-Kafka` dependency to the gateway's `pom.xml` file.
+
+```xml
+<!--shenyu logging-kafka plugin start-->
+<dependency>
+ <groupId>org.apache.shenyu</groupId>
+ <artifactId>shenyu-spring-boot-starter-plugin-logging-kafka</artifactId>
+ <version>${project.version}</version>
+</dependency>
+<!--shenyu logging-kafka plugin end-->
+```
+
+## 2.3 Enable plugin
+
+* In `shenyu-admin` --> Basic Configuration --> Plugin Management --> `loggingKafka`, configure the Kafka parameters and set the plugin to enabled.
+
+## 2.4 Config plugin
+
+### 2.4.1 Enable the plugin and configure Kafka as follows
+
+
+
+* The individual configuration items are described as follows:
+
+| config-item     | type    | description                                                                      | remarks                              |
+|:----------------|:--------|:---------------------------------------------------------------------------------|:--------------------------------------|
+| topic           | String  | message queue topic                                                               | required                               |
+| namesrvAddr     | String  | message queue nameserver address                                                  | required                               |
+| sampleRate      | String  | sampling rate, range 0~1, 0: off, 0.01: sample 1%, 1: sample 100%                 | optional, default 1 (collect all)      |
+| compressAlg     | String  | compression algorithm, no compression by default, currently supports LZ4          | optional, no compression by default    |
+| maxResponseBody | Integer | maximum response body size; responses above the threshold are not collected       | optional, default 512KB                |
+| maxRequestBody  | Integer | maximum request body size; request bodies above the threshold are not collected   | optional, default 512KB                |
+
+Except for `topic` and `namesrvAddr`, all other items are optional; in most cases only these two items need to be configured. The default group id is "shenyu-access-logging".
+
+### 2.4.2 Configuring Selectors and Rules
+
+* For detailed configuration of selectors and rules, please refer to: [Selector and rule management](../../user-guide/admin-usage/selector-and-rule).
+
+In addition, a large gateway cluster sometimes serves multiple applications (businesses), and different applications (businesses) use different topics for isolation.
+In that case you can configure a different topic (optional) and sampling rate (optional) per selector; the configuration items have the same meaning as in the table above.
+The operation is shown below:
+
+
+## 2.5 Logging Info
+
+The collected request information is as follows:
+
+| Field Name            | Meaning                                                                                                   | Description                                              | Remarks |
+|:----------------------|:------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|:--------|
+| clientIp              | client IP                                                                                                    |                                                           |         |
+| timeLocal             | request time string, format: yyyy-MM-dd HH:mm:ss.SSS                                                         |                                                           |         |
+| method                | request method (differs by rpc type: for http it is get, post, etc.; for rpc it is the interface name)       |                                                           |         |
+| requestHeader         | request header (json format)                                                                                 |                                                           |         |
+| responseHeader        | response header (json format)                                                                                |                                                           |         |
+| queryParams           | request query parameters                                                                                     |                                                           |         |
+| requestBody           | request body (bodies of binary type are not collected)                                                       |                                                           |         |
+| requestUri            | request uri                                                                                                  |                                                           |         |
+| responseBody          | response body                                                                                                |                                                           |         |
+| responseContentLength | response body size                                                                                           |                                                           |         |
+| rpcType               | rpc type                                                                                                     |                                                           |         |
+| status                | response status code                                                                                         |                                                           |         |
+| upstreamIp            | upstream (the program providing the service) IP                                                              |                                                           |         |
+| upstreamResponseTime  | time taken by the upstream (the program providing the service) to respond to the request (ms)                |                                                           |         |
+| userAgent             | requested user agent                                                                                         |                                                           |         |
+| host                  | requested host                                                                                               |                                                           |         |
+| module                | requested module                                                                                             |                                                           |         |
+| path                  | requested path                                                                                               |                                                           |         |
+| traceId               | trace ID of the request                                                                                      | requires a tracing plugin such as skywalking or zipkin   |         |
+
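+For illustration, a single access-log record published to Kafka might look like the sketch below; all field values are made-up examples and only a subset of the fields in the table above is shown:
+
+```json
+{
+  "clientIp": "192.168.1.10",
+  "timeLocal": "2022-08-08 09:31:19.123",
+  "method": "get",
+  "requestUri": "/http/order/findById?id=100",
+  "queryParams": "id=100",
+  "requestHeader": "{\"host\":\"localhost:9195\"}",
+  "responseHeader": "{\"Content-Type\":\"application/json\"}",
+  "responseContentLength": 72,
+  "rpcType": "http",
+  "status": 200,
+  "upstreamIp": "192.168.1.20",
+  "upstreamResponseTime": 15,
+  "userAgent": "curl/7.79.1",
+  "host": "localhost:9195",
+  "path": "/http/order/findById"
+}
+```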
+
+## 2.6 Examples
+
+### 2.6.1 Collect HTTP Logs via Kafka
+
+#### 2.6.1.1 Plugin Configuration
+
+Enable the plugin and configure Kafka as follows:
+
+
+
+#### 2.6.1.2 Selector Configuration
+
+For detailed configuration of selectors and rules, please refer to: [Selector and rule management](../../user-guide/admin-usage/selector-and-rule).
+
+In addition, a large gateway cluster sometimes serves multiple applications (businesses), and different applications (businesses) use different topics for isolation.
+In that case you can configure a different topic (optional) and sampling rate (optional) per selector; the configuration items have the same meaning as in the table above.
+The operation is shown below:
+
+
+#### 2.6.1.3 Rule Configuration
+
+
+
+#### 2.6.1.4 Request Service
+
+
+
+#### 2.6.1.5 Consumption and display of Logging
+
+Each logging platform is different: for storage, ClickHouse, ElasticSearch, etc. are available; for visualization, there are self-developed tools or open-source ones such as Grafana and Kibana.
+The Logging-Kafka plugin uses Kafka to decouple log production from consumption and outputs the logs in JSON format;
+consumption and visualization are left to users to implement with the technology stack that fits their own situation.
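+
+As a minimal sketch (not part of the plugin itself), a plain Kafka consumer in Java can read the JSON log records; the bootstrap server address and topic name below are assumptions and must match your Kafka cluster and plugin configuration:
+
+```java
+import java.time.Duration;
+import java.util.Collections;
+import java.util.Properties;
+
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+public class AccessLogConsumer {
+
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        // assumption: Kafka broker reachable on localhost:9092
+        props.put("bootstrap.servers", "localhost:9092");
+        // default group id mentioned in the configuration notes above
+        props.put("group.id", "shenyu-access-logging");
+        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
+            // assumption: the topic configured in the plugin
+            consumer.subscribe(Collections.singletonList("shenyu-access-logging"));
+            while (true) {
+                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
+                for (ConsumerRecord<String, String> record : records) {
+                    // each record value is one access-log entry in JSON format
+                    System.out.println(record.value());
+                }
+            }
+        }
+    }
+}
+```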
+
+# 3. How to Disable the Plugin
+
+- In `shenyu-admin` --> Basic Configuration --> Plugin Management --> `loggingKafka`, set the status to disabled.
+
+
+
diff --git
a/i18n/zh/docusaurus-plugin-content-docs/current/plugin-center/observability/logging-kafka.md
b/i18n/zh/docusaurus-plugin-content-docs/current/plugin-center/observability/logging-kafka.md
new file mode 100644
index 0000000000..2ca98cde3b
--- /dev/null
+++
b/i18n/zh/docusaurus-plugin-content-docs/current/plugin-center/observability/logging-kafka.md
@@ -0,0 +1,156 @@
+---
+title: Logging-Kafka Plugin
+keywords: ["Logging", "Kafka"]
+description: Logging-Kafka Plugin
+---
+
+# 1. Overview
+
+## 1.1 Plugin Name
+
+* Logging-Kafka Plugin
+
+## 1.2 Appropriate Scenario
+
+* Collect the gateway's HTTP request logs via Kafka, consume the Kafka messages from another application, and analyze the logs.
+
+## 1.3 Plugin functionality
+
+>The `Apache ShenYu` gateway receives requests from clients, forwards them to servers, and returns the server results to the clients. The gateway can record the details of each request,
+> including: request time, request parameters, request path, response result, response status code, time consumed, upstream IP, exception information, and so on.
+> The Logging-Kafka plugin records these access logs and sends them to a Kafka cluster.
+
+## 1.4 Plugin code
+
+* Core Module `shenyu-plugin-logging-kafka`.
+
+* Core Class `org.apache.shenyu.plugin.logging.kafka.LoggingKafkaPlugin`
+* Core Class `org.apache.shenyu.plugin.logging.kafka.client.KafkaLogCollectClient`
+
+## 1.5 Added Since Which shenyu version
+
+* ShenYu 2.5.0
+
+## 1.6 Technical Solutions
+
+* Architecture Diagram
+  
+
+* Fully asynchronous collection and delivery of `Logging` inside the `Apache ShenYu` gateway
+
+* The logging platform consumes the logs from the `Kafka` cluster for storage
+
+# 2. How to Use the Plugin
+
+## 2.1 Plugin-use procedure chart
+
+
+
+## 2.2 Import pom
+
+* Add the `Logging-Kafka` dependency to the gateway's `pom.xml` file.
+
+```xml
+ <!--shenyu logging-kafka plugin start-->
+<dependency>
+ <groupId>org.apache.shenyu</groupId>
+ <artifactId>shenyu-spring-boot-starter-plugin-logging-kafka</artifactId>
+ <version>${project.version}</version>
+</dependency>
+<!--shenyu logging-kafka plugin end-->
+```
+
+## 2.3 Enable plugin
+
+* In `shenyu-admin` --> Basic Configuration --> Plugin Management --> `loggingKafka`, configure the Kafka parameters and set the plugin to enabled.
+
+## 2.4 Config plugin
+
+### 2.4.1 Enable the plugin and configure Kafka as follows
+
+
+
+* The individual configuration items are described as follows:
+
+| config-item     | type    | description                                                                      | remarks                              |
+|:----------------|:--------|:---------------------------------------------------------------------------------|:--------------------------------------|
+| topic           | String  | message queue topic                                                               | required                               |
+| namesrvAddr     | String  | message queue nameserver address                                                  | required                               |
+| sampleRate      | String  | sampling rate, range 0~1, 0: off, 0.01: sample 1%, 1: sample 100%                 | optional, default 1 (collect all)      |
+| compressAlg     | String  | compression algorithm, no compression by default, currently supports LZ4          | optional, no compression by default    |
+| maxResponseBody | Integer | maximum response body size; responses above the threshold are not collected       | optional, default 512KB                |
+| maxRequestBody  | Integer | maximum request body size; request bodies above the threshold are not collected   | optional, default 512KB                |
+
+*Except for `topic` and `namesrvAddr`, all other items are optional*; in most cases only these two items need to be configured. The default group id is "shenyu-access-logging".
+
+### 2.4.2 Configuring Selectors and Rules
+
+* For detailed configuration of selectors and rules, please refer to: [Selector and rule management](../../user-guide/admin-usage/selector-and-rule).
+
+In addition, a large gateway cluster sometimes serves multiple applications (businesses), and different applications (businesses) use different topics for isolation. In that case you can configure a different topic (optional) and sampling rate (optional) per selector; the configuration items have the same meaning as in the table above.
+The operation is shown below:
+
+
+
+## 2.5 Logging Info
+
+The collected access-log fields are as follows:
+
+| Field Name            | Meaning                                                                                                   | Description                                              | Remarks |
+|:----------------------|:------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------|:--------|
+| clientIp              | client IP                                                                                                    |                                                           |         |
+| timeLocal             | request time string, format: yyyy-MM-dd HH:mm:ss.SSS                                                         |                                                           |         |
+| method                | request method (differs by rpc type: for http it is get, post, etc.; for rpc it is the interface name)       |                                                           |         |
+| requestHeader         | request header (json format)                                                                                 |                                                           |         |
+| responseHeader        | response header (json format)                                                                                |                                                           |         |
+| queryParams           | request query parameters                                                                                     |                                                           |         |
+| requestBody           | request body (bodies of binary type are not collected)                                                       |                                                           |         |
+| requestUri            | request uri                                                                                                  |                                                           |         |
+| responseBody          | response body                                                                                                |                                                           |         |
+| responseContentLength | response body size                                                                                           |                                                           |         |
+| rpcType               | rpc type                                                                                                     |                                                           |         |
+| status                | response status code                                                                                         |                                                           |         |
+| upstreamIp            | upstream (the program providing the service) IP                                                              |                                                           |         |
+| upstreamResponseTime  | time taken by the upstream (the program providing the service) to respond to the request (ms)                |                                                           |         |
+| userAgent             | requested user agent                                                                                         |                                                           |         |
+| host                  | requested host                                                                                               |                                                           |         |
+| module                | requested module                                                                                             |                                                           |         |
+| path                  | requested path                                                                                               |                                                           |         |
+| traceId               | trace ID of the request                                                                                      | requires a tracing plugin such as skywalking or zipkin   |         |
+
+## 2.6 Examples
+
+### 2.6.1 Collect Request Logs via Kafka
+
+#### 2.6.1.1 Plugin Configuration
+
+Enable the plugin and configure Kafka as follows:
+
+
+
+#### 2.6.1.2 Selector Configuration
+
+* For detailed configuration of selectors and rules, please refer to: [Selector and rule management](../../user-guide/admin-usage/selector-and-rule).
+
+In addition, a large gateway cluster sometimes serves multiple applications (businesses), and different applications (businesses) use different topics for isolation. In that case you can configure a different topic (optional) and sampling rate (optional) per selector; the configuration items have the same meaning as in the table above.
+The operation is shown below:
+
+
+#### 2.6.1.3 Rule Configuration
+
+
+
+#### 2.6.1.4 Request Service
+
+
+
+#### 2.6.1.5 Consumption and display of Logging
+
+Each logging platform is different: for storage, ClickHouse, ElasticSearch, etc. are available; for visualization, there are self-developed tools or open-source ones such as Grafana and Kibana.
+The Logging-Kafka plugin uses Kafka to decouple log production from consumption and outputs the logs in JSON format; consumption and visualization are left to users to implement with the technology stack that fits their own situation.
+
+# 3. How to Disable the Plugin
+
+- In `shenyu-admin` --> Basic Configuration --> Plugin Management --> `loggingKafka`, set the status to disabled.
+
+
diff --git a/static/img/shenyu/plugin/logging/logging-kafka/log-rule-en.png
b/static/img/shenyu/plugin/logging/logging-kafka/log-rule-en.png
new file mode 100644
index 0000000000..2da6f9e098
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/log-rule-en.png differ
diff --git a/static/img/shenyu/plugin/logging/logging-kafka/log-rule-zh.png
b/static/img/shenyu/plugin/logging/logging-kafka/log-rule-zh.png
new file mode 100644
index 0000000000..98b2d62e8f
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/log-rule-zh.png differ
diff --git
a/static/img/shenyu/plugin/logging/logging-kafka/logging-config-cn.png
b/static/img/shenyu/plugin/logging/logging-kafka/logging-config-cn.png
new file mode 100644
index 0000000000..ec7de5b10a
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/logging-config-cn.png differ
diff --git a/static/img/shenyu/plugin/logging/logging-kafka/logging-config.png
b/static/img/shenyu/plugin/logging/logging-kafka/logging-config.png
new file mode 100644
index 0000000000..478450a4fb
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/logging-config.png differ
diff --git
a/static/img/shenyu/plugin/logging/logging-kafka/logging-kafka-arch.jpg
b/static/img/shenyu/plugin/logging/logging-kafka/logging-kafka-arch.jpg
new file mode 100644
index 0000000000..ab2f7b128b
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/logging-kafka-arch.jpg differ
diff --git
a/static/img/shenyu/plugin/logging/logging-kafka/logging-kafka-config.jpg
b/static/img/shenyu/plugin/logging/logging-kafka/logging-kafka-config.jpg
new file mode 100644
index 0000000000..d2a8a30839
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/logging-kafka-config.jpg differ
diff --git
a/static/img/shenyu/plugin/logging/logging-kafka/logging-option-topic.png
b/static/img/shenyu/plugin/logging/logging-kafka/logging-option-topic.png
new file mode 100644
index 0000000000..85c20d9cb1
Binary files /dev/null and
b/static/img/shenyu/plugin/logging/logging-kafka/logging-option-topic.png differ