This is an automated email from the ASF dual-hosted git repository.
dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git
The following commit(s) were added to refs/heads/master by this push:
new bfaf3e916d4 [INLONG-1057][Doc] Add documentation for newly introduced source and sink metrics in inlong-sort (#1059)
bfaf3e916d4 is described below
commit bfaf3e916d44d12b6547aa7d79b843362ea09a23
Author: PeterZh6 <[email protected]>
AuthorDate: Thu Oct 17 17:12:42 2024 +0800
[INLONG-1057][Doc] Add documentation for newly introduced source and sink metrics in inlong-sort (#1059)
---
docs/modules/sort/metrics.md | 19 ++++++++++++++++---
.../current/modules/sort/metrics.md | 20 +++++++++++++++++---
2 files changed, 33 insertions(+), 6 deletions(-)
diff --git a/docs/modules/sort/metrics.md b/docs/modules/sort/metrics.md
index 6807fda9787..b7071c7eb14 100644
--- a/docs/modules/sort/metrics.md
+++ b/docs/modules/sort/metrics.md
@@ -38,8 +38,14 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | input bytes number per second |
| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | input bytes number per second |
| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | input bytes number per second |
-
-### supporting load node
+| groupId_streamId_nodeId_database_collection_numSnapshotCreate | postgresql-cdc,pulsar | checkpoint creation attempt number |
+| groupId_streamId_nodeId_database_collection_numSnapshotError | postgresql-cdc,pulsar | checkpoint creation exception number |
+| groupId_streamId_nodeId_database_collection_numSnapshotComplete | postgresql-cdc,pulsar | successful checkpoint creation number |
+| groupId_streamId_nodeId_database_collection_snapshotToCheckpointTimeLag | postgresql-cdc,pulsar | time lag from start to completion of checkpoint creation (ms) |
+| groupId_streamId_nodeId_database_collection_numDeserializeSuccess | postgresql-cdc,pulsar | successful deserialization number |
+| groupId_streamId_nodeId_database_collection_numDeserializeError | postgresql-cdc,pulsar | deserialization error number |
+| groupId_streamId_nodeId_database_collection_deserializeTimeLag | postgresql-cdc,pulsar | deserialization time lag (ms) |
+### Supporting load node
#### Node level metric
@@ -74,6 +80,13 @@ Sort will export metric by flink metric group, So user can use [metric reporter]
| groupId_streamId_nodeId_database_table_dirtyBytesOut | doris,iceberg,starRocks | out byte number |
| groupId_streamId_nodeId_database_schema_table_dirtyBytesOut | postgresql | out byte number |
| groupId_streamId_nodeId_topic_dirtyBytesOut | kafka | out byte number |
+| groupId_streamId_nodeId_numSerializeSuccess | starRocks | successful serialization number |
+| groupId_streamId_nodeId_numSerializeError | starRocks | serialization exception number |
+| groupId_streamId_nodeId_serializeTimeLag | starRocks | serialization time lag (ms) |
+| groupId_streamId_nodeId_numSnapshotCreate | starRocks | checkpoint creation attempt number |
+| groupId_streamId_nodeId_numSnapshotError | starRocks | checkpoint creation exception number |
+| groupId_streamId_nodeId_numSnapshotComplete | starRocks | successful checkpoint creation number |
+| groupId_streamId_nodeId_snapshotToCheckpointTimeLag | starRocks | time lag from start to completion of checkpoint creation (ms) |
## Usage
@@ -121,7 +134,7 @@ One example about sync mysql data to postgresql data. And We will introduce usag
FROM `table_groupId_streamId_nodeId1`;
```
-* We can add metric report in flink-conf.yaml
+* To report the metrics to an external system, we can add a metric reporter configuration in flink-conf.yaml. Take the `Prometheus` reporter as an example.
```yaml
metric.reporters: promgateway
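
For reference, the `promgateway` reporter named above is configured through a block of `metrics.reporter.promgateway.*` options in flink-conf.yaml. The sketch below follows Flink's PrometheusPushGatewayReporter options (exact key names vary slightly across Flink versions); the host, port, and job name are placeholder assumptions, not values from this commit.

```yaml
# Sketch of a Prometheus PushGateway reporter block for flink-conf.yaml.
# Host, port, and jobName are placeholders; adjust them to your environment.
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: localhost
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: inlong-sort-metrics
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
metrics.reporter.promgateway.interval: 60 SECONDS
```

With a reporter like this in place, the `groupId_streamId_nodeId_*` metrics listed in the tables above are pushed to the PushGateway and can then be scraped by Prometheus.
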
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md
index 82a35793358..61780607aad 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/metrics.md
@@ -36,6 +36,13 @@ sidebar_position: 4
| groupId_streamId_nodeId_database_table_numBytesInPerSecond | mysql-cdc | 每秒输入字节数 |
| groupId_streamId_nodeId_database_schema_table_numBytesInPerSecond | oracle-cdc,postgresql-cdc | 每秒输入字节数 |
| groupId_streamId_nodeId_database_collection_numBytesInPerSecond | mongodb-cdc | 每秒输入字节数 |
+| groupId_streamId_nodeId_database_collection_numSnapshotCreate | postgresql-cdc,pulsar | 尝试创建Checkpoint数 |
+| groupId_streamId_nodeId_database_collection_numSnapshotError | postgresql-cdc,pulsar | 创建Checkpoint异常数 |
+| groupId_streamId_nodeId_database_collection_numSnapshotComplete | postgresql-cdc,pulsar | 创建Checkpoint成功数 |
+| groupId_streamId_nodeId_database_collection_snapshotToCheckpointTimeLag | postgresql-cdc,pulsar | 从开始创建Checkpoint到完成创建延迟(毫秒) |
+| groupId_streamId_nodeId_database_collection_numDeserializeSuccess | postgresql-cdc,pulsar | 反序列化成功数 |
+| groupId_streamId_nodeId_database_collection_numDeserializeError | postgresql-cdc,pulsar | 反序列化异常数 |
+| groupId_streamId_nodeId_database_collection_deserializeTimeLag | postgresql-cdc,pulsar | 反序列化延迟(毫秒) |
### 支持的 load 节点
@@ -72,10 +79,17 @@ sidebar_position: 4
| groupId_streamId_nodeId_database_table_dirtyBytesOut | doris,iceberg,starRocks | 输出脏数据字节数据 |
| groupId_streamId_nodeId_database_schema_table_dirtyBytesOut | postgresql | 输出脏数据字节数据 |
| groupId_streamId_nodeId_topic_dirtyBytesOut | kafka | 输出脏数据字节数据 |
+| groupId_streamId_nodeId_numSerializeSuccess | starRocks | 序列化成功数 |
+| groupId_streamId_nodeId_numSerializeError | starRocks | 序列化异常数 |
+| groupId_streamId_nodeId_serializeTimeLag | starRocks | 序列化延迟(毫秒) |
+| groupId_streamId_nodeId_numSnapshotCreate | starRocks | 尝试创建Checkpoint数 |
+| groupId_streamId_nodeId_numSnapshotError | starRocks | 创建Checkpoint异常数 |
+| groupId_streamId_nodeId_numSnapshotComplete | starRocks | 创建Checkpoint成功数 |
+| groupId_streamId_nodeId_snapshotToCheckpointTimeLag | starRocks | 从开始创建Checkpoint到完成创建延迟(毫秒) |
## 用法
-这里将介绍一个同步MYSQL数据到PostgreSQL的例子,同时介绍指标的使用。
+这里将介绍一个同步 MySQL 数据到 PostgreSQL 的例子,同时介绍指标的使用。
* flink sql 的使用
```sql
@@ -108,7 +122,7 @@ sidebar_position: 4
'username' = 'postgres',
'password' = 'inlong',
'table-name' = 'public.user',
- 'inlong.metric' = 'pggroup&pgStream&pgNode'
+    'inlong.metric.labels' = 'groupId=xxgroup&streamId=xxstream&nodeId=xxnode'
);
INSERT INTO `table_groupId_streamId_nodeId2`
@@ -119,7 +133,7 @@ sidebar_position: 4
FROM `table_groupId_streamId_nodeId1`;
```
-* 我们可以在flink-conf.yaml中添加metric report配置
+* 要将指标上报到外部系统,我们可以在 flink-conf.yaml 中添加 metric reporter 配置(以`Prometheus`为例)
```yaml
metric.reporters: promgateway
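
As a usage note on the option rename shown in the last hunks: the sink table simply passes `inlong.metric.labels` in its `WITH` clause. The sketch below is hypothetical (the connector, URL, and column list are assumptions for illustration); only the `groupId=...&streamId=...&nodeId=...` label format and the table options quoted in the diff are taken from the change.

```sql
-- Hypothetical sink DDL illustrating the renamed 'inlong.metric.labels' option.
-- Connector, URL, and columns are placeholders; the label string follows the diff.
CREATE TABLE `table_groupId_streamId_nodeId2` (
    `id` INT,
    `name` STRING,
    PRIMARY KEY (`id`) NOT ENFORCED
) WITH (
    'connector' = 'jdbc-inlong',
    'url' = 'jdbc:postgresql://localhost:5432/postgres',
    'table-name' = 'public.user',
    'username' = 'postgres',
    'password' = 'inlong',
    'inlong.metric.labels' = 'groupId=xxgroup&streamId=xxstream&nodeId=xxnode'
);
```

The metrics emitted by this node then carry the configured group, stream, and node labels, which is how the `groupId_streamId_nodeId_*` names in the tables above are formed.
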