This is an automated email from the ASF dual-hosted git repository.
dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git
The following commit(s) were added to refs/heads/master by this push:
new 7f62c6aa25 [INLONG-940][Release] Add blog for the 1.12.0 release (#953)
7f62c6aa25 is described below
commit 7f62c6aa25bf5195a5a8af0c3b568671e8d18906
Author: Mingyu Bao <[email protected]>
AuthorDate: Sun May 12 20:41:54 2024 +0800
[INLONG-940][Release] Add blog for the 1.12.0 release (#953)
---
blog/2024-04-21-release-1.12.0.md | 141 +++++++++++++++++++++
blog/img/1.12.0-agent-kafka.png | Bin 0 -> 118792 bytes
blog/img/1.12.0-agent-mongodb.png | Bin 0 -> 140774 bytes
blog/img/1.12.0-agent-package.png | Bin 0 -> 173943 bytes
blog/img/1.12.0-agent-pulsar.png | Bin 0 -> 188864 bytes
blog/img/1.12.0-agent-upgrade.png | Bin 0 -> 301053 bytes
blog/img/1.12.0-audit-checkpoint.png | Bin 0 -> 348209 bytes
blog/img/1.12.0-audit-process.png | Bin 0 -> 462457 bytes
blog/img/1.12.0-audit-recovery.png | Bin 0 -> 229579 bytes
blog/img/1.12.0-redis-connector.png | Bin 0 -> 680701 bytes
.../2024-04-21-release-1.12.0.md | 133 +++++++++++++++++++
.../img/1.12.0-agent-kafka.png | Bin 0 -> 315327 bytes
.../img/1.12.0-agent-mongodb.png | Bin 0 -> 372839 bytes
.../img/1.12.0-agent-package.png | Bin 0 -> 343710 bytes
.../img/1.12.0-agent-pulsar.png | Bin 0 -> 478189 bytes
.../img/1.12.0-agent-upgrade.png | Bin 0 -> 301053 bytes
.../img/1.12.0-audit-checkpoint.png | Bin 0 -> 348209 bytes
.../img/1.12.0-audit-process.png | Bin 0 -> 462457 bytes
.../img/1.12.0-audit-recovery.png | Bin 0 -> 229579 bytes
.../img/1.12.0-redis-connector.png | Bin 0 -> 680701 bytes
20 files changed, 274 insertions(+)
diff --git a/blog/2024-04-21-release-1.12.0.md
b/blog/2024-04-21-release-1.12.0.md
new file mode 100644
index 0000000000..36abc33ba9
--- /dev/null
+++ b/blog/2024-04-21-release-1.12.0.md
@@ -0,0 +1,141 @@
+---
+title: Release 1.12.0
+author: Mingyu Bao
+author_url: https://github.com/baomingyu
+author_image_url: https://avatars.githubusercontent.com/u/8108604?s=400&v=4
+tags: [Apache InLong, Version]
+---
+
+Apache InLong recently released version 1.12.0, which closed 140+ issues, including 7+ major features and 90+ optimizations. The main features include Manager support for Agent installation package management and its self-upgrade process, Agent self-upgrade, Agent collection of data from Kafka, Pulsar, and MongoDB, a new Redis connector in the Sort module, and optimization and enhancement of the Audit capabilities. With the release of 1.12.0, Apache InLong has enriched and optimized the Agent scenarios, improved the accuracy of Audit data measurement, extended the capabilities and applicable scenarios of Sort, addressed the need for quick troubleshooting in development and operation, and improved the operation and maintenance experience of Apache InLong.
+<!--truncate-->
+
+## About Apache InLong
+
+As the industry's first one-stop, full-scenario, open-source massive data integration framework, Apache InLong provides automatic, safe, reliable, and high-performance data transmission capabilities to help businesses quickly build stream-based data analysis, modeling, and applications. At present, InLong is widely used in industries such as advertising, payment, social networking, games, and artificial intelligence, serving thousands of businesses, among which the data scale exceeds 100 trillion records per day in high-performance scenarios and 10 trillion records per day in high-reliability scenarios.
+
+The core keywords of the InLong project positioning are "one-stop", "full-scenario", and "massive data". For "one-stop", we hope to shield technical details, provide complete data integration and supporting services, and deliver an out-of-the-box experience; for "full-scenario", we hope to provide a complete solution covering the common data integration scenarios in the big data field; for "massive data", we hope to stably support ever larger data volumes, beyond 100 trillion records per day, through architectural advantages such as layered data links, fully extensible components, and built-in multi-cluster management.
+
+## 1.12.0 Version Overview
+
+Apache InLong recently released version 1.12.0, which closed 140+ issues, including 7+ major features and 90+ optimizations. The main features include Manager support for Agent installation package management and its self-upgrade process, Agent self-upgrade, Agent collection of data from Kafka, Pulsar, and MongoDB, a new Redis connector in the Sort module, and optimization and enhancement of the Audit capabilities. With the release of 1.12.0, Apache InLong has enriched and optimized the Agent scenarios, improved the accuracy of Audit data measurement, extended the capabilities and applicable scenarios of Sort, addressed the need for quick troubleshooting in development and operation, and improved the operation and maintenance experience of Apache InLong. In version 1.12.0, a large number of other features have also been completed, mainly including:
+
+### Agent Module
+- Support Agent self-upgrade
+- Optimize initialization logic to reduce IO usage
+- Optimize message acknowledgment logic to reduce semaphore competition
+- Add audit metrics for send exceptions and resends
+- Optimize message recovery logic to avoid data loss caused by too many supplementary files
+
+### Manager Module
+- Add an installer module for managing Agent installation packages
+- Support parsing specific field information based on data types such as CSV while previewing data
+- Support multi-cluster Pulsar queries while previewing data
+- Support returning headers and specific field information while previewing data
+- Support adding data and tasks for file collection
+- Switch audit data queries from JDBC to the Audit OpenAPI
+- Support setting the compression type in the Pulsar DataNode
+- Provide an OpenAPI for batch saving InLongGroup, InLongStream, and other information
+- Support configuring Kafka data nodes
+
+### Dashboard Module
+- Optimize audit data query
+- Optimize audit data display
+- Support for underscore "_" in Sink field mapping
+- Support paginated display of resource details
+- Support MongoDB data source configuration
+
+### Audit Module
+- Support user-defined ways to obtain Audit proxy information
+- Audit SDK supports reporting version numbers
+- Audit SDK supports both singleton and non-singleton usage
+- Audit SDK supports data reporting under the Flink Checkpoint feature
+- Audit Service supports HA (High Availability) capabilities
+- Audit Service supports local caching and an OpenAPI
+- Audit Service supports multiple data sources
+
+### Sort Module
+- Support using a state key during StarRocks connector initialization
+- Support parsing KV and CSV data containing delimiter characters
+- Use ZLIB as the default compression type for Pulsar Sink
+- Pulsar Connector supports authentication configuration
+- Pulsar Sink supports authentication configuration
+- Redis Source supports String, Hash, and ZSet data types
+- Redis Sink supports Bitmap, Hash, and String data types
+
+## 1.12.0 Version Feature Introduction
+
+### Manager supports Agent installation package management and its self-upgrade process
+In version 1.12.0, operators can manage Agent installation packages through the Dashboard, including Agent installation, upgrade, heartbeat management, etc. Users can create and manage installation packages on the System Operation -> Installation Packages -> Agent page. Thanks to @haifxu and @fuweng11. For more information, please refer to INLONG-9932.
+
+
+### Agent supports self-upgrade
+Agents can perform self-upgrades through a pre-deployed Installer. The Installer obtains the upgrade configuration from InLong Manager via IP and decides whether to proceed with the upgrade based on the configuration. The main flows include:
+- ADD: Download the installation package -> Unzip the installation package -> Start the process
+- DELETE: Stop the process -> Delete the installation files
+- UPDATE: Download the installation package -> Stop the process -> Delete the installation files -> Unzip the installation package -> Start the process
+Thanks to @justinwwhuang. For more information, please refer to INLONG-9801.
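The three flows above can be sketched as a small decision function. This is an illustrative sketch, not InLong's actual installer code; all names and the version-comparison rule are assumptions.

```python
# Hypothetical sketch: map the configuration fetched from the Manager
# (installed version vs. target version) onto the ADD/DELETE/UPDATE flows.
ADD = ["download package", "unzip package", "start process"]
DELETE = ["stop process", "delete installation files"]
UPDATE = ["download package", "stop process", "delete installation files",
          "unzip package", "start process"]

def plan_steps(installed_version, target_version):
    """Return the ordered steps the installer should run."""
    if installed_version is None and target_version is not None:
        return ADD                      # nothing deployed yet
    if target_version is None and installed_version is not None:
        return DELETE                   # config says remove the agent
    if installed_version != target_version:
        return UPDATE                   # versions differ: upgrade
    return []                           # already up to date
```

The key point the release notes make is that the Installer is driven entirely by the configuration it pulls from the Manager, so upgrades need no manual login to the agent machine.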
+
+
+### Agent supports collecting data from Kafka
+In version 1.12.0, the Agent supports collecting data from Kafka. When creating a data source, you can directly select Kafka and fill in the relevant data source information to start using it. The parameters include:
+- Data source name: Used to distinguish between different data sources
+- Cluster name: The cluster the data source belongs to
+- Data source IP: The IP of the data source machine
+- Bootstrap Servers: Kafka cluster address
+- Kafka topic: The Kafka topic to subscribe to
+- Automatic offset reset: The offset strategy
+- Partition offset: A specific partition offset can be set
+Thanks to @justinwwhuang. For more information, please refer to INLONG-9741.
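To make the parameter list concrete, here is a minimal sketch of how the form fields could map onto a Kafka-style consumer configuration. The function name and configuration keys are illustrative assumptions, not InLong's actual configuration schema.

```python
# Hypothetical mapping from the data-source form fields to consumer settings.
def build_kafka_source_conf(bootstrap_servers, topic,
                            offset_reset="earliest",
                            partition_offsets=None):
    conf = {
        "bootstrap.servers": bootstrap_servers,  # Kafka cluster address
        "topic": topic,                          # topic to collect from
        "auto.offset.reset": offset_reset,       # automatic offset reset strategy
    }
    if partition_offsets:
        # Explicit per-partition start offsets, e.g. {0: 1000, 1: 2048}
        conf["partition.offsets"] = dict(partition_offsets)
    return conf

conf = build_kafka_source_conf("broker1:9092,broker2:9092", "inlong_topic",
                               offset_reset="latest",
                               partition_offsets={0: 1000})
```

Note the two offset controls play different roles: the reset strategy applies when no committed offset exists, while the partition offset pins an exact starting point.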
+
+
+### Agent supports collecting data from Pulsar
+In version 1.12.0, the Agent supports collecting data from Pulsar. When creating a data source, you can directly select Pulsar and fill in the relevant data source information to start using it. The parameters include:
+- Data source name: Used to distinguish between different data sources
+- Cluster name: The cluster the data source belongs to
+- Data source IP: The IP of the data source machine
+- Pulsar tenant: Pulsar tenant
+- Pulsar namespace: Pulsar namespace
+- Pulsar topic: The Pulsar topic to subscribe to
+- Pulsar admin URL: Pulsar admin URL
+- Pulsar service URL: Pulsar service URL
+Thanks to @justinwwhuang. For more information, please refer to INLONG-9804.
+
+
+### Agent supports collecting data from MongoDB
+In version 1.12.0, the Agent supports collecting data from MongoDB. When creating a data source, you can directly select MongoDB and fill in the relevant data source information to start using it. The parameters include:
+- Data source name: Used to distinguish between different data sources
+- Cluster name: The cluster the data source belongs to
+- Data source IP: The IP of the data source machine
+- Server host: MongoDB address
+- Username: MongoDB username
+- Password: MongoDB password
+- Database name: MongoDB database name
+- Collection name: MongoDB collection name
+- Read mode: Optional "Full + Incremental" or "Incremental"
+Thanks to @justinwwhuang. For more information, please refer to INLONG-10006.
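The two read modes differ only in whether the existing collection is snapshotted before following changes. A minimal sketch, with plain Python lists standing in for a MongoDB collection and its change stream (the function and field names are illustrative, not InLong's implementation):

```python
# "Full + Incremental" snapshots existing documents first, then follows
# the change stream; "Incremental" only follows new changes.
def read(mode, snapshot_docs, change_events):
    if mode == "full+incremental":
        yield from snapshot_docs   # full phase: documents already present
        yield from change_events   # incremental phase: subsequent changes
    elif mode == "incremental":
        yield from change_events   # changes only, history is skipped
    else:
        raise ValueError(f"unknown read mode: {mode}")

docs = [{"_id": 1}, {"_id": 2}]    # pre-existing collection contents
events = [{"_id": 3}]              # changes arriving after collection starts
```

"Full + Incremental" therefore suits first-time synchronization of a collection, while "Incremental" suits tailing an already-synchronized one.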
+
+
+### Optimization and enhancement of Audit capabilities
+In version 1.12.0, InLong enhanced the Audit reconciliation scenarios, including support for Agent data supplementation, Sort on Flink Checkpoint, etc.
+Thanks to @doleyzi. For more information, please refer to INLONG-9904, INLONG-9926, INLONG-9928, and INLONG-9957.
+- Support OpenAPI capabilities
+In version 1.12.0, Audit added an OpenAPI, and the service nodes elect a leader through HA. The leader node performs real-time and retroactive aggregation of audit data and saves the aggregated results in the DB. The slave nodes cache the DB data in memory and serve external requests; the leader node provides the same service.
+
+- Support Agent data supplementation capabilities
+In version 1.12.0, Audit supports the Agent data supplementation scenario by adding an audit-version, which distinguishes the audit reconciliation of each supplementation run.
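The role of the audit-version can be sketched in a few lines: aggregating counts per (audit_id, audit_version) keeps each supplementation run's reconciliation separate from the original run. Record fields here are hypothetical, chosen only to illustrate the idea.

```python
from collections import defaultdict

def aggregate(records):
    """Sum audit counts keyed by (audit_id, audit_version)."""
    totals = defaultdict(int)
    for rec in records:
        totals[(rec["audit_id"], rec["audit_version"])] += rec["count"]
    return dict(totals)

records = [
    {"audit_id": "agent_read", "audit_version": 0, "count": 100},
    {"audit_id": "agent_read", "audit_version": 0, "count": 20},
    # a supplementation run re-sends data under a new audit-version,
    # so it does not inflate the original run's totals
    {"audit_id": "agent_read", "audit_version": 1, "count": 100},
]
totals = aggregate(records)
```

Without the version key, the re-sent 100 records would be added to the original run and reconciliation would report a false surplus.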
+
+- Support Sort Flink Checkpoint capabilities
+In version 1.12.0, Audit supports the Sort Flink checkpoint scenario, ensuring
that audit data is not lost or duplicated when the Flink job restarts or fails
over.
+
+
+### Support for Redis connector in Sort module
+In version 1.12.0, An additional Flink 1.15-based Redis connector
implementation has been added, supporting read and write operations for String,
Hash, ZSet, and Bitmap, four common data types in Redis clusters and standalone
instances. Schema conversion is supported in the Redis connector, allowing
users to specify a Schema that can be converted to different Redis data types.
The specific Schema conversion logic is shown in the following figure. In the
bitmap conversion logic in the fig [...]
+Thanks to @XiaoYou201. For more information, please refer to:
INLONG-9835、INLONG-8948 .
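The Bitmap mapping can be sketched as follows, assuming the field layout described in the release notes (field1 as the key, field2/field4 as bit positions, field3/field5 as bit values); the function name and tuple shape are illustrative, not the connector's actual API.

```python
# Turn one (field1..field5) row into the equivalent Redis SETBIT commands.
def row_to_setbit(row):
    key, idx_a, val_a, idx_b, val_b = row
    return [
        ("SETBIT", key, idx_a, val_a),  # field2 -> index, field3 -> value
        ("SETBIT", key, idx_b, val_b),  # field4 -> index, field5 -> value
    ]

cmds = row_to_setbit(("user:42", 3, 1, 7, 0))
```

Each row thus fans out into one SETBIT per (index, value) pair, which is why a five-field schema maps onto a single Bitmap key.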
+## Future plans
+In version 1.12.0, the community refactored InLong Agent and InLong Audit and enriched the Flink 1.15 Connectors, among other features. In subsequent versions, InLong will continue to enrich the Flink 1.15 Connectors, enhance Transform capabilities, support offline data integration, unify the DataProxy data protocol, and optimize the Dashboard experience. We look forward to more developers participating and contributing.
+
+
diff --git a/blog/img/1.12.0-agent-kafka.png b/blog/img/1.12.0-agent-kafka.png
new file mode 100644
index 0000000000..37fb4055f6
Binary files /dev/null and b/blog/img/1.12.0-agent-kafka.png differ
diff --git a/blog/img/1.12.0-agent-mongodb.png
b/blog/img/1.12.0-agent-mongodb.png
new file mode 100644
index 0000000000..7968899efc
Binary files /dev/null and b/blog/img/1.12.0-agent-mongodb.png differ
diff --git a/blog/img/1.12.0-agent-package.png
b/blog/img/1.12.0-agent-package.png
new file mode 100644
index 0000000000..55ac51341c
Binary files /dev/null and b/blog/img/1.12.0-agent-package.png differ
diff --git a/blog/img/1.12.0-agent-pulsar.png b/blog/img/1.12.0-agent-pulsar.png
new file mode 100644
index 0000000000..856b280c5b
Binary files /dev/null and b/blog/img/1.12.0-agent-pulsar.png differ
diff --git a/blog/img/1.12.0-agent-upgrade.png
b/blog/img/1.12.0-agent-upgrade.png
new file mode 100644
index 0000000000..58fe67dbec
Binary files /dev/null and b/blog/img/1.12.0-agent-upgrade.png differ
diff --git a/blog/img/1.12.0-audit-checkpoint.png
b/blog/img/1.12.0-audit-checkpoint.png
new file mode 100644
index 0000000000..ad8f03542a
Binary files /dev/null and b/blog/img/1.12.0-audit-checkpoint.png differ
diff --git a/blog/img/1.12.0-audit-process.png
b/blog/img/1.12.0-audit-process.png
new file mode 100644
index 0000000000..f2b397d7e6
Binary files /dev/null and b/blog/img/1.12.0-audit-process.png differ
diff --git a/blog/img/1.12.0-audit-recovery.png
b/blog/img/1.12.0-audit-recovery.png
new file mode 100644
index 0000000000..abdb3a82c1
Binary files /dev/null and b/blog/img/1.12.0-audit-recovery.png differ
diff --git a/blog/img/1.12.0-redis-connector.png
b/blog/img/1.12.0-redis-connector.png
new file mode 100644
index 0000000000..bb8a281497
Binary files /dev/null and b/blog/img/1.12.0-redis-connector.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/2024-04-21-release-1.12.0.md
b/i18n/zh-CN/docusaurus-plugin-content-blog/2024-04-21-release-1.12.0.md
new file mode 100644
index 0000000000..54a6cbdef6
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2024-04-21-release-1.12.0.md
@@ -0,0 +1,133 @@
+---
+title: Release 1.12.0
+author: Mingyu Bao
+author_url: https://github.com/baomingyu
+author_image_url: https://avatars.githubusercontent.com/u/8108604?s=400&v=4
+tags: [Apache InLong, Version]
+---
+
+Apache InLong recently released version 1.12.0, which closed 140+ issues, including 7+ major features and 90+ optimizations. The main features include Manager support for Agent installation package management and the self-upgrade process, Agent self-upgrade, Agent collection of data from Kafka/Pulsar/MongoDB, Audit optimization and enhancement, and a new Redis Connector in Sort. With the release of 1.12.0, Apache InLong has enriched and optimized the Agent scenarios, improved the accuracy of Audit data measurement, extended the capabilities and applicable scenarios of Sort, and improved the operation and maintenance experience of Apache InLong.
+<!--truncate-->
+
+## About Apache InLong
+As the industry's first one-stop, full-scenario, open-source massive data integration framework, Apache InLong provides automatic, safe, reliable, and high-performance data transmission capabilities to help businesses quickly build stream-based data analysis, modeling, and applications. At present, InLong is widely used in industries such as advertising, payment, social networking, games, and artificial intelligence, serving thousands of businesses, among which the data scale exceeds 100 trillion records per day in high-performance scenarios and 10 trillion records per day in high-reliability scenarios.
+
+The core keywords of the InLong project positioning are "one-stop", "full-scenario", and "massive data". For "one-stop", we hope to shield technical details, provide complete data integration and supporting services, and deliver an out-of-the-box experience; for "full-scenario", we hope to provide a complete solution covering the common data integration scenarios in the big data field; for "massive data", we hope to stably support ever larger data volumes, beyond 100 trillion records per day, through architectural advantages such as layered data links, fully extensible components, and built-in multi-cluster management.
+
+## 1.12.0 Version Overview
+Apache InLong recently released version 1.12.0, which closed 140+ issues, including 7+ major features and 90+ optimizations. The main features include Manager support for Agent installation package management and the self-upgrade process, Agent self-upgrade, Agent collection of data from Kafka/Pulsar/MongoDB, Audit optimization and enhancement, and a new Redis Connector in Sort. With the release of 1.12.0, Apache InLong has enriched and optimized the Agent scenarios, improved the accuracy of Audit data measurement, extended the capabilities and applicable scenarios of Sort, and improved the operation and maintenance experience of Apache InLong. In version 1.12.0, a large number of other features have also been completed, mainly including:
+
+### Agent Module
+- Support Agent self-upgrade
+- Optimize initialization logic to reduce IO usage
+- Optimize message acknowledgment logic to reduce semaphore competition
+- Add audit data for exceptions and resends
+- Optimize the data supplementation process to avoid data loss caused by too many supplementary files
+
+### Manager Module
+- Support Agent and Installer installation package management and the self-upgrade process
+- Support parsing specific field information based on data types such as CSV while previewing data
+- Support querying data across multiple clusters while previewing data
+- Support fetching message headers and other extra information while previewing data
+- Support configuring supplementary file collection tasks
+- Switch audit data queries from direct database access to the Audit OpenAPI
+- Support configuring the compression format used by the Pulsar SDK in Pulsar Sink
+- Provide an OpenAPI for batch saving InLongGroup, InLongStream, and other information
+- Support Kafka DataNode management
+
+### Dashboard Module
+- Optimize audit data query
+- Optimize audit data display
+- Support the underscore "_" in Sink field mapping
+- Support paginated display of resource details
+- Support MongoDB data source configuration
+
+### Audit Module
+- Support user-defined ways to obtain Audit proxy information
+- Audit SDK supports reporting version numbers
+- Audit SDK supports both singleton and non-singleton usage
+- Audit SDK supports data reporting under the Flink Checkpoint feature
+- Audit Service supports HA capabilities
+- Audit Service supports local caching and an OpenAPI
+- Audit Service supports multiple data source clusters
+
+### Sort Module
+- StarRocks Connector supports using a state key during initialization
+- Support parsing KV and CSV data containing delimiter characters
+- Use ZLIB as the default compression type for Pulsar Sink
+- Pulsar Connector supports authentication configuration
+- Pulsar Sink supports authentication configuration
+- Redis Source supports reading String, Hash, and ZSet data types
+- Redis Sink supports Bitmap, Hash, and String data types
+
+## 1.12.0 Version Feature Introduction
+
+### Manager supports Agent installation package management
+With this feature, operators can manage Agent release packages through the Dashboard, including Agent installation, upgrade, heartbeat management, etc. Users can create and manage installation packages on the System Operation -> Installation Packages -> Agent page. Thanks to @haifxu and @fuweng11 for their contributions to the Dashboard and Manager parts of this feature. For more information, please refer to INLONG-9932.
+
+
+### Agent supports self-upgrade
+Agents can perform self-upgrades through a pre-deployed Installer. The Installer obtains the upgrade configuration from InLong Manager via IP and decides whether to upgrade based on the configuration. The main flows are:
+- ADD: Download the installation package -> Unzip the installation package -> Start the process
+- DELETE: Stop the process -> Delete the installation files
+- UPDATE: Download the installation package -> Stop the process -> Delete the installation files -> Unzip the installation package -> Start the process
+Thanks to @justinwwhuang for contributing this feature. For more information, please refer to INLONG-9801.
+
+
+### Agent supports Kafka collection
+In version 1.12.0, the Agent supports collecting data from Kafka. When creating a data source, you can directly select Kafka and fill in the relevant data source information to start using it. The parameters include:
+- Data source name: Used to quickly distinguish between different data sources
+- Cluster name: The cluster the data source belongs to
+- Data source IP: The machine the data source belongs to
+- Bootstrap Servers: Kafka cluster address
+- Topic: The Kafka topic the data source subscribes to
+- Automatic offset reset: The offset strategy
+- Partition offset: A specific partition offset can be specified precisely
+Thanks to @haifxu for contributing this feature. For more information, please refer to INLONG-9741.
+
+
+### Agent supports Pulsar collection
+In version 1.12.0, the Agent supports collecting data from Pulsar. When creating a data source, you can directly select Pulsar and fill in the relevant data source information to start using it. The parameters include:
+- Data source name: Used to quickly distinguish between different data sources
+- Cluster name: The cluster the data source belongs to
+- Data source IP: The machine the data source belongs to
+- Pulsar tenant: Pulsar tenant
+- Namespace: Pulsar namespace
+- Pulsar topic: The topic the data source subscribes to
+- Admin URL: Pulsar admin URL
+- Service URL: Pulsar service URL
+Thanks to @justinwwhuang for contributing this feature. For more information, please refer to INLONG-9804.
+
+
+### Agent supports MongoDB collection
+In version 1.12.0, the Agent supports collecting data from MongoDB. When creating a data source, you can directly select MongoDB and fill in the relevant data source information to start using it. The parameters include:
+- Data source name: Used to quickly distinguish between different data sources
+- Cluster name: The cluster the data source belongs to
+- Data source IP: The machine the data source belongs to
+- Server host: MongoDB address
+- Username: MongoDB username
+- Password: MongoDB password
+- Database name: MongoDB database name
+- Collection name: MongoDB collection name
+- Read mode: Optional "Full + Incremental" or "Incremental"
+Thanks to @justinwwhuang for contributing this feature. For more information, please refer to INLONG-10006.
+
+
+### Audit supports multi-scenario reconciliation
+In version 1.12.0, InLong enriched the Audit reconciliation scenarios, including support for Agent data supplementation, Sort on Flink Checkpoint, etc. Special thanks to @doleyzi for contributing this feature. For more information, please refer to INLONG-9904, INLONG-9926, INLONG-9928, and INLONG-9957.
+- New OpenAPI capability
+In version 1.12.0, Audit added an OpenAPI, and the service nodes elect a leader through HA. The leader node performs real-time and retroactive aggregation of the audit data sources and saves the aggregated results in the DB, while the slave nodes cache the DB data in memory and serve external requests (the leader node provides the same service).
+
+- New Agent data supplementation capability
+In version 1.12.0, Audit supports the Agent data supplementation scenario: a new audit-version distinguishes the audit reconciliation of each supplementation run.
+
+- Support for Sort Flink Checkpoint
+In version 1.12.0, Audit supports the Sort Flink checkpoint scenario, ensuring that audit data is neither lost nor duplicated when a Flink job restarts or fails over, thereby guaranteeing end-to-end audit reconciliation.
+
+
+### Sort adds a Redis Connector
+In version 1.12.0, a Redis connector implementation based on Flink 1.15 was added, supporting reads and writes of four common data types (String, Hash, ZSet, and Bitmap) against Redis clusters and standalone instances. The Redis connector implements Schema conversion internally, converting a user-specified Schema to different Redis data types. The specific Schema conversion logic is shown in the figure below: in the Bitmap conversion logic, field1 serves as the Bitmap key, field2 and field4 as the bit positions (index), and field3 and field5 as the values to set (0 or 1); see the native Redis command SETBIT key index value. Thanks to @XiaoYou201 for contributing this feature. For more information, please refer to INLONG-9835 and INLONG-8948.
+
+
+## Future Plans
+In version 1.12.0, the community refactored InLong Agent and InLong Audit and enriched the Flink 1.15 Connectors, among other features. In subsequent versions, InLong will continue to enrich the Flink 1.15 Connectors, enhance Transform capabilities, support offline data integration, unify the DataProxy data protocol, and optimize the Dashboard experience. We look forward to more developers participating and contributing.
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-kafka.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-kafka.png
new file mode 100644
index 0000000000..e06a63d88a
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-kafka.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-mongodb.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-mongodb.png
new file mode 100644
index 0000000000..4bdb26271b
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-mongodb.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-package.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-package.png
new file mode 100644
index 0000000000..81941acae9
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-package.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-pulsar.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-pulsar.png
new file mode 100644
index 0000000000..4a2554077e
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-pulsar.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-upgrade.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-upgrade.png
new file mode 100644
index 0000000000..58fe67dbec
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-agent-upgrade.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-checkpoint.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-checkpoint.png
new file mode 100644
index 0000000000..ad8f03542a
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-checkpoint.png
differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-process.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-process.png
new file mode 100644
index 0000000000..f2b397d7e6
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-process.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-recovery.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-recovery.png
new file mode 100644
index 0000000000..abdb3a82c1
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-audit-recovery.png differ
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-redis-connector.png
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-redis-connector.png
new file mode 100644
index 0000000000..bb8a281497
Binary files /dev/null and
b/i18n/zh-CN/docusaurus-plugin-content-blog/img/1.12.0-redis-connector.png
differ