This is an automated email from the ASF dual-hosted git repository.
luzhijing pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git
The following commit(s) were added to refs/heads/master by this push:
new e44aad2b86 [typo](docs)add new attention of doris flink connector (#18930)
e44aad2b86 is described below
commit e44aad2b86b013db547e88363c51479911c8d100
Author: lsy3993 <[email protected]>
AuthorDate: Sat Apr 22 23:38:48 2023 +0800
[typo](docs)add new attention of doris flink connector (#18930)
---
docs/en/docs/ecosystem/flink-doris-connector.md | 5 ++++-
docs/zh-CN/docs/ecosystem/flink-doris-connector.md | 4 ++++
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/docs/en/docs/ecosystem/flink-doris-connector.md b/docs/en/docs/ecosystem/flink-doris-connector.md
index ccfb144827..c57d757491 100644
--- a/docs/en/docs/ecosystem/flink-doris-connector.md
+++ b/docs/en/docs/ecosystem/flink-doris-connector.md
@@ -512,4 +512,7 @@ It usually occurs before Connector1.1.0, because the writing frequency is too fast
10. **Flink imports dirty data, how to skip it?**
-When Flink imports data, dirty data such as field format or length issues will cause StreamLoad to report an error, and Flink will keep retrying. If you need to skip such data, you can disable the strict mode of StreamLoad (strict_mode=false, max_filter_ratio=1) or filter the data before the Sink operator.
\ No newline at end of file
+When Flink imports data, dirty data such as field format or length issues will cause StreamLoad to report an error, and Flink will keep retrying. If you need to skip such data, you can disable the strict mode of StreamLoad (strict_mode=false, max_filter_ratio=1) or filter the data before the Sink operator.
+
+11. **How should the source table and Doris table correspond?**
+When using the Flink Connector to import data, pay attention to two aspects: first, the columns and types of the source table must correspond to the columns and types in Flink SQL; second, the columns and types in Flink SQL must match those of the Doris table. For details on the correspondence between columns and types, please refer to the "Doris & Flink Column Type Mapping" section above.
diff --git a/docs/zh-CN/docs/ecosystem/flink-doris-connector.md b/docs/zh-CN/docs/ecosystem/flink-doris-connector.md
index 5a71f4b424..c137dcba07 100644
--- a/docs/zh-CN/docs/ecosystem/flink-doris-connector.md
+++ b/docs/zh-CN/docs/ecosystem/flink-doris-connector.md
@@ -505,3 +505,7 @@ Connector1.1.0版本以前,是攒批写入的,写入均是由数据驱动,
10. **Flink导入有脏数据,如何跳过?**
Flink在数据导入时,如果有脏数据,比如字段格式、长度等问题,会导致StreamLoad报错,此时Flink会不断的重试。如果需要跳过,可以通过禁用StreamLoad的严格模式(strict_mode=false,max_filter_ratio=1)或者在Sink算子之前对数据做过滤。
+
+11. **源表和Doris表应如何对应?**
+使用Flink Connector导入数据时,要注意两个方面,第一是源表的列和类型跟flink sql中的列和类型要对应上;第二个是flink sql中的列和类型要跟doris表的列和类型对应上,具体可以参考上面的"Doris 和 Flink 列类型映射关系"
+
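The two FAQ answers added by this commit (skipping dirty data via Stream Load options, and keeping source, Flink SQL, and Doris column types aligned) can be sketched together in one Flink SQL example. This is a minimal illustration, not from the patch itself: the table name, `fenodes` address, credentials, and schema are placeholders, and it assumes the connector's documented `sink.properties.*` pass-through for Stream Load parameters.

```sql
-- Hypothetical Doris sink table; address and credentials are placeholders.
-- The column names and types declared here must match the target Doris table
-- (see "Doris & Flink Column Type Mapping" in the connector docs).
CREATE TABLE doris_sink (
    id INT,
    name STRING,
    price DECIMAL(10, 2)
) WITH (
    'connector' = 'doris',
    'fenodes' = '127.0.0.1:8030',
    'table.identifier' = 'db.tbl',
    'username' = 'root',
    'password' = '',
    -- Pass Stream Load options through so dirty rows are skipped
    -- instead of failing the load and triggering endless Flink retries:
    'sink.properties.strict_mode' = 'false',
    'sink.properties.max_filter_ratio' = '1'
);

-- The source table's columns and types must in turn line up with the
-- Flink SQL schema above, e.g.:
-- INSERT INTO doris_sink SELECT id, name, price FROM source_table;
```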
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]