fsk119 commented on a change in pull request #14126:
URL: https://github.com/apache/flink/pull/14126#discussion_r527357666



##########
File path: docs/dev/table/connectors/upsert-kafka.zh.md
##########
@@ -101,112 +89,97 @@ GROUP BY region;
 {% endhighlight %}
 </div>
 </div>
-<span class="label label-danger">Attention</span> Make sure to define the primary key in the DDL.
+<span class="label label-danger">Attention</span> Make sure to define the primary key in the DDL.
 
-Connector Options
+Connector Options
 ----------------
 
 <table class="table table-bordered">
     <thead>
       <tr>
-      <th class="text-left" style="width: 25%">Option</th>
-      <th class="text-center" style="width: 8%">Required</th>
-      <th class="text-center" style="width: 7%">Default</th>
-      <th class="text-center" style="width: 10%">Type</th>
-      <th class="text-center" style="width: 50%">Description</th>
+      <th class="text-left" style="width: 25%">Option</th>
+      <th class="text-center" style="width: 10%">Required</th>
+      <th class="text-center" style="width: 10%">Default</th>
+      <th class="text-center" style="width: 10%">Type</th>
+      <th class="text-center" style="width: 50%">Description</th>
     </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>connector</h5></td>
-      <td>required</td>
+      <td>required</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
-      <td>Specify which connector to use, for the Upsert Kafka use: <code>'upsert-kafka'</code>.</td>
+      <td>Specify which connector to use; for the Upsert Kafka connector, use: <code>'upsert-kafka'</code>.</td>
     </tr>
     <tr>
       <td><h5>topic</h5></td>
-      <td>required</td>
+      <td>required</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
-      <td>The Kafka topic name to read from and write to.</td>
+      <td>The Kafka topic name to read from and write to.</td>
     </tr>
     <tr>
       <td><h5>properties.bootstrap.servers</h5></td>
-      <td>required</td>
+      <td>required</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
-      <td>Comma separated list of Kafka brokers.</td>
+      <td>Comma-separated list of Kafka brokers.</td>
     </tr>
     <tr>
       <td><h5>key.format</h5></td>
-      <td>required</td>
+      <td>required</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
-      <td>The format used to deserialize and serialize the key part of the Kafka messages. The key part
-      fields are specified by the PRIMARY KEY syntax. The supported formats include <code>'csv'</code>,
-      <code>'json'</code>, <code>'avro'</code>. Please refer to <a href="{% link dev/table/connectors/formats/index.zh.md %}">Formats</a>
-      page for more details and more format options.
+      <td>The format used to deserialize and serialize the key part of Kafka messages. The key fields are specified by the PRIMARY KEY syntax. The supported formats include <code>'csv'</code>, <code>'json'</code>, <code>'avro'</code>. Please refer to the <a href="{% link dev/table/connectors/formats/index.zh.md %}">Formats</a> page for more details and more format options.
       </td>
     </tr>
     <tr>
       <td><h5>value.format</h5></td>
-      <td>required</td>
+      <td>required</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
-      <td>The format used to deserialize and serialize the value part of the Kafka messages.
-      The supported formats include <code>'csv'</code>, <code>'json'</code>, <code>'avro'</code>.
-      Please refer to <a href="{% link dev/table/connectors/formats/index.zh.md %}">Formats</a> page for more details and more format options.
+      <td>The format used to deserialize and serialize the value part of Kafka messages. The supported formats include <code>'csv'</code>, <code>'json'</code>, <code>'avro'</code>. Please refer to the <a href="{% link dev/table/connectors/formats/index.zh.md %}">Formats</a> page for more details and more format options.
       </td>
     </tr>
     <tr>
        <td><h5>value.fields-include</h5></td>
-       <td>required</td>
+       <td>required</td>
        <td style="word-wrap: break-word;"><code>'ALL'</code></td>
        <td>String</td>
-       <td>Controls which fields should end up in the value as well. Available values:
+       <td>Controls which fields should end up in the value. Available values:
        <ul>
-         <li><code>ALL</code>: the value part of the record contains all fields of the schema, even if they are part of the key.</li>
-         <li><code>EXCEPT_KEY</code>: the value part of the record contains all fields of the schema except the key fields.</li>
+         <li><code>ALL</code>: the value part of the record contains all fields of the schema, even if they are part of the key.</li>
+         <li><code>EXCEPT_KEY</code>: the value part of the record contains all fields of the schema, except the key fields.</li>
        </ul>
        </td>
     </tr>
     <tr>
       <td><h5>sink.parallelism</h5></td>
-      <td>optional</td>
+      <td>optional</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>Integer</td>
-      <td>Defines the parallelism of the upsert-kafka sink operator. By default, the parallelism is determined by the framework using the same parallelism of the upstream chained operator.</td>
+      <td>Defines the parallelism of the upsert-kafka sink operator. By default, the parallelism is determined by the framework, using the same parallelism as the upstream chained operator.</td>
     </tr>
     </tbody>
 </table>
 
-Features
+Features
 ----------------
 
-### Primary Key Constraints
+### Primary Key Constraints
 
-The Upsert Kafka always works in the upsert fashion and requires to define the primary key in the DDL.
-With the assumption that records with the same key should be ordered in the same partition, the
-primary key semantic on the changelog source means the materialized changelog is unique on the primary
-keys. The primary key definition will also control which fields should end up in Kafka’s key.
+Upsert Kafka always works in upsert fashion and requires the primary key to be defined in the DDL. Under the assumption that records with the same primary key are ordered within the same partition, the primary key semantics on the changelog source mean that the materialized changelog is unique on the primary keys. The primary key definition also controls which fields should end up in Kafka's key.
 
-### Consistency Guarantees
+### Consistency Guarantees
 
-By default, an Upsert Kafka sink ingests data with at-least-once guarantees into a Kafka topic if
-the query is executed with [checkpointing enabled]({% link dev/stream/state/checkpointing.zh.md %}#enabling-and-configuring-checkpointing).
+By default, if [checkpointing is enabled]({% link dev/stream/state/checkpointing.zh.md %}#enabling-and-configuring-checkpointing), the Upsert Kafka sink guarantees at-least-once insertion of data into the Kafka topic.
 
-This means, Flink may write duplicate records with the same key into the Kafka topic. But as the
-connector is working in the upsert mode, the last record on the same key will take effect when
-reading back as a source. Therefore, the upsert-kafka connector achieves idempotent writes just like
-the [HBase sink]({{ site.baseurl }}/dev/table/connectors/hbase.html).
+This means that Flink may write duplicate records with the same key into the Kafka topic. But since the connector works in upsert mode, only the last record for a given key takes effect when reading back as a source. Therefore, the upsert-kafka connector achieves idempotent writes just like the [HBase sink]({{ site.baseurl }}/dev/table/connectors/hbase.html).
 
-Data Type Mapping
+Data Type Mapping
 ----------------
 
-Upsert Kafka stores message keys and values as bytes, so Upsert Kafka doesn't have schema or data types.
-The messages are deserialized and serialized by formats, e.g. csv, json, avro. Thus, the data type mapping
-is determined by specific formats. Please refer to [Formats]({% link dev/table/connectors/formats/index.zh.md %})
-pages for more details.
+Upsert Kafka stores message keys and values as bytes, so it has no schema or data types. Messages are deserialized and serialized by formats, e.g. csv, json, avro. Thus, the data type mapping is determined by the specified format. Please refer to the [Formats]({% link dev/table/connectors/formats/index.zh.md %}) page for more details.

Review comment:
       Oh, I didn't notice the order in the English version. If you have time, please help me adjust the order.
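
       For review reference, the required options in the table above combine into a DDL along the following lines. This is only an illustrative sketch: the table name, columns, and topic mirror the `GROUP BY region` example quoted at the top of the hunk, and the broker address is a placeholder, not taken from the patch.

```sql
-- Hypothetical sink table backed by the upsert-kafka connector.
-- PRIMARY KEY determines which fields are serialized into Kafka's key.
CREATE TABLE pageviews_per_region (
  region STRING,
  pv BIGINT,
  uv BIGINT,
  PRIMARY KEY (region) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'pageviews_per_region',
  'properties.bootstrap.servers' = '...',  -- placeholder broker list
  'key.format' = 'json',
  'value.format' = 'json'
);
```

       With `value.fields-include` left at its default `'ALL'`, the value part of each record would also carry the `region` key field.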
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

