leesf commented on a change in pull request #1006: [HUDI-276] Translate the Configurations page into Chinese
URL: https://github.com/apache/incubator-hudi/pull/1006#discussion_r344466257
 
 

 ##########
 File path: docs/configurations.cn.md
 ##########
 @@ -51,385 +49,419 @@ inputDF.write()
 .save(basePath);
 ```
 
-Options useful for writing datasets via `write.format.option(...)`
+用于通过`write.format.option(...)`写入数据集的选项
 
 ##### TABLE_NAME_OPT_KEY {#TABLE_NAME_OPT_KEY}
-  Property: `hoodie.datasource.write.table.name` [Required]<br/>
-  <span style="color:grey">Hive table name, to register the dataset into.</span>
+  属性:`hoodie.datasource.write.table.name` [必须]<br/>
+  <span style="color:grey">Hive表名,用于将数据集注册到其中。</span>
   
 ##### OPERATION_OPT_KEY {#OPERATION_OPT_KEY}
-  Property: `hoodie.datasource.write.operation`, Default: `upsert`<br/>
-  <span style="color:grey">whether to do upsert, insert or bulkinsert for the write operation. Use `bulkinsert` to load new data into a table, and there on use `upsert`/`insert`. 
-  bulk insert uses a disk based write path to scale to load large inputs without need to cache it.</span>
+  属性:`hoodie.datasource.write.operation`, 默认值:`upsert`<br/>
+  <span style="color:grey">是否为写操作进行插入更新、插入或批量插入。使用`bulkinsert`将新数据加载到表中,之后使用`upsert`或`insert`。
+  批量插入使用基于磁盘的写入路径来扩展以加载大量输入,而无需对其进行缓存。</span>
   
 ##### STORAGE_TYPE_OPT_KEY {#STORAGE_TYPE_OPT_KEY}
-  Property: `hoodie.datasource.write.storage.type`, Default: `COPY_ON_WRITE` <br/>
-  <span style="color:grey">The storage type for the underlying data, for this write. This can't change between writes.</span>
+  属性:`hoodie.datasource.write.storage.type`, 默认值:`COPY_ON_WRITE` <br/>
+  <span style="color:grey">此写入的基础数据的存储类型。两次写入之间不能改变。</span>
   
 ##### PRECOMBINE_FIELD_OPT_KEY {#PRECOMBINE_FIELD_OPT_KEY}
-  Property: `hoodie.datasource.write.precombine.field`, Default: `ts` <br/>
-  <span style="color:grey">Field used in preCombining before actual write. When two records have the same key value,
-we will pick the one with the largest value for the precombine field, determined by Object.compareTo(..)</span>
+  属性:`hoodie.datasource.write.precombine.field`, 默认值:`ts` <br/>
+  <span style="color:grey">实际写入之前在preCombining中使用的字段。
+  当两个记录具有相同的键值时,我们将使用Object.compareTo(..)从precombine字段中选择一个值最大的记录。</span>
 
 ##### PAYLOAD_CLASS_OPT_KEY {#PAYLOAD_CLASS_OPT_KEY}
-  Property: `hoodie.datasource.write.payload.class`, Default: `org.apache.hudi.OverwriteWithLatestAvroPayload` <br/>
-  <span style="color:grey">Payload class used. Override this, if you like to roll your own merge logic, when upserting/inserting. 
-  This will render any value set for `PRECOMBINE_FIELD_OPT_VAL` in-effective</span>
+  属性:`hoodie.datasource.write.payload.class`, 默认值:`org.apache.hudi.OverwriteWithLatestAvroPayload` <br/>
+  <span style="color:grey">使用的有效载荷类。如果您想在插入更新或插入时使用自己的合并逻辑,请重写此方法。
+  这将使为`PRECOMBINE_FIELD_OPT_VAL`设置的任何值无效</span>
 
 Review comment:
   这将使为 -> 这将使得?(suggestion: reword 「这将使为」 as 「这将使得」)
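
For context, the section under review documents Spark datasource write options. In the style of the page's own `inputDF.write()` example, they would be passed roughly as follows (an illustrative sketch only: the table name, precombine field, and `basePath` values are hypothetical, not taken from this PR):

```
// Illustrative fragment; assumes inputDF and basePath are defined elsewhere,
// as in the page's existing example. Option values here are placeholders.
inputDF.write()
    .format("org.apache.hudi")
    .option("hoodie.datasource.write.table.name", "hudi_trips")      // TABLE_NAME_OPT_KEY [required]
    .option("hoodie.datasource.write.operation", "upsert")           // OPERATION_OPT_KEY
    .option("hoodie.datasource.write.storage.type", "COPY_ON_WRITE") // STORAGE_TYPE_OPT_KEY
    .option("hoodie.datasource.write.precombine.field", "ts")        // PRECOMBINE_FIELD_OPT_KEY
    .mode(SaveMode.Append)
    .save(basePath);
```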

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
