leesf commented on a change in pull request #1006: [HUDI-276] Translate the Configurations page into Chinese
URL: https://github.com/apache/incubator-hudi/pull/1006#discussion_r344466334
##########
File path: docs/configurations.cn.md
##########
@@ -51,385 +49,419 @@ inputDF.write()
.save(basePath);
```
-Options useful for writing datasets via `write.format.option(...)`
+用于通过`write.format.option(...)`写入数据集的选项
##### TABLE_NAME_OPT_KEY {#TABLE_NAME_OPT_KEY}
- Property: `hoodie.datasource.write.table.name` [Required]<br/>
- <span style="color:grey">Hive table name, to register the dataset into.</span>
+ 属性:`hoodie.datasource.write.table.name` [必须]<br/>
+ <span style="color:grey">Hive表名,用于将数据集注册到其中。</span>
##### OPERATION_OPT_KEY {#OPERATION_OPT_KEY}
- Property: `hoodie.datasource.write.operation`, Default: `upsert`<br/>
- <span style="color:grey">whether to do upsert, insert or bulkinsert for the write operation. Use `bulkinsert` to load new data into a table, and there on use `upsert`/`insert`.
- bulk insert uses a disk based write path to scale to load large inputs without need to cache it.</span>
+ 属性:`hoodie.datasource.write.operation`, 默认值:`upsert`<br/>
+ <span style="color:grey">是否为写操作进行插入更新、插入或批量插入。使用`bulkinsert`将新数据加载到表中,之后使用`upsert`或`insert`。
+ 批量插入使用基于磁盘的写入路径来扩展以加载大量输入,而无需对其进行缓存。</span>
##### STORAGE_TYPE_OPT_KEY {#STORAGE_TYPE_OPT_KEY}
- Property: `hoodie.datasource.write.storage.type`, Default: `COPY_ON_WRITE` <br/>
- <span style="color:grey">The storage type for the underlying data, for this write. This can't change between writes.</span>
+ 属性:`hoodie.datasource.write.storage.type`, 默认值:`COPY_ON_WRITE` <br/>
+ <span style="color:grey">此写入的基础数据的存储类型。两次写入之间不能改变。</span>
##### PRECOMBINE_FIELD_OPT_KEY {#PRECOMBINE_FIELD_OPT_KEY}
- Property: `hoodie.datasource.write.precombine.field`, Default: `ts` <br/>
- <span style="color:grey">Field used in preCombining before actual write. When two records have the same key value,
-we will pick the one with the largest value for the precombine field, determined by Object.compareTo(..)</span>
+ 属性:`hoodie.datasource.write.precombine.field`, 默认值:`ts` <br/>
+ <span style="color:grey">实际写入之前在preCombining中使用的字段。
+ 当两个记录具有相同的键值时,我们将使用Object.compareTo(..)从precombine字段中选择一个值最大的记录。</span>
##### PAYLOAD_CLASS_OPT_KEY {#PAYLOAD_CLASS_OPT_KEY}
- Property: `hoodie.datasource.write.payload.class`, Default: `org.apache.hudi.OverwriteWithLatestAvroPayload` <br/>
- <span style="color:grey">Payload class used. Override this, if you like to roll your own merge logic, when upserting/inserting.
- This will render any value set for `PRECOMBINE_FIELD_OPT_VAL` in-effective</span>
+ 属性:`hoodie.datasource.write.payload.class`, 默认值:`org.apache.hudi.OverwriteWithLatestAvroPayload` <br/>
+ <span style="color:grey">使用的有效载荷类。如果您想在插入更新或插入时使用自己的合并逻辑,请重写此方法。
+ 这将使为`PRECOMBINE_FIELD_OPT_VAL`设置的任何值无效</span>
##### RECORDKEY_FIELD_OPT_KEY {#RECORDKEY_FIELD_OPT_KEY}
- Property: `hoodie.datasource.write.recordkey.field`, Default: `uuid` <br/>
- <span style="color:grey">Record key field. Value to be used as the `recordKey` component of `HoodieKey`. Actual value
-will be obtained by invoking .toString() on the field value. Nested fields can be specified using
-the dot notation eg: `a.b.c`</span>
+ 属性:`hoodie.datasource.write.recordkey.field`, 默认值:`uuid` <br/>
+ <span style="color:grey">记录键字段。用作`HoodieKey`中`recordKey`部分的值。
+ 实际值将通过在字段值上调用.toString()来获得。可以使用点符号指定嵌套字段,例如:`a.b.c`</span>
##### PARTITIONPATH_FIELD_OPT_KEY {#PARTITIONPATH_FIELD_OPT_KEY}
- Property: `hoodie.datasource.write.partitionpath.field`, Default: `partitionpath` <br/>
- <span style="color:grey">Partition path field. Value to be used at the `partitionPath` component of `HoodieKey`.
-Actual value ontained by invoking .toString()</span>
+ 属性:`hoodie.datasource.write.partitionpath.field`, 默认值:`partitionpath` <br/>
+ <span style="color:grey">分区路径字段。用作`HoodieKey`中`partitionPath`部分的值。
+ 通过调用.toString()获得实际的值</span>
##### KEYGENERATOR_CLASS_OPT_KEY {#KEYGENERATOR_CLASS_OPT_KEY}
- Property: `hoodie.datasource.write.keygenerator.class`, Default: `org.apache.hudi.SimpleKeyGenerator` <br/>
- <span style="color:grey">Key generator class, that implements will extract the key out of incoming `Row` object</span>
+ 属性:`hoodie.datasource.write.keygenerator.class`, 默认值:`org.apache.hudi.SimpleKeyGenerator` <br/>
+ <span style="color:grey">键生成器类,实现从输入的`Row`对象中提取键</span>
##### COMMIT_METADATA_KEYPREFIX_OPT_KEY {#COMMIT_METADATA_KEYPREFIX_OPT_KEY}
- Property: `hoodie.datasource.write.commitmeta.key.prefix`, Default: `_` <br/>
- <span style="color:grey">Option keys beginning with this prefix, are automatically added to the commit/deltacommit metadata.
-This is useful to store checkpointing information, in a consistent way with the hudi timeline</span>
+ 属性:`hoodie.datasource.write.commitmeta.key.prefix`, 默认值:`_` <br/>
+ <span style="color:grey">以该前缀开头的选项键会自动添加到提交/增量提交的元数据中。
+ 这对于以与hudi时间轴一致的方式存储检查点信息很有用</span>
Review comment:
这对于以与 -> 这对于与 ? (i.e., suggesting the extra 以 be dropped from the translated sentence)
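
For reference, the keys discussed in this hunk are the `hoodie.datasource.write.*` options that the docs show being passed through `write.format.option(...)`. A minimal sketch of assembling them as an option map (the key names come from this hunk; the table name and field values below are hypothetical examples, not from the PR):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HudiWriteOptions {

    // Collects the hoodie.datasource.write.* keys covered by this hunk.
    // Values for table name / record key / partition field are illustrative only.
    public static Map<String, String> exampleOptions() {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("hoodie.datasource.write.table.name", "hudi_trips");       // [Required] Hive table to register the dataset into
        opts.put("hoodie.datasource.write.operation", "upsert");            // upsert | insert | bulkinsert
        opts.put("hoodie.datasource.write.storage.type", "COPY_ON_WRITE");  // can't change between writes
        opts.put("hoodie.datasource.write.precombine.field", "ts");         // largest value wins on duplicate keys
        opts.put("hoodie.datasource.write.recordkey.field", "uuid");        // recordKey component of HoodieKey
        opts.put("hoodie.datasource.write.partitionpath.field", "partitionpath");
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts = exampleOptions();
        // With a Spark DataFrame, these would be applied roughly as:
        //   inputDF.write().format("org.apache.hudi").options(opts).save(basePath);
        System.out.println(opts.size());
    }
}
```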
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services