boneanxs commented on code in PR #6046:
URL: https://github.com/apache/hudi/pull/6046#discussion_r946538664
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java:
##########
@@ -211,6 +211,23 @@ public class HoodieWriteConfig extends HoodieConfig {
+          + " optimally for common query patterns. For now we support a built-in user-defined bulk insert partitioner"
+          + " org.apache.hudi.execution.bulkinsert.RDDCustomColumnsSortPartitioner, which does sorting based on"
+          + " specified column values set by " + BULKINSERT_USER_DEFINED_PARTITIONER_SORT_COLUMNS.key());
+
+  public static final ConfigProperty<String> BULKINSERT_ROW_IDENTIFY_ID = ConfigProperty
+      .key("hoodie.bulkinsert.row.writestatus.id")
+      .noDefaultValue()
+      .withDocumentation("The unique id for each write operation, HoodieInternalWriteStatusCoordinator will use "
Review Comment:
This is an internal configuration used by `HoodieDataSourceInternalBatchWrite`
and `HoodieDataSourceInternalWriter` to pass `writeStatuses` to the clustering
job.
At first I thought of setting this in `DataSourceInternalWriterHelper` like
`INSTANT_TIME_OPT_KEY`, but then it's difficult for `ClusteringExecutionStrategy`
to access it, since the `hudi-client` package cannot depend on
`hudi-spark-datasource`, so I keep this config in `HoodieWriteConfig` instead.
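To illustrate the hand-off described above, here is a minimal, self-contained sketch. A plain `java.util.Properties` stands in for `HoodieWriteConfig`, and the class and flow are hypothetical; only the config key `hoodie.bulkinsert.row.writestatus.id` comes from the diff:

```java
import java.util.Properties;
import java.util.UUID;

public class WriteIdExample {
    // The config key added in this PR (taken from the diff above).
    static final String WRITE_STATUS_ID_KEY = "hoodie.bulkinsert.row.writestatus.id";

    public static void main(String[] args) {
        // Writer side: generate a unique id for this write operation and
        // stash it in the (stand-in) write config so a later stage can find it.
        Properties writeConfig = new Properties();
        String writeId = UUID.randomUUID().toString();
        writeConfig.setProperty(WRITE_STATUS_ID_KEY, writeId);

        // Clustering side: read the same key back from the shared config to
        // locate the writeStatuses registered by the writer under that id.
        String retrieved = writeConfig.getProperty(WRITE_STATUS_ID_KEY);
        System.out.println(retrieved.equals(writeId)); // prints "true"
    }
}
```

The point of the sketch is just that both sides agree on one key, which is why the key must live somewhere both modules can see, i.e. `HoodieWriteConfig` in `hudi-client-common`.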
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]