yihua commented on code in PR #17477:
URL: https://github.com/apache/hudi/pull/17477#discussion_r2595998438


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDTableServiceClient.java:
##########
@@ -102,30 +94,12 @@ protected HoodieWriteMetadata<HoodieData<WriteStatus>> partialUpdateTableMetadat
       String instantTime,
       WriteOperationType writeOperationType) {
     if (isStreamingWriteToMetadataEnabled(table)) {
-      boolean enforceCoalesceWithRepartition = writeOperationType == WriteOperationType.CLUSTER; // for other table services, enforceCoalesceWithRepartition will be false.
-      if (enforceCoalesceWithRepartition) {
-        enforceCoalesceWithRepartition = computeEnforceCoalesceWithRepartitionForClustering(table, instantTime);
-      }
       writeMetadata.setWriteStatuses(streamingMetadataWriteHandler.streamWriteToMetadataTable(table, writeMetadata.getWriteStatuses(), instantTime,
-          enforceCoalesceWithRepartition, config.getMetadataConfig().getStreamingWritesCoalesceDivisorForDataTableWrites()));
+              config.getMetadataConfig().getStreamingWritesCoalesceDivisorForDataTableWrites()));

Review Comment:
   nit: indentation should be 4 spaces?



##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDWriteClient.java:
##########
@@ -109,9 +108,8 @@ public boolean commit(String instantTime, JavaRDD<WriteStatus> rawWriteStatuses,
     final JavaRDD<WriteStatus> writeStatuses;
     if (WriteOperationType.streamingWritesToMetadataSupported((getOperationType())) && isStreamingWriteToMetadataEnabled(table)) {
       // this code block is expected to create a new Metadata Writer, start a new commit in metadata table and trigger streaming write to metadata table.
-      boolean enforceCoalesceWithRepartition = getOperationType() == WriteOperationType.BULK_INSERT && config.getBulkInsertSortMode() == BulkInsertSortMode.NONE;
       writeStatuses = HoodieJavaRDD.getJavaRDD(streamingMetadataWriteHandler.streamWriteToMetadataTable(table, HoodieJavaRDD.of(rawWriteStatuses), instantTime,
-          enforceCoalesceWithRepartition, config.getMetadataConfig().getStreamingWritesCoalesceDivisorForDataTableWrites()));
+              config.getMetadataConfig().getStreamingWritesCoalesceDivisorForDataTableWrites()));

Review Comment:
   nit: indentation should be 4 spaces?



##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDTableServiceClient.java:
##########
@@ -102,30 +94,12 @@ protected HoodieWriteMetadata<HoodieData<WriteStatus>> partialUpdateTableMetadat
       String instantTime,
       WriteOperationType writeOperationType) {
     if (isStreamingWriteToMetadataEnabled(table)) {
-      boolean enforceCoalesceWithRepartition = writeOperationType == WriteOperationType.CLUSTER; // for other table services, enforceCoalesceWithRepartition will be false.
-      if (enforceCoalesceWithRepartition) {
-        enforceCoalesceWithRepartition = computeEnforceCoalesceWithRepartitionForClustering(table, instantTime);
-      }
       writeMetadata.setWriteStatuses(streamingMetadataWriteHandler.streamWriteToMetadataTable(table, writeMetadata.getWriteStatuses(), instantTime,
-          enforceCoalesceWithRepartition, config.getMetadataConfig().getStreamingWritesCoalesceDivisorForDataTableWrites()));

Review Comment:
   Note to myself: previously, `enforceCoalesceWithRepartition` was set to `true` for the clustering table service without sorting.
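
   For reference, the removed table-service branch can be modeled as a standalone sketch. All names below are hypothetical simplifications: the real `computeEnforceCoalesceWithRepartitionForClustering` inspected the table and instant, which is reduced here to a boolean parameter, and the enum is a local stand-in for Hudi's `WriteOperationType`.

   ```java
   // Hypothetical, simplified model of the removed flag computation for table
   // services: only CLUSTER could enable coalesce-with-repartition, and even
   // then a per-instant clustering check could veto it.
   public class ClusteringFlagSketch {
     // Local stand-in for Hudi's WriteOperationType enum.
     enum WriteOperationType { CLUSTER, COMPACT, CLEAN }

     static boolean enforceCoalesceWithRepartition(WriteOperationType op,
                                                   boolean clusteringCheckAllowsIt) {
       boolean flag = op == WriteOperationType.CLUSTER; // false for other table services
       if (flag) {
         flag = clusteringCheckAllowsIt; // refine only when clustering
       }
       return flag;
     }

     public static void main(String[] args) {
       System.out.println(enforceCoalesceWithRepartition(WriteOperationType.CLUSTER, true));  // true
       System.out.println(enforceCoalesceWithRepartition(WriteOperationType.COMPACT, true)); // false
     }
   }
   ```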



##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDWriteClient.java:
##########
@@ -109,9 +108,8 @@ public boolean commit(String instantTime, JavaRDD<WriteStatus> rawWriteStatuses,
     final JavaRDD<WriteStatus> writeStatuses;
     if (WriteOperationType.streamingWritesToMetadataSupported((getOperationType())) && isStreamingWriteToMetadataEnabled(table)) {
       // this code block is expected to create a new Metadata Writer, start a new commit in metadata table and trigger streaming write to metadata table.
-      boolean enforceCoalesceWithRepartition = getOperationType() == WriteOperationType.BULK_INSERT && config.getBulkInsertSortMode() == BulkInsertSortMode.NONE;
       writeStatuses = HoodieJavaRDD.getJavaRDD(streamingMetadataWriteHandler.streamWriteToMetadataTable(table, HoodieJavaRDD.of(rawWriteStatuses), instantTime,
-          enforceCoalesceWithRepartition, config.getMetadataConfig().getStreamingWritesCoalesceDivisorForDataTableWrites()));

Review Comment:
   Note to myself: previously, `enforceCoalesceWithRepartition` was set to `true` for bulk insert with the `NONE` sort mode.
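
   The removed write-path condition can likewise be sketched standalone. The enums here are hypothetical local stand-ins for Hudi's `WriteOperationType` and `BulkInsertSortMode`; only the boolean expression mirrors the deleted line.

   ```java
   // Hypothetical, simplified model of the removed flag on the write path:
   // only BULK_INSERT without sorting (sort mode NONE) enabled
   // coalesce-with-repartition; every other operation left it false.
   public class BulkInsertFlagSketch {
     // Local stand-ins for Hudi's enums.
     enum WriteOperationType { INSERT, UPSERT, BULK_INSERT }
     enum BulkInsertSortMode { NONE, GLOBAL_SORT, PARTITION_SORT }

     static boolean enforceCoalesceWithRepartition(WriteOperationType op,
                                                   BulkInsertSortMode sortMode) {
       return op == WriteOperationType.BULK_INSERT && sortMode == BulkInsertSortMode.NONE;
     }

     public static void main(String[] args) {
       System.out.println(enforceCoalesceWithRepartition(
           WriteOperationType.BULK_INSERT, BulkInsertSortMode.NONE));        // true
       System.out.println(enforceCoalesceWithRepartition(
           WriteOperationType.BULK_INSERT, BulkInsertSortMode.GLOBAL_SORT)); // false
     }
   }
   ```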



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
