the-other-tim-brown commented on code in PR #13882:
URL: https://github.com/apache/hudi/pull/13882#discussion_r2342351606


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -109,11 +108,7 @@ public HoodieWriteMetadata<HoodieData<WriteStatus>> performClustering(final Hood
         Math.min(clusteringPlan.getInputGroups().size(), writeConfig.getClusteringMaxParallelism()),
         new CustomizedThreadFactory("clustering-job-group", true));
     try {
-      boolean canUseRowWriter = getWriteConfig().getBooleanOrDefault("hoodie.datasource.write.row.writer.enable", true)
-          && HoodieDataTypeUtils.canUseRowWriter(schema, engineContext.hadoopConfiguration());
-      if (canUseRowWriter) {
-        HoodieDataTypeUtils.tryOverrideParquetWriteLegacyFormatProperty(writeConfig.getProps(), schema);
-      }

Review Comment:
   Yes, this check existed mainly for decimal support in the past; that is all handled in `HoodieRowParquetWriteSupport` now.
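   For context, the removed guard boiled down to a schema scan: if the write schema contained decimal fields, a legacy Parquet write-format property was forced on so the row writer would encode decimals in the layout older readers expect. A minimal standalone sketch of that kind of check (hypothetical `LegacyFormatCheck`/`needsLegacyFormat` names, not Hudi's actual code, which lives in `HoodieDataTypeUtils`) might look like:

   ```java
   import java.util.List;

   // Hypothetical illustration only: scans field type strings and reports
   // whether any decimal column is present, which is the condition the old
   // guard used to decide whether to override the legacy Parquet format flag.
   public class LegacyFormatCheck {

     public static boolean needsLegacyFormat(List<String> fieldTypes) {
       for (String t : fieldTypes) {
         // Decimal columns are the ones whose Parquet encoding differed
         // between the legacy and standard write formats.
         if (t.toLowerCase().startsWith("decimal")) {
           return true;
         }
       }
       return false;
     }

     public static void main(String[] args) {
       System.out.println(needsLegacyFormat(List.of("string", "decimal(10,2)"))); // prints true
       System.out.println(needsLegacyFormat(List.of("string", "long")));          // prints false
     }
   }
   ```

   With the encoding decision moved into `HoodieRowParquetWriteSupport`, this kind of up-front schema check (and the config override it triggered) is no longer needed at the call site.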



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
