xiarixiaoyao commented on a change in pull request #3330:
URL: https://github.com/apache/hudi/pull/3330#discussion_r701533938



##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/HoodieSparkCopyOnWriteTable.java
##########
@@ -149,6 +155,37 @@ public HoodieWriteMetadata insertOverwrite(HoodieEngineContext context, String i
     return new SparkInsertOverwriteTableCommitActionExecutor(context, config, this, instantTime, records).execute();
   }
 
+  @Override
+  public HoodieWriteMetadata<JavaRDD<WriteStatus>> optimize(HoodieEngineContext context, String instantTime, JavaRDD<HoodieRecord<T>> records) {

Review comment:
       This is already implemented as another clustering strategy.
   We have implemented two ways to do Z-ordering: one as a clustering strategy,
the other via the `optimize` API method.
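For context on the clustering-strategy route mentioned above: it is typically driven through writer configs rather than a dedicated API call. The keys below are an assumption based on Hudi releases around 0.10.x and may not match this PR's branch exactly; treat this as an illustrative sketch, not the PR's actual configuration surface.

```properties
# Hypothetical sketch (assumed key names, roughly Hudi ~0.10.x):
# trigger clustering inline on the write path
hoodie.clustering.inline=true
hoodie.clustering.inline.max.commits=4
# ask the clustering strategy to lay data out along a Z-order curve
# over the listed columns
hoodie.layout.optimize.enable=true
hoodie.layout.optimize.strategy=z-order
hoodie.clustering.plan.strategy.sort.columns=col_a,col_b
```

The alternative discussed in this thread is a dedicated `optimize(...)` entry point on the table, as shown in the diff above; the reviewer's point is that the clustering-strategy route already covers the same functionality.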




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

