vinothchandar commented on a change in pull request #2918:
URL: https://github.com/apache/hudi/pull/2918#discussion_r638404203



##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/CreateFixedFileHandleFactory.java
##########
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.io;
+
+import org.apache.hudi.common.engine.TaskContextSupplier;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.hudi.config.HoodieWriteConfig;
+import org.apache.hudi.exception.HoodieIOException;
+import org.apache.hudi.table.HoodieTable;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+/**
+ * A HoodieCreateHandleFactory is used to write all data in the spark partition into a single data file.
+ *
+ * Please use this with caution. This can end up creating very large files if not used correctly.
+ */
+public class CreateFixedFileHandleFactory<T extends HoodieRecordPayload, I, K, O> extends WriteHandleFactory<T, I, K, O> {

Review comment:
       can we subclass this from CreateHandleFactory? or call this `SingleFileCreateHandleFactory`?
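
   A rough sketch of the suggested subclassing shape (assuming CreateHandleFactory exposes the same create(...) signature as WriteHandleFactory's, shown truncated further below; the class body here is illustrative, not code from this PR):

   ```java
   // (imports as in the CreateFixedFileHandleFactory snippet above, plus CreateHandleFactory)
   // Hypothetical sketch: subclass CreateHandleFactory and pin a single file id.
   public class SingleFileCreateHandleFactory<T extends HoodieRecordPayload, I, K, O>
       extends CreateHandleFactory<T, I, K, O> {

     private final AtomicBoolean handleCreated = new AtomicBoolean(false);
     private final String fileId;

     public SingleFileCreateHandleFactory(String fileId) {
       this.fileId = fileId;
     }

     @Override
     public HoodieWriteHandle<T, I, K, O> create(HoodieWriteConfig config, String commitTime,
         HoodieTable<T, I, K, O> table, String partitionPath, String fileIdPrefix,
         TaskContextSupplier taskContextSupplier) {
       // Guard: this factory is meant to hand out exactly one handle.
       if (!handleCreated.compareAndSet(false, true)) {
         throw new HoodieIOException("SingleFileCreateHandleFactory can only create one handle");
       }
       // Delegate to the parent factory, substituting the fixed file id for the prefix.
       return super.create(config, commitTime, table, partitionPath, fileId, taskContextSupplier);
     }
   }
   ```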

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/cluster/SparkExecuteClusteringCommitActionExecutor.java
##########
@@ -148,9 +151,12 @@ private void validateWriteResult(HoodieWriteMetadata<JavaRDD<WriteStatus>> write
      JavaSparkContext jsc = HoodieSparkEngineContext.getSparkContext(context);
      JavaRDD<HoodieRecord<? extends HoodieRecordPayload>> inputRecords = readRecordsForGroup(jsc, clusteringGroup);
      Schema readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(config.getSchema()));
+      List<HoodieFileGroupId> inputFileIds = clusteringGroup.getSlices().stream()

Review comment:
       so the input file ids are already in the serialized plan? This PR just passes this around additionally?
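
   If the ids are indeed already in the plan, a sketch of re-deriving them instead of threading them through (assuming ClusteringUtils.getFileGroupsFromClusteringPlan returns a Stream<HoodieFileGroupId>, as the getPartitionToReplacedFileIds diff below suggests):

   ```java
   // Hypothetical alternative: recover the input file group ids from the
   // already-serialized clustering plan rather than passing them around.
   List<HoodieFileGroupId> inputFileIds =
       ClusteringUtils.getFileGroupsFromClusteringPlan(clusteringPlan)
           .collect(Collectors.toList());
   ```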

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/cluster/SparkExecuteClusteringCommitActionExecutor.java
##########
@@ -163,8 +169,10 @@ protected String getCommitActionType() {
 
   @Override
  protected Map<String, List<String>> getPartitionToReplacedFileIds(JavaRDD<WriteStatus> writeStatuses) {
-    return ClusteringUtils.getFileGroupsFromClusteringPlan(clusteringPlan).collect(
-        Collectors.groupingBy(fg -> fg.getPartitionPath(), Collectors.mapping(fg -> fg.getFileId(), Collectors.toList())));
+    Set<HoodieFileGroupId> newFilesWritten = new HashSet(writeStatuses.map(s -> s.getFileId()).collect());

Review comment:
       rename: `newFileIds`

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/cluster/strategy/ClusteringExecutionStrategy.java
##########
@@ -51,7 +53,7 @@ public ClusteringExecutionStrategy(HoodieTable table, HoodieEngineContext engine
   * Note that commit is not done as part of strategy. commit is callers responsibility.
   */
  public abstract O performClustering(final I inputRecords, final int numOutputGroups, final String instantTime,
-                                      final Map<String, String> strategyParams, final Schema schema);
+                                      final Map<String, String> strategyParams, final Schema schema, final List<HoodieFileGroupId> inputFileIds);

Review comment:
       can you please add javadocs for this method explaining what each param is.
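
   A draft of what that javadoc might look like, with parameter meanings inferred from the surrounding diffs (a starting point, not the PR author's wording):

   ```java
   /**
    * Performs clustering over the given input records and writes them out as new file groups.
    * Note that the commit is not done as part of the strategy; committing is the caller's responsibility.
    *
    * @param inputRecords    records read from the file groups being clustered
    * @param numOutputGroups number of output file groups to produce
    * @param instantTime     instant time of the clustering action
    * @param strategyParams  strategy-specific parameters from the clustering plan
    * @param schema          reader schema (including Hudi metadata fields) for the input records
    * @param inputFileIds    file group ids of the input slices being replaced
    * @return engine-specific write statuses for the newly written file groups
    */
   ```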

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieCreateFixedHandle.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.io;
+
+import org.apache.avro.Schema;
+import org.apache.hudi.common.engine.TaskContextSupplier;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.hudi.common.util.collection.Pair;
+import org.apache.hudi.config.HoodieWriteConfig;
+import org.apache.hudi.table.HoodieTable;
+import org.apache.log4j.LogManager;
+import org.apache.log4j.Logger;
+
+import java.util.Map;
+
+/**
+ * A HoodieCreateHandle which writes all data into a single file.

Review comment:
       This is a bit of a misnomer. Even HoodieCreateHandle only writes to a single file.
   
   Rename: HoodieUnboundedCreateHandle, or something that captures the intent that this does not respect the sizing aspects.

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieCreateFixedHandle.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.io;
+
+import org.apache.avro.Schema;
+import org.apache.hudi.common.engine.TaskContextSupplier;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.hudi.common.util.collection.Pair;
+import org.apache.hudi.config.HoodieWriteConfig;
+import org.apache.hudi.table.HoodieTable;
+import org.apache.log4j.LogManager;
+import org.apache.log4j.Logger;
+
+import java.util.Map;
+
+/**
+ * A HoodieCreateHandle which writes all data into a single file.
+ *
+ * Please use this with caution. This can end up creating very large files if not used correctly.
+ */
+public class HoodieCreateFixedHandle<T extends HoodieRecordPayload, I, K, O> extends HoodieCreateHandle<T, I, K, O> {
+
+  private static final Logger LOG = LogManager.getLogger(HoodieCreateFixedHandle.class);
+
+  public HoodieCreateFixedHandle(HoodieWriteConfig config, String instantTime, HoodieTable<T, I, K, O> hoodieTable,
+                                 String partitionPath, String fileId, TaskContextSupplier taskContextSupplier) {
+    super(config, instantTime, hoodieTable, partitionPath, fileId, getWriterSchemaIncludingAndExcludingMetadataPair(config),
+        taskContextSupplier);
+  }
+
+  public HoodieCreateFixedHandle(HoodieWriteConfig config, String instantTime, HoodieTable<T, I, K, O> hoodieTable,
+                                 String partitionPath, String fileId, Pair<Schema, Schema> writerSchemaIncludingAndExcludingMetadataPair,
+                                 TaskContextSupplier taskContextSupplier) {
+    super(config, instantTime, hoodieTable, partitionPath, fileId, writerSchemaIncludingAndExcludingMetadataPair,
+        taskContextSupplier);
+  }
+
+  /**
+   * Called by the compactor code path.
+   */
+  public HoodieCreateFixedHandle(HoodieWriteConfig config, String instantTime, HoodieTable<T, I, K, O> hoodieTable,
+                                 String partitionPath, String fileId, Map<String, HoodieRecord<T>> recordMap,
+                                 TaskContextSupplier taskContextSupplier) {
+    this(config, instantTime, hoodieTable, partitionPath, fileId, taskContextSupplier);
+  }
+
+  @Override
+  public boolean canWrite(HoodieRecord record) {

Review comment:
       Let's just reuse CreateHandle with a large target file size, if we are doing all this just for a specific clustering strategy?
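
   For reference, a minimal sketch of that alternative, assuming the HoodieWriteConfig/HoodieStorageConfig builder methods of this era (parquetMaxFileSize is the sizing knob being assumed here):

   ```java
   // Hypothetical sketch: disable size-based rollover via config instead of a new handle class.
   HoodieWriteConfig unboundedConfig = HoodieWriteConfig.newBuilder()
       .withProps(config.getProps())
       .withStorageConfig(HoodieStorageConfig.newBuilder()
           .parquetMaxFileSize(Long.MAX_VALUE) // effectively "never roll over to a new file"
           .build())
       .build();
   // The stock CreateHandleFactory would then keep writing to a single file per task.
   ```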

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/cluster/SparkExecuteClusteringCommitActionExecutor.java
##########
@@ -163,8 +169,10 @@ protected String getCommitActionType() {
 
   @Override
  protected Map<String, List<String>> getPartitionToReplacedFileIds(JavaRDD<WriteStatus> writeStatuses) {
-    return ClusteringUtils.getFileGroupsFromClusteringPlan(clusteringPlan).collect(
-        Collectors.groupingBy(fg -> fg.getPartitionPath(), Collectors.mapping(fg -> fg.getFileId(), Collectors.toList())));
+    Set<HoodieFileGroupId> newFilesWritten = new HashSet(writeStatuses.map(s -> s.getFileId()).collect());
+    return ClusteringUtils.getFileGroupsFromClusteringPlan(clusteringPlan)
+        .filter(fg -> !newFilesWritten.contains(fg))

Review comment:
       sorry. not following. why do we need this filter?
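
   One thing worth confirming while this is clarified: `newFilesWritten` holds plain fileId strings collected from WriteStatus, while `fg` is a HoodieFileGroupId, so the `contains` check as written may never match. If the intent is to keep in-place-rewritten file groups out of the replaced set, a type-consistent sketch (illustrative, not the PR's code):

   ```java
   // Hypothetical fix sketch: compare on the fileId string, which is what the set holds.
   Set<String> newFileIds = new HashSet<>(writeStatuses.map(WriteStatus::getFileId).collect());
   return ClusteringUtils.getFileGroupsFromClusteringPlan(clusteringPlan)
       .filter(fg -> !newFileIds.contains(fg.getFileId()))
       .collect(Collectors.groupingBy(HoodieFileGroupId::getPartitionPath,
           Collectors.mapping(HoodieFileGroupId::getFileId, Collectors.toList())));
   ```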

##########
File path: hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/TestHoodieClientOnCopyOnWriteStorage.java
##########
@@ -1167,7 +1177,7 @@ public void testPendingClusteringRollback() throws Exception {
     fileIdIntersection.retainAll(fileIds2);
     assertEquals(0, fileIdIntersection.size());
 
-    config = getConfigBuilder(HoodieFailedWritesCleaningPolicy.LAZY).withAutoCommit(completeClustering)
+    config = getConfigBuilder(HoodieFailedWritesCleaningPolicy.LAZY).withAutoCommit(false)

Review comment:
       so we don't honor `completeClustering` anymore? Not following why this change was needed.

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/CreateFixedFileHandleFactory.java
##########
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.io;
+
+import org.apache.hudi.common.engine.TaskContextSupplier;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.hudi.config.HoodieWriteConfig;
+import org.apache.hudi.exception.HoodieIOException;
+import org.apache.hudi.table.HoodieTable;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+/**
+ * A HoodieCreateHandleFactory is used to write all data in the spark partition into a single data file.
+ *
+ * Please use this with caution. This can end up creating very large files if not used correctly.
+ */
+public class CreateFixedFileHandleFactory<T extends HoodieRecordPayload, I, K, O> extends WriteHandleFactory<T, I, K, O> {
+
+  private AtomicBoolean isHandleCreated = new AtomicBoolean(false);
+  private String fileId;
+  
+  public CreateFixedFileHandleFactory(String fileId) {
+    super();
+    this.fileId = fileId;
+  }
+
+  @Override
+  public HoodieWriteHandle<T, I, K, O> create(final HoodieWriteConfig hoodieConfig, final String commitTime,

Review comment:
       wondering why we need this, actually. Wouldn't just passing `Long.MAX_VALUE` as the target file size get the create handle to do this?

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/cluster/SparkExecuteClusteringCommitActionExecutor.java
##########
@@ -257,12 +265,4 @@ protected String getCommitActionType() {
     return hoodieRecord;
   }
 
-  private HoodieWriteMetadata<JavaRDD<WriteStatus>> buildWriteMetadata(JavaRDD<WriteStatus> writeStatusJavaRDD) {

Review comment:
       this was removed because the constructor does the same job?



