[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r409868821

File path: hudi-spark/src/test/java/org/apache/hudi/table/NoOpBulkInsertPartitioner.java

```diff
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.table;
+
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.spark.api.java.JavaRDD;
+
+public class NoOpBulkInsertPartitioner
```

Review comment: Sure, I will update it.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r409868673

File path: hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java

```diff
@@ -152,6 +154,24 @@ public static KeyGenerator createKeyGenerator(TypedProperties props) throws IOEx
     }
   }
 
+  /**
+   * Create a UserDefinedBulkInsertPartitioner class via reflection,
+   *
+   * if the class name of UserDefinedBulkInsertPartitioner is configured through the HoodieWriteConfig.
+   * @see HoodieWriteConfig#getUserDefinedBulkInsertPartitionerClass()
+   */
+  private static Option createUserDefinedBulkInsertPartitioner(HoodieWriteConfig config)
+      throws IOException {
+    String bulkInsertPartitionerClass = config.getUserDefinedBulkInsertPartitionerClass();
+    try {
+      return bulkInsertPartitionerClass == null || bulkInsertPartitionerClass.isEmpty()
+          ? Option.empty() :
+          Option.of((UserDefinedBulkInsertPartitioner) ReflectionUtils.loadClass(bulkInsertPartitionerClass));
+    } catch (Throwable e) {
+      throw new IOException("Could not create UserDefinedBulkInsertPartitioner class " + bulkInsertPartitionerClass, e);
```

Review comment: Ok, I will change it.
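A JDK-only sketch of the pattern in the diff above, with `java.util.Optional` standing in for Hudi's `Option` and a generic `Object` result standing in for `UserDefinedBulkInsertPartitioner`; the class and method names here are illustrative, not part of the Hudi API. An unset or empty class name yields an empty `Optional`; otherwise the class is instantiated reflectively, and any failure is wrapped in `IOException`, mirroring how the PR reports a misconfigured class name.

```java
import java.io.IOException;
import java.util.Optional;

// Hypothetical names; this is not Hudi code.
public class ReflectiveFactory {

  // Empty config -> Optional.empty(); otherwise instantiate the named class,
  // wrapping any reflection failure in IOException (a user-config error).
  public static Optional<Object> createIfConfigured(String className) throws IOException {
    if (className == null || className.isEmpty()) {
      return Optional.empty();
    }
    try {
      return Optional.of(Class.forName(className).getDeclaredConstructor().newInstance());
    } catch (ReflectiveOperationException e) {
      throw new IOException("Could not create class " + className, e);
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(createIfConfigured(null).isPresent());                  // false
    System.out.println(createIfConfigured("java.util.ArrayList").isPresent()); // true
  }
}
```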
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r409198506

File path: hudi-spark/src/test/java/DataSourceUtilsTest.java

```diff
@@ -17,18 +17,58 @@
  */
 
 import org.apache.hudi.DataSourceUtils;
+import org.apache.hudi.DataSourceWriteOptions;
+import org.apache.hudi.client.HoodieWriteClient;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.config.HoodieWriteConfig;
+import org.apache.hudi.table.NoOpBulkInsertPartitioner;
 
 import org.apache.avro.Schema;
 import org.apache.avro.generic.GenericData;
 import org.apache.avro.generic.GenericRecord;
-import org.junit.jupiter.api.Test;
+import org.apache.spark.api.java.JavaRDD;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
```

Review comment: I'm using Mockito; until Mockito is upgraded to a version that supports JUnit 5, this doesn't seem to work. I think this is the same situation as https://github.com/apache/incubator-hudi/blob/master/hudi-common/src/test/java/org/apache/hudi/common/table/view/TestPriorityBasedFileSystemView.java#L54
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r408507089

File path: hudi-client/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java

```diff
@@ -55,6 +55,7 @@
   private static final String DEFAULT_PARALLELISM = "1500";
   private static final String INSERT_PARALLELISM = "hoodie.insert.shuffle.parallelism";
   private static final String BULKINSERT_PARALLELISM = "hoodie.bulkinsert.shuffle.parallelism";
+  private static final String BULKINSERT_USER_DEFINED_PARTITIONER_CLASS = "hoodie.bulkinsert.user_defined.partitioner.class";
```

Review comment: It has been updated.
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r407780799

File path: hudi-spark/src/test/java/org/apache/hudi/table/NoOpBulkInsertPartitioner.java

```diff
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.table;
+
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.model.HoodieRecordPayload;
+import org.apache.spark.api.java.JavaRDD;
+
+public class NoOpBulkInsertPartitioner
```

Review comment: I could use a lambda to implement it; however, to test it, the class has to be loadable (and instantiable) by the reflection API (```Class.forName(clazzName)```) from its class name. I wasn't sure how a class loader could load a lambda class from its class name; from my quick test, it doesn't work. Do you have any suggestions?
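The constraint described in the comment above can be shown with a minimal JDK-only sketch (the class and method names are mine, not Hudi's): `ReflectionUtils`-style loading resolves a class by name, which works for an ordinary named class but not for a lambda, whose runtime class gets a synthetic name that `Class.forName` cannot resolve again.

```java
// Hypothetical demo class; not part of the Hudi codebase.
public class LambdaLoadDemo {

  /** Returns true if the given class name can be resolved reflectively. */
  public static boolean loadableByName(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // An ordinary named class resolves fine, as with a concrete
    // NoOpBulkInsertPartitioner class.
    System.out.println(loadableByName("java.util.ArrayList"));  // true

    // A lambda's runtime class has a synthetic name (e.g. containing
    // "$$Lambda"), which Class.forName rejects, so a lambda could not be
    // configured by class name.
    Runnable lambda = () -> { };
    System.out.println(loadableByName(lambda.getClass().getName()));  // false
  }
}
```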
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r40827

File path: hudi-client/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java

```diff
@@ -55,6 +55,7 @@
   private static final String DEFAULT_PARALLELISM = "1500";
   private static final String INSERT_PARALLELISM = "hoodie.insert.shuffle.parallelism";
   private static final String BULKINSERT_PARALLELISM = "hoodie.bulkinsert.shuffle.parallelism";
+  private static final String BULKINSERT_USER_DEFINED_PARTITIONER_CLASS = "hoodie.bulkinsert.user_defined.partitioner.class";
```

Review comment: Sure, I will change it.
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r40801

File path: hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java

```diff
@@ -152,6 +154,24 @@ public static KeyGenerator createKeyGenerator(TypedProperties props) throws IOEx
     }
   }
 
+  /**
+   * Create a UserDefinedBulkInsertPartitioner class via reflection,
+   *
+   * if the class name of UserDefinedBulkInsertPartitioner is configured through the HoodieWriteConfig.
+   * @see HoodieWriteConfig#getUserDefinedBulkInsertPartitionerClass()
+   */
+  private static Option createUserDefinedBulkInsertPartitioner(HoodieWriteConfig config)
+      throws IOException {
+    String bulkInsertPartitionerClass = config.getUserDefinedBulkInsertPartitionerClass();
+    try {
+      return bulkInsertPartitionerClass == null || bulkInsertPartitionerClass.isEmpty()
```

Review comment: Please let me update it.
[GitHub] [incubator-hudi] kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
kwondw commented on a change in pull request #1500: [HUDI-772] Make UserDefinedBulkInsertPartitioner configurable for DataSource
URL: https://github.com/apache/incubator-hudi/pull/1500#discussion_r40746

File path: hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java

```diff
@@ -152,6 +154,24 @@ public static KeyGenerator createKeyGenerator(TypedProperties props) throws IOEx
     }
   }
 
+  /**
+   * Create a UserDefinedBulkInsertPartitioner class via reflection,
+   *
+   * if the class name of UserDefinedBulkInsertPartitioner is configured through the HoodieWriteConfig.
+   * @see HoodieWriteConfig#getUserDefinedBulkInsertPartitionerClass()
+   */
+  private static Option createUserDefinedBulkInsertPartitioner(HoodieWriteConfig config)
+      throws IOException {
+    String bulkInsertPartitionerClass = config.getUserDefinedBulkInsertPartitionerClass();
+    try {
+      return bulkInsertPartitionerClass == null || bulkInsertPartitionerClass.isEmpty()
+          ? Option.empty() :
+          Option.of((UserDefinedBulkInsertPartitioner) ReflectionUtils.loadClass(bulkInsertPartitionerClass));
+    } catch (Throwable e) {
+      throw new IOException("Could not create UserDefinedBulkInsertPartitioner class " + bulkInsertPartitionerClass, e);
```

Review comment: While reading the code of [DataSourceUtils](https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java#L58), I got the impression that it throws IOException for user configuration issues, while Hoodie*Exception is meant more for Hudi-internal errors. For example, a failure to load the ```hoodie.datasource.write.keygenerator.class``` class is thrown as IOException from [createKeyGenerator](https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java#L151), and likewise for ```hoodie.datasource.write.payload.class``` in [createPayload](https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/DataSourceUtils.java#L164). So the question is: when a user misconfigures a class name, should that be treated as a "hoodie" error, or as a general configuration error, i.e. a user mistake? I would prefer a dedicated user-configuration exception, but I wasn't sure about the intent behind the IOException used for the other configurations, so I followed the existing approach. Please let me know if you think this should be a HoodieException, and I can update it.