Xuehai-Chen opened a new issue, #10831:
URL: https://github.com/apache/hudi/issues/10831

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
- Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
Running `insert overwrite` on a partitioned Hudi MOR table fails on Spark 3.3.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
1. Start the spark-sql CLI.
   2. Create a partitioned MOR table:
   ```
create table if not exists hudi_mor_pt_autotest (
     id bigint, name string, ts bigint, dt string, hh string
   ) using hudi
   tblproperties (type = 'mor', primaryKey = 'id', preCombineField = 'ts')
   partitioned by (dt, hh);
   ```
3. Insert overwrite the static partition (an optional seeding step is sketched right after this list):
   ```
insert overwrite hudi_mor_pt_autotest partition(dt = '2021-12-10', hh = '11')
   select 2, 'spark_auotest', 1000;
   ```
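   
   For completeness, the target partition can optionally be seeded before step 3. It is an assumption on my side that pre-existing data is not required to reproduce (the stacktrace suggests the command fails while deducing the write config), but this confirms the overwrite fails whether or not the partition is empty:
   ```
   -- Optional, hypothetical seeding row; the reproduction appears to fail before any data is written.
   insert into hudi_mor_pt_autotest partition(dt = '2021-12-10', hh = '11')
   select 1, 'seed_row', 999;
   ```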
   
   **Expected behavior**
The overwrite operation completes successfully, leaving the partition with only the newly written row.
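   
   For reference, a quick check after a successful run (assuming the table and static partition values above) should return only that row:
   ```
   -- Expected to return a single row: 2, 'spark_auotest', 1000, '2021-12-10', '11'
   select id, name, ts, dt, hh from hudi_mor_pt_autotest where dt = '2021-12-10' and hh = '11';
   ```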
   
   **Environment Description**
   
   * Hudi version : 0.14.1
   
   * Spark version : 3.3.3
   
   * Hive version : x
   
   * Hadoop version : x
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no 
   
   
   **Additional context**
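   
   A variant that might help triage: the same row written without a static partition spec, with the partition values supplied in the SELECT list. It is only an assumption that this takes a different code path in InsertIntoHoodieTableCommand (and it may overwrite more than the single partition), so it is meant as a comparison data point rather than a confirmed workaround:
   ```
   -- Hypothetical comparison case, not a verified workaround.
   insert overwrite table hudi_mor_pt_autotest
   select 2, 'spark_auotest', 1000, '2021-12-10', '11';
   ```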
   
   
   **Stacktrace**
   
   ```
24/03/06 11:31:25 ERROR SparkSQLDriver: Failed in [insert overwrite hudi_mor_pt_autotest partition(dt = '2021-12-10', hh='11') select 2, 'spark_auotest', 1000]
   java.lang.IllegalArgumentException: 'path' or 'glob paths' option required
           at org.apache.hudi.HoodieFileIndex$.$anonfun$getQueryPaths$1(HoodieFileIndex.scala:518)
           at scala.collection.immutable.HashMap$HashMap1.getOrElse0(HashMap.scala:361)
           at scala.collection.immutable.HashMap$HashTrieMap.getOrElse0(HashMap.scala:594)
           at scala.collection.immutable.HashMap.getOrElse(HashMap.scala:73)
           at org.apache.hudi.HoodieFileIndex$.org$apache$hudi$HoodieFileIndex$$getQueryPaths(HoodieFileIndex.scala:518)
           at org.apache.hudi.HoodieFileIndex.<init>(HoodieFileIndex.scala:85)
           at org.apache.spark.sql.hudi.ProvidesHoodieConfig.deduceOverwriteConfig(ProvidesHoodieConfig.scala:377)
           at org.apache.spark.sql.hudi.ProvidesHoodieConfig.deduceOverwriteConfig$(ProvidesHoodieConfig.scala:349)
           at org.apache.spark.sql.hudi.command.InsertIntoHoodieTableCommand$.deduceOverwriteConfig(InsertIntoHoodieTableCommand.scala:66)
           at org.apache.spark.sql.hudi.command.InsertIntoHoodieTableCommand$.run(InsertIntoHoodieTableCommand.scala:92)
           at org.apache.spark.sql.hudi.command.InsertIntoHoodieTableCommand.run(InsertIntoHoodieTableCommand.scala:61)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
           at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:171)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
           at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
           at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
           at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
           at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
           at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
           at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
           at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
           at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
           at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:384)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:504)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:498)
           at scala.collection.Iterator.foreach(Iterator.scala:943)
           at scala.collection.Iterator.foreach$(Iterator.scala:943)
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
           at scala.collection.IterableLike.foreach(IterableLike.scala:74)
           at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
           at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:498)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:286)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:984)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:191)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:214)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1072)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1081)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
   
   

