Pavan created HUDI-4979:
---------------------------

             Summary: Delete operation is failing with NullPointerException
                 Key: HUDI-4979
                 URL: https://issues.apache.org/jira/browse/HUDI-4979
             Project: Apache Hudi
          Issue Type: Bug
            Reporter: Pavan


On a particular Hudi table, performing a delete operation fails with a NullPointerException. The full stack trace is included below.

Hudi version: 0.11.1

Table is 
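
For context, the delete goes through the Spark datasource write path (visible in the trace below). A minimal sketch of that kind of call, assuming a spark-shell session where `spark` is in scope, is shown here; the table name, base path, and field names are placeholders rather than the actual values from the affected table:

{code:scala}
import org.apache.spark.sql.SaveMode

// Rows identifying the records to delete; in practice this DataFrame would
// carry the record keys (and partition values) of the rows to remove.
// "/tmp/hudi/my_table" is a placeholder base path.
val deletesDf = spark.read.format("hudi").load("/tmp/hudi/my_table").limit(10)

deletesDf.write.format("hudi").
  option("hoodie.datasource.write.operation", "delete").    // delete instead of upsert
  option("hoodie.table.name", "my_table").                  // placeholder table name
  option("hoodie.datasource.write.recordkey.field", "id").  // placeholder key field
  option("hoodie.datasource.write.precombine.field", "ts"). // placeholder precombine field
  mode(SaveMode.Append).
  save("/tmp/hudi/my_table")
{code}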

java.lang.NullPointerException
  at java.util.Hashtable.put(Hashtable.java:460)
  at java.util.Properties.setProperty(Properties.java:166)
  at org.apache.hudi.common.config.HoodieConfig.setValue(HoodieConfig.java:61)
  at org.apache.hudi.config.metrics.HoodieMetricsGraphiteConfig$Builder.usePrefix(HoodieMetricsGraphiteConfig.java:131)
  at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.createMetadataWriteConfig(HoodieBackedTableMetadataWriter.java:301)
  at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.<init>(HoodieBackedTableMetadataWriter.java:152)
  at org.apache.hudi.metadata.SparkHoodieBackedTableMetadataWriter.<init>(SparkHoodieBackedTableMetadataWriter.java:89)
  at org.apache.hudi.metadata.SparkHoodieBackedTableMetadataWriter.create(SparkHoodieBackedTableMetadataWriter.java:75)
  at org.apache.hudi.client.SparkRDDWriteClient.initializeMetadataTable(SparkRDDWriteClient.java:446)
  at org.apache.hudi.client.SparkRDDWriteClient.doInitTable(SparkRDDWriteClient.java:431)
  at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1458)
  at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1490)
  at org.apache.hudi.client.SparkRDDWriteClient.delete(SparkRDDWriteClient.java:257)
  at org.apache.hudi.DataSourceUtils.doDeleteOperation(DataSourceUtils.java:225)
  at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:211)
  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:171)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:115)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
  at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
  at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
  at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:112)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:108)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:519)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:83)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:519)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:495)
  at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:108)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:95)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:93)
  at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:136)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
  at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
  ... 68 elided
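
Reading the trace: java.util.Hashtable.put throws a NullPointerException when handed a null value, and HoodieConfig.setValue reaches it through Properties.setProperty. The failing frame HoodieMetricsGraphiteConfig$Builder.usePrefix therefore suggests the Graphite metric prefix resolves to null while the metadata table's write config is being built, e.g. when metrics are enabled with the GRAPHITE reporter but hoodie.metrics.graphite.metric.prefix is not set. As an untested workaround sketch, passing an explicit prefix on the delete should keep that value non-null ("my_app" is a placeholder):

{code:scala}
// Untested workaround sketch: supply an explicit, non-null Graphite metric
// prefix so HoodieConfig.setValue never receives a null value while the
// metadata table write config is assembled. "my_app" is a placeholder.
deletesDf.write.format("hudi").
  option("hoodie.datasource.write.operation", "delete").
  option("hoodie.table.name", "my_table").
  option("hoodie.metrics.graphite.metric.prefix", "my_app").
  mode(SaveMode.Append).
  save("/tmp/hudi/my_table")
{code}

Alternatively, since the failure happens inside SparkRDDWriteClient.initializeMetadataTable, disabling the metadata table for this writer (hoodie.metadata.enable=false) should skip this code path entirely, at the cost of the metadata table's benefits.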



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
