chandu-1101 opened a new issue, #9141:
URL: https://github.com/apache/hudi/issues/9141

   **Describe the problem you faced**
   I am trying to merge CDC JSON data into a snapshot table. As the first step, I read a dataframe from the existing Parquet snapshot and tried to write it to S3 in Hudi format, which fails with the error below.

   1. I am running in spark-shell with 3 executors, each with 3 GB of memory and 1 core; the driver has 1 core and 1 GB of memory.
   2. The code below is annotated with a comment at the step where it fails.
   
   **To Reproduce**
   
   ```
   import org.apache.hudi.QuickstartUtils
   import org.apache.hudi.config.HoodieWriteConfig
   import org.apache.hudi.config.HoodieWriteConfig.TBL_NAME
   import org.apache.hudi.keygen.constant.KeyGeneratorOptions
   import org.apache.spark.sql.SaveMode.Overwrite
   import org.apache.spark.sql.functions.{col, hash, lit, pmod}

   // Application.spark() and SparkUtils.getSchema are in-house helpers.
   val snapshotDf = Application.spark().read.parquet("s3://bucket/snapshots-test/dbdump/_bid_9223370348443853913/")
   val cdcSchema = SparkUtils.getSchema("s3://bucket/schemas/dbdump-schema.json")
   val cdcDf = Application.spark().read.schema(cdcSchema).json("s3://bucket/inputs/dbdump/")
   /* done */

   /* merge them */
   snapshotDf.createOrReplaceTempView("snapshot")
   val snapshotDf2 = Application.spark().sql("select * from snapshot where cdc_oid is not null and cdc_oid != ''")

   // Bucket rows into 1000 partitions by hashing the record key per row.
   // (The original col("cdc_oid").hashCode() % 1000 hashes the Column object
   // itself on the driver, yielding the same constant for every row; pmod
   // keeps the bucket non-negative.)
   val snapshotDf3 = snapshotDf2.withColumn("hash", pmod(hash(col("cdc_oid")), lit(1000)))

   // This write is the step that fails (see the stacktrace below).
   snapshotDf3.write.format("hudi").options(QuickstartUtils.getQuickstartWriteConfigs())
     .option(HoodieWriteConfig.PRECOMBINE_FIELD_NAME.key(), "timestamp_in_millis")
     .option(KeyGeneratorOptions.RECORDKEY_FIELD_NAME.key(), "cdc_oid")
     .option(KeyGeneratorOptions.PARTITIONPATH_FIELD_NAME.key(), "hash")
     .option(TBL_NAME.key(), "GE11")
     .mode(Overwrite)
     .save("s3://bucket/snapshots-hudi/ge11/snapshot")
   
   ```
   
   Steps to reproduce the behavior:
   
   1. Run the program above on Parquet files totaling about 10 GB, with each row roughly 6 KB in size.
   
   **Expected behavior**
   1. The Hudi table should have been created from the snapshot Parquet files.
   2. The CDC data should then have been merged in (a sketch of that step follows), but the job failed before reaching it.
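
   A minimal sketch of that intended follow-up step, assuming `cdcDf` carries the same `cdc_oid` and `timestamp_in_millis` fields and the `hash` bucket column is derived the same way as above (`DataSourceWriteOptions.OPERATION` and `Append` mode are the standard knobs for upserting into an existing Hudi table):

   ```
   import org.apache.hudi.DataSourceWriteOptions
   import org.apache.spark.sql.SaveMode.Append
   import org.apache.spark.sql.functions.{col, hash, lit, pmod}

   // Derive the same bucket column on the CDC side so records land in the
   // partitions created by the initial snapshot load.
   val cdcWithHash = cdcDf.withColumn("hash", pmod(hash(col("cdc_oid")), lit(1000)))

   cdcWithHash.write.format("hudi").options(QuickstartUtils.getQuickstartWriteConfigs())
     .option(HoodieWriteConfig.PRECOMBINE_FIELD_NAME.key(), "timestamp_in_millis")
     .option(KeyGeneratorOptions.RECORDKEY_FIELD_NAME.key(), "cdc_oid")
     .option(KeyGeneratorOptions.PARTITIONPATH_FIELD_NAME.key(), "hash")
     .option(DataSourceWriteOptions.OPERATION.key(), DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
     .option(TBL_NAME.key(), "GE11")
     .mode(Append)  // Append so the existing table is upserted, not replaced
     .save("s3://bucket/snapshots-hudi/ge11/snapshot")
   ```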
   
   **Environment Description**
   
   * Hudi version : 0.12.3 (hudi-spark3.3-bundle_2.12-0.12.3.jar)
   
   * Spark version : 3.3.0
   
   * Hive version :
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) :
   
   
   **Additional context**
   
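   The stacktrace below shows the failing operation is an `upsert`, the default write operation under the quickstart configs. For an initial ~10 GB bootstrap like step 1, Hudi also offers a `bulk_insert` write operation, generally the cheaper path for first-time loads. A minimal sketch of the same initial write switched over, assuming everything else stays as in the repro above:

   ```
   import org.apache.hudi.DataSourceWriteOptions

   // Same initial snapshot load as above, but using bulk_insert instead of
   // the default upsert for the first-time table bootstrap.
   snapshotDf3.write.format("hudi").options(QuickstartUtils.getQuickstartWriteConfigs())
     .option(DataSourceWriteOptions.OPERATION.key(), DataSourceWriteOptions.BULK_INSERT_OPERATION_OPT_VAL)
     .option(HoodieWriteConfig.PRECOMBINE_FIELD_NAME.key(), "timestamp_in_millis")
     .option(KeyGeneratorOptions.RECORDKEY_FIELD_NAME.key(), "cdc_oid")
     .option(KeyGeneratorOptions.PARTITIONPATH_FIELD_NAME.key(), "hash")
     .option(TBL_NAME.key(), "GE11")
     .mode(Overwrite)
     .save("s3://bucket/snapshots-hudi/ge11/snapshot")
   ```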
   
   **Stacktrace**
   
   ```
   07-07 14:11:10  WARN DAGScheduler: Broadcasting large task binary with size 1033.6 KiB
   07-07 14:12:24  ERROR HoodieSparkSqlWriter$: UPSERT failed with errors
   org.apache.hudi.exception.HoodieException: Write to Hudi failed
     at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:148)
     at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:103)
     at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
     at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
     at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:114)
     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$7(SQLExecution.scala:139)
     at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
     at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:139)
     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:138)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:100)
     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:96)
     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:615)
     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:177)
     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:615)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:591)
     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:96)
     at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:83)
     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:81)
     at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:124)
     at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:860)
     at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390)
     at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:363)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
   
   ```
   
   

