[ https://issues.apache.org/jira/browse/HUDI-3244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

cdmikechen reopened HUDI-3244:
------------------------------

> UnsupportedOperationException when bulk insert to hudi
> ------------------------------------------------------
>
>                 Key: HUDI-3244
>                 URL: https://issues.apache.org/jira/browse/HUDI-3244
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: Spark Integration
>            Reporter: cdmikechen
>            Priority: Major
>             Fix For: 0.10.1
>
>
> When I bulk insert into Hudi, I hit the following error.
> {code}
> java.lang.UnsupportedOperationException
>       at java.base/java.util.Collections$UnmodifiableMap.put(Unknown Source)
>       at org.apache.hudi.DataSourceUtils.mayBeOverwriteParquetWriteLegacyFormatProp(DataSourceUtils.java:321)
>       at org.apache.hudi.spark3.internal.DefaultSource.getTable(DefaultSource.java:59)
>       at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Utils$.getTableFromProvider(DataSourceV2Utils.scala:83)
>       at org.apache.spark.sql.DataFrameWriter.getTable$1(DataFrameWriter.scala:322)
>       at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:338)
>       at org.apache.hudi.HoodieSparkSqlWriter$.bulkInsertAsRow(HoodieSparkSqlWriter.scala:477)
>       at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:158)
>       at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:164)
>       at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
>       at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>       at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>       at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
>       at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
>       at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
>       at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
>       at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:127)
>       at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:126)
>       at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:962)
>       at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
>       at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
>       at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
>       at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
>       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
>       at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:962)
>       at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:414)
>       at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:398)
>       at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:287)
>       at com.syzh.data.LoadRdbmsToHudi.writeHoodie(LoadRdbmsToHudi.scala:326)
>       at com.syzh.data.LoadRdbmsToHudi.loadJdbc(LoadRdbmsToHudi.scala:91)
>       at com.syzh.batch.spark.service.EtlService.ingestionTotalData(EtlService.java:114)
>       at com.syzh.batch.spark.listener.EtlBatchListener.doTask(EtlBatchListener.java:154)
>       at com.syzh.batch.spark.listener.EtlBatchListener.action(EtlBatchListener.java:110)
>       at org.apache.curator.framework.recipes.cache.TreeCache$2.apply(TreeCache.java:685)
>       at org.apache.curator.framework.recipes.cache.TreeCache$2.apply(TreeCache.java:679)
>       at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92)
>       at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
>       at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:84)
>       at org.apache.curator.framework.recipes.cache.TreeCache.callListeners(TreeCache.java:678)
>       at org.apache.curator.framework.recipes.cache.TreeCache.access$1400(TreeCache.java:69)
>       at org.apache.curator.framework.recipes.cache.TreeCache$4.run(TreeCache.java:790)
>       at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>       at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
>       at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>       at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
>       at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>       at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>       at java.base/java.lang.Thread.run(Unknown Source)
> {code}
> This error appears to come from changes introduced in 
> https://issues.apache.org/jira/browse/HUDI-2958 
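The stack trace points at `Collections$UnmodifiableMap.put` inside `DataSourceUtils.mayBeOverwriteParquetWriteLegacyFormatProp`, i.e. the method mutates the properties map it receives, which fails when the caller passes an unmodifiable view. A minimal sketch of that failure mode and one defensive fix (hypothetical method names; the property key is illustrative, not taken from Hudi's source):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class LegacyFormatPropDemo {

    // Mirrors the failing pattern: mutating the caller's map throws
    // UnsupportedOperationException when the caller wrapped it with
    // Collections.unmodifiableMap (simplified stand-in for
    // mayBeOverwriteParquetWriteLegacyFormatProp).
    static void overwriteInPlace(Map<String, String> props) {
        props.put("parquet.writeLegacyFormat", "true");
    }

    // Defensive variant: copy into a fresh mutable map before mutating,
    // so read-only inputs are handled safely.
    static Map<String, String> overwriteCopy(Map<String, String> props) {
        Map<String, String> copy = new HashMap<>(props);
        copy.put("parquet.writeLegacyFormat", "true");
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> props =
                Collections.unmodifiableMap(new HashMap<String, String>());
        boolean threw = false;
        try {
            overwriteInPlace(props);
        } catch (UnsupportedOperationException e) {
            threw = true; // same exception as in the stack trace above
        }
        Map<String, String> fixed = overwriteCopy(props);
        System.out.println(
                threw && fixed.containsKey("parquet.writeLegacyFormat"));
        // prints "true"
    }
}
```

Copying into a mutable map (or checking mutability before writing) would sidestep the exception regardless of what map implementation the Spark datasource layer hands in.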



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
