[
https://issues.apache.org/jira/browse/HUDI-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390169#comment-17390169
]
ASF GitHub Bot commented on HUDI-2251:
--------------------------------------
nsivabalan merged pull request #3367:
URL: https://github.com/apache/hudi/pull/3367
> Fix Exception Cause By Table Name Case Sensitivity For Append Mode Write
> ------------------------------------------------------------------------
>
> Key: HUDI-2251
> URL: https://issues.apache.org/jira/browse/HUDI-2251
> Project: Apache Hudi
> Issue Type: Sub-task
> Components: Spark Integration
> Reporter: pengzhiwei
> Assignee: pengzhiwei
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 0.9.0
>
>
> When a table name containing uppercase characters is written to the hoodie.properties
> file and data is then written via Spark SQL, an exception is thrown:
> {code:java}
> org.apache.hudi.exception.HoodieException: hoodie table with name
> hudi_17Gb_ext1 already exists at
> s3a://siva-test-bucket-june-16/hudi_testing/gh_arch_dump/hudi_5
> at
> org.apache.hudi.HoodieSparkSqlWriter$.handleSaveModes(HoodieSparkSqlWriter.scala:424)
> at
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:116)
> at
> org.apache.spark.sql.hudi.command.MergeIntoHoodieTableCommand.executeUpsert(MergeIntoHoodieTableCommand.scala:265)
> at
> org.apache.spark.sql.hudi.command.MergeIntoHoodieTableCommand.run(MergeIntoHoodieTableCommand.scala:151)
> at
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
> at
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
> at
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
> at
> org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
> at
> org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
> at
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
> at
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
> at
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
> at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
> at
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
> at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
> at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
> {code}
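> The failure above happens because the table name stored in hoodie.properties is
> compared against the name used by the writer with a case-sensitive check, so the
> same table under a differently-cased name is treated as a conflicting one. A minimal
> sketch of a case-insensitive comparison (hypothetical helper, not Hudi's actual code):
> {code:java}
> public class TableNameCheck {
>     // Hypothetical sketch: compare the table name requested by the writer
>     // against the one stored in hoodie.properties, ignoring case, so that
>     // an append to "HUDI_17GB_EXT1" matches an existing "hudi_17Gb_ext1".
>     static boolean sameTable(String existing, String requested) {
>         return existing != null && existing.equalsIgnoreCase(requested);
>     }
>
>     public static void main(String[] args) {
>         System.out.println(sameTable("hudi_17Gb_ext1", "HUDI_17GB_EXT1")); // true
>         System.out.println(sameTable("hudi_17Gb_ext1", "other_table"));    // false
>     }
> }
> {code}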