ranjitha-shenoy opened a new issue #4668:
URL: https://github.com/apache/hudi/issues/4668


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   SaveMode.Append fails on renamed Hudi tables with the following exception (Hudi 0.6 and above).
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Create a hudi table with s3 path
   2. Rename the table using `spark.sql(s"ALTER TABLE $oldTableName RENAME TO $newTableName")`
   3. Use Spark `df.write` with `mode("append")` to save into `newTableName` (see the sketch after this list)
   4. Exception is thrown
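   
   For concreteness, a minimal spark-shell sketch of the steps above. The bucket/path, record key, precombine field, and Hive sync settings are illustrative placeholders, not the exact configuration from this report:
   
   ```scala
   // Minimal repro sketch; paths, key fields and Hive sync settings below are
   // illustrative placeholders, not the original job's configuration.
   import org.apache.spark.sql.SaveMode
   import org.apache.hudi.DataSourceWriteOptions
   import org.apache.hudi.config.HoodieWriteConfig
   import spark.implicits._
   
   val basePath     = "s3://my-bucket/hudi/seed_user"   // hypothetical S3 path
   val oldTableName = "seed_user"
   val newTableName = "seed_user_renamed"
   
   // 1. Create a Hudi table at an S3 path (Hive sync enabled; JDBC URL,
   //    database, partition settings etc. omitted for brevity).
   Seq((1, "a"), (2, "b")).toDF("id", "value").write
     .format("hudi")
     .option(HoodieWriteConfig.TABLE_NAME, oldTableName)
     .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "id")
     .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "value")
     .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
     .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, oldTableName)
     .mode(SaveMode.Overwrite)
     .save(basePath)
   
   // 2. Rename the table in the metastore.
   spark.sql(s"ALTER TABLE $oldTableName RENAME TO $newTableName")
   
   // 3. Append using the new table name -- this throws the HoodieException below,
   //    because .hoodie/hoodie.properties still records the old table name.
   Seq((3, "c")).toDF("id", "value").write
     .format("hudi")
     .option(HoodieWriteConfig.TABLE_NAME, newTableName)
     .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "id")
     .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "value")
     .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, newTableName)
     .mode(SaveMode.Append)
     .save(basePath)
   ```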
   
   **Expected behavior**
   
   SaveMode.Append should work for renamed tables when the new table name is supplied via `DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> newTableName`.
   
   **Environment Description**
   EMR-5.31.1
   
   * Hudi version :
    Hudi 0.6
   
   * Spark version :
   2.4.6
   
   * Hive version :
   2.3.7
   
   * Hadoop version :
   2.10.0
   
   * Storage (HDFS/S3/GCS..) :
   S3
   
   * Running on Docker? (yes/no) :
   No
   
   **Additional context**
   
   Related code: [HoodieSparkSqlWriter.scala#L295](https://github.com/apache/hudi/blob/e599764c2dcfbbc15d6554fa0df55b7375e4a31d/hudi-spark/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala#L295)
   
   `HoodieTableConfig.tableName` is read from the `.hoodie/hoodie.properties` file. Renaming the table with Spark SQL does not update that file, so `HoodieSparkSqlWriter` still expects the existing table name from `HoodieTableConfig` (the old name) to match the new table name passed to the write, and throws when it does not.
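   
   For illustration, a simplified standalone sketch (not the actual Hudi source) of the comparison that `handleSaveModes` appears to perform on `SaveMode.Append`; the method signature and parameter names here are assumptions:
   
   ```scala
   // Simplified sketch of the table-name check behind the exception below;
   // an illustration only, not the real HoodieSparkSqlWriter implementation.
   import org.apache.spark.sql.SaveMode
   import org.apache.hudi.exception.HoodieException
   
   object SaveModeCheckSketch {
     // existingTableName comes from .hoodie/hoodie.properties (hoodie.table.name),
     // requestedTableName from the write options (HoodieWriteConfig.TABLE_NAME).
     def handleSaveModes(mode: SaveMode,
                         tableExists: Boolean,
                         existingTableName: String,
                         requestedTableName: String,
                         tablePath: String): Unit = {
       if (mode == SaveMode.Append && tableExists && existingTableName != requestedTableName) {
         // After ALTER TABLE ... RENAME, hoodie.properties still holds the old
         // name, so the names differ and the append is rejected.
         throw new HoodieException(
           s"hoodie table with name $existingTableName already exists at $tablePath")
       }
     }
   }
   ```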
   
   **Stacktrace**
   
   ```
   org.apache.hudi.exception.HoodieException: hoodie table with name mysql_udemy_dev.seed_user already exists at s3://udemy-dev-datasets/mysql/mysql_udemy_dev/user/seed=2021-08-30T20-24-10
     at org.apache.hudi.HoodieSparkSqlWriter$.handleSaveModes(HoodieSparkSqlWriter.scala:297)
     at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:109)
     at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
     at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:169)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:197)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:194)
     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
     at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:114)
     at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:677)
     at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$executeQuery$1(SQLExecution.scala:83)
     at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:94)
     at org.apache.spark.sql.execution.QueryExecutionMetrics$.withMetrics(QueryExecutionMetrics.scala:141)
     at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$withMetrics(SQLExecution.scala:178)
     at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:93)
     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:200)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:92)
     at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:677)
     at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:286)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:272)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:230)
   ```
   
   

