nastra commented on issue #14557:
URL: https://github.com/apache/iceberg/issues/14557#issuecomment-3515511805

   > > Can you describe the compatibility issues in more detail that you're seeing?
   > 
   > We encountered the following error when trying to replace a view created with an older version of Iceberg using a newer version of Iceberg:
   > 
   > > Caused by: org.apache.spark.sql.catalyst.analysis.NoSuchViewException: [VIEW_NOT_FOUND] The view db_name.table_name cannot be found. Verify the spelling and correctness of the schema and catalog.
   > > If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
   > > To tolerate the error on drop use DROP VIEW IF EXISTS.
   > > at org.apache.iceberg.spark.SparkCatalog.replaceView(SparkCatalog.java:648)
   > > at org.apache.iceberg.spark.SparkSessionCatalog.replaceView(SparkSessionCatalog.java:495)
   > > at org.apache.spark.sql.execution.datasources.v2.CreateV2ViewExec.replaceView(CreateV2ViewExec.scala:103)
   > > at org.apache.spark.sql.execution.datasources.v2.CreateV2ViewExec.run(CreateV2ViewExec.scala:68)
   > > at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
   > > at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
   > > at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
   > > at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
   > > at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)
   > > at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
   > > at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
   > > at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   > > at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
   > > at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
   > > at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
   > > at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
   > > at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
   > > at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
   > > at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
   > > at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
   > > at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
   > > at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
   > > at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
   > > at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
   > > at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
   > > at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
   > > at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
   > > at org.apache.spark.sql.Dataset.&lt;init&gt;(Dataset.scala:218)
   > > at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98)
   > > at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   > > at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:95)
   > > at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
   > > at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   > > at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
   > > at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:671)
   
   This is most likely because you're trying to replace a Hive view, which the Iceberg Hive catalog doesn't recognize as an Iceberg view and therefore fails with a `NoSuchViewException`.
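   If that is the case, one possible workaround (a sketch only, reusing the `db_name.table_name` identifier from the trace above; the source table name is hypothetical) is to drop the old Hive view explicitly and then create it fresh as an Iceberg view, rather than relying on `CREATE OR REPLACE` to convert it:
   
   ```sql
   -- Drop the pre-existing Hive view; IF EXISTS tolerates the case where it is absent
   DROP VIEW IF EXISTS db_name.table_name;
   
   -- Re-create it through the Iceberg catalog so it is stored as an Iceberg view
   -- (the SELECT body here is a placeholder for the original view definition)
   CREATE VIEW db_name.table_name AS SELECT * FROM db_name.source_table;
   ```
   
   After this, subsequent `CREATE OR REPLACE VIEW` statements should go through `SparkCatalog.replaceView` against an Iceberg view and no longer hit `NoSuchViewException`.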


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

