lintingbin commented on issue #14557:
URL: https://github.com/apache/iceberg/issues/14557#issuecomment-3516512397

   > Can you please create 2 views and check the difference between `CREATE VIEW... USING iceberg` vs `CREATE VIEW...` and then do a `DESCRIBE EXTENDED VIEW ...` for each view and paste the results here?
   
   It seems that the syntax `CREATE VIEW ... USING iceberg` is not supported in Spark, and `DESCRIBE EXTENDED VIEW ...` fails with the error below. Judging from the unresolved plan (`'UnresolvedTableOrView [view]`), Spark parses the literal word `view` as the table identifier and `tmp.test1` as a column name, so `DESCRIBE` apparently does not accept a `VIEW` keyword at all:
   
   > Error: org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: Error operating ExecuteStatement: org.apache.spark.sql.AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `view` cannot be found. Verify the spelling and correctness of the schema and catalog.
   > If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
   > To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.; line 1 pos 18;
   > 'DescribeColumn 'tmp.test1, true, [info_name#659, info_value#660]
   > +- 'UnresolvedTableOrView [view], DESCRIBE TABLE, true
   > 
   >    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.tableNotFound(package.scala:87)
   >    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$1(CheckAnalysis.scala:180)
   >    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$1$adapted(CheckAnalysis.scala:163)
   >    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:295)
   >    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:294)
   >    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:294)
   >    at scala.collection.Iterator.foreach(Iterator.scala:943)
   >    at scala.collection.Iterator.foreach$(Iterator.scala:943)
   >    at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   >    at scala.collection.IterableLike.foreach(IterableLike.scala:74)
   >    at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
   >    at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
   >    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:294)
   >    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:163)
   >    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:160)
   >    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:188)
   >    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:156)
   >    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:146)
   >    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:188)
   >    at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:211)
   >    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
   >    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:208)
   >    at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:76)
   >    at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
   >    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:202)
   >    at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:526)
   >    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:202)
   >    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   >    at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:201)
   >    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:76)
   >    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
   >    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
   >    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:97)
   >    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   >    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:95)
   >    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
   >    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   >    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
   >    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:671)
   >    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:90)
   >    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   >    at org.apache.kyuubi.engine.spark.operation.SparkOperation.$anonfun$withLocalProperties$1(SparkOperation.scala:174)
   >    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
   >    at org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:158)
   >    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.executeStatement(ExecuteStatement.scala:85)
   >    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:113)
   >    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
   >    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
   >    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   >    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   >    at java.base/java.lang.Thread.run(Unknown Source)
   > 
   >    at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
   >    at org.apache.kyuubi.engine.spark.operation.SparkOperation$$anonfun$onError$1.$anonfun$applyOrElse$1(SparkOperation.scala:210)
   >    at org.apache.kyuubi.Utils$.withLockRequired(Utils.scala:432)
   >    at org.apache.kyuubi.operation.AbstractOperation.withLockRequired(AbstractOperation.scala:52)
   >    at org.apache.kyuubi.engine.spark.operation.SparkOperation$$anonfun$onError$1.applyOrElse(SparkOperation.scala:198)
   >    at org.apache.kyuubi.engine.spark.operation.SparkOperation$$anonfun$onError$1.applyOrElse(SparkOperation.scala:191)
   >    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
   >    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.executeStatement(ExecuteStatement.scala:96)
   >    at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:113)
   >    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
   >    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
   >    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   >    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   >    at java.base/java.lang.Thread.run(Unknown Source)
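
   For reference, here is a minimal sketch of the statements involved, assuming a plain Spark SQL session like the one in the plan above (the view name `tmp.test1` is taken from the error; the view body is a hypothetical placeholder):

   ```sql
   -- Plain Spark SQL view: this form works.
   CREATE VIEW tmp.test1 AS SELECT 1 AS id;

   -- Not accepted: Spark's CREATE VIEW grammar has no USING clause,
   -- so a view cannot be declared "USING iceberg" this way.
   CREATE VIEW tmp.test2 USING iceberg AS SELECT 1 AS id;

   -- Fails as shown above: DESCRIBE has no VIEW keyword, so Spark reads the
   -- literal word `view` as the table name ("line 1 pos 18" points at it)
   -- and `tmp.test1` as a column name.
   DESCRIBE EXTENDED VIEW tmp.test1;

   -- Supported forms: DESCRIBE works on views without the VIEW keyword.
   DESCRIBE EXTENDED tmp.test1;
   DESCRIBE TABLE EXTENDED tmp.test1;
   ```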

