lintingbin commented on issue #14557:
URL: https://github.com/apache/iceberg/issues/14557#issuecomment-3515469425

   > If you're using `SparkSessionCatalog` with Iceberg then default behavior 
is always to create Iceberg views.
   
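For reference, the `SparkSessionCatalog` setup referred to above is usually configured along these lines (property names follow the Iceberg Spark documentation; the `type=hive` value is an assumption about the underlying metastore):

```properties
# Wrap Spark's built-in session catalog so Iceberg handles Iceberg tables and views
spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type=hive
```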
   `spark.sql.catalog.hive org.apache.iceberg.hive.HiveCatalog`
   How can I create a Hive view rather than the default Iceberg view? I tried adding the configuration above and ran a `CREATE VIEW hive.db_name.table_name` statement to create a Hive view, but encountered the following error:
   
   ```
   BackendErrorTypeEnum.GENERIC_DATA_ERROR:org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: Error operating ExecuteStatement: org.apache.spark.SparkException: Plugin class for catalog 'hive' does not implement CatalogPlugin: org.apache.iceberg.hive.HiveCatalog.
       at org.apache.spark.sql.errors.QueryExecutionErrors$.catalogPluginClassNotImplementedError(QueryExecutionErrors.scala:2075)
       at org.apache.spark.sql.connector.catalog.Catalogs$.load(Catalogs.scala:62)
       at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$catalog$1(CatalogManager.scala:53)
       at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
       at org.apache.spark.sql.connector.catalog.CatalogManager.catalog(CatalogManager.scala:53)
       at org.apache.spark.sql.connector.catalog.LookupCatalog$CatalogAndIdentifier$.unapply(LookupCatalog.scala:122)
       at org.apache.spark.sql.catalyst.analysis.RewriteViewCommands$ResolvedIdent$.unapply(RewriteViewCommands.scala:105)
       at org.apache.spark.sql.catalyst.analysis.RewriteViewCommands$$anonfun$apply$1.applyOrElse(RewriteViewCommands.scala:55)
       at org.apache.spark.sql.catalyst.analysis.RewriteViewCommands$$anonfun$apply$1.applyOrElse(RewriteViewCommands.scala:51)
       at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$3(AnalysisHelper.scala:138)
       at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
       at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:138)
       at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
       at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:134)
       at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:130)
       at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scal
   ```
   
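The exception itself points at the cause: `org.apache.iceberg.hive.HiveCatalog` implements Iceberg's own `Catalog` interface, not Spark's `CatalogPlugin`, so it cannot be registered directly under `spark.sql.catalog.*`. A minimal sketch of a configuration Spark will accept, using Iceberg's Spark adapter (the catalog name `hive` and the metastore URI here are assumptions):

```properties
# SparkCatalog implements CatalogPlugin and delegates to an Iceberg HiveCatalog
spark.sql.catalog.hive=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hive.type=hive
spark.sql.catalog.hive.uri=thrift://metastore-host:9083
```

Note that views created through such a catalog are still Iceberg views, so this alone does not answer how to create a plain Hive view.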


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

