[
https://issues.apache.org/jira/browse/HUDI-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ASF GitHub Bot updated HUDI-9636:
---------------------------------
Labels: pull-request-available (was: )
> Revert changes to DESCRIBE command in Spark
> -------------------------------------------
>
> Key: HUDI-9636
> URL: https://issues.apache.org/jira/browse/HUDI-9636
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Y Ethan Guo
> Assignee: Rahil Chertara
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Revert HUDI-8161
> When testing Hudi's integration with the Apache Polaris catalog, we noticed that the
> original change caused the {{DESCRIBE TABLE}} command to error out.
> This is because the older Spark session {{DescribeTableCommand}} is invoked. (For
> newer external catalogs we should not do this, as it causes the Spark
> session catalog to be invoked, which checks for the existence of the
> namespace in the Spark v1 session catalog. The namespace naturally does not exist
> there, since we are delegating to the Polaris catalog.)
> {code:java}
> [SCHEMA_NOT_FOUND] The schema `hudi_d898a0070cc744758bd4e3f54b5d3c01` cannot
> be found. Verify the spelling and correctness of the schema and catalog.
> If you did not qualify the name with a catalog, verify the current_schema()
> output, or qualify the name with the correct catalog.
> To tolerate the error on drop use DROP SCHEMA IF EXISTS.
> org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException:
> [SCHEMA_NOT_FOUND] The schema `hudi_d898a0070cc744758bd4e3f54b5d3c01` cannot
> be found. Verify the spelling and correctness of the schema and catalog.
> If you did not qualify the name with a catalog, verify the current_schema()
> output, or qualify the name with the correct catalog.
> To tolerate the error on drop use DROP SCHEMA IF EXISTS.
> 	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireDbExists(SessionCatalog.scala:252)
> 	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableRawMetadata(SessionCatalog.scala:546)
> 	at org.apache.spark.sql.execution.command.DescribeTableCommand.run(tables.scala:634)
> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
> {code}
>
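To illustrate the failure mode, here is a minimal, hypothetical sketch. The class, method, and catalog names below are illustrative stand-ins, not actual Hudi or Spark APIs: a namespace registered only with a delegated external catalog (as with Polaris) is invisible to the v1 session catalog, so a DESCRIBE routed through the v1 path fails its namespace-existence check while the v2 path succeeds.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: two toy catalogs modeled as namespace -> (table -> schema) maps.
public class DescribeRouting {
    static final Map<String, Map<String, String>> v1SessionCatalog = new HashMap<>();
    static final Map<String, Map<String, String>> externalCatalog = new HashMap<>();

    static {
        // The namespace exists only in the external (Polaris-like) catalog.
        externalCatalog.put("hudi_db", Map.of("trips", "id INT, ts LONG"));
    }

    // Mimics the v1 path from the stack trace: requireDbExists checks the
    // session catalog for the namespace before looking up the table.
    static String describeViaV1(String namespace, String table) {
        Map<String, String> ns = v1SessionCatalog.get(namespace);
        if (ns == null) {
            throw new IllegalStateException(
                "[SCHEMA_NOT_FOUND] The schema `" + namespace + "` cannot be found.");
        }
        return ns.get(table);
    }

    // The v2 path asks the delegated external catalog directly.
    static String describeViaExternal(String namespace, String table) {
        return externalCatalog.getOrDefault(namespace, Map.of()).get(table);
    }

    public static void main(String[] args) {
        System.out.println(describeViaExternal("hudi_db", "trips")); // succeeds
        try {
            describeViaV1("hudi_db", "trips"); // reproduces the reported failure mode
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this reading, reverting HUDI-8161 keeps {{DESCRIBE TABLE}} on the resolution path that consults the delegated catalog instead of falling back to the v1 {{DescribeTableCommand}}.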
--
This message was sent by Atlassian Jira
(v8.20.10#820010)