nagashreeUp opened a new issue, #15185:
URL: https://github.com/apache/iceberg/issues/15185

   ### Apache Iceberg version
   
   1.10.1 (latest release)
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   Hi,
   
   I am trying to create a namespace in a BigLake catalog by following the Google documentation: https://docs.cloud.google.com/biglake/docs/blms-rest-catalog#configure-catalog
   
   The catalog was created before executing the code below.
   
   Code snippet:
   
   ```python
   import pyspark
   from pyspark.context import SparkContext
   from pyspark.sql import SparkSession
   
   catalog_name   = "biglake_restcatalog"
   WAREHOUSE_PATH = "gs://xx/warehouse"
   PROJECT_ID     = "xx"
   
   spark = SparkSession.builder.appName("restCatalog") \
     .config(f'spark.sql.catalog.{catalog_name}', 'org.apache.iceberg.spark.SparkCatalog') \
     .config(f'spark.sql.catalog.{catalog_name}.type', 'rest') \
     .config(f'spark.sql.catalog.{catalog_name}.uri', 'https://biglake.googleapis.com/iceberg/v1/restcatalog') \
     .config(f'spark.sql.catalog.{catalog_name}.warehouse', WAREHOUSE_PATH) \
     .config(f'spark.sql.catalog.{catalog_name}.header.x-goog-user-project', PROJECT_ID) \
     .config(f'spark.sql.catalog.{catalog_name}.rest.auth.type', 'org.apache.iceberg.gcp.auth.GoogleAuthManager') \
     .config(f'spark.sql.catalog.{catalog_name}.io-impl', 'org.apache.iceberg.gcp.gcs.GCSFileIO') \
     .config(f'spark.sql.catalog.{catalog_name}.rest-metrics-reporting-enabled', 'false') \
     .config('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \
     .config('spark.sql.defaultCatalog', catalog_name) \
     .getOrCreate()
   
   print("Spark session created.")
   spark.sql("SHOW NAMESPACES IN biglake_restcatalog")
   
   # spark.sql("CREATE NAMESPACE IF NOT EXISTS biglake_restcatalog_namespace;")
   # spark.sql("USE biglake_restcatalog_namespace;")
   # spark.sql("CREATE TABLE biglake_restcatalog_namespace.demo (id int, data string) USING ICEBERG;")
   # spark.sql("DESCRIBE biglake_restcatalog_namespace.demo").show()
   ```
   
   The script is submitted with:
   
   ```shell
   spark-submit --conf spark.sql.catalogImplementation=in-memory biglake2.py
   ```
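   The Iceberg jars are already on the classpath (the log below shows `GoogleAuthManager` loading). For completeness, a sketch of how they could instead be resolved at launch time via `spark.jars.packages`; the Maven coordinates below are my assumption for Spark 3.5 / Scala 2.12 and Iceberg 1.10.1:
   
   ```python
   from pyspark.sql import SparkSession
   
   # Hypothetical alternative to managing jars by hand: let Spark resolve the
   # Iceberg Spark runtime and the GCP bundle (GCSFileIO + GoogleAuthManager)
   # from Maven at session startup.
   spark = SparkSession.builder.appName("restCatalog") \
       .config('spark.jars.packages',
               'org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.10.1,'
               'org.apache.iceberg:iceberg-gcp-bundle:1.10.1') \
       .getOrCreate()
   ```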
   
   Execution fails with the error below:
   ```
   AuthManagers: Loading AuthManager implementation: org.apache.iceberg.gcp.auth.GoogleAuthManager
   26/01/30 04:09:48 INFO GoogleAuthManager: Using Application Default Credentials with scopes: https://www.googleapis.com/auth/cloud-platform
   Traceback (most recent call last):
     File "/root/biglake2.py", line 25, in <module>
       spark.sql("SHOW NAMESPACES IN biglake_restcatalog")
     File "/root/spark_hadoop/spark-3.5.1-bin-hadoop3/python/lib/pyspark.zip/pyspark/sql/session.py", line 1631, in sql
     File "/root/spark_hadoop/spark-3.5.1-bin-hadoop3/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1323, in __call__
     File "/root/spark_hadoop/spark-3.5.1-bin-hadoop3/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py", line 179, in deco
     File "/root/spark_hadoop/spark-3.5.1-bin-hadoop3/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 328, in get_return_value
   py4j.protocol.Py4JJavaError: An error occurred while calling o44.sql.
   : org.apache.iceberg.exceptions.RESTException: Unable to process: RPC error
           at org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:250)
           at org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:214)
           at org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:240)
           at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:336)
           at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:297)
           at org.apache.iceberg.rest.BaseHTTPClient.get(BaseHTTPClient.java:77)
           at org.apache.iceberg.rest.RESTSessionCatalog.fetchConfig(RESTSessionCatalog.java:1023)
           at org.apache.iceberg.rest.RESTSessionCatalog.initialize(RESTSessionCatalog.java:205)
           at org.apache.iceberg.rest.RESTCatalog.initialize(RESTCatalog.java:82)
           at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:280)
           at org.apache.iceberg.CatalogUtil.buildIcebergCatalog(CatalogUtil.java:337)
           at org.apache.iceberg.spark.SparkCatalog.buildIcebergCatalog(SparkCatalog.java:154)
           at org.apache.iceberg.spark.SparkCatalog.initialize(SparkCatalog.java:754)
           at org.apache.spark.sql.connector.catalog.Catalogs$.load(Catalogs.scala:65)
           at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$catalog$1(CatalogManager.scala:53)
           at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
           at org.apache.spark.sql.connector.catalog.CatalogManager.catalog(CatalogManager.scala:53)
           at org.apache.spark.sql.connector.catalog.LookupCatalog$CatalogAndNamespace$.unapply(LookupCatalog.scala:86)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs$$anonfun$apply$1.applyOrElse(ResolveCatalogs.scala:51)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs$$anonfun$apply$1.applyOrElse(ResolveCatalogs.scala:30)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:170)
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:170)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$4(AnalysisHelper.scala:175)
           at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1215)
           at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1214)
           at org.apache.spark.sql.catalyst.plans.logical.ShowNamespaces.mapChildren(v2Commands.scala:615)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:175)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scala:168)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning$(AnalysisHelper.scala:164)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDownWithPruning(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning(AnalysisHelper.scala:99)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsWithPruning$(AnalysisHelper.scala:96)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsWithPruning(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:76)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:75)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:32)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.apply(ResolveCatalogs.scala:30)
           at org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.apply(ResolveCatalogs.scala:27)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
           at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
           at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
           at scala.collection.immutable.List.foldLeft(List.scala:91)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
           at scala.collection.immutable.List.foreach(List.scala:431)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:226)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:222)
           at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:173)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:222)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:188)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182)
           at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:209)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:208)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:77)
           at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:219)
           at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:546)
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:219)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
           at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:218)
           at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:77)
           at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
           at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
           at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
           at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:638)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:629)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:659)
           at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
           at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.base/java.lang.reflect.Method.invoke(Method.java:569)
           at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
           at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
           at py4j.Gateway.invoke(Gateway.java:282)
           at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
           at py4j.commands.CallCommand.execute(CallCommand.java:79)
           at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
           at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
           at java.base/java.lang.Thread.run(Thread.java:840)
   ```
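   From the trace, the failure happens inside `RESTSessionCatalog.fetchConfig`, i.e. on the very first `GET {uri}/v1/config` request during catalog initialization, before any namespace operation runs. A minimal sketch to reproduce that same request outside Spark and see the raw server response hidden behind "RPC error" (this assumes the standard Iceberg REST `/v1/config` endpoint and the `google-auth` and `requests` packages; the `xx` values are the redacted placeholders from above):
   
   ```python
   # Standalone reproduction of the failing fetchConfig call, using
   # Application Default Credentials directly instead of GoogleAuthManager.
   import google.auth
   import requests
   from google.auth.transport.requests import Request
   
   credentials, _ = google.auth.default(
       scopes=["https://www.googleapis.com/auth/cloud-platform"]
   )
   credentials.refresh(Request())  # obtain a bearer token from ADC
   
   # RESTSessionCatalog.fetchConfig issues GET {uri}/v1/config?warehouse=...
   resp = requests.get(
       "https://biglake.googleapis.com/iceberg/v1/restcatalog/v1/config",
       headers={
           "Authorization": f"Bearer {credentials.token}",
           "x-goog-user-project": "xx",  # redacted project id placeholder
       },
       params={"warehouse": "gs://xx/warehouse"},  # redacted warehouse placeholder
   )
   print(resp.status_code)
   print(resp.text)  # raw server error body
   ```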
   
   Any pointers to resolve this error would be very helpful.
   
   ### Willingness to contribute
   
   - [ ] I can contribute a fix for this bug independently
   - [x] I would be willing to contribute a fix for this bug with guidance from the Iceberg community
   - [ ] I cannot contribute a fix for this bug at this time

