monkeyboy123 opened a new issue, #2447:
URL: https://github.com/apache/incubator-paimon/issues/2447

   ### Search before asking
   
   - [X] I searched in the [issues](https://github.com/apache/incubator-paimon/issues) and found nothing similar.
   
   
   ### Paimon version
   
   master
   
   ### Compute Engine
   
   spark
   
   ### Minimal reproduce step
   
   In Spark, when we set the metastore to Hive, we launch with a command like the one below:
   ```
   spark-sql --jars /root/ljh/paimon-spark-3.3-0.6-20231122.093342-69.jar,/root/ljh/paimon-oss-0.6-20231122.093342-76.jar \
       --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
       --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions \
       --verbose \
       --conf spark.sql.catalog.paimon.metastore=hive \
       --conf spark.sql.catalog.paimon.warehouse=xxx \
       --conf spark.sql.catalog.paimon.uri=xxx \
       --conf spark.sql.catalog.paimon.fs.oss.endpoint=xxx
   ```
   We also set fs.defaultFS to viewfs://xxx in core-site.xml.
   We then hit an error like this:
   
   ```
        at org.apache.hadoop.security.UserGroupInformation.getPrimaryGroupName(UserGroupInformation.java:1455)
        at org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.getFileStatus(ViewFileSystem.java:1029)
        at org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:393)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1683)
        at org.apache.paimon.fs.hadoop.HadoopFileIO.exists(HadoopFileIO.java:104)
        at org.apache.paimon.catalog.CatalogFactory.createCatalog(CatalogFactory.java:76)
        at org.apache.paimon.catalog.CatalogFactory.createCatalog(CatalogFactory.java:58)
        at org.apache.paimon.spark.SparkCatalog.initialize(SparkCatalog.java:80)
        at org.apache.paimon.spark.SparkGenericCatalog.initialize(SparkGenericCatalog.java:239)
        at org.apache.spark.sql.connector.catalog.Catalogs$.load(Catalogs.scala:65)
        at org.apache.spark.sql.connector.catalog.CatalogManager.loadV2SessionCatalog(CatalogManager.scala:67)
        at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$v2SessionCatalog$2(CatalogManager.scala:86)
        at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
        at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$v2SessionCatalog$1(CatalogManager.scala:86)
        at scala.Option.map(Option.scala:230)
   ```
   
   ### What doesn't meet your expectations?
   
   We need Paimon to obtain the HDFS FileSystem or the OSS FileSystem rather than ViewFileSystem.
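   For context, Hadoop selects the `FileSystem` implementation from the scheme of the path URI, and only a scheme-less path falls back to `fs.defaultFS` (here `viewfs://xxx`). A minimal sketch of that fallback, assuming this is the mechanism at play; the `SchemeResolution.effectiveScheme` helper is hypothetical, not a Paimon or Hadoop API:
   
   ```java
   import java.net.URI;
   
   // Hypothetical helper illustrating Hadoop's scheme-based FileSystem selection:
   // the path's own scheme wins, and only a scheme-less path falls back to fs.defaultFS.
   public class SchemeResolution {
       /** Returns the URI scheme that would pick the FileSystem for this path. */
       static String effectiveScheme(String path, String defaultFs) {
           URI uri = URI.create(path);
           // A path like "/warehouse/paimon" carries no scheme, so fs.defaultFS wins.
           return uri.getScheme() != null ? uri.getScheme() : URI.create(defaultFs).getScheme();
       }
   
       public static void main(String[] args) {
           // With fs.defaultFS = viewfs://xxx, a scheme-less warehouse path
           // resolves to ViewFileSystem instead of HDFS or OSS:
           System.out.println(effectiveScheme("/warehouse/paimon", "viewfs://xxx"));       // viewfs
           // An explicit scheme bypasses the viewfs fallback:
           System.out.println(effectiveScheme("oss://bucket/warehouse", "viewfs://xxx"));  // oss
           System.out.println(effectiveScheme("hdfs://nn:8020/warehouse", "viewfs://xxx")); // hdfs
       }
   }
   ```
   
   If this is indeed the cause, configuring the warehouse with an explicit `oss://` or `hdfs://` URI would sidestep the viewfs fallback.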
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [X] I'm willing to submit a PR!

