yuqi1129 opened a new issue, #8391: URL: https://github.com/apache/gravitino/issues/8391
### Version

main branch

### Describe what's wrong

1. Remove the Hadoop-related jars from the PySpark classpath.
2. Run the following code with JDK 17 and PySpark 3.5.0, as described in the documentation: https://gravitino.apache.org/docs/0.9.1/hadoop-catalog-with-adls#using-spark-to-access-the-fileset

```python
import logging
logging.basicConfig(level=logging.INFO)

from gravitino import NameIdentifier, GravitinoClient, Catalog, Fileset, GravitinoAdminClient

gravitino_url = "http://localhost:8090"
metalake_name = "test"

catalog_name = "azure_catalog"
schema_name = "schema"
fileset_name = "fileset01"

fileset_ident = NameIdentifier.of(schema_name, fileset_name)
gravitino_admin_client = GravitinoAdminClient(uri=gravitino_url)
gravitino_client = GravitinoClient(uri=gravitino_url, metalake_name=metalake_name)

from pyspark.sql import SparkSession
import os

os.environ["PYSPARK_SUBMIT_ARGS"] = "--jars /Users/yuqi/project/graviton/bundles/azure-bundle/build/libs/gravitino-azure-bundle-1.0.0-SNAPSHOT.jar,/Users/yuqi/project/graviton/clients/filesystem-hadoop3-runtime/build/libs/gravitino-filesystem-hadoop3-runtime-1.0.0-SNAPSHOT.jar --conf \"spark.driver.extraJavaOptions=--add-opens=java.base/sun.nio.ch=ALL-UNNAMED -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005\" --conf \"spark.executor.extraJavaOptions=--add-opens=java.base/sun.nio.ch=ALL-UNNAMED -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005\" --master local[1] pyspark-shell"
os.environ["HADOOP_USER_NAME"] = "anonymous"

spark = SparkSession.builder \
    .appName("s3_fielset_test") \
    .config("spark.hadoop.fs.AbstractFileSystem.gvfs.impl", "org.apache.gravitino.filesystem.hadoop.Gvfs") \
    .config("spark.hadoop.fs.gvfs.impl", "org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem") \
    .config("spark.hadoop.fs.gravitino.server.uri", "http://localhost:8090") \
    .config("spark.hadoop.fs.gravitino.client.metalake", "test") \
    .config("spark.hadoop.azure-storage-account-name", "xxx") \
    .config("spark.hadoop.azure-storage-account-key", "xx") \
    .config("spark.hadoop.fs.azure.skipUserGroupMetadataDuringInitialization", "true") \
    .config("spark.driver.memory", "2g") \
    .config("spark.driver.port", "2048") \
    .getOrCreate()
```

### Error message and/or stacktrace

```
Listening for transport dt_socket at address: 5005
ERROR: Can't initialize main class org.apache.spark.deploy.SparkSubmit
Reason: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/yuqi/venv/lib/python3.9/site-packages/pyspark/sql/session.py", line 497, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/Users/yuqi/venv/lib/python3.9/site-packages/pyspark/context.py", line 515, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/Users/yuqi/venv/lib/python3.9/site-packages/pyspark/context.py", line 201, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
  File "/Users/yuqi/venv/lib/python3.9/site-packages/pyspark/context.py", line 436, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway(conf)
  File "/Users/yuqi/venv/lib/python3.9/site-packages/pyspark/java_gateway.py", line 107, in launch_gateway
    raise PySparkRuntimeError(
pyspark.errors.exceptions.base.PySparkRuntimeError: [JAVA_GATEWAY_EXITED] Java gateway process exited before sending its port number.
```

### How to reproduce

Please see above.

### Additional context

_No response_
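As a possible diagnostic step (not part of the original report): below is a minimal sketch of putting the Hadoop client jars back on the launcher JVM's classpath with `--driver-class-path`, to check whether the failure is purely a classpath issue. The jar paths and Hadoop version are placeholder assumptions. Listing the Hadoop jars in `--jars` alone would not help here, because the reported `NoClassDefFoundError` is thrown while the launcher initializes the `org.apache.spark.deploy.SparkSubmit` class itself, before `--jars` are processed.

```python
import os

# Placeholder paths -- adjust to wherever the Hadoop 3.3.x client jars live locally.
# hadoop-client-api contains org.apache.hadoop.fs.FSDataInputStream, the class the
# SparkSubmit launcher fails to load once the Hadoop jars are removed from PySpark.
hadoop_api_jar = "/path/to/hadoop-client-api-3.3.4.jar"
hadoop_runtime_jar = "/path/to/hadoop-client-runtime-3.3.4.jar"
gravitino_jars = ",".join([
    "/path/to/gravitino-azure-bundle-1.0.0-SNAPSHOT.jar",
    "/path/to/gravitino-filesystem-hadoop3-runtime-1.0.0-SNAPSHOT.jar",
])

# --driver-class-path puts the Hadoop jars on the launcher JVM's classpath before
# SparkSubmit is initialized; --jars only makes jars visible after it has started.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    f"--jars {gravitino_jars} "
    f"--driver-class-path {hadoop_api_jar}:{hadoop_runtime_jar} "
    "--conf spark.driver.extraJavaOptions=--add-opens=java.base/sun.nio.ch=ALL-UNNAMED "
    "--master local[1] pyspark-shell"
)
```

If the session then starts, the remaining question is whether the Gravitino bundles are expected to work without the stock Hadoop jars on the classpath, or whether the documentation should state that they must stay.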
