gentrit1 commented on issue #11560:
URL: https://github.com/apache/hudi/issues/11560#issuecomment-2208675818

   > @gentrit1 This is known issue with RLI.
   > 
   > Check below comment - [#10609 (comment)](https://github.com/apache/hudi/issues/10609#issuecomment-2167548029)
   
   Hi @ad1happy2go , 
   
   I saw that, but it's not working in our case. These are the Spark configs we are currently using:
   
   spark = (
       SparkSession.builder.appName("Hudi Basics")
       .config("spark.jars", "gs://path-to-jars/jars/hudi-spark3.5-bundle_2.12-0.15.0.jar")
       .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
       .config("spark.kryo.registrator", "org.apache.spark.HoodieSparkKryoRegistrar")
       # .config("spark.jars.packages", jars_packages)
       .config(
           "spark.sql.catalog.spark_catalog",
           "org.apache.spark.sql.hudi.catalog.HoodieCatalog",
       )
       .config("spark.driver.extraClassPath", "gs://path-to-jars/jars/hudi-spark3.5-bundle_2.12-0.15.0.jar")
       .config("spark.executor.extraClassPath", "gs://path-to-jars/jars/hudi-spark3.5-bundle_2.12-0.15.0.jar")
       .config(
           "spark.hadoop.fs.gs.impl",
           "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem",
       )
       .config(
           "fs.AbstractFileSystem.gs.impl",
           "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
       )
       .config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
       .config("spark.sql.hive.convertMetastoreParquet", "false")
       .config("spark.sql.legacy.timeParserPolicy", "LEGACY")
       .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
       .getOrCreate()
   )
   spark.conf.set("spark.sql.session.timeZone", "UTC")
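
   For context, the RLI (record-level index) behavior is controlled by plain `hoodie.*` options passed to the DataFrame writer, separate from the session configs above. A minimal sketch of the relevant switches — the table name, record key, and precombine field below are placeholders, not our real job settings:

   ```python
   # Hypothetical Hudi writer options illustrating the RLI switches.
   # The hoodie.* keys are standard Hudi configs; the values are placeholders.
   hudi_options = {
       "hoodie.table.name": "example_table",             # placeholder table name
       "hoodie.datasource.write.recordkey.field": "id",  # placeholder record key
       "hoodie.datasource.write.precombine.field": "ts", # placeholder precombine field
       "hoodie.metadata.enable": "true",                 # metadata table must be on for RLI
       "hoodie.metadata.record.index.enable": "true",    # builds the record-level index
       "hoodie.index.type": "RECORD_INDEX",              # use RLI for upsert key lookups
   }

   # Passed to the writer roughly as:
   # df.write.format("hudi").options(**hudi_options).mode("append").save("gs://bucket/path")
   ```

   Setting `hoodie.metadata.record.index.enable` to `false` (and dropping `RECORD_INDEX` as the index type) is the usual way to rule RLI out as the cause.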

