FelixYBW commented on issue #11534:
URL: 
https://github.com/apache/incubator-gluten/issues/11534#issuecomment-3863416886

   Spark rule: https://docs.delta.io/delta-batch/#spark-configurations
   # Spark configurations
   When you start a Spark application on a cluster, you can set Spark 
configurations of the form `spark.hadoop.*` to pass your custom Hadoop 
configurations. For example, setting a value for `spark.hadoop.a.b.c` will pass 
the value as the Hadoop configuration `a.b.c`, and Delta Lake will use it to 
access Hadoop FileSystem APIs.
   
   See [Spark 
documentation](http://spark.apache.org/docs/latest/configuration.html#custom-hadoophive-configuration)
 for more details.
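
   The prefix-stripping rule above can be sketched in a few lines. This is a minimal illustration of the mapping, not Spark's actual implementation; the function name and the sample keys are made up for the example:

```python
SPARK_HADOOP_PREFIX = "spark.hadoop."

def hadoop_conf_from_spark_conf(spark_conf: dict) -> dict:
    """Illustrative sketch: entries prefixed with `spark.hadoop.` are
    forwarded to the Hadoop configuration with the prefix stripped;
    all other Spark configurations are ignored here."""
    return {
        key[len(SPARK_HADOOP_PREFIX):]: value
        for key, value in spark_conf.items()
        if key.startswith(SPARK_HADOOP_PREFIX)
    }

# spark.hadoop.a.b.c=some-value becomes Hadoop config a.b.c=some-value
conf = {"spark.hadoop.a.b.c": "some-value", "spark.master": "local[*]"}
print(hadoop_conf_from_spark_conf(conf))  # {'a.b.c': 'some-value'}
```

   On the command line the same configuration would typically be passed as `spark-submit --conf spark.hadoop.a.b.c=some-value ...`.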
   
   # SQL session configurations
   Spark SQL will pass all of the current [SQL session 
configurations](http://spark.apache.org/docs/latest/configuration.html#runtime-sql-configuration)
 to Delta Lake, and Delta Lake will use them to access Hadoop FileSystem APIs. 
For example, `SET a.b.c=x.y.z` will tell Delta Lake to pass the value `x.y.z` 
as the Hadoop configuration `a.b.c`, and Delta Lake will use it to access 
Hadoop FileSystem APIs.
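
   The `SET` flow can be sketched the same way. This is a hypothetical illustration of how a `SET key=value` statement lands in the session configuration that Delta Lake later reads; the parser below is deliberately simplistic and is not Spark SQL's real one:

```python
def apply_sql_set(session_conf: dict, statement: str) -> dict:
    """Illustrative sketch: record a SQL `SET key=value` statement in the
    session configuration, as Spark SQL does for runtime confs."""
    body = statement.strip()
    if not body.upper().startswith("SET "):
        raise ValueError("expected a SET statement")
    key, _, value = body[4:].partition("=")
    session_conf[key.strip()] = value.strip()
    return session_conf

# After `SET a.b.c=x.y.z`, Delta Lake would see a.b.c=x.y.z when it
# builds its Hadoop configuration from the session confs.
session = apply_sql_set({}, "SET a.b.c=x.y.z")
print(session)  # {'a.b.c': 'x.y.z'}
```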


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

