yaroslav-serhiichuk opened a new pull request #31768: URL: https://github.com/apache/spark/pull/31768
### What changes were proposed in this pull request?
Add a `hadoopConfiguration()` method to the PySpark `SparkContext`, matching the method already present in the Scala API.

### Why are the changes needed?
PySpark should offer a public API for `hadoopConfiguration` to match Scala's. Setting Hadoop configs within a job is handy for any configuration that is not appropriate as a cluster default, or that is not known until run time; the various `fs.s3a.*` configs are a good example. Currently, people work around this by going through the private JVM handle, e.g. `SparkContext._jsc.hadoopConfiguration()`.

### Does this PR introduce _any_ user-facing change?
Yes.

Before:

    sc = SparkContext()
    hadoop_config = sc._jsc.hadoopConfiguration()

After:

    sc = SparkContext()
    hadoop_config = sc.hadoopConfiguration()

### How was this patch tested?
Existing tests.
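For illustration, a minimal usage sketch of the proposed API, assuming `hadoopConfiguration()` returns the same JVM `org.apache.hadoop.conf.Configuration` object that `_jsc.hadoopConfiguration()` exposes today; the S3A keys shown and the placeholder values are only examples, not part of this PR:

```python
# Sketch, assuming SparkContext.hadoopConfiguration() returns the underlying
# org.apache.hadoop.conf.Configuration JVM object (as _jsc.hadoopConfiguration()
# does today), so Configuration.set()/get() work through py4j.
from pyspark import SparkContext

sc = SparkContext(appName="hadoop-conf-example")

hadoop_config = sc.hadoopConfiguration()

# Run-time settings that are not appropriate as cluster defaults,
# e.g. per-job S3A credentials (values below are placeholders).
hadoop_config.set("fs.s3a.access.key", "<access-key>")
hadoop_config.set("fs.s3a.secret.key", "<secret-key>")

# Reading a value back works the same way.
print(hadoop_config.get("fs.s3a.access.key"))
```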
