nchammas commented on a change in pull request #31768:
URL: https://github.com/apache/spark/pull/31768#discussion_r591710726



##########
File path: python/pyspark/context.py
##########
@@ -1255,6 +1255,16 @@ def getConf(self):
         conf.setAll(self._conf.getAll())
         return conf
 
+    def hadoopConfiguration(self):
+        """
+        Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) that we reuse.
+
+        As it will be reused in all Hadoop RDDs, it's better not to modify it unless you
+        plan to set some global configurations for all Hadoop RDDs.
+
+        Returns a :class:`Configuration` object.
+        """
+        return self._jsc.hadoopConfiguration()
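
A minimal usage sketch of the proposed method, assuming a running SparkContext bound to `sc` (the S3A key name and value are illustrative placeholders):

```python
# Sketch only: assumes the hadoopConfiguration() method proposed in this diff.
hadoop_conf = sc.hadoopConfiguration()

# set()/get() are standard org.apache.hadoop.conf.Configuration methods,
# invoked on the underlying JVM object via Py4J.
hadoop_conf.set("fs.s3a.access.key", "MY_ACCESS_KEY")
assert hadoop_conf.get("fs.s3a.access.key") == "MY_ACCESS_KEY"
```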

Review comment:
       I'm not following why the ticket should be abandoned.
   
   If there is an alternative solution for PySpark users who want to set S3A configs -- something that a) does not require them to go through `._jsc`, and b) is tested and works -- then please post a clear example of that on the ticket. That would serve as a useful reference to others.
   
   Otherwise, I think the ticket is valid and should remain open.
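
   For reference, a rough sketch of what the two approaches under discussion look like. This is illustrative, not the tested alternative the comment asks for; the S3A key names and values are placeholders.

   ```python
   # Alternative (a): pass Hadoop settings as `spark.hadoop.*` entries at
   # session creation; Spark copies such prefixed entries into the Hadoop
   # configuration it builds.
   from pyspark.sql import SparkSession

   spark = (
       SparkSession.builder
       .config("spark.hadoop.fs.s3a.access.key", "MY_ACCESS_KEY")
       .config("spark.hadoop.fs.s3a.secret.key", "MY_SECRET_KEY")
       .getOrCreate()
   )

   # The status-quo workaround referenced above: reach through the private
   # `_jsc` handle to mutate the live Hadoop configuration.
   spark.sparkContext._jsc.hadoopConfiguration().set(
       "fs.s3a.session.token", "MY_SESSION_TOKEN"
   )
   ```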




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
