[ https://issues.apache.org/jira/browse/SPARK-33436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17299283#comment-17299283 ]

Nicholas Chammas commented on SPARK-33436:
------------------------------------------

[~hyukjin.kwon] - Can you please clarify why this ticket was resolved as "Won't
Fix"? Just so it's clear for others who come across it.

Is `._jsc` the intended way for PySpark users to set S3A configs?
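
For reference, here is roughly what that workaround looks like today (a minimal
sketch based on the linked StackOverflow answer; the placeholder credential
values are just examples):

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Reach through the Py4J gateway to the underlying JavaSparkContext and
# mutate its Hadoop Configuration at run time. Note that _jsc is a
# private attribute, not a documented public API.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", "<access-key>")
hadoop_conf.set("fs.s3a.secret.key", "<secret-key>")
{code}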

> PySpark equivalent of SparkContext.hadoopConfiguration
> ------------------------------------------------------
>
>                 Key: SPARK-33436
>                 URL: https://issues.apache.org/jira/browse/SPARK-33436
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.1.0
>            Reporter: Nicholas Chammas
>            Priority: Minor
>
> PySpark should expose {{hadoopConfiguration}} to [match 
> Scala's|http://spark.apache.org/docs/latest/api/scala/org/apache/spark/SparkContext.html#hadoopConfiguration:org.apache.hadoop.conf.Configuration].
> Setting Hadoop configs within a job is handy for any configurations that are 
> not appropriate as cluster defaults, or that will not be known until run 
> time. The various {{fs.s3a.*}} configs are a good example of this.
> Currently, people work around the missing API by setting configs [via 
> SparkContext._jsc.hadoopConfiguration()|https://stackoverflow.com/a/32661336/877069].


