[ https://issues.apache.org/jira/browse/SPARK-17889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17889.
-------------------------------
    Resolution: Invalid

Have a look at 
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark

Questions should go to the mailing list.

> How to update sc.hadoopConfiguration.set properties in standalone spark 
> cluster for SparkR
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17889
>                 URL: https://issues.apache.org/jira/browse/SPARK-17889
>             Project: Spark
>          Issue Type: Request
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Sankar Mittapally
>
> Hello,
>  I have downloaded the jars below
> aws-java-sdk-1.7.4.jar
> hadoop-aws-2.7.1.jar
> for Spark 2.0.0 and Hadoop 2.7.1.
> I want to access an S3 bucket from SparkR, but I am not sure how to set
> these properties:
> sc.hadoopConfiguration.set("fs.s3n.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")
> sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", accessKey)
> sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", secretKey)
> Please let me know in which file I need to set these, or whether I need to
> pass them at runtime.
> Thanks in advance,
> sankar
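
For reference, a minimal sketch of how these settings might be supplied from
SparkR at runtime, assuming Spark 2.0's sparkR.session() API and the
spark.hadoop.* prefix that Spark copies into the underlying Hadoop
Configuration. The key values, bucket, and path are placeholders, and the
aws-java-sdk / hadoop-aws jars still need to be on the driver and executor
classpath (for example via --jars):

    library(SparkR)

    # Start (or connect to) a session, passing the Hadoop settings through
    # the spark.hadoop.* prefix; Spark copies these into the Hadoop
    # Configuration used by the executors.
    sparkR.session(
      appName = "s3-example",
      sparkConfig = list(
        "spark.hadoop.fs.s3n.impl" = "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
        "spark.hadoop.fs.s3n.awsAccessKeyId" = "YOUR_ACCESS_KEY",
        "spark.hadoop.fs.s3n.awsSecretAccessKey" = "YOUR_SECRET_KEY"
      )
    )

    # With the configuration in place, s3n:// paths read like any other source.
    df <- read.df("s3n://your-bucket/some/path", source = "csv")
    head(df)

Alternatively, the same spark.hadoop.* entries could be placed in
conf/spark-defaults.conf (or passed as --conf flags to spark-submit) so they
apply to every session rather than a single script.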



