Hi list!

My problem is quite simple.
I need to access several S3 buckets, each using different credentials:
```
val c1 = sc.textFile("s3n://[ACCESS_KEY_ID1:SECRET_ACCESS_KEY1]@bucket/file1.csv").count
val c2 = sc.textFile("s3n://[ACCESS_KEY_ID2:SECRET_ACCESS_KEY2]@bucket/file1.csv").count
val c3 = sc.textFile("s3n://[ACCESS_KEY_ID3:SECRET_ACCESS_KEY3]@bucket/file1.csv").count
...
```

One or more of these AWS credentials may contain a "/" in the secret access
key. This is a known problem, and from my research the only ways to deal with
the "/" are:
1/ use environment variables to set the AWS credentials, then access the S3
buckets without specifying the credentials
2/ set the Hadoop configuration to contain the credentials.
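For reference, option 2/ looks roughly like this (a sketch only; `fs.s3n.awsAccessKeyId` and `fs.s3n.awsSecretAccessKey` are the standard Hadoop property names for the s3n filesystem, and the key values are placeholders):

```scala
// Sketch of option 2/: put the credentials in the Hadoop configuration
// so they never appear in the URL (key values are placeholders).
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3n.awsAccessKeyId", "ACCESS_KEY_ID1")
hadoopConf.set("fs.s3n.awsSecretAccessKey", "SECRET_ACCESS_KEY1")

// The URL no longer embeds the credentials, so a "/" in the secret key
// is not a problem -- but this configuration is global to the SparkContext,
// which is exactly why it does not help with several sets of credentials.
val c1 = sc.textFile("s3n://bucket/file1.csv").count
```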

However, none of these solutions allow me to access different buckets, with
different credentials.

Can anyone help me on this?

Thanks

Pierre



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Access-several-s3-buckets-with-credentials-containing-tp23171.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
