Hi sranga,

Were you ever able to get authentication working with the temporary IAM
credentials (id, secret, & token)? I am in the same situation, and it would
be great if we could document a solution so others can benefit from it.
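
For reference, here is roughly what I have been trying: fetch the temporary
credentials from the EC2 instance metadata service and hand them to whatever
client needs them. This is only a sketch in Python (the endpoint path is the
standard instance-metadata one, but I haven't confirmed this end-to-end on
EMR, and `fetch_credentials` obviously only works on an instance launched
with an IAM role):

```python
import json
from urllib.request import urlopen  # urllib2 on the Python 2 installs of that era

METADATA_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def parse_credentials(doc):
    """Extract the three pieces of a temporary credential set from the
    JSON document returned by the instance metadata service."""
    creds = json.loads(doc)
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["Token"],
    }

def fetch_credentials():
    """Look up the instance's IAM role name, then fetch that role's
    temporary credentials from the metadata service."""
    role = urlopen(METADATA_URL).read().decode().strip()
    doc = urlopen(METADATA_URL + role).read().decode()
    return parse_credentials(doc)

# Abbreviated example of the document shape the metadata service returns
# (values here are made up for illustration):
sample = """{
  "Code": "Success",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "secretexample",
  "Token": "tokenexample",
  "Expiration": "2014-10-15T00:00:00Z"
}"""

creds = parse_credentials(sample)
```

The three values can then go to an SDK client; the catch, as far as I can
tell, is that the older s3/s3n Hadoop connectors only accept an access key
and secret, with no place to supply the session token.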

Thanks!


sranga wrote
> Thanks Rishi. That is exactly what I am trying to do now :)
> 
> On Tue, Oct 14, 2014 at 2:41 PM, Rishi Pidva <rpidva@...> wrote:
> 
>>
>> As per EMR documentation:
>> http://docs.amazonaws.cn/en_us/ElasticMapReduce/latest/DeveloperGuide/emr-iam-roles.html
>> Access AWS Resources Using IAM Roles
>>
>> If you've launched your cluster with an IAM role, applications running on
>> the EC2 instances of that cluster can use the IAM role to obtain temporary
>> account credentials to use when calling services in AWS.
>>
>> The version of Hadoop available on AMI 2.3.0 and later has already been
>> updated to make use of IAM roles. If your application runs strictly on top
>> of the Hadoop architecture, and does not directly call any service in AWS,
>> it should work with IAM roles with no modification.
>>
>> If your application calls services in AWS directly, you'll need to update
>> it to take advantage of IAM roles. This means that instead of obtaining
>> account credentials from /home/hadoop/conf/core-site.xml on the EC2
>> instances in the cluster, your application will now either use an SDK to
>> access the resources using IAM roles, or call the EC2 instance metadata to
>> obtain the temporary credentials.
>> --
>> --
>>
>> Maybe you can use the AWS SDK in your application to provide AWS credentials?
>>
>> https://github.com/seratch/AWScala
>>
>>
>> On Oct 14, 2014, at 11:10 AM, Ranga <sranga@...> wrote:
>>
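
The SDK route covers direct AWS calls, but for reading S3 paths through
Spark the credentials still have to reach the Hadoop filesystem connector.
Newer Hadoop releases ship an s3a connector whose
TemporaryAWSCredentialsProvider accepts a session token; something like the
following in core-site.xml should work there (property names are the s3a
ones, and the values are placeholders — this won't help on older AMIs whose
s3/s3n connectors have no token slot):

```xml
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_TEMPORARY_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_TEMPORARY_SECRET_KEY</value>
</property>
<property>
  <name>fs.s3a.session.token</name>
  <value>YOUR_SESSION_TOKEN</value>
</property>
```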

--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/S3-Bucket-Access-tp16303p21273.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
