GitHub user danosipov opened a pull request:
https://github.com/apache/spark/pull/1120
Add S3 configuration parameters to the EC2 deploy scripts
When deploying to AWS, additional configuration is required to read
files from S3. EMR creates this configuration automatically, and there is
no reason the Spark EC2 script shouldn't do the same.
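
For context, the configuration in question is the pair of Hadoop
properties that the s3/s3n filesystems read AWS credentials from. Below
is a minimal sketch of how a deploy script might template them into
core-site.xml, assuming the keys come from the standard
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables; the
property names are the Hadoop 1.x ones, and the helper itself is
illustrative, not the code in this PR:

import os

# Hadoop properties the S3/S3N filesystems read credentials from.
CORE_SITE_TEMPLATE = """<configuration>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>{access_key}</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>{secret_key}</value>
  </property>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>{access_key}</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>{secret_key}</value>
  </property>
</configuration>
"""

def render_core_site():
    # Fall back to empty strings so a cluster without S3 access
    # still deploys cleanly.
    access_key = os.environ.get("AWS_ACCESS_KEY_ID", "")
    secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY", "")
    return CORE_SITE_TEMPLATE.format(access_key=access_key,
                                     secret_key=secret_key)

if __name__ == "__main__":
    with open("core-site.xml", "w") as f:
        f.write(render_core_site())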
This PR requires a corresponding PR against mesos/spark-ec2 to be merged
first, since that repository is cloned onto the cluster while the
machines are being set up:
https://github.com/mesos/spark-ec2/pull/58
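
As a rough sketch of that setup step (the host argument, identity file,
and exact clone command are illustrative, not taken from the actual
script), the deploy effectively runs something like this against the
master:

import subprocess

def clone_setup_scripts(master_host, identity_file):
    # The EC2 deploy pulls the spark-ec2 setup scripts onto the master,
    # which is why the companion PR there must be merged first.
    cmd = ["ssh", "-i", identity_file, "root@" + master_host,
           "rm -rf spark-ec2 && "
           "git clone https://github.com/mesos/spark-ec2.git"]
    subprocess.check_call(cmd)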
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/danosipov/spark s3_credentials
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/1120.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1120
----
commit 41fd9388ed10fe65a50edf44ed5a4cf1364707f1
Author: Dan Osipov <[email protected]>
Date: 2014-06-18T21:37:19Z
Add S3 configuration parameters to the EC2 deploy scripts
----