Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/12004
> This won't be enabled in a default build of Spark.
Okie doke. I don't want to derail the PR review here, but I'll ask since
it's on-topic:
Is there a way for projects like [Flintrock](https://github.com/nchammas/flintrock) and spark-ec2 to set up clusters so that Spark automatically has S3 support enabled? Do we just name the appropriate packages in `spark-defaults.conf` under `spark.jars.packages`?
Actually, I feel a little silly now. It seems kinda obvious in retrospect.
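For future reference, I'm picturing something like this in `spark-defaults.conf` (the coordinates and versions below are purely illustrative assumptions on my part; the right ones presumably depend on the Hadoop build, which is exactly the mapping question below):

```properties
# Illustrative only -- pull in the S3A filesystem module and the AWS SDK it was built against.
# These versions are an assumption, not a recommendation.
spark.jars.packages  org.apache.hadoop:hadoop-aws:2.7.3,com.amazonaws:aws-java-sdk:1.7.4
```

Or, for a one-off job, the equivalent `--packages` flag on `spark-submit`.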
So, to @steveloughran's point, that leaves (for me, at least) the question of knowing which version of the AWS SDK goes with which version of `hadoop-aws`, and so on. Is there a place outside of this PR where one can look that up? [This page](https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html) doesn't have a version mapping, for example.