Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/12004
> Does a build of Spark + Hadoop 2.7 right now have no ability at all to read from S3 out of the box, or just not full / ideal support?
No ability at all, as far as I can tell. People have to explicitly start their Spark session with `--packages`, like this:
```
pyspark --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0
```
Without that, any attempt to read from S3 fails with `java.io.IOException: No FileSystem for scheme: s3n`.
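For concreteness, here's a minimal sketch of what such a read looks like inside the `pyspark` shell launched above (where `sc` is the preconfigured SparkContext). The bucket name, key path, and credential values are hypothetical placeholders; credentials are supplied here through the standard Hadoop configuration keys for the s3n filesystem.
```
# Hypothetical credentials -- replace with your own. These are the standard
# Hadoop config keys that the s3n filesystem reads.
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

# Succeeds only when hadoop-aws is on the classpath (via --packages above);
# otherwise it raises java.io.IOException: No FileSystem for scheme: s3n.
lines = sc.textFile("s3n://my-bucket/path/to/data.txt")
print(lines.count())
```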
I see the maintainers' case for not bundling AWS-specific dependencies in the default Spark builds, and at the same time the end-user case for including them is just as clear.