[ https://issues.apache.org/jira/browse/SPARK-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-7870.
------------------------------
Resolution: Not A Problem
The published JARs in Maven can only declare a dependency on one particular
Hadoop version, no matter which one is chosen. However, you aren't meant to
rely on Spark's Hadoop bindings anyway, as the Spark dependency is always
supposed to be {{provided}}. So, if you build and deploy Spark for Hadoop 1 on
your cluster, and build your app against the Spark APIs (which have no Hadoop
dependency per se), the app should already run correctly.
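
For reference, a minimal sbt sketch of the intended setup (the version string
below is illustrative, not prescriptive):

{code:scala}
// Compile the application against the Spark API only; the cluster's own
// Spark installation (built against whatever Hadoop version it targets)
// supplies the classes at runtime, hence the "provided" scope.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.4.0" % "provided"
{code}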
> Publish jars against hadoop 1 (default) too
> -------------------------------------------
>
> Key: SPARK-7870
> URL: https://issues.apache.org/jira/browse/SPARK-7870
> Project: Spark
> Issue Type: Bug
> Components: Build
> Affects Versions: 1.3.2, 1.4.0
> Reporter: Andy Petrella
> Labels: hadoop-1.0, hadoop-2.0, jars, publish
>
> The published jars have always (or at least since 1.1.0, see SPARK-3764)
> been published against hadoop 2 binaries.
> However, the default build is hadoop 1. This means that an application that
> doesn't even use hdfs will fail on simple calls like `saveAsObjectFile`
> (since it goes through hadoop interfaces), as sketched below.
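> A minimal sketch of the failure mode (the local master and output path are
> hypothetical):
> {code:scala}
> // No HDFS in sight, yet saveAsObjectFile goes through the Hadoop
> // OutputFormat interfaces, so a Hadoop 1 vs. 2 binary mismatch between
> // the published jars and the runtime surfaces right here.
> import org.apache.spark.{SparkConf, SparkContext}
> val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("repro"))
> sc.parallelize(1 to 10).saveAsObjectFile("/tmp/objs")
> sc.stop()
> {code}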
> There are a couple of points that can be considered wrong here:
> * it isn't clearly stated that the published binaries aren't what you would
> expect based on Spark's rules w.r.t. hadoop 1.0.4
> * if an application doesn't need hdfs, the developer can't rely on the
> published libraries; they have to build their own set without asking for a
> specific hadoop version (hence getting 1.0.4) and deploy it themselves :-/