[jira] [Comment Edited] (SPARK-6511) Publish "hadoop provided" build with instructions for different distros

2015-06-11 Thread Marcelo Vanzin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582227#comment-14582227
 ] 

Marcelo Vanzin edited comment on SPARK-6511 at 6/11/15 5:05 PM:


Sorry for all the noise; I just noticed what's wrong. Instead of:

{code}
export SPARK_DIST_CLASSPATH=$(hadoop classpath --config /path/to/configs) 
{code}

It should be:

{code}
export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath) 
{code}

Should have caught that during review. :-/
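
For reference, the order matters because {{--config}} is a generic option handled by the {{hadoop}} launcher script itself, not by the {{classpath}} subcommand. A quick way to see the difference (sketch; assumes a valid conf dir at /path/to/configs):

{code}
# The hadoop wrapper consumes --config before dispatching to the subcommand,
# so it must come first:
hadoop --config /path/to/configs classpath   # honors the given conf dir
hadoop classpath --config /path/to/configs   # --config reaches the subcommand and is typically ignored
{code}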



> Publish "hadoop provided" build with instructions for different distros
> ---
>
> Key: SPARK-6511
> URL: https://issues.apache.org/jira/browse/SPARK-6511
> Project: Spark
> Issue Type: Improvement
> Components: Build
> Reporter: Patrick Wendell
> Assignee: Patrick Wendell
> Fix For: 1.4.0
>
>
> Currently we publish a series of binaries with different Hadoop client jars. 
> This mostly works, but some users have reported compatibility issues with 
> different distributions.
> One improvement moving forward might be to publish a binary build that simply 
> asks you to set HADOOP_HOME to pick up the Hadoop client location. That way 
> it would work across multiple distributions, even if they have subtle 
> incompatibilities with upstream Hadoop.
> I think a first step for this would be to produce such a build for the 
> community and see how well it works. One potential issue is that our fancy 
> excludes and dependency re-writing won't work with the simpler "append 
> Hadoop's classpath to Spark". Also, how we deal with the Hive dependency is 
> unclear: should we continue to bundle Spark's Hive (which has some fixes 
> for dependency conflicts), or allow linking against vanilla Hive at 
> runtime?






[jira] [Comment Edited] (SPARK-6511) Publish "hadoop provided" build with instructions for different distros

2015-04-13 Thread Patrick Wendell (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493183#comment-14493183
 ] 

Patrick Wendell edited comment on SPARK-6511 at 4/13/15 10:11 PM:
--

Just as an example, I tried to wire Spark to work with stock Hadoop 2.6. Here is 
how I got it running after doing a hadoop-provided build. This is pretty 
clunky, so I wonder if we should just support setting HADOOP_HOME and 
automatically find and add the jar files under that folder.

{code}
export SPARK_DIST_CLASSPATH=$(find /tmp/hadoop-2.6.0 -name "*.jar" | tr "\n" ":")
./bin/spark-shell
{code}
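
A rough sketch of what the HADOOP_HOME idea could look like (hypothetical; assumes the stock tarball layout, where {{bin/hadoop classpath}} prints the client classpath for that installation):

{code}
# Hypothetical sketch: derive the classpath from HADOOP_HOME instead of
# assembling it by hand with find/tr.
export HADOOP_HOME=/tmp/hadoop-2.6.0
export SPARK_DIST_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)
./bin/spark-shell
{code}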

[~vanzin] for your CDH packages, what do you end up setting 
SPARK_DIST_CLASSPATH to?

/cc [~srowen]





