[ https://issues.apache.org/jira/browse/SPARK-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975906#comment-14975906 ]

Jeff Zhang commented on SPARK-11342:
------------------------------------

Yes, ideally any available profile could be set for Spark, but since the 
Hadoop profile is the most important one, it would be nice to allow 
customizing it first. I noticed the following Hadoop profiles in 
run-tests.py. They can be set in the Jenkins environment, so I think the same 
should be possible in a local environment (a sketch follows the code block 
below).
{code}
    # Maps each Hadoop profile name to the sbt/maven flags that select it.
    sbt_maven_hadoop_profiles = {
        "hadoop1.0": ["-Phadoop-1", "-Dhadoop.version=1.2.1"],
        "hadoop2.0": ["-Phadoop-1", "-Dhadoop.version=2.0.0-mr1-cdh4.1.1"],
        "hadoop2.2": ["-Pyarn", "-Phadoop-2.2"],
        "hadoop2.3": ["-Pyarn", "-Phadoop-2.3", "-Dhadoop.version=2.3.0"],
        "hadoop2.6": ["-Pyarn", "-Phadoop-2.6"],
    }
{code}
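
For what it's worth, a minimal sketch of how such a local override could look. 
The environment variable name SPARK_HADOOP_PROFILE is hypothetical (Jenkins 
uses its own variables for this), and the hadoop2.3 fallback just mirrors the 
default the test script uses today:
{code}
import os
import sys

sbt_maven_hadoop_profiles = {
    "hadoop1.0": ["-Phadoop-1", "-Dhadoop.version=1.2.1"],
    "hadoop2.0": ["-Phadoop-1", "-Dhadoop.version=2.0.0-mr1-cdh4.1.1"],
    "hadoop2.2": ["-Pyarn", "-Phadoop-2.2"],
    "hadoop2.3": ["-Pyarn", "-Phadoop-2.3", "-Dhadoop.version=2.3.0"],
    "hadoop2.6": ["-Pyarn", "-Phadoop-2.6"],
}

def get_hadoop_profile_flags():
    # Read the profile name from the environment, the way the Jenkins jobs
    # pass one in, falling back to the profile run-tests picks today.
    profile = os.environ.get("SPARK_HADOOP_PROFILE", "hadoop2.3")
    if profile not in sbt_maven_hadoop_profiles:
        print("[error] unknown Hadoop profile: %s" % profile)
        sys.exit(-1)
    return sbt_maven_hadoop_profiles[profile]
{code}
With something like that wired into run-tests.py, testing against the same 
profile the assembly was built with would just be:
{code}
SPARK_HADOOP_PROFILE=hadoop2.6 ./dev/run-tests
{code}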

> Allow to set hadoop profile when running dev/run_tests
> ------------------------------------------------------
>
>                 Key: SPARK-11342
>                 URL: https://issues.apache.org/jira/browse/SPARK-11342
>             Project: Spark
>          Issue Type: Improvement
>          Components: Tests
>            Reporter: Jeff Zhang
>            Priority: Minor
>
> Usually I assemble Spark with Hadoop 2.6.0, but when I run dev/run_tests it 
> builds against hadoop-2.3. Then the next time I run bin/spark-shell, it 
> complains that there are multiple Spark assembly jars. It would be nice if I 
> could specify the Hadoop profile when running dev/run_tests.


