[
https://issues.apache.org/jira/browse/SPARK-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975888#comment-14975888
]
Jeff Zhang commented on SPARK-11342:
------------------------------------
[~sowen] Isn't it also for local testing?
{code}
if os.environ.get("AMPLAB_JENKINS"):
    # if we're on the Amplab Jenkins build servers setup variables
    # to reflect the environment settings
    build_tool = os.environ.get("AMPLAB_JENKINS_BUILD_TOOL", "sbt")
    hadoop_version = os.environ.get("AMPLAB_JENKINS_BUILD_PROFILE",
                                    "hadoop2.3")
    test_env = "amplab_jenkins"
    # add path for Python3 in Jenkins if we're calling from a Jenkins machine
    os.environ["PATH"] = "/home/anaconda/envs/py3k/bin:" + os.environ.get("PATH")
else:
    # else we're running locally and can use local settings
    build_tool = "sbt"
    hadoop_version = "hadoop2.3"
    test_env = "local"
{code}
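As a sketch of what the requested improvement could look like, the local branch could consult an environment variable before falling back to the hard-coded default, mirroring how the Jenkins branch already reads AMPLAB_JENKINS_BUILD_PROFILE. The variable name SPARK_TEST_HADOOP_PROFILE and the helper function below are hypothetical, not part of the actual dev/run-tests script:

{code}
import os

def get_build_config(env):
    """Pick build tool, hadoop profile, and test environment.

    Mirrors the selection logic quoted above; the local branch
    additionally honors a (hypothetical) SPARK_TEST_HADOOP_PROFILE
    override before falling back to the hadoop2.3 default.
    """
    if env.get("AMPLAB_JENKINS"):
        # on the Amplab Jenkins build servers, use the Jenkins settings
        build_tool = env.get("AMPLAB_JENKINS_BUILD_TOOL", "sbt")
        hadoop_version = env.get("AMPLAB_JENKINS_BUILD_PROFILE", "hadoop2.3")
        return build_tool, hadoop_version, "amplab_jenkins"
    # running locally: allow an explicit profile override
    hadoop_version = env.get("SPARK_TEST_HADOOP_PROFILE", "hadoop2.3")
    return "sbt", hadoop_version, "local"

# e.g. a local run with the override set:
print(get_build_config({"SPARK_TEST_HADOOP_PROFILE": "hadoop2.6"}))
{code}

With something like this, a developer who assembled Spark against a different Hadoop profile could make dev/run_tests match it without editing the script.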
> Allow to set hadoop profile when running dev/run_tests
> ------------------------------------------------------
>
> Key: SPARK-11342
> URL: https://issues.apache.org/jira/browse/SPARK-11342
> Project: Spark
> Issue Type: Improvement
> Components: Tests
> Reporter: Jeff Zhang
> Priority: Minor
>
> Usually I assemble Spark with Hadoop 2.6.0, but when I run dev/run_tests
> it uses hadoop-2.3. The next time I run bin/spark-shell, it complains that
> there are multiple Spark assembly jars. It would be nice if I could specify
> the Hadoop profile when running dev/run_tests.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)