README incorrectly suggests the build sources spark-env.sh

This is misleading because the build doesn't source that file. IMO
it's better to force people to always specify build environment
variables on the command line, as we do in every example.
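
For reference, the command-line style in question looks roughly like
this (a sketch; SPARK_HADOOP_VERSION and SPARK_YARN are the variables
the SBT build examples of this era used, and the version number is a
placeholder):

    # set build variables inline rather than in conf/spark-env.sh
    $ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly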


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/153cad12
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/153cad12
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/153cad12

Branch: refs/heads/scala-2.10
Commit: 153cad1293efb7947f5c3d01c7209b5b035e63c6
Parents: 5b74609
Author: Patrick Wendell <[email protected]>
Authored: Tue Dec 10 12:53:45 2013 -0800
Committer: Patrick Wendell <[email protected]>
Committed: Tue Dec 10 12:54:28 2013 -0800

----------------------------------------------------------------------
 README.md | 3 ---
 1 file changed, 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/153cad12/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 8c7853e..7faba27 100644
--- a/README.md
+++ b/README.md
@@ -69,9 +69,6 @@ When building for Hadoop 2.2.X and newer, you'll need to include the additional
     # Apache Hadoop 2.2.X and newer
     $ mvn -Dyarn.version=2.2.0 -Dhadoop.version=2.2.0 -Pnew-yarn
 
-For convenience, these variables may also be set through the `conf/spark-env.sh` file
-described below.
-
 When developing a Spark application, specify the Hadoop version by adding the
 "hadoop-client" artifact to your project's dependencies. For example, if you're
 using Hadoop 1.2.1 and build your application using SBT, add this entry to
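
The entry that context refers to would look like this in an SBT build
definition (a sketch based on the README of this era; the group and
artifact coordinates are the standard Hadoop ones, and the version
should match the Hadoop release you build against):

    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "1.2.1"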
