HADOOP_GREMLIN_LIBS does not need "spark"

Including "spark-gremlin" in HADOOP_GREMLIN_LIBS is unnecessary, and in 3.3.0
it causes further problems: Spark 2.0 generates errors when duplicate jars are
on the path, so having "spark-gremlin" alongside "hadoop-gremlin" immediately
fails. CTR
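
To illustrate the resulting configuration, here is a minimal sketch. The install path under /usr/local/gremlin-console is an assumption for illustration (it matches the path style used in the removed doc line), not something mandated by this commit:

```shell
# Sketch: after this change, only hadoop-gremlin's lib directory needs to be
# on HADOOP_GREMLIN_LIBS (install path below is illustrative).
export HADOOP_GREMLIN_LIBS=/usr/local/gremlin-console/ext/hadoop-gremlin/lib

# Do NOT also append spark-gremlin's lib directory -- with Spark 2.0 the
# duplicate jars on the path generate errors:
# export HADOOP_GREMLIN_LIBS=$HADOOP_GREMLIN_LIBS:/usr/local/gremlin-console/ext/spark-gremlin/lib

echo "$HADOOP_GREMLIN_LIBS"
```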


Project: http://git-wip-us.apache.org/repos/asf/tinkerpop/repo
Commit: http://git-wip-us.apache.org/repos/asf/tinkerpop/commit/8f34da82
Tree: http://git-wip-us.apache.org/repos/asf/tinkerpop/tree/8f34da82
Diff: http://git-wip-us.apache.org/repos/asf/tinkerpop/diff/8f34da82

Branch: refs/heads/TINKERPOP-1612
Commit: 8f34da823f7bea4674e2ee41d9f11b0c2aa76660
Parents: 5aa3f40
Author: Stephen Mallette <sp...@genoprime.com>
Authored: Mon Jan 30 15:05:16 2017 -0500
Committer: Stephen Mallette <sp...@genoprime.com>
Committed: Mon Jan 30 15:05:16 2017 -0500

----------------------------------------------------------------------
 docs/src/reference/implementations-hadoop.asciidoc | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/tinkerpop/blob/8f34da82/docs/src/reference/implementations-hadoop.asciidoc
----------------------------------------------------------------------
diff --git a/docs/src/reference/implementations-hadoop.asciidoc b/docs/src/reference/implementations-hadoop.asciidoc
index 3416ea8..d3b340c 100644
--- a/docs/src/reference/implementations-hadoop.asciidoc
+++ b/docs/src/reference/implementations-hadoop.asciidoc
@@ -265,14 +265,8 @@ arguably easier for developers to work with than native Hadoop MapReduce. Spark-
 the bulk-synchronous parallel, distributed message passing algorithm within Spark and thus, any `VertexProgram` can be
 executed over `SparkGraphComputer`.
 
-If `SparkGraphComputer` will be used as the `GraphComputer` for `HadoopGraph` then its `lib` directory should be
-specified in `HADOOP_GREMLIN_LIBS`.
-
-[source,shell]
-export HADOOP_GREMLIN_LIBS=$HADOOP_GREMLIN_LIBS:/usr/local/gremlin-console/ext/spark-gremlin/lib
-
-Furthermore the `lib/` directory should be distributed across all machines in the SparkServer cluster. For this purpose TinkerPop
-provides a helper script, which takes the Spark installation directory and the Spark machines as input:
+Furthermore the `lib/` directory should be distributed across all machines in the SparkServer cluster. For this purpose
+TinkerPop provides a helper script, which takes the Spark installation directory and the Spark machines as input:
 
 [source,shell]
 bin/hadoop/init-tp-spark.sh /usr/local/spark spark@10.0.0.1 spark@10.0.0.2 spark@10.0.0.3
