But I am able to run the SparkPi example:
./run-example SparkPi 1000 --master spark://192.168.26.131:7077

Result: Pi is roughly 3.14173708



bit1...@163.com
 
From: bit1...@163.com
Date: 2015-02-18 16:29
To: user
Subject: Problem with 1 master + 2 slaves cluster
Hi sparkers,
I set up a Spark (1.2.1) cluster with 1 master and 2 slaves and then started 
them up; everything appears to be running normally.
In the master node, I run the spark-shell, with the following steps:

bin/spark-shell --master spark://192.168.26.131:7077
scala> val rdd = sc.textFile("file:///home/hadoop/history.txt.used.byspark", 7)
scala> rdd.flatMap(_.split(" ")).
         map((_, 1)).
         reduceByKey(_ + _, 5).
         map(x => (x._2, x._1)).
         sortByKey(false).
         map(x => (x._2, x._1)).
         saveAsTextFile("file:///home/hadoop/output")
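For reference, the transformation chain above (split, pair with 1, sum per word, sort by count descending) can be sketched on a plain Scala collection, with no Spark involved; this is only an illustration of what each step does, not the distributed execution:

```scala
// Minimal sketch of the same word-count pipeline on a local Seq.
// groupBy + sum plays the role of reduceByKey; sortBy(-count) plays
// the role of the swap/sortByKey(false)/swap sequence.
object WordCountSketch {
  def wordCounts(lines: Seq[String]): List[(String, Int)] =
    lines
      .flatMap(_.split(" "))                         // split lines into words
      .map((_, 1))                                   // pair each word with 1
      .groupBy(_._1)                                 // group pairs by word
      .map { case (w, ps) => (w, ps.map(_._2).sum) } // sum the 1s per word
      .toList
      .sortBy { case (_, n) => -n }                  // highest count first

  def main(args: Array[String]): Unit = {
    println(wordCounts(Seq("a b a", "b a")))  // List((a,3), (b,2))
  }
}
```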

After the application finishes, there is no word-count output. An output 
directory does appear on each slave node, but it contains only a "_temporary" 
subdirectory.

Any ideas? Thanks!



