Assuming your cluster is actually working (e.g., other examples like
SparkPi work), the problem is probably that println() doesn't write
output back to the driver; instead, it may just be printing locally on
each slave, in which case the output ends up in the workers' logs rather
than on the driver's console. You can test this by replacing lines 43
through 45 with:

  sc.parallelize(1 to 10, slices).map {
    i => barr1.value.size
  }.collect().foreach(i => println(i))

which should gather the exact same data but ensure that the printlns
actually occur on the driver.
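
For reference, the lines in question look roughly like the following
(paraphrased from the 0.8.0 example, so details may differ slightly);
because the closure runs on the executors, the println output lands in
each worker's stdout/stderr log instead of on the driver:

  // runs on the executors: println goes to the worker logs, not the driver
  sc.parallelize(1 to 10, slices).foreach {
    i => println(barr1.value.size)
  }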

On Mon, Nov 18, 2013 at 5:59 PM, 杨强 <[email protected]> wrote:

>  Hi, all.
> I'm using spark-0.8.0-incubating.
>
> I tried the example BroadcastTest in local mode.
> ./run-example org.apache.spark.examples.BroadcastTest local 1 2>/dev/null
> This works fine and gives the result:
>  Iteration 0
> ===========
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> Iteration 1
> ===========
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
> 1000000
>
> But when I run this program on the cluster (standalone mode) with:
> ./run-example org.apache.spark.examples.BroadcastTest spark://172.16.1.39:7077 5 2>/dev/null
> the output is as follows:
>  Iteration 0
> ===========
> Iteration 1
> ===========
>
> I also tried the command
> ./run-example org.apache.spark.examples.BroadcastTest spark://172.16.1.39:7077 5
> but did not find any error message.
>
> I hope someone can give me some advice. Thank you.
>
>
> The content of the file etc/spark-env.sh is as follows:
>
>  export SCALA_HOME=/usr/lib/scala-2.9.3
> export SPARK_MASTER_IP=172.16.1.39
> export SPARK_MASTER_WEBUI_PORT=8090
> export SPARK_WORKER_WEBUI_PORT=8091
> export SPARK_WORKER_MEMORY=2G
>
> #export SPARK_CLASSPATH=.:/home/spark-0.7.3/core/target/spark-core-assembly-0.7.3.jar:$SPACK_CLASSPATH
>
> export SPARK_CLASSPATH=.:/home/hadoop/spark-0.8.0-incubating/conf:/home/hadoop/spark-0.8.0-incubating/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop1.0.1.jar:/home/hadoop/hadoop-1.0.1/conf
>
> ------------------------------
>      Sincerely
>
> Yang, Qiang
>
