Hi all, 
I am new to Spark. I am trying to deploy HDFS (Hadoop 2.6.0) and Spark 1.3.1 
on four nodes; each node has 8 cores and 8 GB of memory.
One node is configured as the head node running the masters, and the other three are workers.
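
Unless I am misreading my setup, each NodeManager should be advertising the Hadoop 2.6 
defaults for the per-node resource limits, which as far as I know are:

  yarn.nodemanager.resource.memory-mb     8192    (assumed default, not overridden by me)
  yarn.nodemanager.resource.cpu-vcores    8       (assumed default)
  yarn.scheduler.maximum-allocation-mb    8192    (assumed default)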

When I try to run PageRank from HiBench, it always causes a node to 
reboot in the middle of the job, for the Scala, Java, and Python versions alike. 
The MapReduce version of the same benchmark works fine.

I also tried a standalone deployment and hit the same issue.

My spark-defaults.conf:

spark.master             yarn-client
spark.driver.memory      4g
spark.executor.memory    4g
spark.rdd.compress       false


The job submit script is:

bin/spark-submit \
  --properties-file HiBench/report/pagerank/spark/scala/conf/sparkbench/spark.conf \
  --class org.apache.spark.examples.SparkPageRank \
  --master yarn-client \
  --num-executors 2 \
  --executor-cores 4 \
  --executor-memory 4G \
  --driver-memory 4G \
  HiBench/src/sparkbench/target/sparkbench-4.0-SNAPSHOT-MR2-spark1.3-jar-with-dependencies.jar \
  hdfs://discfarm:9000/HiBench/Pagerank/Input/edges \
  hdfs://discfarm:9000/HiBench/Pagerank/Output \
  3
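
If I read the YARN memory accounting correctly, each executor container asks for the 
executor memory plus spark.yarn.executor.memoryOverhead, which I believe defaults to 
the larger of 384 MB and a small fraction of the executor memory, so roughly:

  executor container          ≈ 4096 MB + ~384 MB ≈ 4.4 GB   (per executor, if my reading of the default overhead is right)
  left over on an 8 GB worker ≈ 3.5 GB                       (for the OS, DataNode, and NodeManager)

Is that already too tight, or am I off on the accounting?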

What is the problem with my configuration? And how can I find the cause?
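
So far the only thing I can think of is to pull the YARN container logs after a failed 
run and look through the kernel log on the node that rebooted; is that the right place 
to start? Something like (the application id is just a placeholder):

  yarn logs -applicationId <application_id>           # aggregated container logs (needs log aggregation enabled)
  grep -i -E "oom|out of memory" /var/log/messages    # kernel messages that survive the reboot (path varies by distro)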

Any help is welcome!