Hi,
Thanks for reading,

I am trying to run a Spark program on a cluster. The program runs
successfully in local mode.
The standalone cluster is working: I can see the workers from the master web UI.
The master and worker are on different machines, and the worker status is ALIVE.
The problem is that whether I start the program from Eclipse or with ./run-example,
it stops at some point, with the stage view showing:
Stage 0: count at SparkExample.java:31
<http://jie-optiplex-7010.local:4040/stages/stage?id=0>
Submitted: 2013/12/16 14:50:36, Duration: 7 m,
Tasks succeeded/total: 0/2, Shuffle read/write: none
And after a while, the worker's state becomes DEAD.

The Spark directory on the worker was copied from the master (built with
./make-distribution), and the firewall is turned off on both machines.

Has anyone had the same issue before?
