Hi All,

I have a Spark cluster running on 3 Linux machines. One machine runs the
active master, the other two run standby masters, and each machine also
runs 2 worker instances.
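
For reference, the standby masters are configured roughly as in the sketch
below (this assumes ZooKeeper-based recovery; the ZooKeeper hosts and
directory shown are placeholders, not the actual values):

    # spark-env.sh on each master machine (sketch, values are placeholders)
    # ZooKeeper-based recovery lets a standby master take over the cluster state
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
      -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
      -Dspark.deploy.zookeeper.dir=/spark"
    # two worker instances per machine
    export SPARK_WORKER_INSTANCES=2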

I have found that when the machine running the active Spark master is reset
or loses network connectivity, the application freezes and sometimes cannot
recover. Does anyone know whether there are officially recommended steps to
recover from the following situations:
 
1) a reset of the machine running the Spark master?
2) a reset of a machine running Spark slaves?
3) a network outage on the machine running the Spark master?
4) a network outage on a machine running Spark slaves?

Thanks
Peng





