I'm using Spark 1.0.1 on a fairly large cluster with plenty of memory.
Cluster resources are allocated to me via YARN, and I'm seeing errors
like this quite often:

ERROR YarnClientClusterScheduler: Lost executor 63 on <host>: remote Akka
client disassociated


This is in an interactive shell session.  I don't know much about YARN's
plumbing and am wondering if there's some constraint in play -- e.g.,
executors that sit idle for too long get cleared out.
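One way I can think of to dig further (a sketch -- the application id below is a placeholder; the real one shows up in the YARN ResourceManager UI or the shell's startup output) is to pull the aggregated container logs and look for the NodeManager killing a container, which would point at a memory limit rather than an idle timeout:

```shell
# Placeholder id -- substitute the actual one for the shell session.
APP_ID=application_1400000000000_0001

# Fetch aggregated logs for all containers of the application; the
# NodeManager records a message there when it kills a container for
# exceeding its memory limit.
yarn logs -applicationId "$APP_ID" | grep -i -A 2 "killing container"
```

If nothing turns up there, I'd also be curious whether the NodeManager logs on <host> itself show anything around the time the executor dropped.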


Any insights here?
