Normally the Spark driver stops by throwing an exception if the same task fails four times. Can you please check in the log whether you have four failures of the same task? I recommend first taking a closer look at what's going on in Spark and the Cassandra driver before you dive into the scheduling with Mesos. Alternatively, you can run your job on a Spark standalone cluster to verify whether this problem is really related to Mesos.
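For reference, that four-failure limit is the default value of the spark.task.maxFailures setting. A minimal sketch of what I mean, in case it helps (the master URL and application name are placeholders, not taken from your setup):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("cassandra-read-job")            // placeholder name
      // Point at a standalone master instead of Mesos to isolate the problem.
      .setMaster("spark://standalone-master:7077") // placeholder URL
      // Default is 4: the driver aborts the job with an exception once any
      // single task has failed this many times.
      .set("spark.task.maxFailures", "4")
    val sc = new SparkContext(conf)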
BTW, your real problem seems to be overloading the Cassandra cluster, so I would think about how to throttle the read load your job produces; see the sketch after the quoted message below.

On Fri, Aug 8, 2014 at 9:19 AM, Xu Zhongxing <[email protected]> wrote:
> I am using Spark + Mesos to read data from Cassandra. After some time, the
> Cassandra driver throws an exception due to high load on the cluster, and
> the Mesos UI shows many task failures. The Spark driver hangs there. I
> would like the Spark driver to exit so that I know the job has failed. Why
> does the Spark driver program not exit when Mesos stops it after many
> failures?
>
> The detailed log is at
> https://github.com/datastax/spark-cassandra-connector/issues/134
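Here is the throttling sketch I mentioned. It is only a starting point and I have not verified it against your connector version: spark.cores.max is standard Spark, the two spark.cassandra.* properties are from the 1.x spark-cassandra-connector docs, and the values are arbitrary examples to tune for your cluster.

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("throttled-cassandra-read")             // placeholder name
      // Cap total executor cores so fewer tasks hit Cassandra concurrently.
      .set("spark.cores.max", "4")
      // Smaller pages and splits make each task read more gently.
      .set("spark.cassandra.input.page.row.size", "500")  // rows per fetch
      .set("spark.cassandra.input.split.size", "50000")   // rows per partition
    val sc = new SparkContext(conf)

Capping the concurrent cores is usually the quickest lever, since it directly bounds how many simultaneous reads Cassandra sees.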

