I am using Spark with Mesos to read data from Cassandra. After some time, the 
Cassandra driver throws an exception due to high load on the cluster, and the 
Mesos UI shows many task failures. The Spark driver then just hangs. I would 
like the Spark driver to exit so that I can tell the job has failed. Why does 
the Spark driver program not exit after Mesos stops the tasks following many 
failures?


The detailed log is at 
https://github.com/datastax/spark-cassandra-connector/issues/134
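Until the underlying hang is fixed, one workaround is to catch the failure in the driver program itself and exit with a non-zero status, so whatever launched the job can detect the failure. Below is a minimal, hedged sketch of that pattern: `run_job` is a hypothetical placeholder standing in for the real Spark/Cassandra read (here it just simulates the connector raising after repeated task failures), not the actual connector API.

```python
import sys


def run_job():
    # Placeholder for the real Spark job that reads from Cassandra.
    # Here it simulates the connector raising after repeated task failures.
    raise RuntimeError("Cassandra read failed: too many task failures")


def main():
    try:
        run_job()
    except Exception as exc:
        # Log and return a non-zero code instead of hanging, so the
        # launcher (cron, marathon, CI, etc.) can see the job failed.
        print("job failed: %s" % exc, file=sys.stderr)
        return 1
    return 0


exit_code = main()
```

In a real deployment the driver would call `sys.exit(exit_code)` so the process terminates with that status. Note that Spark's `spark.task.maxFailures` setting controls how many times a task is retried before the stage is aborted, though how retries interact with Mesos task failures may differ from standalone mode.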
