OOM error in Spark worker

2015-10-01 Thread varun sharma
My workers are running out of memory (OOM) over time. I am running a streaming job on Spark 1.4.0. Here is the relevant finding from a heap dump of a worker: 16,802 instances of "org.apache.spark.deploy.worker.ExecutorRunner", loaded by "sun.misc.Launcher$AppClassLoader @ 0xdff94088", occupy 488,249,688 bytes (95.80%). These
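A note on a likely cause, not confirmed against this particular setup: in Spark releases up to 1.4.x the standalone Worker keeps every finished ExecutorRunner around for its web UI, so a long-running worker hosting a streaming job accumulates them exactly as the heap dump shows. Spark 1.5.0 introduced worker properties that cap this retention. Below is a sketch of the worker-side setting, assuming an upgrade to 1.5.0 or later; the limits of 100 are arbitrary placeholders:

  # conf/spark-env.sh on each worker machine (sketch; these properties only
  # exist in Spark 1.5.0+). They bound how many finished executors/drivers
  # the Worker retains for its web UI, which is where ExecutorRunner
  # instances otherwise pile up over the lifetime of the daemon.
  SPARK_WORKER_OPTS="-Dspark.worker.ui.retainedExecutors=100 -Dspark.worker.ui.retainedDrivers=100"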

OOM error in Spark worker

2015-09-29 Thread varun sharma
Kafka topic and are pending to be scheduled because of a delay in processing... Will force-killing the streaming job lose the data that is not yet scheduled? Please help ASAP.
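For the data-loss question, a minimal sketch of one common pattern, not the original poster's code: a receiver-less "direct" Kafka stream combined with checkpointing and a graceful shutdown, so that batches received but not yet scheduled can be recovered on restart rather than lost when the job is killed. The broker list, topic name, checkpoint path and batch interval below are all placeholder assumptions.

  import kafka.serializer.StringDecoder
  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.kafka.KafkaUtils

  object GracefulKafkaStream {
    def main(args: Array[String]): Unit = {
      val checkpointDir = "hdfs:///tmp/streaming-checkpoint"   // placeholder path

      def createContext(): StreamingContext = {
        val conf = new SparkConf()
          .setAppName("kafka-stream")
          // Ask Spark to stop the streaming job gracefully on JVM shutdown,
          // finishing batches that are already queued (available since 1.4).
          .set("spark.streaming.stopGracefullyOnShutdown", "true")
        val ssc = new StreamingContext(conf, Seconds(10))
        ssc.checkpoint(checkpointDir)

        val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")   // placeholder brokers
        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, Set("my-topic"))                              // placeholder topic

        stream.map(_._2).count().print()
        ssc
      }

      // Recover from the checkpoint if one exists, otherwise build a fresh context.
      val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
      ssc.start()
      ssc.awaitTermination()
    }
  }

If a receiver-based Kafka stream is in use instead of the direct approach, enabling spark.streaming.receiver.writeAheadLog.enable=true (together with checkpointing) serves the same purpose of making received-but-unprocessed data recoverable.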