Re: Spark 2.3.2 : No of active tasks vastly exceeds total no of executor cores

2018-10-24 Thread Shing Hing Man
I have increased spark.scheduler.listenerbus.eventqueue.capacity and ran my application (in YARN client mode) as before. I no longer get "Dropped events", but the driver ran out of memory and the Spark UI gradually became unresponsive. I noticed from the Spark UI that tens of thousands of jobs
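For context, the listener bus queue capacity is a standard Spark configuration that can be raised at submit time. A minimal sketch of how this might be done; the capacity value, driver memory size, class name, and jar name below are illustrative assumptions, not values taken from this thread:

```shell
# Raise the listener event queue capacity (Spark default: 10000) and
# give the driver extra heap, since a larger queue retains more events
# in driver memory. All concrete values here are examples only.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.scheduler.listenerbus.eventqueue.capacity=20000 \
  --driver-memory 4g \
  --class com.example.MyApp \
  myapp.jar
```

Note the trade-off the thread illustrates: a larger queue avoids dropped events but holds more pending events on the driver heap, so driver memory may need to grow with it.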

Re: Spark 2.3.2 : No of active tasks vastly exceeds total no of executor cores

2018-10-22 Thread Shing Hing Man
In my log, I have found:

mylog.2:2018-10-19 20:00:50,455 WARN [dag-scheduler-event-loop] (Logging.scala:66) - Dropped 3498 events from appStatus since Fri Oct 19 19:25:05 UTC 2018.
mylog.2:2018-10-19 20:02:07,053 WARN [dispatcher-event-loop-1] (Logging.scala:66) - Dropped 123385 events from