> On 13 Jan 2016, at 19:25, Charlie Wright <[email protected]> wrote:
>
> Well, I'm trying to send my own event, a ResourcePressureEvent. I have a
> thread in the ResourceManager that I have created that periodically checks
> for "resource pressure" - which is whenever
> (pending resources + used resources) / total resources >= threshold -
> and whenever there is "resource pressure" I want to send an event to
> applications running on the cluster (specifically Spark applications).
>
> Charles.
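For reference, the pressure condition you describe could be sketched roughly as below. The method and field names here are illustrative only, not actual ResourceManager APIs; the threshold and resource counts are assumptions for the example.

```java
// Hedged sketch of the "resource pressure" check described above.
// Names (underPressure, pending/used/total) are illustrative, not YARN API.
public final class PressureCheck {

    /** Returns true when (pending + used) / total >= threshold. */
    static boolean underPressure(long pending, long used, long total,
                                 double threshold) {
        if (total <= 0) {
            return false; // no known capacity; treat as no pressure
        }
        return (double) (pending + used) / total >= threshold;
    }

    public static void main(String[] args) {
        // Example: 20 pending + 70 used out of 100 total, threshold 0.85
        System.out.println(underPressure(20, 70, 100, 0.85)); // prints "true"
    }
}
```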
1. You are probably best off playing with pre-emption in queues: the containers get killed for you.
2. For Spark, use Dynamic Resource Allocation to flex cluster size up and down based on load.
3. Spark does recognise and detect pre-emption failures as different from generic app failures; I don't know if it handles any pre-emption warning events.

Finally, you can have your own IPC channel to the Spark Driver (client or cluster) to pass on some event.
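For option 2, enabling Dynamic Resource Allocation is just configuration; a minimal spark-defaults.conf fragment looks something like this (the min/max values are example choices, tune them for your cluster):

```
# Let Spark grow/shrink its executor count with load
spark.dynamicAllocation.enabled        true
# The external shuffle service is required so shuffle data
# survives executor removal
spark.shuffle.service.enabled          true
# Example bounds on the executor count -- tune for your cluster
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   20
```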
