To: Shivaram Venkataraman
Cc: Ryan Blue; Ajith shetty; dev@spark.apache.org
Subject: Re: [Spark][Scheduler] Spark DAGScheduler scheduling performance hindered on JobSubmitted Event
It's mostly just hash maps from some ids to some state, and those can be
replaced just with concurrent hash maps
(I haven't actually looked at code and am just guessing based on
recollection.)
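A minimal sketch of what that replacement could look like. The class and field names here are illustrative, not Spark's actual ones; the point is only that an id-to-state map behind a `ConcurrentHashMap` no longer needs the event loop's single thread to guard it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: an id-to-state map (like the scheduler's jobId/stageId
// maps) backed by a ConcurrentHashMap, so reads and writes are thread-safe
// without being funneled through one event-loop thread.
public class SchedulerState {
    private final Map<Integer, String> jobIdToState = new ConcurrentHashMap<>();

    public void register(int jobId, String state) {
        jobIdToState.put(jobId, state);
    }

    public String lookup(int jobId) {
        // getOrDefault is atomic with respect to concurrent writers.
        return jobIdToState.getOrDefault(jobId, "UNKNOWN");
    }
}
```

The caveat, as the thread goes on to discuss, is that this only works if each map is updated independently; it does not help when several maps must be mutated together as one consistent step.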
On Tue, Mar 6, 2018 at 10:42 AM, Shivaram Venkataraman <
shiva...@eecs.berkeley.edu> wrote:
The problem with doing work in the call-site thread is that there are a
number of data structures that are updated during job submission, and
these data structures are guarded by the event loop, which ensures only
one thread accesses them. I don't think there is a very easy fix for
this given the current structure.
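To make the guarding pattern concrete, here is a toy sketch of it (illustrative names, not Spark's actual fields): the maps themselves are plain, unsynchronized `HashMap`s, and they stay safe only because a single loop drains the event queue and runs each event to completion before the next:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of an event-loop-guarded scheduler state. The HashMap has no
// locking of its own; serializing all mutations through one loop is the
// only thing that makes it safe.
public class EventLoopGuard {
    private final Map<Integer, String> jobIdToState = new HashMap<>(); // unsynchronized
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();

    // Any thread may post an event; only the loop thread runs them.
    public void post(Runnable event) { events.add(event); }

    public void onJobSubmitted(int jobId, String state) { jobIdToState.put(jobId, state); }

    public String state(int jobId) { return jobIdToState.get(jobId); }

    // In the real scheduler this loop runs forever on one dedicated thread;
    // here we process a single event so the behavior is easy to observe.
    public void runOnce() {
        Runnable e = events.poll();
        if (e != null) e.run();
    }
}
```

Moving work to the call-site thread means that work would touch these maps from many threads at once, which is exactly what this design rules out.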
I agree with Reynold. We don't need to use a separate pool, which would
have the problem you raised about FIFO. We just need to do the planning
outside of the scheduler loop. The call site thread sounds like a
reasonable place to me.
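A rough sketch of that split, under the assumption that the expensive part of submission (stage/dependency planning) can be separated from the cheap state updates. `plan` here is a stand-in for whatever prep work would move out of the loop, not an actual Spark method:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the proposal: pay the planning cost on the
// submitting (call-site) thread, then hand the scheduler loop a small,
// already-prepared event, so the loop only does quick state updates.
public class CallSitePrep {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();

    // Stand-in for the expensive planning work being moved out of the loop.
    static String plan(String job) {
        return job + ":planned";
    }

    public void submit(String job) {
        String planned = plan(job);   // call-site thread pays this cost
        events.add(planned);          // the loop only sees the cheap result
    }

    // What the single scheduler loop would consume.
    public String takeNext() throws InterruptedException {
        return events.take();
    }
}
```

This keeps the single-threaded loop (and the FIFO ordering it provides) intact while removing the heavy work from its critical path.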
On Mon, Mar 5, 2018 at 12:56 PM, Reynold Xin wrote:
Rather than using a separate thread pool, perhaps we can just move the prep
code to the call site thread?
On Sun, Mar 4, 2018 at 11:15 PM, Ajith shetty wrote:
> DAGScheduler becomes a bottleneck in the cluster when multiple JobSubmitted
> events have to be processed, as DAGSchedulerEventProcessLoop i