[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen closed SPARK-13433.
-----------------------------

The driver uses 1 core. If it can't schedule, nothing can. This is not a 
deadlock; it's just a resource shortage. Your app will not proceed until 
something else frees up resources. It is not necessarily true that the driver 
should know to yield or kill itself.

> The standalone server should limit the cores and memory used by running 
> drivers
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-13433
>                 URL: https://issues.apache.org/jira/browse/SPARK-13433
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 1.6.0
>            Reporter: lichenglin
>
> I have a 16-core cluster.
> A running driver uses at least 1 core, possibly more.
> When I submit many jobs to the standalone server in cluster mode,
> all of the cores may be taken up by running drivers,
> leaving no cores to run the applications themselves.
> The server is stuck.
> So I think we should limit the resources (cores and memory) available to running drivers.
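
Until such a limit exists, a common workaround is to cap what each submission may claim. Below is a minimal sketch, assuming a Scala application; the name CappedApp and all numbers are illustrative assumptions, not values from this ticket. spark.cores.max bounds the executor cores one application can take on a standalone cluster, while the driver's own cores and memory are requested at submit time.

    import org.apache.spark.{SparkConf, SparkContext}

    object CappedApp {
      def main(args: Array[String]): Unit = {
        // Cap the total executor cores this application may claim on the
        // standalone cluster, so concurrent applications still get cores.
        val conf = new SparkConf()
          .setAppName("capped-app")
          .set("spark.cores.max", "4")   // illustrative value

        val sc = new SparkContext(conf)
        // ... job body ...
        sc.stop()
      }
    }

    // The driver's own resources are requested when the job is submitted in
    // cluster mode, e.g. (illustrative values):
    //   spark-submit --master spark://master:7077 --deploy-mode cluster \
    //     --driver-cores 1 --driver-memory 1g ...

This does not remove the contention described above; it only makes it less likely that drivers alone consume all 16 cores.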



