[ 
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157152#comment-15157152
 ] 

Sean Owen commented on SPARK-13433:
-----------------------------------

You can limit the number of cores used by the driver, but it doesn't look like
you did that here, so the drivers used all the cores. Simply put, you should not
do that.
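For reference, a per-driver cap can be set at submit time in standalone cluster mode via the standard spark-submit options below. Note these are per-job caps chosen by the submitter, not the cluster-wide limit this issue asks for; the class name and jar are placeholders:

```shell
# Cap the cores and memory each driver may take (standalone cluster mode).
# --driver-cores and --driver-memory are per-driver, set by the submitter;
# they do not enforce an aggregate limit across all running drivers.
spark-submit \
  --master spark://master:7077 \
  --deploy-mode cluster \
  --driver-cores 1 \
  --driver-memory 1g \
  --class com.example.MyApp \
  myapp.jar
```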

> The standalone server should limit the count of cores and memory for
> running Drivers
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-13433
>                 URL: https://issues.apache.org/jira/browse/SPARK-13433
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 1.6.0
>            Reporter: lichenglin
>
> I have a 16-core cluster.
> A running driver uses at least 1 core, maybe more.
> When I submit a lot of jobs to the standalone server in cluster mode,
> all the cores may be used for running drivers,
> and then there are no cores left to run applications.
> The server is stuck.
> So I think we should limit the resources (cores and memory) available for running drivers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
