[ https://issues.apache.org/jira/browse/SPARK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016974#comment-17016974 ]

Sean R. Owen commented on SPARK-27750:
--------------------------------------

I don't think Spark or a resource manager can save you from this entirely. What 
would you do - not launch an app because some other app might want resources 
later? And how would Spark even know there are other drivers? RMs can use pools 
to prevent too many resources going to one user.
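
For what per-app limiting does exist in standalone mode today: each application 
can be capped with spark.cores.max, and each driver reserves spark.driver.cores 
(default 1) in cluster mode. A minimal sketch in Scala; the configuration keys 
are real, the values are illustrative, and note this caps executor cores per 
application rather than the number of queued drivers:

    import org.apache.spark.sql.SparkSession

    // Cap this application's footprint on the standalone master so one
    // user's backlog cannot take every core. Values are illustrative.
    val spark = SparkSession.builder()
      .appName("capped-app")
      .config("spark.cores.max", "8")     // ceiling on total executor cores for this app
      .config("spark.driver.cores", "1")  // cores reserved by the driver in cluster mode
      .getOrCreate()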

> Standalone scheduler - ability to prioritize applications over drivers; many 
> drivers act like a Denial of Service
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27750
>                 URL: https://issues.apache.org/jira/browse/SPARK-27750
>             Project: Spark
>          Issue Type: New Feature
>          Components: Scheduler
>    Affects Versions: 3.0.0
>            Reporter: t oo
>            Priority: Minor
>
> If I submit 1000 spark-submit drivers, they consume all the cores on my 
> cluster (essentially acting like a Denial of Service), and no Spark 
> 'application' gets to run, since the cores are all consumed by the 'drivers'. 
> This feature is about having the ability to prioritize applications over 
> drivers so that at least some 'applications' can start running. I guess it 
> would be like:
>
>     if (driver.state == 'submitted' and exists an app with app.state == 'submitted')
>         then start that app first (set app.state = 'running')
>     if (all apps have app.state == 'running')
>         then start waiting drivers (set driver.state = 'running')
>  
> Secondary to this, why must a driver consume a minimum of 1 entire core?
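
For context, the standalone Master's scheduling loop launches waiting drivers 
before allocating executor cores, which is what lets a flood of drivers starve 
applications. Below is a toy model of the rule proposed above, flipping that 
order; it is not Spark's actual Master code, and all names here (schedule, 
Driver, App, freeCores) are hypothetical:

    // Toy model only: not Spark's Master.schedule().
    sealed trait State
    case object Submitted extends State
    case object Running   extends State

    final case class Driver(id: String, var state: State)
    final case class App(id: String, var state: State)

    // Give cores to waiting applications before waiting drivers, so queued
    // drivers cannot starve applications.
    def schedule(drivers: Seq[Driver], apps: Seq[App], freeCores: Int): Unit = {
      var cores = freeCores
      for (app <- apps if app.state == Submitted && cores > 0) {
        app.state = Running          // launch waiting apps first
        cores -= 1
      }
      // Only once every app is running do drivers get the leftover cores.
      if (apps.forall(_.state == Running)) {
        for (d <- drivers if d.state == Submitted && cores > 0) {
          d.state = Running
          cores -= 1
        }
      }
    }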


