GitHub user hbhanawat commented on the pull request:

    https://github.com/apache/spark/pull/11723#issuecomment-197221049
  
    @rxin  Thanks for commenting. 
    
    Spark was designed to be agnostic to the underlying cluster manager (as 
long as it can acquire executor processes, and these can communicate with each 
other). Since Spark is now being applied to newer and different use cases, 
there is a need to allow other cluster managers to manage Spark components. One 
such use case is embedding Spark components like the executor and driver 
inside another process, which may be a datastore; this allows co-location of 
data and processing. Another use case is using Spark like an application 
server (you may have heard of spark-jobserver). Spark's current design can 
handle such use cases if the cluster manager supports them. Hence, IMO, it is 
meaningful to allow plugging in new cluster managers. 
    
    From a code perspective, I think even the creation of the TaskScheduler 
and SchedulerBackend for the YARN/Mesos/local modes should be done through a 
similar interface. 
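
    To make the idea concrete, here is a minimal, hypothetical sketch of what 
such a pluggable interface could look like. All names here 
(`PluggableClusterManager`, `canCreate`, `ClusterManagerRegistry`, the 
`embedded://` URL scheme) are illustrative assumptions for this comment, not 
Spark's actual API; the `TaskScheduler` and `SchedulerBackend` traits are 
empty stand-ins so the sketch is self-contained.

    ```scala
    // Stand-ins for Spark's scheduler abstractions, so this sketch compiles
    // on its own. The real types live in org.apache.spark.scheduler.
    trait TaskScheduler
    trait SchedulerBackend

    // Hypothetical plugin interface: a cluster manager declares which master
    // URLs it handles and knows how to build the two scheduler components.
    trait PluggableClusterManager {
      // Return true if this manager handles the given master URL
      // (e.g. "yarn", "mesos://...", or a custom scheme).
      def canCreate(masterURL: String): Boolean

      def createTaskScheduler(masterURL: String): TaskScheduler
      def createSchedulerBackend(masterURL: String,
                                 scheduler: TaskScheduler): SchedulerBackend
    }

    // Toy implementation for an imagined "embedded://" scheme, i.e. executors
    // and driver hosted inside another process such as a datastore.
    class EmbeddedClusterManager extends PluggableClusterManager {
      def canCreate(masterURL: String): Boolean =
        masterURL.startsWith("embedded://")

      def createTaskScheduler(masterURL: String): TaskScheduler =
        new TaskScheduler {}

      def createSchedulerBackend(masterURL: String,
                                 scheduler: TaskScheduler): SchedulerBackend =
        new SchedulerBackend {}
    }

    // SparkContext-style dispatch: pick the first registered manager that
    // claims the master URL, instead of hard-coding Yarn/Mesos/local cases.
    object ClusterManagerRegistry {
      private val managers: Seq[PluggableClusterManager] =
        Seq(new EmbeddedClusterManager)

      def forMaster(masterURL: String): Option[PluggableClusterManager] =
        managers.find(_.canCreate(masterURL))
    }
    ```

    The point of the sketch is the dispatch at the bottom: a custom master 
URL resolves to a plugged-in manager, while unknown URLs fall through to the 
built-in handling.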

