Github user elyast commented on the pull request:

    https://github.com/apache/spark/pull/4170#issuecomment-78799120
  
    One comment, however: even though executor-id == slave-id, if you run 
multiple Spark applications, multiple executors can still be started on the 
same host, and every one of them will consume 1 CPU without scheduling any 
tasks. This can be painful when you want to run multiple streaming 
applications on Mesos in fine-grained mode, because each streaming driver's 
executor will consume 1 CPU...
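    To make the setup concrete, here is a minimal sketch (assuming a 
ZooKeeper-based Mesos master URL and a hypothetical app name; 
`spark.mesos.coarse=false` is the real config key that selects fine-grained 
mode):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: two applications configured like this and running concurrently
// will each start their own executor on a shared Mesos slave.
val conf = new SparkConf()
  .setMaster("mesos://zk://host:2181/mesos") // hypothetical master URL
  .setAppName("streaming-app")               // the second app here is Zeppelin
  .set("spark.mesos.coarse", "false")        // fine-grained mode

val sc = new SparkContext(conf)
// Even with no jobs submitted, the Mesos UI shows 1 CPU in use on the
// slave for this application's executor; a second application adds
// another idle executor, so the slave shows 2 CPUs consumed at 0 tasks.
```

    The underlying reason is that in fine-grained mode only the task CPUs 
are acquired and released dynamically; the long-running executor process 
itself keeps holding its 1 CPU for the lifetime of the application.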
    
    
    
![executors](https://cloud.githubusercontent.com/assets/671809/6633130/b48ac190-c900-11e4-9057-cdc875e2342b.jpg)
    
    The screenshot illustrates the situation on a single slave where two 
executors are running for 2 different Spark applications (one is a streaming 
app, the other is Zeppelin): as you can see, there are 0 active tasks, yet 
the CPU consumption is 2.


