Github user harishreedharan commented on the pull request:

    https://github.com/apache/spark/pull/5385#issuecomment-113338415
  
    On Thursday, June 18, 2015, andrewor14 <[email protected]> wrote:
    
    > Well, there are at least two other important considerations:
    >
    >    - We should use the batch queue instead of the task queue
    >
    > Not sure what you mean here - can you explain a bit? In the receiver case
    the partition count depends on the amount of data received, while in the
    direct stream case it is fixed, so we'd have to rethink this since the task
    count is constant.
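    A rough arithmetic sketch of the contrast (intervals and counts are
    illustrative assumptions, not values from this PR): in the receiver model,
    each receiver typically cuts one block per block interval and each block
    becomes one partition of the batch RDD, so the per-batch parallelism follows
    the batch/block interval ratio; in the direct stream model, the partition
    count is simply the fixed number of Kafka partitions, independent of data
    volume.

    ```python
    def receiver_partitions(batch_interval_ms, block_interval_ms, num_receivers):
        # One block (hence one RDD partition) per block interval, per receiver.
        return (batch_interval_ms // block_interval_ms) * num_receivers

    def direct_partitions(num_kafka_partitions):
        # One RDD partition per Kafka topic partition, regardless of data volume.
        return num_kafka_partitions

    # 2s batches, 200ms blocks, 2 receivers -> 20 partitions per batch
    print(receiver_partitions(2000, 200, 2))  # 20
    # Direct stream over an 8-partition topic -> always 8
    print(direct_partitions(8))               # 8
    ```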
    
    >
    >    - We should never kill executors with receivers
    >
    > This is not an issue since the receiver is basically executed as a
    long-running task.
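    A minimal sketch of why this holds (class and function names here are
    hypothetical, not Spark's API): dynamic allocation only considers removing
    executors that have gone idle, and a receiver runs as a task that never
    completes, so its executor never becomes a removal candidate.

    ```python
    class Executor:
        def __init__(self, name, running_tasks):
            self.name = name
            self.running_tasks = running_tasks  # tasks currently executing

    def removal_candidates(executors):
        # Only executors with no running tasks are eligible for removal.
        return [e.name for e in executors if not e.running_tasks]

    cluster = [
        Executor("exec-1", running_tasks=["receiver-0"]),  # long-running receiver task
        Executor("exec-2", running_tasks=[]),              # idle batch executor
    ]
    print(removal_candidates(cluster))  # ['exec-2']
    ```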
    
    >
    >
    > The first is important because dynamic allocation currently doesn't really
    > do anything in most streaming workloads. The second is crucial because we
    > don't want dynamic allocation to disrupt the basic function of the
    > application.
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/spark/pull/5385#issuecomment-113338014>.
    >
    
    
    -- 
    
    Thanks,
    Hari



