Github user cleaton commented on the pull request:

    https://github.com/apache/spark/pull/3868#issuecomment-68678753
  
    @tdas Thank you for the input.
    Yes, the main purpose of this patch is to make ReceiverTracker stop 
gracefully by waiting for ssc.sparkContext.runJob(tempRDD, 
ssc.sparkContext.clean(startReceiver)) to terminate and for all receivers to 
deregister (possibly redundant?). I borrowed the approach used in JobGenerator, 
and you are right, I forgot to keep timeWhenStopStarted global.
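    
    For reference, here is a minimal sketch of the JobGenerator-style wait I 
have in mind. The names stopTimeoutMs and receiverInfo are illustrative 
stand-ins for the tracker's real state, not the actual fields:
    
    ```scala
    import scala.collection.mutable
    
    class GracefulStopSketch(stopTimeoutMs: Long = 10000L) {
      // Registered receivers, keyed by stream id (placeholder for the real map).
      val receiverInfo = mutable.Map[Int, String]()
    
      // Kept as a field (not a local) so hasTimedOut always sees the same start time.
      @volatile private var timeWhenStopStarted: Long = 0L
    
      private def hasTimedOut: Boolean =
        System.currentTimeMillis() - timeWhenStopStarted > stopTimeoutMs
    
      def stopGracefully(pollTimeMs: Long = 100): Unit = {
        timeWhenStopStarted = System.currentTimeMillis()
        // Block until every receiver has deregistered or the grace period expires.
        while (receiverInfo.nonEmpty && !hasTimedOut) {
          Thread.sleep(pollTimeMs)
        }
      }
    }
    ```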
    
    The second approach sounds good to me; it would make the shutdown sequence 
easier to follow if it is consolidated in one place.
    
    As for the unit test, my idea is to create a dummy receiver implementation 
that blocks on shutdown while still producing a fixed number of records, as 
sketched below.
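    
    Roughly something like this (the class name and latch are illustrative, not 
final): the receiver emits a fixed number of records and then blocks in 
onStop() until all of them have been handed to store(), so a test can assert 
that a graceful stop actually waits for it:
    
    ```scala
    import java.util.concurrent.CountDownLatch
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver
    
    class BlockingDummyReceiver(numRecords: Int)
      extends Receiver[Int](StorageLevel.MEMORY_ONLY) {
    
      private val doneProducing = new CountDownLatch(1)
    
      override def onStart(): Unit = {
        // Produce records on a background thread, as the Receiver contract requires.
        new Thread("blocking-dummy-receiver") {
          override def run(): Unit = {
            (1 to numRecords).foreach(i => store(i))
            doneProducing.countDown() // signal that every record was emitted
          }
        }.start()
      }
    
      override def onStop(): Unit = {
        // Block shutdown until all records have been stored.
        doneProducing.await()
      }
    }
    ```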
    
    Do you think you, or someone else working more closely with Spark 
Streaming, should take over this patch? It seems to come down to deciding which 
approach is best suited for Spark in the long run. I can still try to provide a 
unit test for this, though.

