GitHub user zsxwing opened a pull request:

    https://github.com/apache/spark/pull/8417

    [SPARK-10224][Streaming]Fix the issue that blockIntervalTimer won't call 
updateCurrentBuffer when stopping

    `blockIntervalTimer.stop(interruptTimer = false)` doesn't guarantee that 
`updateCurrentBuffer` is called. So it's possible that `blockIntervalTimer` 
will exit while `currentBuffer` is not empty, and the data in `currentBuffer` 
will be lost.
    
    To reproduce it, you can add `Thread.sleep(200)` at this line 
(https://github.com/apache/spark/blob/69c9c177160e32a2fbc9b36ecc52156077fca6fc/streaming/src/main/scala/org/apache/spark/streaming/util/RecurringTimer.scala#L100)
 and run `StreamingContextSuite`. There was a corresponding failure in Jenkins 
here: 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/41455/console
    
    This PR adds a loop to make sure `currentBuffer` is empty before calling 
`blockIntervalTimer.stop(interruptTimer = false)`.
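
    The idea of the fix can be sketched as follows. This is a simplified, 
hypothetical model, not Spark's actual `BlockGenerator` code: a producer 
appends to `currentBuffer`, a timer thread periodically drains it via 
`updateCurrentBuffer`, and `stop()` spin-waits until the buffer is empty so 
no buffered data is dropped when the timer exits.

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Simplified sketch of the stop-time race fix (assumed names, not Spark's API).
    class BufferDrainSketch {
      private var currentBuffer = new ArrayBuffer[Any]

      // Producer side: append incoming data to the current buffer.
      def add(item: Any): Unit = synchronized { currentBuffer += item }

      // Timer side: swap out the buffer and return its contents as a "block".
      def updateCurrentBuffer(): Seq[Any] = synchronized {
        val block = currentBuffer
        currentBuffer = new ArrayBuffer[Any]
        block.toSeq
      }

      // The fix described above: loop until currentBuffer is empty, so the
      // timer thread has drained all data, before stopping the timer.
      def stop(): Unit = {
        while (synchronized { currentBuffer.nonEmpty }) {
          Thread.sleep(10) // let the timer thread run updateCurrentBuffer
        }
        // ...only now is it safe to call
        // blockIntervalTimer.stop(interruptTimer = false)
      }
    }
    ```

    In the real code the wait has to happen before `stop(interruptTimer = 
false)` returns, because a non-interrupting stop only prevents *future* timer 
ticks; it does not force one final `updateCurrentBuffer` call.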

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zsxwing/spark SPARK-10224

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/8417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #8417
    
----
commit 94f108be1300a7709c3bd1548ee1114a41665b0d
Author: zsxwing <[email protected]>
Date:   2015-08-25T13:28:07Z

    Fix the issue that blockIntervalTimer won't call updateCurrentBuffer when 
stopping

----

