[ https://issues.apache.org/jira/browse/BEAM-5386?focusedWorklogId=178340&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-178340 ]

ASF GitHub Bot logged work on BEAM-5386:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Dec/18 15:52
            Start Date: 23/Dec/18 15:52
    Worklog Time Spent: 10m 
      Work Description: mxm commented on pull request #7349: [BEAM-5386] Prevent CheckpointMarks from not getting acknowledged
URL: https://github.com/apache/beam/pull/7349
 
 
   The UnboundedSourceWrapper would still shut down when the final watermarks of
   all readers are equal to the maximum timestamp. This could then lead to the
   remaining CheckpointMarks never getting acknowledged. Acknowledgement is
   performed as part of checkpoint completion, which is only possible while the
   source and its task are still running.
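   
   For illustration only, here is a minimal sketch of the pattern involved; this
   is not the actual UnboundedSourceWrapper code, and the class and method names
   below are assumptions. CheckpointMarks captured during a snapshot are only
   finalized (and, for Pub/Sub, messages only acknowledged) when Flink calls
   notifyCheckpointComplete on a task that is still running.
   
   ```java
   // Illustrative sketch only: shows why the source task must stay alive until
   // checkpoint completion. The real acknowledgement logic lives elsewhere.
   import java.util.ArrayList;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   
   import org.apache.beam.sdk.io.UnboundedSource;
   import org.apache.flink.runtime.state.CheckpointListener;
   
   class CheckpointMarkFinalizer implements CheckpointListener {
   
     // CheckpointMarks captured at snapshot time, keyed by Flink checkpoint id.
     private final Map<Long, List<UnboundedSource.CheckpointMark>> pending =
         new LinkedHashMap<>();
   
     /** Called while snapshotting state: remember the marks for this checkpoint. */
     synchronized void onSnapshot(long checkpointId, List<UnboundedSource.CheckpointMark> marks) {
       pending.put(checkpointId, new ArrayList<>(marks));
     }
   
     /**
      * Called by Flink only on a running task once the checkpoint is durable.
      * For PubsubIO this is where messages are finally acknowledged, so shutting
      * the source down before this callback means the marks are never finalized.
      */
     @Override
     public synchronized void notifyCheckpointComplete(long checkpointId) throws Exception {
       List<UnboundedSource.CheckpointMark> marks = pending.remove(checkpointId);
       if (marks != null) {
         for (UnboundedSource.CheckpointMark mark : marks) {
           mark.finalizeCheckpoint(); // e.g. acks the Pub/Sub messages behind this mark
         }
       }
     }
   }
   ```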
   
   A user had previously reported that the last remaining messages were not 
getting
   acknowledged in a PubSub scenario. It looks like this is the cause.
   
   CC @tweise 
   
   Post-Commit Tests Status (on master branch): the standard table of build-status
   badges, linking to the builds.apache.org post-commit jobs for the Go, Java and
   Python SDKs across the Apex, Dataflow, Flink, Gearpump, Samza and Spark runners.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

            Worklog Id:     (was: 178340)
            Time Spent: 10m
    Remaining Estimate: 0h

> Flink Runner gets progressively stuck when Pubsub subscription is nearly empty
> ------------------------------------------------------------------------------
>
>                 Key: BEAM-5386
>                 URL: https://issues.apache.org/jira/browse/BEAM-5386
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-gcp, runner-flink
>    Affects Versions: 2.6.0
>            Reporter: Encho Mishinev
>            Assignee: Chamikara Jayalath
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I am running the Flink runner on Apache Beam 2.6.0.
> My pipeline involves reading from Google Cloud Pubsub. The problem is that
> whenever there are only a few messages left in the subscription I'm reading
> from, the whole job becomes progressively slower and slower, Flink's
> checkpoints start taking much more time, and messages do not seem to get
> properly acknowledged.
> This happens only when the subscription is nearly empty. For example, when
> running the job with 13 TaskManagers and a parallelism of 52 against a
> subscription that has 122 000 000 messages, the slowdown becomes noticeable
> once only 1 000 000 - 2 000 000 messages are left.
> In one of my tests the job processed nearly 122 000 000 messages in an hour
> and then spent over 30 minutes attempting to process the few hundred thousand
> that were left. In the end it was reading a few hundred messages a minute and
> not reading at all for some periods. Upon stopping it, the subscription still
> had 235 unacknowledged messages, even though Flink's element count was higher
> than the number of messages I had loaded. The only explanation is that the
> messages did not get properly acknowledged and were resent.
> I have set a large acknowledgement deadline on the subscriptions, but that
> does not help.
> I did smaller tests on subscriptions with 100 000 messages and a job that
> simply reads and does nothing else. The problem is still evident. With a
> parallelism of 52 the job gets slow right away: it takes over 5 minutes to
> read about 100 000 messages, and a few hundred seem to keep cycling through
> without ever being acknowledged.
> On the other hand, a parallelism of 1 works fine until there are about 5000
> messages left, and then slows down similarly.
> A parallelism of 16 reads about 75 000 of the 100 000 immediately (within a
> few seconds) and then proceeds to work slowly on the remaining 25 000 for
> minutes.
> The PubsubIO connector is provided by Beam, so I suspect the problem lies in
> Beam's Flink runner rather than in Flink itself.
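
A minimal "read and do nothing else" job of the kind the report describes might
look roughly like the sketch below. This is an illustrative assumption, not the
reporter's actual code: the project/subscription path and the parallelism value
are placeholders.

```java
// Hypothetical reproduction sketch: a streaming job on the Flink runner that
// only reads from a Pub/Sub subscription and applies no further transforms.
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PubsubReadOnlyJob {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    options.setStreaming(true);
    options.setParallelism(52); // the report compares parallelism 1, 16 and 52

    Pipeline pipeline = Pipeline.create(options);

    // Messages are acknowledged only when the reader's CheckpointMark is
    // finalized after a completed Flink checkpoint, not when elements are
    // emitted downstream.
    pipeline.apply("ReadFromPubsub",
        PubsubIO.readMessages()
            .fromSubscription("projects/my-project/subscriptions/my-subscription"));

    pipeline.run();
  }
}
```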



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
