[ 
https://issues.apache.org/jira/browse/BEAM-11417?focusedWorklogId=521826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521826
 ]

ASF GitHub Bot logged work on BEAM-11417:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Dec/20 19:23
            Start Date: 08/Dec/20 19:23
    Worklog Time Spent: 10m 
      Work Description: dpmills commented on pull request #13507:
URL: https://github.com/apache/beam/pull/13507#issuecomment-740893546


   LGTM


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 521826)
    Time Spent: 20m  (was: 10m)

> StreamingDataflowWorker can leak UnboundedSource finalization callbacks
> -----------------------------------------------------------------------
>
>                 Key: BEAM-11417
>                 URL: https://issues.apache.org/jira/browse/BEAM-11417
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-dataflow
>            Reporter: Daniel Mills
>            Assignee: Boyuan Zhang
>            Priority: P1
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> StreamingDataflowWorker keeps a map of finalization callbacks 
> (https://github.com/apache/beam/blob/master/runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingDataflowWorker.java#L401).
> If the Dataflow service loses a callback ID (the IDs are best-effort and can 
> be dropped due to autoscaling, etc.), the callback stays around forever.
> This can cause a relatively rapid memory leak for sources like KafkaIO where 
> the callback (the KafkaCheckpointMark) has a reference to the 
> KafkaUnboundedReader object, which keeps a KafkaConsumer object alive.
> A simple fix would be to change the ConcurrentHashMap to a Guava Cache with a 
> timeout on its entries.
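A minimal sketch of the eviction idea behind the proposed fix. The actual proposal is to use a Guava Cache with `expireAfterWrite`; to keep this self-contained it is illustrated here with only the JDK, using a ConcurrentHashMap whose entries carry an insertion timestamp and are swept after a timeout. All class and method names below are hypothetical, not the real StreamingDataflowWorker code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Callback map that evicts entries older than a timeout, so callbacks whose
// IDs the service has dropped cannot accumulate forever. Time is passed in
// explicitly to keep the sketch deterministic and testable.
class ExpiringCallbackMap<K, V> {
  private static final class Entry<V> {
    final V value;
    final long insertedAtMillis;

    Entry(V value, long insertedAtMillis) {
      this.value = value;
      this.insertedAtMillis = insertedAtMillis;
    }
  }

  private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
  private final long timeoutMillis;

  ExpiringCallbackMap(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  void put(K key, V value, long nowMillis) {
    map.put(key, new Entry<>(value, nowMillis));
  }

  // Removes and returns the callback, or null if it is absent or expired.
  V remove(K key, long nowMillis) {
    Entry<V> e = map.remove(key);
    if (e == null || nowMillis - e.insertedAtMillis > timeoutMillis) {
      return null;
    }
    return e.value;
  }

  // Periodic sweep: drops every entry older than the timeout.
  void evictExpired(long nowMillis) {
    map.entrySet()
        .removeIf(en -> nowMillis - en.getValue().insertedAtMillis > timeoutMillis);
  }

  int size() {
    return map.size();
  }
}
```

With a Guava Cache the same behavior comes from `CacheBuilder.newBuilder().expireAfterWrite(...)`, and no explicit sweep method is needed.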



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
