[ 
https://issues.apache.org/jira/browse/BEAM-5797?focusedWorklogId=158703&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-158703
 ]

ASF GitHub Bot logged work on BEAM-5797:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 25/Oct/18 13:52
            Start Date: 25/Oct/18 13:52
    Worklog Time Spent: 10m 
      Work Description: tweise closed pull request #6828: [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed once
URL: https://github.com/apache/beam/pull/6828
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java b/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java
index 2135d5a8285..20e426cb866 100644
--- a/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java
+++ b/runners/flink/src/main/java/org/apache/beam/runners/flink/translation/wrappers/streaming/ExecutableStageDoFnOperator.java
@@ -162,15 +162,20 @@ private StateRequestHandler getStateRequestHandler(ExecutableStage executableSta
 
   @Override
   public void dispose() throws Exception {
-    // Remove the reference to stageContext and make stageContext available for garbage collection.
-    try (@SuppressWarnings("unused")
-            AutoCloseable bundleFactoryCloser = stageBundleFactory;
-        @SuppressWarnings("unused")
-            AutoCloseable closable = stageContext) {
-      // DoFnOperator generates another "bundle" for the final watermark -- see BEAM-5816 for more context
-      super.dispose();
+    // may be called multiple times when an exception is thrown
+    if (stageContext != null) {
+      // Remove the reference to stageContext and make stageContext available for garbage collection.
+      try (@SuppressWarnings("unused")
+              AutoCloseable bundleFactoryCloser = stageBundleFactory;
+          @SuppressWarnings("unused")
+              AutoCloseable closable = stageContext) {
+        // DoFnOperator generates another "bundle" for the final watermark
+        // https://issues.apache.org/jira/browse/BEAM-5816
+        super.dispose();
+      } finally {
+        stageContext = null;
+      }
     }
-    stageContext = null;
   }
 
   @Override
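For readers outside the Flink runner code, here is a minimal standalone sketch of the pattern the patch applies: guard on the field, release it via try-with-resources, and null it out in a finally block so repeated dispose() calls become no-ops. The class and field names (IdempotentDisposer, resource) are hypothetical and not part of the Beam codebase.

    // Hypothetical, simplified illustration of the idempotent dispose() guard
    // used in the patch above; names are invented for this sketch.
    public class IdempotentDisposer {

      private AutoCloseable resource; // stands in for stageContext/stageBundleFactory

      public IdempotentDisposer(AutoCloseable resource) {
        this.resource = resource;
      }

      public void dispose() throws Exception {
        // dispose() may be called more than once, e.g. during exception handling;
        // the null check turns every call after the first into a no-op.
        if (resource != null) {
          try (AutoCloseable closer = resource) {
            // additional tear-down work would run here, mirroring super.dispose()
          } finally {
            // drop the reference so the resource becomes eligible for GC
            // and so any later call skips the whole block
            resource = null;
          }
        }
      }
    }

Try-with-resources closes the resource even if the tear-down work throws, and the finally block guarantees the field is cleared either way, which is what makes the second invocation safe.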


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 158703)
    Time Spent: 1h 40m  (was: 1.5h)

> SDK workers are not always killed when Flink pipeline finishes
> --------------------------------------------------------------
>
>                 Key: BEAM-5797
>                 URL: https://issues.apache.org/jira/browse/BEAM-5797
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-flink
>            Reporter: Micah Wylde
>            Assignee: Micah Wylde
>            Priority: Major
>              Labels: portability-flink
>             Fix For: 2.9.0
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Beam python workers are spun up as part of a pipeline execution, and killed 
> once that pipeline has been cancelled or failed. However, in some situations 
> we see the workers hanging around indefinitely until they are manually killed 
> or the taskmanager is restarted. The behavior seems to only occur with 
> streaming pipelines, and appears non-deterministic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
