[ 
https://issues.apache.org/jira/browse/BEAM-12792?focusedWorklogId=731973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-731973
 ]

ASF GitHub Bot logged work on BEAM-12792:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Feb/22 22:18
            Start Date: 23/Feb/22 22:18
    Worklog Time Spent: 10m 
      Work Description: phoerious commented on a change in pull request #16658:
URL: https://github.com/apache/beam/pull/16658#discussion_r813373631



##########
File path: sdks/python/apache_beam/runners/worker/worker_pool_main.py
##########
@@ -51,6 +51,28 @@
 _LOGGER = logging.getLogger(__name__)
 
 
+def kill_process_gracefully(proc, timeout=10):
+  """
+  Kill a worker process gracefully by sending a SIGTERM and waiting for
+  it to finish. A SIGKILL will be sent if the process has not finished
+  after ``timeout`` seconds.
+  """
+  def _kill():
+    proc.terminate()
+    t = time.time()
+    while time.time() < t + timeout:
+      time.sleep(0.01)
+      if proc.poll() is not None:
+        return
+    # Process still alive after ``timeout`` seconds: escalate to SIGKILL.
+    proc.kill()

Review comment:
       True, but that wouldn't be less code, because it requires a surrounding 
try/except block. The docs also suggest that `wait()` with a timeout does a 
busy wait internally, so an explicit while loop with a sleep is probably 
better anyway.
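For context, the alternative being discussed is `Popen.wait(timeout=...)`, which raises `subprocess.TimeoutExpired` when the deadline passes and therefore needs a try/except block. A minimal sketch of that variant (the function name `kill_process_gracefully_wait` is hypothetical, not part of the PR):

```python
import subprocess
import sys


def kill_process_gracefully_wait(proc, timeout=10):
  """Sketch of the wait()-based alternative: send SIGTERM, then wait up to
  ``timeout`` seconds before escalating to SIGKILL. ``proc`` is assumed to
  be a ``subprocess.Popen`` object."""
  proc.terminate()  # SIGTERM on POSIX
  try:
    # Raises subprocess.TimeoutExpired if the process is still running.
    proc.wait(timeout=timeout)
  except subprocess.TimeoutExpired:
    proc.kill()  # SIGKILL on POSIX
    proc.wait()  # reap the process so it does not linger as a zombie
```

This is the "less code" shape the reviewer is responding to; the trade-off raised in the comment is that CPython implements the timed `wait()` as its own poll-and-sleep loop, so the explicit loop in the PR is not obviously worse.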




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 731973)
    Time Spent: 10h 40m  (was: 10.5h)

> Multiple jobs running on Flink session cluster reuse the persistent Python 
> environment.
> ---------------------------------------------------------------------------------------
>
>                 Key: BEAM-12792
>                 URL: https://issues.apache.org/jira/browse/BEAM-12792
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-harness
>    Affects Versions: 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0
>         Environment: Kubernetes 1.20 on Ubuntu 18.04.
>            Reporter: Jens Wiren
>            Priority: P1
>              Labels: FlinkRunner, beam
>          Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> I'm running TFX pipelines on a Flink cluster using Beam in k8s. However, 
> extra Python packages passed to the Flink runner (or rather the Beam worker 
> side-car) are only installed once per deployment cycle. Example:
>  # Flink is deployed and is up and running
>  # A TFX pipeline starts, submits a job to Flink along with a python whl of 
> custom code and beam ops.
>  # The beam worker installs the package and the pipeline finishes successfully.
>  # A new TFX pipeline is built in which a new beam fn is introduced, the 
> pipeline is started, and the new whl is submitted as in step 2).
>  # This time, the new package is not installed in the beam worker, and the 
> job fails because it references code that does not exist in the beam worker, 
> since the new package was never installed.
>  
> I have been using the Flink runner since Beam 2.27, and this has been an 
> issue the entire time.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
