[ https://issues.apache.org/jira/browse/BEAM-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17481033#comment-17481033 ]

Janek Bevendorff commented on BEAM-12792:
-----------------------------------------

You do that with pod templates. You spawn the cluster with the start-cluster.sh 
script, which accepts several configuration parameters, including one for a pod 
template. The deployment I'm running here is a persisted version of that. It's 
a bit hacky, but as long as the K8S operator doesn't support the native K8S 
mode (which would do the same thing, just more user-friendly), it's the only 
way to have a persistent session cluster with auto-scaling enabled (if I don't 
want to fumble with start-cluster.sh all the time).
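
For reference, in Flink's native K8S mode the template is wired in through the 
kubernetes.pod-template-file configuration option (available since Flink 1.13). 
A minimal sketch, assuming that option and a hypothetical path:

{code:yaml}
# Excerpt from flink-conf.yaml; the same option can also be passed on the
# command line, e.g. -Dkubernetes.pod-template-file=/opt/flink/conf/pod-template.yaml
kubernetes.pod-template-file: /opt/flink/conf/pod-template.yaml
{code}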

The pod template is saved in a K8S ConfigMap, which is mounted into the 
JobManager pod. Here's the critical container spec part of my pod template:

 
{code:yaml}
containers:
  # Flink main container; the name must be flink-main-container so that
  # Flink can merge its own settings into it
  - name: flink-main-container
    image: apache/flink:1.13.2-scala_2.12
    ports:
      - containerPort: 6122
        name: rpc
      - containerPort: 6125
        name: query-state
    livenessProbe:
      tcpSocket:
        port: 6122
      initialDelaySeconds: 30
    resources:
      requests:
        memory: 2048Mi
        cpu: "2"
    volumeMounts:
      - mountPath: /tmp/beam-artifact-staging
        name: artifact-staging
      - mountPath: /opt/flink/log
        name: flink-logs
  # Beam Python SDK harness side-car, running as a worker pool on port 50000
  - name: beam-worker-pool
    image: apache/beam_python3.8_sdk
    args: ["--worker_pool"]
    ports:
      - containerPort: 50000
        name: pool
    livenessProbe:
      tcpSocket:
        port: 50000
      initialDelaySeconds: 30
      periodSeconds: 60
    resources:
      requests:
        memory: 4096Mi
        cpu: "2"
    volumeMounts:
      - mountPath: /tmp/beam-artifact-staging
        name: artifact-staging
{code}
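
A minimal sketch of how the ConfigMap mount mentioned above can be wired up 
(the ConfigMap and volume names are assumptions):

{code:yaml}
# Create the ConfigMap from the template file:
#   kubectl create configmap flink-pod-template --from-file=pod-template.yaml
# Then reference it in the JobManager pod spec:
spec:
  containers:
    - name: flink-main-container
      volumeMounts:
        - name: pod-template
          mountPath: /opt/flink/pod-template
  volumes:
    - name: pod-template
      configMap:
        name: flink-pod-template
{code}

Jobs submitted to the cluster then target the side-car by running with 
--environment_type=EXTERNAL and --environment_config=localhost:50000, so the 
Python user code executes in the beam-worker-pool container next to the Flink 
container.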
 

 

> Beam worker only installs --extra_package once
> ----------------------------------------------
>
>                 Key: BEAM-12792
>                 URL: https://issues.apache.org/jira/browse/BEAM-12792
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-harness
>    Affects Versions: 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0
>         Environment: Kubernetes 1.20 on Ubuntu 18.04.
>            Reporter: Jens Wiren
>            Priority: P1
>              Labels: FlinkRunner, beam
>
> I'm running TFX pipelines on a Flink cluster using Beam in k8s. However, 
> extra python packages passed to the Flink runner (or rather beam worker 
> side-car) are only installed once per deployment cycle. Example:
>  # Flink is deployed and is up and running
>  # A TFX pipeline starts, submits a job to Flink along with a python whl of 
> custom code and beam ops.
>  # The beam worker installs the package and the pipeline finishes successfully.
>  # A new TFX pipeline is built in which a new Beam fn is introduced; the 
> pipeline is started and the new whl is submitted as in step 2.
>  # This time, the new package is not installed in the beam worker, and the 
> job fails because it references code from the new whl that the worker never 
> installed.
>  
> I started using Flink with Beam version 2.27, and this has been an issue the 
> whole time.



