GitHub user sueun-dev added a comment to the discussion: KubernetesExecutor: 
Worker/Scheduler Pods do not use custom docker image.

hey, pretty common gotcha with the airflow helm chart. looking at your 
screenshots a couple things stand out:

from the pod output, every single pod is still running `apache/airflow:3.1.8`, 
which is the default chart image, so the `--set` values aren't actually taking 
effect. a couple of things to check:

**1. use `defaultAirflowRepository` and `defaultAirflowTag` instead**

these are the top level values that propagate to ALL components including 
KubernetesExecutor worker pods, scheduler, webserver, triggerer, everything:

```bash
helm upgrade --install airflow apache-airflow/airflow \
  --set defaultAirflowRepository=your-registry/your-airflow \
  --set defaultAirflowTag=your-tag
```

setting `images.airflow.repository` works too, but `defaultAirflowRepository` 
is simpler and guaranteed to cascade everywhere.
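if you'd rather keep the overrides in a values file than in `--set` flags (easier to review and version-control), something like this works - the file name, registry, and tag here are placeholders, swap in your own:

```shell
# Write the image overrides to a values file instead of --set flags.
cat > airflow-values.yaml <<'EOF'
defaultAirflowRepository: your-registry/your-airflow
defaultAirflowTag: your-tag
EOF

# Then deploy with it (shown as an echo here; drop the echo to actually run):
echo "helm upgrade --install airflow apache-airflow/airflow -f airflow-values.yaml"
```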

**2. verify the variable substitution in your CI/CD**

from the screenshot it looks like you're using something like `--set 
images.$IMAGE_KEY.repository=$(containerRegistryName)/$(imageName)`. double 
check that those CI/CD variables are actually resolving. add a debug step that 
echoes the full helm command before it runs. if `$IMAGE_KEY` is empty or wrong, 
helm will silently set a value under a nonexistent key and fall back to 
defaults.
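a debug step like this in the pipeline makes that failure mode impossible to miss - it aborts on empty variables and prints the fully resolved command. the variable names are assumptions, map them to whatever your CI actually exposes:

```shell
#!/usr/bin/env bash
set -euo pipefail

# CI variable names below are assumptions -- substitute your pipeline's own.
IMAGE_KEY="${IMAGE_KEY:-airflow}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY:-your-registry.example.com}"
IMAGE_NAME="${IMAGE_NAME:-custom-airflow}"

# Abort if anything resolved to empty: an empty IMAGE_KEY would make helm
# set a value under a nonexistent key and silently fall back to defaults.
for v in IMAGE_KEY CONTAINER_REGISTRY IMAGE_NAME; do
  [ -n "${!v}" ] || { echo "ERROR: $v is empty" >&2; exit 1; }
done

HELM_CMD="helm upgrade --install airflow apache-airflow/airflow \
  --set images.${IMAGE_KEY}.repository=${CONTAINER_REGISTRY}/${IMAGE_NAME}"

# Debug step: print the fully resolved command before running it.
echo "Resolved: ${HELM_CMD}"
```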

**3. check pullPolicy**

default is `IfNotPresent`. if k8s already cached an image with the same tag, it 
won't re-pull. try `--set images.airflow.pullPolicy=Always`.

**4. verify after deploy**

```bash
# check what values helm actually stored
helm get values <release-name> -n <namespace>

# check what image the pods are running
kubectl get pods -n <namespace> -o jsonpath='{range 
.items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'

# check the KubernetesExecutor pod template
kubectl get configmap -n <namespace> <release>-airflow-pod-template-file -o yaml
```

the configmap one is especially important for KubernetesExecutor - that's where 
the worker pod specs come from.
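if you want to scan that pod listing for stragglers automatically, a tiny helper like this works - the function name is mine, feed it the output of the `kubectl get pods ... -o jsonpath` command above:

```shell
# Flag pods whose image doesn't start with the expected custom repository.
# Input: tab-separated "pod-name<TAB>image" lines, one per pod.
check_images() {
  local expected="$1"   # e.g. your-registry/your-airflow
  while IFS=$'\t' read -r pod image; do
    case "$image" in
      "$expected"*) echo "OK    $pod -> $image" ;;
      *)            echo "WRONG $pod -> $image" ;;
    esac
  done
}
```

e.g. `kubectl get pods -n <namespace> -o jsonpath='...' | check_images your-registry/your-airflow` will print `WRONG` for anything still on the stock image.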

my bet is #2 - the CI variable substitution. `helm get values` will tell you 
immediately whether the image override actually made it into the release. if it 
shows your custom image but the pods still run the default, then it's a 
pullPolicy or caching issue.

hope that helps, lmk

GitHub link: 
https://github.com/apache/airflow/discussions/64655#discussioncomment-16451719
