MadhuPolu commented on issue #57743:
URL: https://github.com/apache/airflow/issues/57743#issuecomment-3643115000
## For anyone checking whether the bug exists in the latest Airflow version (3.1.4)
I can confirm this issue **still exists in Airflow 3.1.4** (tested on 2025-12-11).
### Environment
- **Airflow Version**: 3.1.4
- **Executor**: KubernetesExecutor
- **Deployment**: Official Apache Airflow Helm Chart
- **Python**: 3.12
- **Kubernetes Provider**: Latest
### Reproduction
When using `executor_config` with a `V1Pod` in `default_args` at the DAG level:
```
from kubernetes.client import models as k8s

default_args = {
    'executor_config': {
        'pod_override': k8s.V1Pod(
            spec=k8s.V1PodSpec(
                node_selector={'cloud.google.com/gke-nodepool': 'arm-worker-pool'},
                tolerations=[
                    k8s.V1Toleration(
                        key='cloud.google.com/gke-nodepool',
                        operator='Equal',
                        value='arm-worker-pool',
                        effect='NoSchedule'
                    )
                ],
                containers=[k8s.V1Container(name="base")]
            )
        )
    }
}

@dag(
    dag_id='kickoff__table_onboarding',
    default_args=default_args,  # ← DAG-level executor_config
    ...
)
```
### Error Details
**Full Stack Trace from API Server Logs:**
```
pydantic_core._pydantic_core.PydanticSerializationError: Unable to serialize unknown type: <class 'kubernetes.client.models.v1_pod.V1Pod'>
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.12/site-packages/fastapi/routing.py", line 334, in app
    content = await serialize_response(
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/airflow/.local/lib/python3.12/site-packages/fastapi/routing.py", line 188, in serialize_response
    return field.serialize(
           ^^^^^^^^^^^^^^^^
  File "/home/airflow/.local/lib/python3.12/site-packages/fastapi/_compat.py", line 152, in serialize
    return self._type_adapter.dump_python(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/airflow/.local/lib/python3.12/site-packages/pydantic/type_adapter.py", line 605, in dump_python
    return self.serializer.to_python(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.PydanticSerializationError: Unable to serialize unknown type: <class 'kubernetes.client.models.v1_pod.V1Pod'>
```
### Impact
- **Tasks execute successfully** (the scheduler handles `V1Pod` objects correctly)
- **API endpoints fail** with 500 Internal Server Error when accessing:
  - `/api/v2/dags/{dag_id}/details`
  - any endpoint that serializes DAG metadata containing `executor_config`
- **UI becomes partially unusable** for affected DAGs
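To illustrate the failure mode: pydantic-core (like `json`) cannot serialize an arbitrary class instance it has no schema for, but the plain-dict form of the same data serializes fine. The sketch below uses a hypothetical `FakeV1Pod` stand-in (the real `V1Pod` similarly exposes a `to_dict()` method) so it runs without the kubernetes client installed:

```python
import json

# Hypothetical stand-in for kubernetes.client.models.V1Pod; the real class
# also exposes to_dict() but is not JSON-serializable as a raw object.
class FakeV1Pod:
    def __init__(self, node_selector):
        self.node_selector = node_selector

    def to_dict(self):
        return {"spec": {"node_selector": self.node_selector}}

pod = FakeV1Pod({"cloud.google.com/gke-nodepool": "arm-worker-pool"})

# Serializing the raw object fails, mirroring the pydantic-core error above.
try:
    json.dumps(pod)
except TypeError as exc:
    print("raw object:", exc)

# Serializing the plain-dict form succeeds.
print("as dict:", json.dumps(pod.to_dict()))
```

This is why the scheduler (which keeps the `V1Pod` as a Python object) is unaffected while the API server (which must emit JSON) returns a 500.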
### Current Workaround
Apply `executor_config` at the **task level** instead of the DAG level:
```
from kubernetes.client import models as k8s

K8S_EXECUTOR_ARM64_CONFIG = {
    'pod_override': k8s.V1Pod(
        spec=k8s.V1PodSpec(
            node_selector={'cloud.google.com/gke-nodepool': 'arm-worker-pool'},
            tolerations=[...],
            containers=[k8s.V1Container(name="base")]
        )
    )
}

@task(executor_config=K8S_EXECUTOR_ARM64_CONFIG, pool=POOL_NAME)
def my_task():
    pass
```
This is verbose and error-prone, as every single task must explicitly include `executor_config`.
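The per-task repetition can be reduced with `functools.partial`. The sketch below uses a hypothetical stand-in for Airflow's `@task` decorator factory so it runs outside an Airflow installation; the same `partial(...)` pattern should work against the real decorator, assuming your Airflow version's `@task` accepts `executor_config` as a keyword argument:

```python
from functools import partial

# Hypothetical stand-in for Airflow's @task decorator factory, so the
# pattern is runnable here without Airflow installed.
def task(executor_config=None, pool=None):
    def decorator(fn):
        fn.executor_config = executor_config
        fn.pool = pool
        return fn
    return decorator

# Placeholder for the V1Pod-based config shown earlier.
K8S_EXECUTOR_ARM64_CONFIG = {"pod_override": "...V1Pod as above..."}

# One shared factory instead of repeating executor_config on every task.
arm_task = partial(task, executor_config=K8S_EXECUTOR_ARM64_CONFIG)

@arm_task(pool="default_pool")
def my_task():
    pass

print(my_task.executor_config is K8S_EXECUTOR_ARM64_CONFIG)  # → True
```

This keeps the workaround in one place per DAG file, though it still does not restore DAG-level `default_args` behavior.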