jayon-niravel opened a new issue, #33080:
URL: https://github.com/apache/airflow/issues/33080
### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.6.0
### Kubernetes Version
1.23
### Helm Chart configuration
```yaml
# Airflow webserver settings
webserver:
  extraVolumeMounts:
    - name: backups
      mountPath: /opt/airflow/backups
      readOnly: false
  extraVolumes:
    - name: backups
      persistentVolumeClaim:
        claimName: airflow-s3-pvc
```
### Docker Image customizations
_No response_
### What happened
My pod template (`pod-template-file.kubernetes-helm-yaml`):
```yaml
env:
  - name: AIRFLOW__CORE__PARALLELISM
    value: 1000
  - name: AIRFLOW__CORE__EXECUTOR
    value: KubernetesExecutor
volumeMounts:
  - mountPath: /opt/airflow/backups
    name: backups
    readOnly: false
  - mountPath: {{ template "airflow_logs" . }}
    name: logs
volumes:
  - name: backups
    persistentVolumeClaim:
      claimName: airflow-s3-pvc
```
### What you think should happen instead
When I run the Helm chart, all the pod containers run sequentially, one after another.
I looked at the event log and found the following error:

```
Unable to attach or mount volumes: unmounted volumes=[backups], unattached volumes=[xxxxx]: timed out waiting for the condition
```

It looks like each pod is waiting for the volume mount, so only one pod can run at a time.
Below is the YAML for the PersistentVolumeClaim `airflow-s3-pvc`:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: airflow-s3-pvc
  resourceVersion: '1899233584'
  creationTimestamp: '2023-07-11T13:30:39Z'
  annotations:
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    volume.kubernetes.io/selected-node: ip-10-0-206-84.ec2.internal
    volume.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
  finalizers:
    - kubernetes.io/pvc-protection
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: v1
      time: '2023-07-11T13:30:39Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:accessModes': {}
          'f:resources':
            'f:requests':
              .: {}
              'f:storage': {}
          'f:storageClassName': {}
          'f:volumeMode': {}
    - manager: kube-scheduler
      operation: Update
      apiVersion: v1
      time: '2023-07-11T13:33:51Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:volume.kubernetes.io/selected-node': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2023-07-11T13:33:56Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:pv.kubernetes.io/bind-completed': {}
            'f:pv.kubernetes.io/bound-by-controller': {}
            'f:volume.beta.kubernetes.io/storage-provisioner': {}
            'f:volume.kubernetes.io/storage-provisioner': {}
        'f:spec':
          'f:volumeName': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2023-07-11T13:33:56Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:accessModes': {}
          'f:capacity':
            .: {}
            'f:storage': {}
          'f:phase': {}
      subresource: status
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: pvc-89e4f04f-10c1-4c76-848a-c10c3740a3f8
  storageClassName: gp2
  volumeMode: Filesystem
status:
  phase: Bound
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
```
### How to reproduce
Create a PVC with the above YAML and use the above Kubernetes pod template.
### Anything else
How can the volume mount issue be fixed, so that the pods can run in parallel instead of sequentially?
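For context, the PVC spec above uses the `gp2` (AWS EBS) storage class with `accessModes: [ReadWriteOnce]`. EBS volumes can only be attached to one node at a time, which would explain pods on other nodes blocking on the mount. A minimal sketch of an alternative, assuming the AWS EFS CSI driver is installed and using a hypothetical StorageClass name `efs-sc` (both are assumptions, not part of this report), would be a `ReadWriteMany` PVC:

```yaml
# Sketch only: assumes an EFS CSI driver and a StorageClass named
# "efs-sc" (hypothetical). EFS supports ReadWriteMany, so pods on
# multiple nodes can mount the same volume concurrently.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: airflow-s3-pvc
spec:
  accessModes:
    - ReadWriteMany   # EBS gp2 supports only ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: efs-sc
```

Alternatively, keeping EBS would require scheduling all pods that mount the volume onto the same node, which defeats parallelism across nodes.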
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)