While we appreciate you reaching out - please open a discussion in GitHub
Discussions: https://github.com/apache/airflow/discussions - this is where
you can ask for troubleshooting help. You can also ask your questions in the
Airflow Slack (#user-troubleshooting channel). The devlist is used mostly for
development communication.

On Wed, Nov 27, 2024 at 11:58 PM karan alang <karan.al...@gmail.com> wrote:

> I have Airflow installed in the namespace airflow, and I'm using NFS to
> store the DAGs and PySpark code. I want to run the Airflow jobs in a
> different namespace, airflow-spark-apps. However, I'm unable to do this,
> since the PVC associated with the namespace airflow-spark-apps is not
> accessible to the Airflow deployment in the namespace airflow.
>
> Here is the Helm chart deployment:
>
> ```
>
> helm upgrade --install airflow apache-airflow/airflow \
>     --namespace airflow \
>     --set dags.persistence.enabled=true \
>     --set dags.persistence.existingClaim=airflow-dags-pvc \
>     --set dags.persistence.subPath="airflow-dags" \
>     --set global.persistence.existingClaimNamespace=airflow-spark-apps \
>     --set dags.gitSync.enabled=false \
>     --set images.airflow.repository=artifacts.versa-networks.com:8443/airflow-image \
>     --set images.airflow.tag=0.0.1 \
>     --set scheduler.resources.requests.memory="1024Mi" \
>     --set scheduler.resources.requests.cpu="500m" \
>     --set scheduler.resources.limits.memory="2048Mi" \
>     --set scheduler.resources.limits.cpu="1000m" \
>     --set webserver.resources.requests.memory="512Mi" \
>     --set webserver.resources.requests.cpu="250m" \
>     --set webserver.resources.limits.memory="1024Mi" \
>     --set webserver.resources.limits.cpu="500m" \
>     --set workers.resources.requests.memory="1024Mi" \
>     --set workers.resources.requests.cpu="500m" \
>     --set workers.resources.limits.memory="4096Mi" \
>     --set workers.resources.limits.cpu="2000m" \
>     --version 1.9.0 \
>     --set serviceAccount.create=false \
>     --set serviceAccount.name=airflow \
>     --set config.kubernetes_executor.multi_namespace_mode=True \
>     --set config.kubernetes_executor.multi_namespace_mode_namespace_list="airflow-spark-apps" \
>     --debug
>
> ```
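
Note that multi_namespace_mode on its own is usually not enough: the
scheduler's service account also needs permission to manage pods in the
target namespace. A minimal sketch of per-namespace RBAC, assuming the
service account name "airflow" from the values above (the Role/RoleBinding
names are illustrative):

```
# Role/RoleBinding letting the airflow service account (namespace: airflow)
# manage task pods in airflow-spark-apps. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: airflow-pod-manager
  namespace: airflow-spark-apps
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: airflow-pod-manager
  namespace: airflow-spark-apps
subjects:
  - kind: ServiceAccount
    name: airflow
    namespace: airflow
roleRef:
  kind: Role
  name: airflow-pod-manager
  apiGroup: rbac.authorization.k8s.io
```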
>
> PV and PVC created for the namespace airflow-spark-apps:
>
> ```
>
> apiVersion: v1
> kind: PersistentVolume
> metadata:
>   name: airflow-dags-pv
>   namespace: storage
>   labels:
>     app: airflow-dags
> spec:
>   capacity:
>     storage: 5Gi
>   accessModes:
>     - ReadWriteMany
>   nfs:
>     server: nfs-service.storage.svc.cluster.local
>     path: "/exports/airflow-dags"
> ---
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: airflow-dags-pvc
>   namespace: airflow-spark-apps
> spec:
>   accessModes:
>     - ReadWriteMany
>   storageClassName: ""
>   resources:
>     requests:
>       storage: 5Gi
>   selector:
>     matchLabels:
>       app: airflow-dags
>
> ```
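
One thing to note here: a PersistentVolumeClaim is namespace-scoped, and a
bound PV points back to exactly one claim, so a PVC in airflow-spark-apps
cannot be referenced by pods created in the airflow namespace. (PVs
themselves are cluster-scoped, so the metadata.namespace on the PV above is
ignored.) The usual workaround with NFS is a parallel PV/PVC pair in each
namespace that needs the mount; a minimal sketch, with illustrative names:

```
# Second PV/PVC pair pointing at the same NFS export, for use by pods in
# the airflow namespace. Names here are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: airflow-dags-pv-airflow
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-service.storage.svc.cluster.local
    path: "/exports/airflow-dags"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-dags-pvc
  namespace: airflow                  # same namespace as the Airflow release
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
  volumeName: airflow-dags-pv-airflow # bind explicitly to the new PV
```

Binding via volumeName (rather than a label selector) avoids the new claim
accidentally matching the existing airflow-dags-pv.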
>
> The Airflow worker is not starting up; the error is:
>
> ```
>
> Events:
>   Type     Reason            Age                  From               Message
>   ----     ------            ----                 ----               -------
>   Warning  FailedScheduling  67s (x33 over 161m)  default-scheduler  0/4 nodes are available: persistentvolumeclaim "airflow-dags-pvc" not found. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling..
>
> ```
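
That message is consistent with the claim living in a different namespace
than the worker pod: the pod is created in airflow, so the scheduler looks
for airflow-dags-pvc there. A quick way to confirm the mismatch:

```
# The worker pod is scheduled in the airflow namespace, so the claim must
# exist there as well; these two lookups should make the mismatch visible.
kubectl get pvc airflow-dags-pvc -n airflow             # expected: NotFound
kubectl get pvc airflow-dags-pvc -n airflow-spark-apps  # expected: Bound
```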
>
> How do I fix this? Can I have the Airflow jobs running in the namespace
> airflow-spark-apps while the Airflow install is in the namespace airflow?
>
> tia!
>
