cristianastorino opened a new issue, #27772: URL: https://github.com/apache/airflow/issues/27772
### Apache Airflow version

2.4.3

### What happened

When trying to view logs from the Web UI I get the following error:

```
*** Log file does not exist: /opt/airflow/logs/dag_id=demo_dag/run_id=manual__2022-11-17T09:46:26.538673+00:00/task_id=test-demo/attempt=1.log
*** Fetching from: http://airflow-scheduler-599d84c9c9-r9kws:8793/log/dag_id=demo_dag/run_id=manual__2022-11-17T09:46:26.538673+00:00/task_id=test-demo/attempt=1.log
*** Failed to fetch log file from worker. [Errno -2] Name or service not known
```

### What you think should happen instead

It should fetch the log files. The hostname used appears to be wrong: it tries to retrieve the logs from `http://airflow-scheduler-599d84c9c9-r9kws:8793`, but the correct address is `http://airflow-scheduler:8793`. `airflow-scheduler-599d84c9c9-r9kws` is the hostname of the scheduler pod itself; the scheduler should instead be reached through the related Kubernetes Service named `airflow-scheduler`.

### How to reproduce

- Deploy Airflow 2.4.3 with the official Helm chart (version 1.7.0) in a Kubernetes cluster
- Configure Airflow to use the LocalExecutor
- Run a simple DAG
- Try to view the DAG's logs from the Web UI

### Operating System

Debian GNU/Linux 11 (bullseye)

### Versions of Apache Airflow Providers

_No response_

### Deployment

Official Apache Airflow Helm Chart

### Deployment details

- Official Apache Airflow Helm Chart v1.7.0
- Airflow v2.4.3
- Kubernetes v1.24.6
- Helm v3.10.0

### Anything else

The problem occurs every time.

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
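As background to the hostname mismatch described above: Airflow records the hostname a task ran on via the `[core] hostname_callable` setting (a dotted path to a Python callable), and the webserver later uses that recorded hostname to fetch logs over port 8793. A possible workaround, not confirmed in this issue, is to point that setting at a small custom callable that returns the stable Service name instead of the pod hostname. The module name `hostname_resolver` and function `resolve` below are hypothetical placeholders; `airflow-scheduler` is the Service name from the report.

```python
# hostname_resolver.py -- hypothetical helper module; it must be importable
# (e.g. on PYTHONPATH) inside the scheduler container.

def resolve() -> str:
    # Return the stable Kubernetes Service name instead of the pod's own
    # hostname, so the webserver's log-fetch URL resolves via cluster DNS.
    return "airflow-scheduler"
```

It could then be activated through the usual environment-variable override, e.g. `AIRFLOW__CORE__HOSTNAME_CALLABLE=hostname_resolver.resolve`, in the scheduler's pod spec.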
