AlexisBRENON opened a new issue, #22861:
URL: https://github.com/apache/airflow/issues/22861
### Apache Airflow version
2.2.4
### What happened
When moving from local logs to Stackdriver remote logging, full task logs
are not available, either through the Airflow UI or the GCP Stackdriver log
explorer. Instead I get these logs in Stackdriver:
```
{local_task_job.py:212} WARNING - State of this instance has been externally
set to success. Terminating instance.
{process_utils.py:124} INFO - Sending Signals.SIGTERM to group 69. PIDs of
all processes in the group: [69]
{process_utils.py:75} INFO - Sending the signal Signals.SIGTERM to group 69
{process_utils.py:70} INFO - Process psutil.Process(pid=69,
status='terminated', exitcode=0, started='13:51:55') (69) terminated with exit
code 0
```
And the worker logs show:
```
[2022-04-08 13:51:55,752: INFO/MainProcess] Task
airflow.executors.celery_executor.execute_command[a830f79a-7f89-49c2-897e-f522d4fddb1d]
received
[2022-04-08 13:51:55,810: INFO/ForkPoolWorker-15] Executing command in
Celery: ['airflow', 'tasks', 'run', 'test_dag', 'print_date',
'manual__2022-04-08T13:51:54.828860+00:00', '--local', '--subdir',
'DAGS_FOLDER/data_airflow_dags/dags/te
st_dag.py']
[2022-04-08 13:51:55,811: INFO/ForkPoolWorker-15] Celery task ID:
a830f79a-7f89-49c2-897e-f522d4fddb1d
[2022-04-08 13:51:55,970: INFO/ForkPoolWorker-15] Filling up the DagBag from
/opt/airflow/dags/data_airflow_dags/dags/test_dag.py
[2022-04-08 13:51:56,001: WARNING/ForkPoolWorker-15] Running <TaskInstance:
test_dag.print_date manual__2022-04-08T13:51:54.828860+00:00 [queued]> on host
airflow-worker-deployment-6bd56f56b4-zc9zj
E0408 13:51:56.070197711 64 fork_posix.cc:70] Fork support is
only compatible with the epoll1 and poll polling strategies
E0408 13:51:56.188089607 64 fork_posix.cc:70] Fork support is
only compatible with the epoll1 and poll polling strategies
[2022-04-08 13:52:01,322: INFO/ForkPoolWorker-15] Task
airflow.executors.celery_executor.execute_command[a830f79a-7f89-49c2-897e-f522d4fddb1d]
succeeded in 5.569218712000293s: None
```
### What you think should happen instead
I expect to see the same logs in Stackdriver that I got locally, and to be
able to fetch them through either the Airflow UI or the Stackdriver UI.
### How to reproduce
I deployed an Airflow instance on a Kubernetes cluster
([kind](https://kind.sigs.k8s.io/)) with the Celery executor.
I defined a service account able to write logs to Stackdriver and tried to
set everything up so that it all works (maybe I missed something).
I started with NO remote logging and launched a simple DAG containing a
single bash operator calling `date`. This task generates a log printing the date
(as well as putting it in the task's XCom).
Then I updated the environment variables of all three of the scheduler,
webserver, and worker with:
* AIRFLOW__LOGGING__REMOTE_LOGGING=true
* AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER=stackdriver://airflow-k8s-alexis
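For completeness, the variables applied to each container looked roughly like this. The first two values are from above; the credentials line is an assumption about how the service account was wired in (any standard GCP auth mechanism would do):

```shell
# Remote logging settings applied to scheduler, webserver, and worker.
export AIRFLOW__LOGGING__REMOTE_LOGGING=true
export AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER=stackdriver://airflow-k8s-alexis

# Assumption: service-account auth via the standard GCP env var
# (hypothetical path).
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
```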
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.4.0
### Deployment
Other Docker-based deployment
### Deployment details
K8s deployment based on the official docker-compose.
K8s Rev: v1.23.4
kind version 0.12.0
Airflow image: 2.2.4
Redis image: latest
### Anything else
I searched Google for the "Fork support is only compatible with the epoll1
and poll polling strategies" error message.
It seems to be related to gRPC.
I tried other polling strategies (poll and epoll1), but that doesn't fix the
error and just produces another error message (`Other threads are currently
calling into gRPC, skipping fork() handlers`).
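For reference, the polling strategy was switched via gRPC's environment variable, roughly like this (assuming the Celery worker container reads it at startup):

```shell
# gRPC reads this variable at process startup; "poll" and "epoll1"
# were both tried in the worker environment, without fixing the issue.
export GRPC_POLL_STRATEGY=poll
```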
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)