dungdm93 opened a new issue #17202:
URL: https://github.com/apache/airflow/issues/17202


   
   **Apache Airflow version**: `2.1.2+d25854dd413aa68ea70fb1ade7fe01425f456192`
   
   
   **Kubernetes version (if you are using kubernetes)** (use `kubectl 
version`): `v1.19.10-gke.1600`
   
   **Environment**:
   
   - **Cloud provider or hardware configuration**:
   - **OS** (e.g. from /etc/os-release):
   - **Kernel** (e.g. `uname -a`):
   - **Install tools**:
   - **Others**:
   
   **What happened**:
   My Airflow cluster uses S3 remote logging to MinIO (an S3-compatible object store), following [this guide](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/logging/s3-task-handler.html). But when an example DAG runs, I get the following error:
   ![Screenshot from 2021-07-24 
15-15-10](https://user-images.githubusercontent.com/6848311/126863452-5b5a215e-5279-4c33-8d3b-aa1f362a9975.png)
   
   After some investigation, I found that Airflow does not strip the `s3://<bucket>` part from `remote_base_log_folder` when uploading logs to S3.
   ![Screenshot from 2021-07-24 
16-05-56](https://user-images.githubusercontent.com/6848311/126863631-9371f10a-e03d-4fee-9710-580d1cbbf548.png)
   (`mc admin trace <target>`)
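   For reference, this is roughly the split I would expect to happen before the upload (a minimal sketch using only the standard library; `split_s3_url` is a hypothetical helper mirroring what `S3Hook.parse_s3_url` does in the Amazon provider):

   ```python
   from urllib.parse import urlsplit

   def split_s3_url(url: str) -> tuple:
       """Hypothetical helper: separate the bucket name from the key prefix,
       so only the key (base path) is sent to the object store."""
       parsed = urlsplit(url)
       bucket = parsed.netloc          # e.g. "my-bucket"
       key = parsed.path.lstrip("/")   # e.g. "airflow/logs/..."
       return bucket, key

   # The key passed to MinIO should not contain "s3://<bucket>":
   print(split_s3_url("s3://my-bucket/airflow/logs/dag_id/task_id/1.log"))
   # → ('my-bucket', 'airflow/logs/dag_id/task_id/1.log')
   ```

   As the `mc admin trace` output above shows, the full `s3://<bucket>/...` string ends up in the object key instead of only the part after the bucket.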
   
   **What you expected to happen**:
   The `s3://<bucket>` prefix should be stripped from `remote_base_log_folder` before uploading, or Airflow should allow configuring `remote_base_log_folder` without the `s3://` scheme and bucket (only the base path).
   
   **How to reproduce it**:
   * Airflow config:
   ![Screenshot from 2021-07-24 
16-09-06](https://user-images.githubusercontent.com/6848311/126863756-7346af33-cd23-4e92-a2b5-c5d9bda5b67b.png)
   
   * Airflow logs connection:
   ![Screenshot from 2021-07-24 
16-09-19](https://user-images.githubusercontent.com/6848311/126863752-fdafcf96-5634-4d01-9510-62800aba5a73.png)
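   In text form, the relevant configuration is roughly the following (a sketch based on the linked guide; the bucket name, path, and connection id are placeholders, not my exact values):

   ```ini
   [logging]
   remote_logging = True
   remote_base_log_folder = s3://my-bucket/airflow/logs
   remote_log_conn_id = minio_logs

   # The "minio_logs" connection is of type Amazon Web Services, with the
   # MinIO endpoint configured in the connection's Extra field.
   ```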
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
