potiuk commented on PR #32089: URL: https://github.com/apache/airflow/pull/32089#issuecomment-1612107752
> may have fixed this issues faster if it didn't disappear every time I synced my branch

That's not really a question for us; it's how GitHub works. You can still see past runs: your workflows and their results are in the "Actions" tab, where you can filter by your branch name, for example. I believe you can also see them under "previous attempts" when you go to the details of a new run:

<img width="482" alt="Screenshot 2023-06-28 at 22 59 54" src="https://github.com/apache/airflow/assets/595491/6c0fe9cd-9f1a-4cca-a12d-cedc760fc792">

One extra difficulty for you personally: the policy for all ASF projects is that workflows of "new contributors" (people who have never committed to the project) have to be manually approved by committers. This is to prevent various kinds of abuse, for example bad actors creating fake new accounts and opening multiple pull requests with cryptocurrency-mining code, wasting the resources that GitHub has donated to the ASF for free. This pattern has already been actively exploited in the wild: https://github.blog/2021-04-22-github-actions-update-helping-maintainers-combat-bad-actors/

But you are completely covered: you can run whatever CI does locally with Breeze. https://github.com/apache/airflow/blob/main/STATIC_CODE_CHECKS.rst#running-static-code-checks-via-breeze shows how to reproduce all the static checks locally with `breeze static-checks --all-files`, without having to wait for CI to be approved and run. This is the reason we have `breeze`: it provides a common, completely reproducible environment that you can run locally to reproduce whatever CI errors you see.
