ashb commented on pull request #15141:
URL: https://github.com/apache/airflow/pull/15141#issuecomment-821223370
`./breeze -i kerberos` should give you a Docker env with Kerberos set up, I think.
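As a rough sketch of that workflow (flag spellings from memory, not verified against the current Breeze help -- check `./breeze --help`):
```
# Start a Breeze container with the kerberos integration enabled
# (flag name assumed; the short form may be -i or --integration).
./breeze --integration kerberos

# Inside the container, run just the kerberos API auth tests. The
# --integration pytest option is an Airflow conftest addition that
# un-skips tests marked for that integration.
pytest tests/api/auth/backend/test_kerberos_auth.py --integration kerberos
```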
From CI on this branch:
```
2021-04-01T20:33:12.0023083Z 18.85s setup    tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_trigger_dag
...
2021-04-01T20:33:12.0048850Z 0.09s call     tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_trigger_dag
...
2021-04-01T20:33:12.0055405Z 0.01s teardown tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_unauthorized
...
2021-04-01T20:33:12.0062038Z 0.01s call     tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_unauthorized
2021-04-01T20:33:12.0099990Z =========== 37 passed, 8032 skipped, 8 warnings in 185.60s (0:03:05) ===========
```
Compared to master:
```
31.00s setup    tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_trigger_dag
25.11s call     tests/executors/test_celery_executor.py::TestCeleryExecutor::test_celery_integration_1_redis_redis_6379_0
20.30s call     tests/executors/test_celery_executor.py::TestCeleryExecutor::test_celery_integration_0_amqp_guest_guest_rabbitmq_5672
10.97s call     tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_trigger_dag
 7.16s call     tests/providers/trino/hooks/test_trino.py::TestTrinoHookIntegration::test_should_record_records
 6.03s call     tests/api/auth/backend/test_kerberos_auth.py::TestApiKerberos::test_unauthorized
=========== 37 passed, 8179 skipped, 8 warnings in 287.80s (0:04:47) ===========
```
So this saves roughly 100 seconds (287.80s down to 185.60s) for the "integration" set of tests.
We probably need to optimize the "slow" path next -- whichever of the test groups on master is the slowest -- since the groups run in parallel, the overall wall time is set by the slowest one, so making one test type quicker might not help if we are always waiting on a slow group. For example, if the slowest group still takes ~5 minutes, shaving a minute off the integration group alone won't change the total.
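(For anyone reproducing these numbers: the listings above are in the style of pytest's built-in slowest-durations report, which something like the following would produce -- the exact flags the CI job uses are not shown here.)
```
# --durations=0 asks pytest to print the slowest setup/call/teardown
# phases for every test; --durations=20 would limit it to the top 20.
pytest tests/ --durations=0
```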