moomindani commented on PR #63775:
URL: https://github.com/apache/airflow/pull/63775#issuecomment-4109038547

   Thanks for the PR and the clear root-cause analysis!
   
   I took a look at both this PR and #63611 which targets the same issue. A few 
thoughts:
   
   **The "pre-warm the cache" approach here is clever but fragile.** It works 
because `_a_do_api_call()` eagerly populates the `@cached_property` cache, so 
subsequent sync property accesses in `_endpoint_url()` etc. hit the cached 
value. However, other async methods like `_a_get_token()`, 
`_a_get_aad_headers()`, `_a_get_k8s_jwt_token()`, and 
`_a_get_k8s_projected_volume_token()` also access `self.databricks_conn` 
directly — there are 32+ such accesses across async code paths. Today this 
works because they're all called downstream of `_a_do_api_call()`, but if any 
future code calls those methods independently (or if the call order changes), 
the same crash would resurface. This makes it a latent footgun for future 
contributors.
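   To make the fragility concrete, here is a minimal, self-contained sketch of the pattern — all names (`FragileHook`, `_a_fetch_conn`, the dict "connection") are stand-ins for illustration, not the actual hook code:

```python
import asyncio
from functools import cached_property

class FragileHook:
    """Illustrative stand-in for the real hook; not the PR's code."""

    @cached_property
    def databricks_conn(self):
        # Stands in for the sync connection lookup that blows up when
        # invoked from inside a running event loop.
        raise RuntimeError("sync connection lookup inside event loop")

    async def _a_fetch_conn(self):
        # Stands in for the async connection fetch.
        return {"host": "dbx.example.com"}

    async def _a_do_api_call(self):
        # Pre-warm: seed the cached_property slot so later sync reads hit it.
        self.__dict__["databricks_conn"] = await self._a_fetch_conn()
        return self._endpoint_url()

    def _endpoint_url(self):
        # Sync access: safe only if the cache was already seeded above.
        return self.databricks_conn["host"]

    async def _a_get_token(self):
        # Fragile: crashes unless _a_do_api_call() already ran.
        return self.databricks_conn["host"]
```

   `asyncio.run(FragileHook()._a_do_api_call())` succeeds, while `asyncio.run(FragileHook()._a_get_token())` raises — the safety depends entirely on call ordering.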
   
   **`_async_get_connection`** is imported from 
`airflow.sdk.execution_time.context`, but that function is a private API (note the `_` prefix). 
`BaseHook` already provides a public `aget_connection()` async classmethod in 
`task-sdk/src/airflow/sdk/bases/hook.py` — that would be the right method to 
use here.
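   A sketch of what that could look like — `BaseHook` here is a fake stand-in (a dict-backed registry) so the example is self-contained; the real class lives in the task-sdk path above:

```python
import asyncio

class BaseHook:
    """Stand-in for airflow.sdk.bases.hook.BaseHook; registry is fake."""

    _registry = {"databricks_default": {"host": "dbx.example.com"}}

    @classmethod
    async def aget_connection(cls, conn_id):
        # The real classmethod resolves a Connection without blocking the
        # event loop; here it is just a dict lookup for illustration.
        return cls._registry[conn_id]

class DatabricksHook(BaseHook):
    def __init__(self, conn_id="databricks_default"):
        self.conn_id = conn_id

    async def _a_get_conn(self):
        # Public API instead of the private
        # airflow.sdk.execution_time.context._async_get_connection.
        return await self.aget_connection(self.conn_id)
```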
   
   **#63611 takes a more comprehensive approach** — it replaces every 
`self.databricks_conn` access in async code paths with async alternatives, so 
no async method ever touches the sync property. While it's a larger change, 
it's safer for future development and doesn't rely on call ordering assumptions.
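   The shape of that refactor, as I understand it, is roughly the following — again an illustrative sketch with stand-in names (`SafeHook`, `_a_databricks_conn`, the dict "connection"), not #63611's actual diff:

```python
import asyncio
from functools import cached_property

class SafeHook:
    """Illustrative sketch of the async-everywhere approach."""

    @cached_property
    def databricks_conn(self):
        # Sync path only; async code never touches this property.
        raise RuntimeError("sync lookup not allowed in async paths")

    async def _a_databricks_conn(self):
        # Async alternative, memoized on the instance.
        if not hasattr(self, "_a_conn"):
            self._a_conn = {"host": "dbx.example.com"}  # stand-in fetch
        return self._a_conn

    async def _a_get_token(self):
        # Safe regardless of call order: no sync property access.
        conn = await self._a_databricks_conn()
        return conn["host"]

    async def _a_do_api_call(self):
        conn = await self._a_databricks_conn()
        return conn["host"]
```

   Every async method goes through the async getter, so there is no ordering assumption to break.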
   
   I'd suggest either:
   1. Expanding this PR to cover all async code paths (similar to #63611's 
approach), or
   2. Coordinating with @bob-skowron to consolidate into one PR
   
   What do you think?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]