potiuk commented on issue #51545: URL: https://github.com/apache/airflow/issues/51545#issuecomment-2980436148
> If all packages have pinned versions, you can't independently update client or server. "Dependency conflicts for administrators supporting data teams using different versions of providers, libraries, or python packages" like Ash mentioned

I think you misunderstood my description. I think only task-sdk should be the client, and it should be "all" you need to run the client. Even if that means duplicating some code (serialization) - and you can do that duplication smartly, by symbolic linking or copying the relevant code between distributions in pre-commit (sketched below).

IMHO "task-sdk" should be a full, standalone, separately versioned thing, the only thing imported by providers, and the only thing needed to run any kind of worker. You should not need any package other than "airflow-task-sdk" installed to make a "celery worker" or a "kubernetes image" work. That's the client. Versioned separately. With all "supported" versions (going back a few releases, following an agreed deprecation schedule) tested automatically in CI against the current "main" version of the server.

I also think the only reason we should split "airflow-core" (and that's the motivation I have) is to shard dependencies - to let different parts of the system (scheduler, api-server, triggerer) use potentially different sets of dependencies - especially the api-server should be split from the rest. So introducing "common" is, for me, the way to give all those separate "distributions" shared utils like logging, conf, etc. - but all of them should always come from the same "commit" when installed together (i.e. they should be pinned to the same version they were tested with in the "main" and "v3_*" branches). Otherwise it's going to be a mess.
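
To make the "copy in pre-commit" idea concrete, here is a minimal sketch of what such a hook script could look like - keeping a copy of the serialization code inside the task-sdk distribution in sync with the core one. The paths and layout below are placeholders, not the actual repo structure:

```python
#!/usr/bin/env python3
"""Sketch of a pre-commit hook that copies shared serialization code from the
core distribution into the task-sdk distribution so the latter stays fully
standalone. Paths are illustrative only."""
from __future__ import annotations

import filecmp
import shutil
import sys
from pathlib import Path

# Hypothetical locations - adjust to the real distribution layout.
SOURCE = Path("airflow-core/src/airflow/serialization")
TARGET = Path("task-sdk/src/airflow/sdk/_shared/serialization")


def sync() -> int:
    changed = False
    for src in SOURCE.rglob("*.py"):
        dst = TARGET / src.relative_to(SOURCE)
        if not dst.exists() or not filecmp.cmp(src, dst, shallow=False):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            changed = True
            print(f"synced {src} -> {dst}")
    # Non-zero exit makes pre-commit fail, so the refreshed copies get committed.
    return 1 if changed else 0


if __name__ == "__main__":
    sys.exit(sync())
```

The same effect could be achieved with symlinks checked into the repo, but copying keeps each built sdist/wheel self-contained.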
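
And for the "pinned to the same version" constraint between the server-side distributions, a simple CI/runtime assertion could look like the sketch below. The distribution names are just examples of the proposed split, not final names:

```python
"""Illustrative check that all co-installed server-side distributions come from
the same release (i.e. the same commit of the main/v3_* branch). The names
listed here are examples of the proposed split, not final distribution names."""
from importlib.metadata import PackageNotFoundError, version

SERVER_DISTRIBUTIONS = [
    "apache-airflow-core",
    "apache-airflow-common",      # hypothetical shared utils (logging, conf, ...)
    "apache-airflow-api-server",  # hypothetical
    "apache-airflow-scheduler",   # hypothetical
]
# Note: airflow-task-sdk is deliberately NOT in this list - it is versioned separately.


def check_same_version() -> None:
    versions: dict[str, str] = {}
    for dist in SERVER_DISTRIBUTIONS:
        try:
            versions[dist] = version(dist)
        except PackageNotFoundError:
            # Not every deployment installs every component.
            continue
    if len(set(versions.values())) > 1:
        raise RuntimeError(
            f"Server distributions must be pinned to the same version, got: {versions}"
        )


if __name__ == "__main__":
    check_same_version()
    print("OK: all installed server distributions share one version")
```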
