> I guess I'm missing what would go in the provider's integration tests
then, but I'll catch up once I see it in action.

TL;DR: Look it up - there are quite a few of those already. Take a look at
apache/kafka for example.

Generally speaking, for providers what we are testing is not the "internals"
of airflow, but whether the provider's Operators/Hooks etc. "do the job" that
is expected of them. In that sense it's rather easy to draw a high-level,
philosophical distinction between unit/system/integration tests. What follows
might be a good refresher for those who knew it, and a longer explanation for
those who never had time to look at the docs. Of course - feel free to skip
this long email if you are not interested - you will find pretty much the
same explanation (differently structured) in our docs.

* *unit tests* (in Airflow providers at least) test whether Operators (as
an example - this applies to all of the entities in a provider) behave as
expected, and they allow implementing tests for all kinds of edge cases. We
expect unit tests for "every" meaningful change in a provider, because if
you add or modify functionality and the unit tests still pass without
changes, it means that either you were missing a unit test that covered the
behaviour, or your change does nothing. But in pretty much all those unit
tests, interaction with the "thing" the provider talks to is mocked: you
don't actually talk to AWS or Kafka, you pretend you do. Currently some of
them use the DB, some do not. The goal for Airflow 3 is that provider tests
are 100% DB-less.
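
To make that mocking pattern concrete, here is a sketch of what such a
DB-less, fully mocked unit test can look like. Everything in it (the
`FakeS3KeyOperator` class, its `get_hook` method) is made up for
illustration - it is not real provider code, just the shape of the idea:

```python
# Hypothetical sketch of a provider-style unit test: the "thing" the
# operator talks to (an imaginary S3-like service) is fully mocked, so
# no network or DB is touched.
from unittest import mock


class FakeS3KeyOperator:
    """Stand-in for a provider operator; real ones subclass BaseOperator."""

    def __init__(self, bucket: str, key: str):
        self.bucket = bucket
        self.key = key

    def get_hook(self):
        raise NotImplementedError  # patched out in tests

    def execute(self, context):
        # In a real operator this would create a hook and call the service.
        hook = self.get_hook()
        if not hook.check_for_key(self.key, self.bucket):
            raise FileNotFoundError(f"{self.key} not in {self.bucket}")
        return f"s3://{self.bucket}/{self.key}"


def test_execute_returns_uri_when_key_exists():
    op = FakeS3KeyOperator(bucket="data", key="report.csv")
    # Mock the hook so no real call happens - we only assert that the
    # operator calls it with the right arguments and handles the answer.
    with mock.patch.object(FakeS3KeyOperator, "get_hook") as get_hook:
        get_hook.return_value.check_for_key.return_value = True
        assert op.execute(context={}) == "s3://data/report.csv"
        get_hook.return_value.check_for_key.assert_called_once_with(
            "report.csv", "data"
        )
```

The point is that the test exercises the operator's logic and edge cases
while the service interaction is pretended, which is exactly why a wrong
mock can let a broken integration slip through.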

* *integration tests* (in Airflow providers at least), on the other hand,
are tests that talk to the real thing (run as docker containers alongside
our tests). This is mostly applicable to "open-source" things that we talk
to - ones that are available, relatively easy to run locally as a docker
container, and easy to configure to actually "work" locally. Kafka is an
example: we start a "real" Kafka server and we talk to it. Those
integration tests mostly test the "green path" - they are slower and more
"brittle" - so you only need a few of them. They are really implemented to
avoid the situation where all your unit tests happily pass but your mocking
is wrong and the whole thing does not work -
https://i.sstatic.net/yHGn1.gif (to be honest, this meme was actually an
inspiration for naming this test type "integration"). They are usually
high level and essentially call the "execute" methods of operators, or the
corresponding "coarse-grained" hook methods, which are expected to talk to
the real thing in the right sequence - the one that will later be used in
Dags.
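
A hedged sketch of what such a "green path" integration test could look
like. The `service_reachable` helper is plain stdlib code; the
`localhost:9092` address and the test body are assumptions for
illustration, not the actual Kafka provider tests:

```python
# Sketch of an integration-test pattern: run only when the real service
# (e.g. a dockerized Kafka broker) is actually reachable.
import socket
import unittest


def service_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP service is accepting connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


@unittest.skipUnless(
    service_reachable("localhost", 9092),  # assumed broker address
    "integration: Kafka broker not running",
)
class TestKafkaGreenPath(unittest.TestCase):
    def test_produce_then_consume(self):
        # Against the real broker we would call the coarse-grained hook
        # methods in the order a Dag would use them: produce a message,
        # then consume it back and compare the payloads.
        ...
```

The real suite gates this with pytest markers and docker-compose instead of
a socket probe, but the shape is the same: a handful of slow, high-level
round-trips against the live service.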

* *system tests* (this is a very specific type of test that is very, very
much "Airflow-only") have a very specific structure and they serve a number
of purposes (and we are adding more ways to use the example dags from time
to time).

They are implemented as example dags that can be run end-to-end - but they
contain a code snippet ("test_run = get_test_run(dag)") that essentially
allows the example dag to be run via pytest.
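
For intuition, here is a toy model of that mechanism - a simplified
stand-in I made up, not Airflow's actual implementation (the real
`get_test_run` lives in the repo's test utilities and does more, such as
watcher and teardown handling):

```python
# Simplified, hypothetical model of the "test_run = get_test_run(dag)" trick.
class Dag:
    """Minimal stand-in for an Airflow DAG object."""

    def __init__(self, dag_id: str):
        self.dag_id = dag_id

    def test(self) -> str:
        # A real DAG runs end-to-end here via dag.test(); we just pretend.
        return f"{self.dag_id}: success"


def get_test_run(dag: Dag):
    # Wrap the dag in a callable; because the example dag module assigns
    # it to a name starting with "test_", pytest collects and runs it.
    def test_run():
        assert dag.test().endswith("success")

    return test_run


# At the bottom of an example dag file:
dag = Dag("example_kafka_produce")  # hypothetical dag id
test_run = get_test_run(dag)
```

So the same file doubles as runnable documentation and as a pytest test
case, which is what makes all the uses listed below possible.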

     1) The really interesting function of "system tests": for systems like
"aws", "gcp" and others that are "cloud" based and cannot "**really**" be
run easily on our CI without external credentials, they serve the same
purpose as "integration" tests - and this is what you, Google, Databricks
and the "LLM operators by Astronomer" are running. From what I last checked,
the dashboards are mostly broken now after the move - mostly due to changed
paths (and I offered my help to fix those). Yesterday Vincent, with a
little bit of my assistance, fixed the AWS ones (YAY!), and I hope in a few
days we will also fix the rest - I need to talk directly to the teams that
run them.

But the system tests are also very useful for all the other providers - even
those that already have "integration" tests - or in some cases for those
operators that used to be in the core (we moved them to the "standard"
provider now, so system tests really live only in providers in Airflow 3).

     2) those dags are "examples" that people can use to build their own
dags
     3) also, snippets of those examples are extracted into the provider's
documentation
     4) the benefit of having those tests as "example dags" is that we
actually walk over all those examples in our tests in the CI image and
import all of them - so we are at least sure that the example dags can be
imported and do not raise import errors - which has already saved us
multiple times
      5) we test that the example dags do not generate DB access (in
top-level code) and (this is a very recent addition) we also test that
hooks' "connections" are not instantiated in top-level code (another bad
practice). This is to strengthen our "good practices" and make sure that
our example dags are not promoting bad behaviours
-> tests/always/test_example_dags.py has those tests.
      6) [THIS IS NOT YET THERE BUT I WANTED TO PROPOSE IT ONCE WE HAVE
MORE COMPLETE RUFF RULES IMPLEMENTED FOR AIRFLOW] - we should be able to
run all the ruff rules we come up with for the Airflow 2 -> 3 migration on
all the example_dags. This will make sure that our examples use and promote
all the good practices, and it will also confirm that the "ruff" rules we
implemented are "reasonable" and can be applied in practice
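
The import-walk from point 4 can be sketched roughly like this. This is a
simplified illustration with made-up helper names - the real check in
tests/always/test_example_dags.py is more involved:

```python
# Simplified sketch: import every example dag file and collect failures,
# so broken top-level code is caught in CI before users hit it.
import importlib.util
import pathlib


def import_errors(folder: pathlib.Path) -> list[str]:
    """Try to import each example_*.py file; return error messages."""
    failures = []
    for path in sorted(folder.rglob("example_*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        try:
            spec.loader.exec_module(module)  # top-level dag code runs here
        except Exception as exc:  # any import error => broken example dag
            failures.append(f"{path.name}: {exc!r}")
    return failures


# In the real check, the folder is the providers' example dags directory:
# assert import_errors(EXAMPLE_DAGS_FOLDER) == []
```

The DB-access and connection-instantiation checks from point 5 build on the
same walk, asserting on what the top-level code does while being imported.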

This is really the explanation of the current "nomenclature" we use.  And
yeah - if we have other ideas on how to name them - I think we can change
those names fairly easily :).

As usual - naming is hard, but those are the current names we use for those
types we have.

J.






On Wed, Feb 19, 2025 at 1:16 AM Ferruzzi, Dennis
<ferru...@amazon.com.invalid> wrote:

> I guess I'm missing what would go in the provider's integration tests
> then, but I'll catch up once I see it in action.
>
>
>  - ferruzzi
>
>
> ________________________________
> From: Jarek Potiuk <ja...@potiuk.com>
> Sent: Tuesday, February 18, 2025 3:34 PM
> To: dev@airflow.apache.org
> Subject: RE: [EXT] [DISCUSS] Tests structure for providers
>
> CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
>
>
>
> > Basically, I like the idea and we'll have to put a little time into
> splitting all the existing not-system-tests out into unit and integ.  Is
> that more or less what you are thinking?
>
> Actually - not really :).
>
> We should change (and this is planned) all the provider's tests to "non-DB"
> tests - so make them "closer" to unit tests. The main reason why some
> provider tests were using DB was because it was easier to do for authors
> and because we did not have "task sdk" and - yes - provider's code in
> general **could** use DB. With Task SDK, none of the provider's code should
> use db. And we should also make sure providers depend on TaskSDK (with
> compatibility layer for Airflow 2). That will happen in the coming weeks
> and any help there will be appreciated. I will likely work out a similar
> pattern to that as with the provider's move so anyone's help will be
> appreciated there. So ideally - there will no longer be "db_tests" in
> providers.
>
> We attempted to do some of it previously with DB isolation, when AIP-44 was
> implemented (but that necessarily used a different approach); with Task
> SDK in place, it should ideally be easier and more robust.
> It is captured as stage 2 and 3 in this plan
>
> https://github.com/orgs/apache/projects/403/views/2?pane=issue&itemId=81806927&issue=apache%7Cairflow%7C42632
> also Kaxil left that one issue that relates to it and it's part of the same
> story https://github.com/apache/airflow/issues/45839
>
> I am not sure though if we need to do anything like that in "core". I
> consider "core" using DB as "expected" - mocking DB for scheduler or Task
> SDK would make very little sense, and those tests are not really
> "integration" tests in the sense we have integration tests defined today
> (testing some external "containers" that are "integrated" with airflow code
> that uses those external system to talk to them). For me the "core" tests
> that use metadata DB are pretty much testing "airflow" functionality - with
> or without DB.
>
> Our Integration tests work differently - they require additional
> "docker-compose" setup, and setup the "integrated tests" to work with them.
> I think our "unit" definition is "good enough" as long as we agree that our
> unit tests are often not testing just units.
>
> But If you feel like you would like to propose change of nomenclature and
> name differently "unit" tests (no DB) and "some other name (with DB)" in
> core and move them to a separate directory - maybe yes that can be done,
> but I personally had not planned any of it, and I am not sure what bigger
> purpose it would have other than moving tests around and redefining naming.
> But yeah. if you have a concrete proposal and justification (and likely
> would like to do it) - then you can always propose it. Basically what it
> would mean is to take all the "db_test" marked tests and moving them to
> another directory (neither the current "unit" nor "integration") - leaving
> the "non-db-tests" in "unit". Not very difficult, probably anyone can do it
> but then it would require working out some mechanisms of keeping it this
> way - they are already run in separate jobs in CI; it's just done via
> pytest filters on the decorator - and the nice thing is that multiple
> tests db/non-db are often mixed in same modules, share some fixtures and
> common code - I intuitively feel that keeping say "backfill" tests together
> in the same module - even if some are "db" and some "non-db" tests, is more
> valuable than "naming" those tests differently and placing them in
> different package and module.
>
> But maybe if someone would like to do POC and show what it might look like,
> I'd change my opinion.
>
> J.
>
>
>
>
>
> On Tue, Feb 18, 2025 at 11:42 PM Ferruzzi, Dennis
> <ferru...@amazon.com.invalid> wrote:
>
> > > If you look for it, you will find several
> > variants of the "Test pyramid" with anything from 3 to 7 layers and with
> > names that are sometimes the same but they are different places in
> > different pyramids - sometimes even the same names reverted in different
> > variants.
> >
> > Yeah, it's kind of like the "debate" between using the OSI model or the
> > DARPA model for describing network communications.  You always need
> > clarification when someone mentions "application layer" and can never
> > really be sure you are on the same page.  <obligatory "one more standard"
> > XKCD: https://xkcd.com/927/>
> >
> >
> > I've been thinking about this more since I sent that message and I think
> > we're saying the same thing.  Right now the structure is "system tests"
> and
> > "not a system test".  If we are going to make (and enforce) the
> > unit/integ/system differentiation - and I do like the proposal - then all
> > of the "not-system-tests" will need to be sifted through and sorted.  As
> an
> > immediate example, we currently have "unit tests" that hit the DB, which
> > should (IMHO) be considered integ tests as they are not self-contained...
> > which matches what you've linked: "Integration tests are special tests
> that
> > require additional services running, such as Postgres, MySQL, Kerberos,
> > etc."
> >
> > Basically, I like the idea and we'll have to put a little time into
> > splitting all the existing not-system-tests out into unit and integ.  Is
> > that more or less what you are thinking?
> >
> >
> >
> >  - ferruzzi
> >
> >
> > ________________________________
> > From: Jarek Potiuk <ja...@potiuk.com>
> > Sent: Tuesday, February 18, 2025 11:10 AM
> > To: dev@airflow.apache.org
> > Subject: RE: [EXT] [DISCUSS] Tests structure for providers
> >
> >
> >
> >
> > > I'm glad you landed on the unit/integration/system naming convention, I
> > like that but I'm curious what would go in the integration path.  Before
> I
> > started with Airflow, I had been taught that integration test are what we
> > call system tests.  I've come to use the two terms more or less
> interchangeably.  What I am familiar with outside of Airflow is unit
> > tests are within a module and integration tests are between modules.
> Under
> > that definition, most (all?) of what we currently call system tests are
> > integration tests.  Maybe it's an "all squares are rectangles but not all
> > rectangles are squares" type definition and I'm missing some distinction
> > between the terms.  I guess some of what we have in the existing unit
> tests
> > would be more accurately integ tests, so maybe that's what you have in
> > mind.
> >
> > Funny you mention this - even today when driving I was listening to "The
> > Real Python" podcast from earlier this week about TDD vs. BDD
> > https://realpython.com/podcasts/rpp/239/ - Christopher Bailey and
> > Christopher Trudeau were explaining all kinds of testing and the "Test
> > pyramid", and they explicitly mentioned how blurry and imperfect the
> > nomenclature is for tests :D. If you look for it, you will find several
> > variants of the "Test pyramid" with anything from 3 to 7 layers and with
> > names that are sometimes the same but appear in different places in
> > different pyramids - sometimes even the same names reversed in different
> > variants.
> >
> > According to some people even our unit tests (the db ones especially) are
> > really hardly "unit tests" - unit tests should focus on testing class or
> > function, and in many cases our unit tests are testing .... more ... and
> db
> > ... and sometimes networking ... and .......
> >
> > So I'd say - "unit", "integration" and "system" are names that we should
> > agree on and define in Airflow to know what they really mean, but I don't
> > think there is a universally standard definition for either of those that
> > can be applied to what we do in airflow. And I think we have to come up
> > with our own convention. Actually we have even more types of tests - docker
> > compose tests, kubernetes tests, helm unit tests .... (but they are not in
> > providers - and as part of our refactor we will get to those eventually
> > when we move our project structure).
> >
> > Currently, the full list of test types we have, and a detailed explanation
> > of what each test type is, are in
> >
> >
> https://github.com/apache/airflow/blob/main/contributing-docs/09_testing.rst
> >
> > And let me quote it here:
> >
> >
> >    - Unit tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/unit_tests.rst>
> >      are Python tests that do not require any additional integrations.
> >      Unit tests are available both in the Breeze environment
> >      <https://github.com/apache/airflow/blob/main/dev/breeze/doc/README.rst>
> >      and local virtualenv
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/07_local_virtualenv.rst>.
> >      Note that in order for a pull request to be reviewed, *it should have
> >      a unit test*, unless a test is not required i.e. for documentation
> >      changes.
> >    - Integration tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/integration_tests.rst>
> >      are available in the Breeze environment
> >      <https://github.com/apache/airflow/blob/main/dev/breeze/doc/README.rst>
> >      that is also used for Airflow CI tests. Integration tests are special
> >      tests that require additional services running, such as Postgres,
> >      MySQL, Kerberos, etc.
> >    - Docker Compose tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/docker_compose_tests.rst>
> >      are tests we run to check if our quick start docker-compose works.
> >    - Kubernetes tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/k8s_tests.rst>
> >      are tests we run to check if our Kubernetes deployment and Kubernetes
> >      Pod Operator works.
> >    - Helm unit tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/helm_unit_tests.rst>
> >      are tests we run to verify if Helm Chart is rendered correctly for
> >      various configuration parameters.
> >    - System tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/system_tests.rst>
> >      are automatic tests that use external systems like Google Cloud and
> >      AWS. These tests are intended for an end-to-end DAG execution.
> >
> > You can also run other kinds of tests when you are developing airflow
> > packages:
> >
> >    - Testing packages
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/testing_packages.rst>
> >      is a document that describes how to manually build and test
> >      pre-release candidate packages of airflow and providers.
> >    - Python client tests
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/python_client_tests.rst>
> >      are tests we run to check if the Python API client works correctly.
> >    - DAG testing
> >      <https://github.com/apache/airflow/blob/main/contributing-docs/testing/dag_testing.rst>
> >      is a document that describes how to test DAGs in a local environment
> >      with DebugExecutor. Note, that this is a legacy method - you can now
> >      use the dag.test() method to test DAGs.
> >
> > So the "system" / "integration" / "unit" split we have in Providers exactly
> > reflects the terminology and definition we defined a long time ago. Of
> > course - we can still change if we think it's worth it. BUT IMHO -
> because
> > there are no "standards" - in any project you have to learn what a
> > particular type of test really is. And in Airflow we have a very complete
> > and well documented definition (follow the links to see it). So we are at
> > least consistent "internally".
> >
> > J.
> >
> >
> >
> >
> > On Tue, Feb 18, 2025 at 7:51 PM Ferruzzi, Dennis
> > <ferru...@amazon.com.invalid> wrote:
> >
> > > Looks like this was discussed, decided, and done over the weekend, but
> > > just wanted to say thanks for that.  The redundancy in the import paths
> > has
> > > been bugging me LOL
> > >
> > > I'm glad you landed on the unit/integration/system naming convention, I
> > > like that but I'm curious what would go in the integration path.
> Before
> > I
> > > started with Airflow, I had been taught that integration test are what
> we
> > > call system tests.  I've come to use the two terms more or less
> > > interchangeably.  What I am familiar with outside of Airflow is unit
> > > tests are within a module and integration tests are between modules.
> > Under
> > > that definition, most (all?) of what we currently call system tests are
> > > integration tests.  Maybe it's an "all squares are rectangles but not
> all
> > > rectangles are squares" type definition and I'm missing some
> distinction
> > > between the terms.  I guess some of what we have in the existing unit
> > tests
> > > would be more accurately integ tests, so maybe that's what you have in
> > mind.
> > >
> > > Anyway, thanks for another major cleanup task!
> > >
> > >
> > >
> > >  - ferruzzi
> > >
> > >
> > > ________________________________
> > > From: Jarek Potiuk <ja...@potiuk.com>
> > > Sent: Sunday, February 16, 2025 3:48 AM
> > > To: dev@airflow.apache.org
> > > Subject: RE: [EXT] [DISCUSS] Tests structure for providers
> > >
> > >
> > >
> > >
> > > And yes Ash -> ruff rule it is to ban "providers" imports
> > > https://github.com/apache/airflow/pull/46801 (and it found two places
> > > where
> > > they were still used)
> > >
> > > On Sun, Feb 16, 2025 at 12:35 PM Jarek Potiuk <ja...@potiuk.com>
> wrote:
> > >
> > > > And merged! that was fast. Thanks Bugra, Ash, Jens !
> > > >
> > > > On Sun, Feb 16, 2025 at 11:40 AM Jarek Potiuk <ja...@potiuk.com>
> > wrote:
> > > >
> > > >> https://github.com/apache/airflow/pull/46800 -> there you go :) .
> > Happy
> > > >> reviewing of 1570+ files changed.
> > > >>
> > > >> BTW. It's way easier to review it with `git diff` than with Github
> UI
> > :)
> > > >>
> > > >> On Sun, Feb 16, 2025 at 11:32 AM Jarek Potiuk <ja...@potiuk.com>
> > wrote:
> > > >>
> > > >>> Naaa. that one is super easy to implement and automate and it's
> > better
> > > >>> to have one quick "surgical" cut to rename all of those :), I am
> > about
> > > to
> > > >>> send PR with all of those changes in a moment.
> > > >>>
> > > >>> On Sun, Feb 16, 2025 at 11:26 AM Buğra Öztürk <
> > ozturkbugr...@gmail.com
> > > >
> > > >>> wrote:
> > > >>>
> > > >>>> Amazing!:) Maybe we can divide and conquer this on structure
> changes
> > > and
> > > >>>> pre-commit rather than leaving you with all :)
> > > >>>>
> > > >>>> On Sun, 16 Feb 2025, 10:45 Jarek Potiuk, <ja...@potiuk.com>
> wrote:
> > > >>>>
> > > >>>> > Yeah "unit" seems to be the way to go :) . I will make another
> > giant
> > > >>>> PR
> > > >>>> > (but this time it should be even easier to review) to convert to
> > it
> > > :)
> > > >>>> >
> > > >>>> > On Sun, Feb 16, 2025 at 10:41 AM Buğra Öztürk <
> > > >>>> ozturkbugr...@gmail.com>
> > > >>>> > wrote:
> > > >>>> >
> > > >>>> > > Thanks for all the effort on this! It looks great!
> > > >>>> > > - The approach looks great!
> > > >>>> > > - Heavy plus one on this. The pre-commit should be one of the
> > > ground
> > > >>>> > > pillars of any standardisation. Having an extra file structure
> > > check
> > > >>>> > could
> > > >>>> > > be included too on top of `from` to eliminate new providers to
> > > >>>> follow the
> > > >>>> > > consistent structure.
> > > >>>> > > - I like the unit too over provider_tests. It is a great idea
> to
> > > >>>> > eliminate
> > > >>>> > > ambiguity. The other parts seem pretty pretty good.
> > > >>>> > >
> > > >>>> > > Along with the latest improvements, these will be great
> > additions!
> > > >>>> > >
> > > >>>> > >
> > > >>>> > > On Sun, Feb 16, 2025 at 4:15 AM Rahul Vats <
> > > rah.sharm...@gmail.com>
> > > >>>> > wrote:
> > > >>>> > >
> > > >>>> > > > Thanks for the detailed explanation and for driving this
> > > effort—it
> > > >>>> > looks
> > > >>>> > > > like a solid improvement.
> > > >>>> > > >
> > > >>>> > > >    1.
> > > >>>> > > >
> > > >>>> > > >    I love the approach & new structure. I can't think of a
> > > better
> > > >>>> > > approach.
> > > >>>> > > >    2.
> > > >>>> > > >
> > > >>>> > > >    I agree that adding a pre-commit check to enforce
> > consistent
> > > >>>> imports
> > > >>>> > > is
> > > >>>> > > >    a good idea. It will help maintain the new structure and
> > > >>>> prevent
> > > >>>> > > >    regressions.
> > > >>>> > > >    3.
> > > >>>> > > >
> > > >>>> > > >    For the naming, I’d prefer using unit instead of
> > > >>>> provider_tests to
> > > >>>> > > keep
> > > >>>> > > >    it more intuitive and consistent with system and
> > integration.
> > > >>>> This
> > > >>>> > > makes
> > > >>>> > > >    the structure self-explanatory and easier for new
> > > contributors.
> > > >>>> > > >
> > > >>>> > > >
> > > >>>> > > > Regards,
> > > >>>> > > > Rahul Vats
> > > >>>> > > >
> > > >>>> > > > On Sun, 16 Feb 2025 at 03:50, Jarek Potiuk <
> ja...@potiuk.com>
> > > >>>> wrote:
> > > >>>> > > >
> > > >>>> > > > > > 2) no real opinion, seems useful. Let’s see if we can do
> > it
> > > >>>> with a
> > > >>>> > > ruff
> > > >>>> > > > > rule first? It _might_ be possible. (It is possible to
> > extend
> > > >>>> the
> > > >>>> > ruff
> > > >>>> > > > > rules on a per subproject basis, check out
> > > >>>> task_sdk/pyproject.toml)
> > > >>>> > > > >
> > > >>>> > > > > I am not sure if we need ruff rule for it, as we have not
> > yet
> > > >>>> enabled
> > > >>>> > > any
> > > >>>> > > > > `ruff` airflow rules in airflow development (it is
> targeted
> > so
> > > >>>> far
> > > >>>> > for
> > > >>>> > > > > users writing DAGs}
> > > >>>> > > > >
> > > >>>> > > > > I think it's more "airflow internal" rule not something
> that
> > > >>>> can be
> > > >>>> > > used
> > > >>>> > > > > for general audience (which is pretty-much what ruff rules
> > are
> > > >>>> about)
> > > >>>> > > > >
> > > >>>> > > > > Unfortunately - the current way ruff works does not allow
> > > local
> > > >>>> > > > > "plugin" functionality to add new rules that would only be
> > > >>>> applicable
> > > >>>> > > to
> > > >>>> > > > > the current project you are in. In the absence of that
> > > feature,
> > > >>>> > having
> > > >>>> > > a
> > > >>>> > > > > python pre-commit seem the next-best thing.
> > > >>>> > > > >
> > > >>>> > > > > J.
> > > >>>> > > > >
> > > >>>> > > > >
> > > >>>> > > > > On Sat, Feb 15, 2025 at 10:50 PM Ash Berlin-Taylor <
> > > >>>> a...@apache.org>
> > > >>>> > > > wrote:
> > > >>>> > > > >
> > > >>>> > > > > > 100%. It’s on my list to tackle very soon, it can’t get
> > put
> > > >>>> off
> > > >>>> > much
> > > >>>> > > > > > longer.
> > > >>>> > > > > >
> > > >>>> > > > > > I have some thoughts (around back compat mostly) but
> they
> > > >>>> aren’t
> > > >>>> > > > > collected
> > > >>>> > > > > > yet, and I won’t derail this thread.
> > > >>>> > > > > >
> > > >>>> > > > > > > On 15 Feb 2025, at 21:43, Jarek Potiuk <
> > ja...@potiuk.com>
> > > >>>> wrote:
> > > >>>> > > > > > >
> > > >>>> > > > > > > cc: @ashb @uranusjr @kaxil -> something for
> > consideration
> > > >>>> in our
> > > >>>> > > > > > > discussions on how to repackage airlfow soon. I keep
> on
> > > >>>> > explaining
> > > >>>> > > > why
> > > >>>> > > > > > > running code in airflow.__init__.py is a bad idea and
> > > >>>> advocating
> > > >>>> > > for
> > > >>>> > > > > > > removal of it and replace it with explicit
> > initialization,
> > > >>>> yet
> > > >>>> > that
> > > >>>> > > > > topic
> > > >>>> > > > > > > have not been discussed yet, but I will plan to start
> a
> > > >>>> > discussion
> > > >>>> > > > > about
> > > >>>> > > > > > it
> > > >>>> > > > > > > soon once we approach the packaging subject. I am not
> > sure
> > > >>>> what's
> > > >>>> > > > your
> > > >>>> > > > > > > thinking is about this - I know you spent
> consirderable
> > > >>>> amount of
> > > >>>> > > > time
> > > >>>> > > > > on
> > > >>>> > > > > > > doing all the "lazy initalization" dance all over the
> > > >>>> places,
> > > >>>> > and I
> > > >>>> > > > > think
> > > >>>> > > > > > > it adds a lot of complexity to our code and only
> > partially
> > > >>>> solves
> > > >>>> > > the
> > > >>>> > > > > > > cicular imports problem. But I know @ashb has very
> > strong
> > > >>>> feeling
> > > >>>> > > > about
> > > >>>> > > > > > > being able to do "from airlfow import Dag" - which
> more
> > or
> > > >>>> less
> > > >>>> > > > > requires
> > > >>>> > > > > > > all this complexity. I just don't think it's worth it.
> > > >>>> > > > > >
> > > >>>> > > > > >
> > > >>>> > > > >
> > > >>>> > > >
> > > >>>> > >
> > > >>>> > >
> > > >>>> > > --
> > > >>>> > > Bugra Ozturk
> > > >>>> > >
> > > >>>> >
> > > >>>>
> > > >>>
> > >
> >
>
