Please include what the pytest output looks like in a failure case in any
proposal.
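For illustration, a minimal sketch of the two styles side by side; the hook object and argument values are made up (a MagicMock stands in for the real DbtCloudHook), so treat this as a shape for the comparison, not the actual test:

```python
from unittest.mock import MagicMock, call

# Stand-in for the hook under test; names and values are illustrative only.
hook = MagicMock()
hook.run(endpoint="api/v2/accounts/123/", data=None, extra_options=None)

# Plain-assert style: under pytest, assertion rewriting shows a diff of the
# two call lists when this fails.
assert hook.run.mock_calls == [
    call(endpoint="api/v2/accounts/123/", data=None, extra_options=None)
]
assert hook._paginate.mock_calls == []

# unittest-style equivalent: on failure these raise AssertionError with
# mock's own wording rather than a pytest diff.
hook.run.assert_called_once_with(
    endpoint="api/v2/accounts/123/", data=None, extra_options=None
)
hook._paginate.assert_not_called()
```

Whichever style a proposal picks, pasting the failing output of both variants next to each other would make the trade-off concrete.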

> On 11 Jul 2025, at 11:02, Jarek Potiuk <ja...@potiuk.com> wrote:
> 
> It **looks** like we are close to consensus on that one. I think a good
> idea would be to create an outline proposal summarizing what we are going
> to have finally, ask people for comments if anyone has anything against
> it, and if we do not see any "no" - create a PR with more details in the
> guidelines and officially call for "[LAZY CONSENSUS]" on it (and in
> the meantime proceed with implementation :D).
> 
> J
> 
> On Fri, Jul 11, 2025 at 11:36 AM Kyungjun Lee <kyungjunlee...@gmail.com>
> wrote:
> 
>> Should we start by drafting the document first, or should we ask the
>> community through a vote whether it’s okay to create the guideline?
>> 
>> 
>> 
>> 
>> 
>> On Fri, Jul 11, 2025 at 6:31 PM, Kyungjun Lee <kyungjunlee...@gmail.com> wrote:
>> 
>>> I want to lead this discussion!!!
>>> 
>>> On Fri, Jul 11, 2025 at 6:18 PM, Jarek Potiuk <ja...@potiuk.com> wrote:
>>> 
>>>> We just need a volunteer (or few) to drive it :D
>>>> 
>>>> On Fri, Jul 11, 2025 at 11:15 AM Jarek Potiuk <ja...@potiuk.com> wrote:
>>>> 
>>>>> Yeah. I like it and the discussion :)  - it's so cool to see so many
>>>>> people taking part and leading the discussion. The Airflow community
>> is
>>>>> awesome.
>>>>> 
>>>>> I think it's nice with pytest-mock + the "pytest-first" patterns. I
>>>>> think it would be cool to indeed propose a "guideline" first and then
>>>>> possibly even rewrite things and add some guardrails to flag
>>>>> non-pytest usages automatically (pre-commit) - it is always best to
>>>>> not only "agree" on something but to automate keeping it in check.
>>>>> 
>>>>> We could likely - again - use AI and run a small mini-project to
>> convert
>>>>> all our tests. I think if we write a detailed-enough guideline, we can
>>>>> literally prompt AI "rewrite the test code here following the
>> guidelines
>>>>> here".
>>>>> 
>>>>> So what I see as an "outcome of a discussion" is:
>>>>> * to collaboratively work on the guidelines
>>>>> * make them both human and AI digestible
>>>>> * add guardrails to not add old style in new files and to gradually
>> mark
>>>>> parts of the code as "done" when done
>>>>> * run the "mini-project" to apply it consistently (and maybe as a
>>>>> result iterate and correct and improve the guidelines)
>>>>> 
>>>>> J.
>>>>> 
>>>>> 
>>>>> On Fri, Jul 11, 2025 at 7:57 AM Amogh Desai <amoghdesai....@gmail.com
>>> 
>>>>> wrote:
>>>>> 
>>>>>> +1 to TP's proposal too.
>>>>>> 
>>>>>> It's easy to read and also stands out better.
>>>>>> 
>>>>>> We have a few places in the task-sdk tests where we also have done
>>>>>> patterns
>>>>>> like:
>>>>>> 
>>>>>> assert not any(
>>>>>>    x
>>>>>>    == mock.call(
>>>>>>        msg=GetXCom(
>>>>>>            key="key",
>>>>>>            dag_id="test_dag",
>>>>>>            run_id="test_run",
>>>>>>            task_id="pull_task",
>>>>>>            map_index=-1,
>>>>>>        ),
>>>>>>    )
>>>>>>    for x in mock_supervisor_comms.send.call_args_list
>>>>>> )
>>>>>> 
>>>>>> 
>>>>>> call_args_list is particularly useful in scenarios when you want to
>>>>>> validate presence / absence of a call.
>>>>>> 
>>>>>> Thanks & Regards,
>>>>>> Amogh Desai
>>>>>> 
>>>>>> 
>>>>>> On Thu, Jul 10, 2025 at 8:39 PM Kyungjun Lee <
>> kyungjunlee...@gmail.com
>>>>> 
>>>>>> wrote:
>>>>>> 
>>>>>>> Thank you all — I really appreciate how many people have joined the
>>>>>>> discussion.
>>>>>>> 
>>>>>>> I also like the approach that Tzu-ping Chung suggested. This really
>>>>>> isn’t
>>>>>>> an easy topic. At first, I thought it would be best to use only
>> plain
>>>>>>> assert
>>>>>>> statements, but after reading through the different perspectives
>>>> here,
>>>>>> I’ve
>>>>>>> come to realize that overusing assert can also be problematic. It’s
>>>>>> been a
>>>>>>> great reminder that we should be intentional about what we choose
>> to
>>>>>> assert
>>>>>>> — making sure we’re testing what truly matters.
>>>>>>> 
>>>>>>> 
>>>>>>> I’ll also follow up soon with an example snippet and a brief
>> testing
>>>>>> guide
>>>>>>> to help clarify the discussion.
>>>>>>> 
>>>>>>> On Thu, Jul 10, 2025 at 11:49 PM, Tzu-ping Chung
>>>>>>> <t...@astronomer.io.invalid> wrote:
>>>>>>> 
>>>>>>>> Does pytest-mock have an equivalent for call()? I agree for
>>>> mocking in
>>>>>>>> general we should consider replacing plain decorators and context
>>>>>>> managers
>>>>>>>> with the mocker fixture. This probably deserves its own
>> discussion
>>>>>>> thread.
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Sent from my iPhone
>>>>>>>> 
>>>>>>>>> On 10 Jul 2025, at 14:37, Dev-iL <gid....@gmail.com> wrote:
>>>>>>>>> 
>>>>>>>>> One tiny comment regarding TP's suggestion - IMHO it's better
>> to
>>>>>> avoid
>>>>>>>> `unittest.mock` in favor of the equivalent `mocker` fixture
>>>> provided
>>>>>> by
>>>>>>>> `pytest-mock`.
>>>>>>>>> 
>>>>>>>>> On 2025/07/10 06:30:22 Tzu-ping Chung wrote:
>>>>>>>>>> Personally I dislike things like assert_called_once_with etc.
>>>>>> since
>>>>>>>> they are easy to miss when you scan a test to see what they are
>>>>>> trying to
>>>>>>>> check. An 'assert' keyword stands out (it’s always the first word
>>>> in
>>>>>> the
>>>>>>>> line), especially with syntax highlighting.
>>>>>>>>>> 
>>>>>>>>>> I do agree the proposed Pytest style is arguably less
>>>> readable. I
>>>>>>>> offer another syntax.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> from unittest.mock import call
>>>>>>>>>> 
>>>>>>>>>> assert mock_http_run.mock_calls == [
>>>>>>>>>>     call(
>>>>>>>>>>         endpoint=f"api/v2/accounts/{expected_account_id}/",
>>>>>>>>>>         data=None,
>>>>>>>>>>         extra_options=None,
>>>>>>>>>>     )
>>>>>>>>>> ]
>>>>>>>>>> assert mock_paginate.mock_calls == []
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> To me, this is on par with assert_called_once_with et al. in
>>>>>> terms of
>>>>>>>> readability, and arguably better for test authors since you don’t
>>>>>> need to
>>>>>>>> remember the various function names anymore, but only
>> 'mock_calls'
>>>> and
>>>>>>> the
>>>>>>>> 'call' helper class.
>>>>>>>>>> 
>>>>>>>>>> TP
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On Jul 9, 2025, at 23:28, Ferruzzi, Dennis
>>>>>>> <fe...@amazon.com.INVALID>
>>>>>>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> I'm a bit late to the party, and really only reiterating what
>>>>>>>>>>> has already been said, but of the two examples (original and
>>>>>>>>>>> your rewrite), I prefer the original. I think as a general
>>>>>>>>>>> rule, we tend to use the assert_called_once etc. with mocks
>>>>>>>>>>> but the asserts with non-mocked variables.
>>>>>>>>>>> 
>>>>>>>>>>> I am all for more documentation, but I'd have a slight
>>>>>> preference
>>>>>>>> towards keeping the existing structure.
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> - ferruzzi
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> ________________________________
>>>>>>>>>>> From: Kyungjun Lee <ky...@gmail.com>
>>>>>>>>>>> Sent: Tuesday, July 8, 2025 2:13 AM
>>>>>>>>>>> To: dev@airflow.apache.org
>>>>>>>>>>> Subject: RE: [EXT] [DISCUSS] Consistent test assertion
>> style:
>>>>>>>> pytest-native vs unittest-style
>>>>>>>>>>> 
>>>>>>>>>>> CAUTION: This email originated from outside of the
>>>>>> organization. Do
>>>>>>>> not click links or open attachments unless you can confirm the
>>>> sender
>>>>>> and
>>>>>>>> know the content is safe.
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> AVERTISSEMENT: Ce courrier électronique provient d’un
>>>> expéditeur
>>>>>>>> externe. Ne cliquez sur aucun lien et n’ouvrez aucune pièce
>> jointe
>>>> si
>>>>>>> vous
>>>>>>>> ne pouvez pas confirmer l’identité de l’expéditeur et si vous
>>>> n’êtes
>>>>>> pas
>>>>>>>> certain que le contenu ne présente aucun risque.
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> Thank you Ash and Amogh Desai for your insights and
>>>>>> explanations.
>>>>>>>>>>> The information you shared has been incredibly helpful and
>> is
>>>>>>>> contributing
>>>>>>>>>>> a lot to my growth.
>>>>>>>>>>> 
>>>>>>>>>>> On Tue, Jul 8, 2025 at 2:54 PM, Amogh Desai <am...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Agreed, I personally also find the current way to be
>> easier
>>>> to
>>>>>>> read
>>>>>>>> and in
>>>>>>>>>>>> most
>>>>>>>>>>>> cases we want to assert if "something" was called,
>>>>>> irrespective of
>>>>>>>> the
>>>>>>>>>>>> order it was
>>>>>>>>>>>> called in. So the dedicated function based way works well
>>>> for
>>>>>> me.
>>>>>>>>>>>> 
>>>>>>>>>>>> If I want to test a order, I'd rather call parts of code
>>>> that I
>>>>>>>> want to
>>>>>>>>>>>> test explicitly and assert
>>>>>>>>>>>> on them.
>>>>>>>>>>>> 
>>>>>>>>>>>>> This happens because the mock object is not properly
>>>>>> recognized
>>>>>>> as
>>>>>>>> a mock
>>>>>>>>>>>> instance at type-checking time, which might confuse some
>>>>>>>> contributors,
>>>>>>>>>>>> especially new ones relying on type hints or static
>> analysis
>>>>>>> tools.
>>>>>>>>>>>> 
>>>>>>>>>>>> Regarding this, I'd say you can either cast mocks to their
>>>>>> types
>>>>>>> as
>>>>>>>> one
>>>>>>>>>>>> way:
>>>>>>>>>>>> `mock_http_run: MagicMock = mock_http_run` -- give or
>> take,
>>>> or
>>>>>> use
>>>>>>>>>>>> `autospec` to make the mock reflect the signature of the
>>>>>> object?
>>>>>>>> Check out:
>>>>>>>>>>>> 
>>>>>> https://docs.python.org/3/library/unittest.mock.html#autospeccing
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks & Regards,
>>>>>>>>>>>> Amogh Desai
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Mon, Jul 7, 2025 at 6:13 PM Ash Berlin-Taylor <
>>>>>>> as...@apache.org>
>>>>>>>> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> def test_get_account(self, mock_paginate: Mock,
>>>>>>>>>>>>> mock_http_run: Mock, conn_id, account_id):
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Might fix that? IDEs in general do not cope well with pytest
>>>>>>>>>>>>> tests, and are missing context on what most of the variables
>>>>>>>>>>>>> are, be it parameterised values or fixture values, so this
>>>>>>>>>>>>> isn't a problem that is unique to mocks.
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On 7 Jul 2025, at 12:47, Kyungjun Lee <ky...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I'd like to follow up on our previous discussion about
>>>>>>>> pytest-native vs
>>>>>>>>>>>>>> unittest-style assertions.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> While working on the following test case:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> ```python
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> @pytest.mark.parametrize(
>>>>>>>>>>>>>>     argnames="conn_id, account_id",
>>>>>>>>>>>>>>     argvalues=[(ACCOUNT_ID_CONN, None), (NO_ACCOUNT_ID_CONN, ACCOUNT_ID)],
>>>>>>>>>>>>>>     ids=["default_account", "explicit_account"],
>>>>>>>>>>>>>> )
>>>>>>>>>>>>>> @patch.object(DbtCloudHook, "run")
>>>>>>>>>>>>>> @patch.object(DbtCloudHook, "_paginate")
>>>>>>>>>>>>>> def test_get_account(self, mock_paginate, mock_http_run, conn_id, account_id):
>>>>>>>>>>>>>>     hook = DbtCloudHook(conn_id)
>>>>>>>>>>>>>>     hook.get_account(account_id=account_id)
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>     assert hook.method == "GET"
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>     _account_id = account_id or DEFAULT_ACCOUNT_ID
>>>>>>>>>>>>>>     hook.run.assert_called_once_with(
>>>>>>>>>>>>>>         endpoint=f"api/v2/accounts/{_account_id}/", data=None, extra_options=None
>>>>>>>>>>>>>>     )
>>>>>>>>>>>>>>     hook._paginate.assert_not_called()
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> My IDE shows a warning:
>>>>>>>>>>>>>> Cannot find reference 'assert_called_once_with' in
>>>>>> 'function'.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> This happens because the mock object is not properly
>>>>>> recognized
>>>>>>>> as a
>>>>>>>>>>>> mock
>>>>>>>>>>>>>> instance at type-checking time, which might confuse some
>>>>>>>> contributors,
>>>>>>>>>>>>>> especially new ones relying on type hints or static
>>>> analysis
>>>>>>>> tools.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I realized that this aspect of mock handling might be
>>>> missing
>>>>>>>> from our
>>>>>>>>>>>>>> previous discussions. I wanted to bring it up as part of
>>>> the
>>>>>>>> broader
>>>>>>>>>>>>>> conversation about test styles—particularly how we
>> balance
>>>>>>>> IDE/tooling
>>>>>>>>>>>>>> support with readability and style consistency.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Curious to hear your thoughts on this!
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> @ash @potiuk
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Sun, Jul 6, 2025 at 8:09 PM, Kyungjun Lee
>>>>>>>>>>>>>> <ky...@gmail.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thank you @Potiuk for pointing out the intent behind
>> the
>>>>>> “one
>>>>>>>> assert
>>>>>>>>>>>> per
>>>>>>>>>>>>>>> test” principle, and @ash for highlighting how using
>>>>>> dedicated
>>>>>>>> mock
>>>>>>>>>>>>> assert
>>>>>>>>>>>>>>> functions can make the code easier to read and
>>>> understand.
>>>>>>> These
>>>>>>>> were
>>>>>>>>>>>>>>> perspectives I hadn’t fully considered, and I really
>>>>>> appreciate
>>>>>>>> you
>>>>>>>>>>>>> sharing
>>>>>>>>>>>>>>> them.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thanks to your input, I was able to read more materials
>>>> and
>>>>>>>> broaden my
>>>>>>>>>>>>>>> thinking on the topic. That said, my original focus was
>>>>>> more on
>>>>>>>> the
>>>>>>>>>>>> idea
>>>>>>>>>>>>>>> that sticking to plain assert statements lowers the
>> entry
>>>>>>>> barrier for
>>>>>>>>>>>>> new
>>>>>>>>>>>>>>> contributors—because they don’t have to learn multiple
>>>>>>> assertion
>>>>>>>>>>>> styles.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Still, as @Potiuk mentioned, I’ll put more thought into
>>>>>> making
>>>>>>>> the
>>>>>>>>>>>>> testing
>>>>>>>>>>>>>>> guidelines clearer and more concrete—especially since
>>>> that
>>>>>>> helps
>>>>>>>>>>>>> AI-based
>>>>>>>>>>>>>>> tools as well 😄
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> For reference, here’s one of the articles I read:
>>>>>>>>>>>>>>> 
>>>>>> https://medium.com/@kentbeck_7670/test-desiderata-94150638a4b3
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thank you to everyone who took part in this discussion.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Sun, Jul 6, 2025 at 3:42 PM, Ash Berlin-Taylor
>>>>>>>>>>>>>>> <as...@apache.org> wrote:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> I personally find the dedicated functions way easier
>> to
>>>>>> read
>>>>>>> the
>>>>>>>>>>>> intent
>>>>>>>>>>>>>>>> behind, it’s one function/statement vs 3. More so when
>>>> you
>>>>>>>> don’t care
>>>>>>>>>>>>> about
>>>>>>>>>>>>>>>> the order of calls, just that something was called
>>>> (where
>>>>>> to
>>>>>>> do
>>>>>>>> this
>>>>>>>>>>>>>>>> manually we’d need to reimplement the helper function)
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Additionally pytest already rewrites those to have nicer
>>>>>>>>>>>>>>>> error messages, but the dedicated mock assert functions
>>>>>>>>>>>>>>>> make it much easier to read the code and understand the
>>>>>>>>>>>>>>>> intent to me, so I am for staying with the dedicated
>>>>>>>>>>>>>>>> assert functions.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> -ash
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> On 5 Jul 2025, at 13:30, Kyungjun Lee <
>> ky...@gmail.com
>>>>> 
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> I’ve learned a lot of things I didn’t know before.
>>>>>>>>>>>>>>>>> Thank you so much for all your help — I really
>>>> appreciate
>>>>>> it.
>>>>>>>>>>>>>>>>> I’ll get started on this right away!
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>> On Sat, Jul 5, 2025 at 9:18 PM, Jarek Potiuk
>>>>>>>>>>>>>>>>> <ja...@potiuk.com> wrote:
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Haha ... I'm curious — which part sounded like
>>>> ChatGPT
>>>>>>> style?
>>>>>>>>>>>>>>>>>> English isn't my first language, so I did get a
>> little
>>>>>> help
>>>>>>>> from
>>>>>>>>>>>> it.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> That whole paragraph :) .
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> Sure! Since you asked for how the test *should* be
>>>>>>> written,
>>>>>>>> I
>>>>>>>>>>>> took
>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>> opportunity to clean it up using a more
>> pytest-native
>>>>>> style
>>>>>>>> while
>>>>>>>>>>>>> also
>>>>>>>>>>>>>>>>>> fixing the mock order issue.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> "Sure! Since you asked ..." sounds like an AI bot.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> That got me thinking — what do you think about
>>>> adding a
>>>>>>>> guideline
>>>>>>>>>>>>>>>>>> document
>>>>>>>>>>>>>>>>>> for writing tests, similar to how we have a coding
>>>> style
>>>>>>>> guide?
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> Absolutely - we already have some "seed" of it, the
>>>>>>>>>>>>>>>>>> "Writing tests" chapter in the contributing guidelines,
>>>>>>>>>>>>>>>>>> but we could definitely make it more detailed.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>> 
>> https://github.com/apache/airflow/blob/main/contributing-docs/testing/unit_tests.rst#writing-unit-tests
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> And - speaking of AI - this is becoming more and
>> more
>>>>>>>> important to
>>>>>>>>>>>>>>>> describe
>>>>>>>>>>>>>>>>>> any common rules we have and context - so that using
>>>>>> Agentic
>>>>>>>> AI
>>>>>>>>>>>>> yields
>>>>>>>>>>>>>>>>>> better results. Kaxil already added
>>>>>>>>>>>>>>>>>> 
>> https://github.com/apache/airflow/blob/main/AGENTS.md
>>>>>> which
>>>>>>>>>>>>> describes
>>>>>>>>>>>>>>>>>> context for coding agents like Codex - and we could
>>>>>> improve
>>>>>>>> it and
>>>>>>>>>>>>> link
>>>>>>>>>>>>>>>>>> more docs from our repo if they get more of our
>> agreed
>>>>>>>>>>>> "conventions"
>>>>>>>>>>>>> -
>>>>>>>>>>>>>>>> then
>>>>>>>>>>>>>>>>>> Agents would get it as context and their generated
>>>> code
>>>>>>> would
>>>>>>>> be
>>>>>>>>>>>>>>>> consistent
>>>>>>>>>>>>>>>>>> with what we describe there.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> In a way - I think having a good documentation on
>>>>>> processes,
>>>>>>>> tools
>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>> conventions was always something I've been after,
>> but
>>>>>> with
>>>>>>> the
>>>>>>>>>>>>> Agentic
>>>>>>>>>>>>>>>>>> workflows it might significantly boost the quality
>> of
>>>>>>>> generated
>>>>>>>>>>>> code
>>>>>>>>>>>>>>>> if we
>>>>>>>>>>>>>>>>>> have more of those conventions and guidelines
>>>> described.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> So .... ABSOLUTELY ... the more we describe in
>> there,
>>>> the
>>>>>>>> better.
>>>>>>>>>>>> And
>>>>>>>>>>>>>>>> we
>>>>>>>>>>>>>>>>>> have no more excuse that "anyhow no-one reads it" -
>>>>>> because
>>>>>>>> the
>>>>>>>>>>>>> coding
>>>>>>>>>>>>>>>>>> agents WILL be reading it and acting accordingly.
>> So I
>>>>>> think
>>>>>>>> this
>>>>>>>>>>>> is
>>>>>>>>>>>>> a
>>>>>>>>>>>>>>>> very
>>>>>>>>>>>>>>>>>> good investment to make.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> J.
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> On Sat, Jul 5, 2025 at 2:07 PM Kyungjun Lee <
>>>>>>>>>>>>> kyungjunlee...@gmail.com
>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> Haha ... I'm curious — which part sounded like
>>>> ChatGPT
>>>>>>> style?
>>>>>>>>>>>>>>>>>>> English isn't my first language, so I did get a
>>>> little
>>>>>> help
>>>>>>>> from
>>>>>>>>>>>> it.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> But thank you — I actually learned something new
>> from
>>>>>> your
>>>>>>>>>>>> comment!
>>>>>>>>>>>>>>>>>>> That got me thinking — what do you think about
>>>> adding a
>>>>>>>> guideline
>>>>>>>>>>>>>>>>>> document
>>>>>>>>>>>>>>>>>>> for writing tests, similar to how we have a coding
>>>> style
>>>>>>>> guide? It
>>>>>>>>>>>>>>>> might
>>>>>>>>>>>>>>>>>>> help ensure consistency across the Airflow codebase
>>>>>> when it
>>>>>>>> comes
>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>>> testing styles as well.
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>> On Sat, Jul 5, 2025 at 8:52 PM, Jarek Potiuk
>>>>>>>>>>>>>>>>>>> <ja...@potiuk.com> wrote:
>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> But of course - I'd love to hear what others think -
>>>>>>>>>>>>>>>>>>>> it's not a "very strong" opinion.
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>> On Sat, Jul 5, 2025 at 1:48 PM Jarek Potiuk <
>>>>>>>> ja...@potiuk.com>
>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> Cool. That's what I wanted to see.
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> By the way - not that there's anything wrong -
>> but
>>>> was
>>>>>>> the
>>>>>>>>>>>> answer
>>>>>>>>>>>>>>>>>>> written
>>>>>>>>>>>>>>>>>>>>> by AI initially ? The first paragraph looks
>>>>>> suspiciously
>>>>>>>> like
>>>>>>>>>>>> Chat
>>>>>>>>>>>>>>>>>> GPT
>>>>>>>>>>>>>>>>>>>>> answer :D ?
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> Comment from my side: I personally prefer the
>>>> original
>>>>>>>> style.
>>>>>>>>>>>> It's
>>>>>>>>>>>>>>>>>> more
>>>>>>>>>>>>>>>>>>>>> concise and it is easier to read - you see as if
>>>> the
>>>>>> call
>>>>>>>> was
>>>>>>>>>>>>>>>>>> actually
>>>>>>>>>>>>>>>>>>>>> written down. Also this is quite a bit too many
>>>>>>> assertions
>>>>>>>> in
>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>> second
>>>>>>>>>>>>>>>>>>>>> case and it takes a lot of mental effort to
>>>> understand
>>>>>>> what
>>>>>>>>>>>>> actually
>>>>>>>>>>>>>>>>>> is
>>>>>>>>>>>>>>>>>>>>> being asserted.
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> There is a "school" of writing unit tests that
>>>> every
>>>>>> test
>>>>>>>> should
>>>>>>>>>>>>>>>> have
>>>>>>>>>>>>>>>>>>> ONE
>>>>>>>>>>>>>>>>>>>>> assertion only. Always. I think it is a bit
>>>> extreme,
>>>>>> and
>>>>>>> I
>>>>>>>> do
>>>>>>>>>>>> not
>>>>>>>>>>>>>>>>>>> follow
>>>>>>>>>>>>>>>>>>>> it
>>>>>>>>>>>>>>>>>>>>> myself but I think it is also a kind of good
>>>>>> direction to
>>>>>>>> have
>>>>>>>>>>>> ->
>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>> fewer
>>>>>>>>>>>>>>>>>>>>> assertions you have in your test, the better (I
>>>>>> think).
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> I think tests should be mostly optimized for
>>>> easiness
>>>>>> of
>>>>>>>> reading
>>>>>>>>>>>>> and
>>>>>>>>>>>>>>>>>>>>> understanding what is being tested - and it's
>> just
>>>> not
>>>>>>>> that easy
>>>>>>>>>>>>> in
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>> second case.
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> J.
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>> On Sat, Jul 5, 2025 at 1:39 PM Kyungjun Lee <
>>>>>>>>>>>>>>>>>> kyungjunlee...@gmail.com>
>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> Sure! Since you asked for how the test *should*
>> be
>>>>>>>> written, I
>>>>>>>>>>>>> took
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>>>>>> opportunity to clean it up using a more
>>>> pytest-native
>>>>>>>> style
>>>>>>>>>>>> while
>>>>>>>>>>>>>>>>>> also
>>>>>>>>>>>>>>>>>>>>>> fixing the mock order issue.
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> Here’s the updated test:
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> ```python
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> @pytest.mark.parametrize(
>>>>>>>>>>>>>>>>>>>>>>     argnames="conn_id, account_id",
>>>>>>>>>>>>>>>>>>>>>>     argvalues=[(ACCOUNT_ID_CONN, None), (NO_ACCOUNT_ID_CONN, ACCOUNT_ID)],
>>>>>>>>>>>>>>>>>>>>>>     ids=["default_account", "explicit_account"],
>>>>>>>>>>>>>>>>>>>>>> )
>>>>>>>>>>>>>>>>>>>>>> @patch.object(DbtCloudHook, "run")
>>>>>>>>>>>>>>>>>>>>>> @patch.object(DbtCloudHook, "_paginate")
>>>>>>>>>>>>>>>>>>>>>> def test_get_account(self, mock_paginate, mock_http_run, conn_id, account_id):
>>>>>>>>>>>>>>>>>>>>>>     hook = DbtCloudHook(conn_id)
>>>>>>>>>>>>>>>>>>>>>>     hook.get_account(account_id=account_id)
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>     assert hook.method == "GET"
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>     expected_account_id = account_id or DEFAULT_ACCOUNT_ID
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>     assert mock_http_run.call_count == 1
>>>>>>>>>>>>>>>>>>>>>>     assert mock_http_run.call_args.args == ()
>>>>>>>>>>>>>>>>>>>>>>     assert mock_http_run.call_args.kwargs == {
>>>>>>>>>>>>>>>>>>>>>>         "endpoint": f"api/v2/accounts/{expected_account_id}/",
>>>>>>>>>>>>>>>>>>>>>>         "data": None,
>>>>>>>>>>>>>>>>>>>>>>         "extra_options": None,
>>>>>>>>>>>>>>>>>>>>>>     }
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>>     assert mock_paginate.call_count == 0
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>>>>>>> Why I chose this style:
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> - *Mock verification using assert*: Instead of
>>>>>>>>>>>>>>>>>>>>>> mock.assert_called_once_with(...), I used call_count
>>>>>>>>>>>>>>>>>>>>>> and call_args. This approach aligns better with
>>>>>>>>>>>>>>>>>>>>>> pytest's idioms and produces cleaner, more readable
>>>>>>>>>>>>>>>>>>>>>> error messages when assertions fail.
>>>>>>>>>>>>>>>>>>>>>> - *Explicit verification*: Using call_args.args and
>>>>>>>>>>>>>>>>>>>>>> call_args.kwargs makes the test behavior very
>>>>>>>>>>>>>>>>>>>>>> explicit, which helps with debugging and
>>>>>>>>>>>>>>>>>>>>>> understanding the exact calls made.
>>>>>>>>>>>>>>>>>>>>>> - *Decorator order matching argument order*: As
>>>>>>>>>>>>>>>>>>>>>> @patch decorators apply from the bottom up, the
>>>>>>>>>>>>>>>>>>>>>> argument order has been corrected to match
>>>>>>>>>>>>>>>>>>>>>> (mock_paginate first, then mock_http_run).
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> Let me know if you'd like to follow a slightly
>>>>>> different
>>>>>>>>>>>>> convention
>>>>>>>>>>>>>>>>>> —
>>>>>>>>>>>>>>>>>>>>>> happy
>>>>>>>>>>>>>>>>>>>>>> to adjust!
>>>>>>>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>>>>>>> I was lu
>>>>>>>>> [message truncated...]
>>>>>>>> 
>>>>>>>> 
>>>> ---------------------------------------------------------------------
>>>>>>>> To unsubscribe, e-mail: dev-unsubscr...@airflow.apache.org
>>>>>>>> For additional commands, e-mail: dev-h...@airflow.apache.org
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 

