Hi Sridhar,

On Fri, May 22, 2009 at 10:43 -0700, Sridhar Ratnakumar wrote:
> Hello Holger,
>
> On 09-05-22 12:41 AM, holger krekel wrote:
>> are you aware that py.test runs test function in the order
>> in which they appear in the test module file, btw?
>
> Is this by design? Can I always expect the functions and methods to be run  
> in the defined order?
>
> Vis.:
> [quote] 'Tests usually run in the order in which they appear in the  
> files. However, tests should not rely on running one after another, as  
> this prevents more advanced usages: running tests distributedly or  
> selectively, or in "looponfailing" mode, will cause them to run in  
> random order.'[endquote]
>
> The keyword /usually/ suggests to me that this may not always be the case.

You are right to point this out.  It's been in the docs for a
while.  I actually think tests will always run in the order in
which they appear in the file *if* those tests are executed in
the same process.  However, distribution and looponfailing (and a
potential randomizing plugin) may schedule the functions of a group
to different processes.  There is currently no way to signal "the
functions of this test class need to run consecutively and together
in the same process".

>> If one of the yielded tests fails, should the rest of the yielded tests
>> better not run at all?
>
> Correct. In my case, the rest of the yielded tests should not run.

I guess so.  There is currently no way to express this - it even
goes somewhat against the original idea of yield-mediated tests.
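
For reference, a classic yield-mediated test looks roughly like the
sketch below (the package names and the check_import helper are made
up); each yielded call is collected and reported as its own
independent test, which is why "stop the remaining yields after a
failure" does not fit the model naturally:

    def test_imports():
        # each yield becomes an independent test item; py.test collects
        # and reports them separately, so one failing call does not
        # stop the others by default
        for pkg in ["foo", "bar", "baz"]:   # hypothetical package names
            yield check_import, pkg

    def check_import(pkg):
        # helper called once per yielded argument set
        __import__(pkg)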

>> Would you like to reuse the yielded test functions for other test cases?
>
> Usually not, but sometimes (if the tests are defined in reusable  
> fashion), yes.

ok.

>>   I'd like to understand your use case better.  I get the impression
>> that something else/more than the current funcarg/generate mechanisms
>> is needed to address it nicely. So please also state openly any
>> problems/wishes you have with the current yield-way of doing things.
>
> I gave some thought to this.. and let me explain:
>
> I have this (conceptually big) test case for which I want detailed  
> reporting. This test case is the function ``test_typical_usecase``. All  
> the test_* functions defined inside this function are parts of the  
> parent test.
>
> If one of them fails, then of course the whole test is considered to be  
> failed and thus the rest of them should not run (this is a bug in my current  
> test code as it continues to run them).
>
>
> I guess what I actually want out of this 'splitting' is fine-grained  
> reporting. That is, if, say, test_import fails, I should see FAIL for  
> test_import so that I can immediately see where the problem is.
>
>   [1] http://gist.github.com/115787
>   [2] http://gist.github.com/116260
>
> Here, if test_import fails, ``test_typical_usecase['test_import']`` is  
> shown to be failing in [1] (fine-grained reporting), but this is not a  
> correct way to do it, as the rest of the tests continue to run.
>
> In [2], test_typical_usecase is shown to be failing (not fine-grained  
> reporting).

I'd like to consider two direct ways of improving reporting.

first possibility: 

    def test_typical_usecase(repcontrol):
         packages = packages_small_list
         c, repo_root_url = prepare_client(packages)
         c.do_update(None, None, repo_root_url)
         repcontrol.section("test_search")
         ...
         repcontrol.section("test_list_all")
         ... 

You would not need to define separate nested functions inside the
test, but of course you still could.
Running this test would show one or possibly more dots.
If the test fails, the failure report could look
something like this:

    def test_typical_usecase(repcontrol): 
        OK setup 
        OK test_search 
        FAIL test_list_all 
        ... 

After FAIL you would see the part of the traceback following the
above 'repcontrol.section("test_list_all")' line.
Stdout/stderr capturing could probably also be made
to present only the output belonging to the failing section.
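
To make this a bit more concrete, here is a rough conftest.py sketch
of how such a "repcontrol" object might be handed to test functions
through the existing funcarg mechanism.  Everything in it is
hypothetical - repcontrol does not exist yet, and the reporting hooks
that would consume the recorded sections are left out:

    # conftest.py (sketch)

    class RepControl:
        # hypothetical helper that records named sections of a test
        def __init__(self, request):
            self.request = request
            self.sections = []

        def section(self, name):
            # remember which named part of the test is currently running,
            # so a reporting hook could later attribute a failure to it
            self.sections.append(name)

    def pytest_funcarg__repcontrol(request):
        # provide the "repcontrol" funcarg to tests that ask for it
        return RepControl(request)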

The second possibility is to write a plugin that implements
an "IncrementalTestCase":

    class IncrementalTestCase:
        def setup(self):
            self.packages = packages_small_list
            self.c, self.repo_root_url = prepare_client(self.packages)
            self.c.do_update(None, None, self.repo_root_url)

        def search(self):
            for pkg in self.packages:
                sample_keyword = pkg['name'][:3]
                logger.info('Searching for `%s` expecting `%s`',
                            sample_keyword, pkg['name'])
                results = [p.name for p in self.c.do_search(None, None,
                                                            sample_keyword)]
                logger.info('Got results: %s', results)

        ...

This would run one function after another in the order in
which they appear.  If a function fails, it would abort
the whole case.  This scheme makes it easy to reuse
functions for another test case variant (see the sketch
below).  The class would get discovered by the
"IncrementalTest" or maybe just "IncTest" name prefix, I guess.
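
As a purely illustrative sketch of that reuse (the names below, such
as packages_big_list, are made up), a variant could subclass the case
above and override only its setup, inheriting search() and the other
step functions:

    class IncrementalTestCaseBigList(IncrementalTestCase):
        # reuses search() and the other step functions from the base
        # case; only the fixture data changes
        def setup(self):
            self.packages = packages_big_list   # hypothetical larger list
            self.c, self.repo_root_url = prepare_client(self.packages)
            self.c.do_update(None, None, self.repo_root_url)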

Let me know what you think or if you have other ideas. 
cheers,

   holger

-- 
Metaprogramming, Python, Testing: http://tetamap.wordpress.com
Python, PyPy, pytest contracting: http://merlinux.eu 
