Hi Eli,

On Thu, Oct 28, 2010 at 14:11 -0700, Ateljevich, Eli wrote:
> Thanks Holger. I said I wanted to count assertions, but I was just being
> careless -- counting test failures and passes the usual way is fine.
> 
> You've pinpointed my timing issue by asking "do I know how many tests there
> are?"  I'll ask the cxxtest folks, but I haven't seen it in their usage
> examples and there is no obvious --count-tests type of flag. Python would see
> each cxxtest suite as a monolith.
> 
> One alternative is to lie to the py.test reporting mechanism. If I understand 
> that code snippet you pointed out, I get a shot at this in repr_failure(self, 
> excinfo). Does excinfo give me enough of an opening to manipulate test
> attributes and bump the total, success and fail counts? Roughly:
> 1. Collector discovers cxxtests
> 2. Runner runs them
> 3. Runner parses the success/fail results and stashes them
> 4. Runner lies in repr_failure(self,excinfo), upping the counts
> Even if this is possible, it seems like at most I can do this on failure -- 
> the number of tests and successes would be hard to count correctly. 

You can't really "lie in repr_failure" regarding test counts.   We could maybe
think about allowing multiple reports per test item. Haven't tried this
and am a bit skeptical about it producing internal problems.  It might be
surprising for other plugins like the junitxml plugin or the terminal
plugin itself which assume they get one report per collected test item IIRC.
However, it's worth an experiment i guess especially since Ronny (another
py.test hacker and heavy user) recently arrived at a similar need, it seems.
 
> The other alternative is to run and count the tests at collect time:
> 1. Collector runs the cxxtests
> 2. Collector parses the test results and emits shadow tests that trivially 
> fail, succeed or have an error.
> 3. Test runner runs the shadow tests.
> I am wary of running cxxtests during "collect", but I can't really say why.
>
> What do you think?

Right, that approach doesn't feel like a good fit.  Running tests at collect
time sounds wrong.

If I were you, I'd first try to collect one test item per test file, parse the
log/result file, raise a failure if anything failed, and show details for any
failing test/assertion.  I'd perfect this first (including proper and robust
parsing of the log) and see how the resulting testing process feels in
practice.  The coding effort involved is worthwhile in any case.
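Untested, but modeled loosely on the nonpython collection example, the shape
could be roughly like the conftest.py below.  Treat it as a sketch: the
"runner_" naming convention, the exit-code check and the exact spelling of
the collection hooks/classes are placeholders to adapt to your setup and
pytest version, and the actual log parsing is left out entirely:

    # conftest.py -- sketch: one test item per compiled cxxtest runner
    import subprocess
    import pytest

    def pytest_collect_file(parent, file_path):
        # Placeholder convention: executables named "runner_*" are cxxtest runners.
        if file_path.name.startswith("runner_") and file_path.suffix == "":
            return CxxTestFile.from_parent(parent, path=file_path)

    class CxxTestFile(pytest.File):
        def collect(self):
            # One item for the whole cxxtest runner.
            yield CxxTestItem.from_parent(self, name=self.path.name)

    class CxxTestFailure(Exception):
        pass

    class CxxTestItem(pytest.Item):
        def runtest(self):
            # Run the cxxtest binary and keep its output for failure reporting.
            proc = subprocess.run([str(self.path)], capture_output=True, text=True)
            self.log = proc.stdout + proc.stderr
            if proc.returncode != 0:
                raise CxxTestFailure(self.log)

        def repr_failure(self, excinfo):
            # Show the captured cxxtest output instead of a python traceback.
            if isinstance(excinfo.value, CxxTestFailure):
                return excinfo.value.args[0]
            return super().repr_failure(excinfo)

        def reportinfo(self):
            return self.path, 0, "cxxtest: %s" % self.name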

If allowing multiple results still feels important, maybe I can experiment
with the above idea (somehow allowing multiple reports per test item), or you
could try this: a test run creates a cache file which subsequent collections
find and use to create multiple items instead of just one.  At runtest() time
the items can easily share the same result log from a cxxtest run.
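For the cache variant, the collection side could look roughly like this --
again untested, and the ".results" file name and its tab-separated
"name<TAB>status" layout are completely made up; any format your cxxtest run
can write and the collector can parse would do:

    # Sketch: if a previous run left a results cache next to the runner,
    # collect one item per recorded cxxtest case; otherwise fall back to
    # a single item for the whole runner.
    import pytest

    class CxxTestFile(pytest.File):
        def collect(self):
            cache = self.path.with_suffix(".results")
            if not cache.exists():
                yield CxxTestItem.from_parent(self, name=self.path.name, status=None)
                return
            for line in cache.read_text().splitlines():
                name, status = line.split("\t")
                yield CxxTestItem.from_parent(self, name=name, status=status)

    class CxxTestItem(pytest.Item):
        def __init__(self, *, status, **kwargs):
            super().__init__(**kwargs)
            self.status = status

        def runtest(self):
            if self.status is None:
                # No cache yet: this is where the whole cxxtest runner would be
                # executed and the .results file written for the next collection.
                pytest.skip("no cached cxxtest results yet; run the suite once first")
            if self.status != "ok":
                raise AssertionError("cxxtest case %r failed; see the shared result log"
                                     % self.name)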

cheers,
holger

P.S.: let's keep the discussion on py-dev, as it may be interesting for
future reference or to other people.

> Thanks so much for following up with this!!!
> 
> Eli
> 
> 
> 
> ________________________________________
> From: holger krekel [hol...@merlinux.eu]
> Sent: Wednesday, October 27, 2010 11:52 PM
> To: Ateljevich, Eli
> Cc: py-dev@codespeak.net
> Subject: Re: [py-dev] py.test to control other testing framework
> 
> Hi Eli,
> 
> On Wed, Oct 27, 2010 at 17:00 -0700, Ateljevich, Eli wrote:
> > On another list, I asked Holger about how to use py.test as a wrapper to 
> > control non-python tests. He referred me to this self-contained yaml example:
> > http://codespeak.net/~hpk/pytest/example/nonpython.html
> >
> > This got me going on the issue of collecting and running the tests.
> >
> > I have a follow-up question about aggregating results from these "foreign 
> > tests". I am using cxxunit, but the specifics are not important to my 
> > question. Each non-python test involves multiple asserts, failures and 
> > possibly errors. These test results could be reported in any of the usual 
> > formats (logs, stdout, JUnit xml format, custom).
> 
> (sidenote: I guess you are aware of the --junitxml option.)
> 
> > My question is this: is there a good way to track assert pass/fail counts 
> > correctly in py.test?
> >
> > One crude idea is to have cxxtest print out its assert attempts, passes and 
> > failures to a log, parse the log and then deliberately pass and fail a 
> > correct number of "shadow assertions" in python using the same log 
> > messages. Is there a more direct way?
> 
> Parsing a log sounds right.  However, there is currently no notion of
> "counting asserts" in py.test (or other popular python testing frameworks
> I am aware of).
> 
> Do you happen to have a way to find out the number of test functions/cases
> ahead of running a test file?  And to instruct cxxtest to run a particular
> function?
> 
> If so, you could map cxxtest functions to py.test Items and get
> finer-grained "." printing and error representation.
> 
> Otherwise I guess you can only represent the whole cxxtest file as
> one test item.  Other scenarios also have me wondering how and whether
> to help with this situation, by the way.
> 
> cheers,
> holger

-- 
_______________________________________________
py-dev mailing list
py-dev@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev
