Hi Eli,

On Wed, Oct 27, 2010 at 17:00 -0700, Ateljevich, Eli wrote:
> On another list, I asked Holger about how to use py.test as a wrapper to 
> control non-python tests. He referred me to this example:
> self-contained yaml example
> (http://codespeak.net/~hpk/pytest/example/nonpython.html)
> 
> This got me going on the issue of collecting and running the tests.
> 
> I have a follow-up question about aggregating results from these "foreign 
> tests". I am using cxxunit, but the specifics are not important to my 
> question. Each non-python test involves multiple asserts, failures and 
> possibly errors. These test results could be reported in any of the usual 
> formats (logs, stdout, JUnit xml format, custom).

(sidenote: I guess you are aware of the --junitxml option.)
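
Something like

    py.test --junitxml=results.xml

writes a JUnit-style XML report of the py.test run; the filename is
arbitrary, and most CI tools can pick such a file up.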

> My question is this: is there a good way to track assert pass/fail counts 
> correctly in py.test?
> 
> One crude idea is to have cxxtest print out its assert attempts, passes and 
> failures to a log, parse the log and then deliberately pass and fail a 
> correct number of "shadow assertions" in python using the same log messages. 
> Is there a more direct way?

Parsing a log sounds right.  However, there is currently no notion of
"counting asserts" in py.test (or in any other popular Python testing
framework I am aware of).

Do you happen to have a way to find out the number of test functions/cases
ahead of running a test file?  And to instruct cxxtest to run a particular
function?

If so, you could map cxxtest functions to py.test Items and get
more fine-grained "." progress printing and error representation.
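
A rough conftest.py sketch along the lines of the nonpython example
(the names CxxFile/CxxItem and the helpers list_test_functions() and
run_single() are made up; substitute however you enumerate and invoke
individual cxxtest functions):

    # conftest.py -- sketch only; the helpers below are assumptions
    import re, subprocess
    import py

    def pytest_collect_file(path, parent):
        # adjust to however your cxxtest suites are named
        if path.ext == ".cpp" and path.basename.startswith("test"):
            return CxxFile(path, parent)

    def list_test_functions(path):
        # hypothetical: scrape "void testXyz()" declarations from the source
        return re.findall(r"void\s+(test\w+)\s*\(", path.read())

    def run_single(path, name):
        # hypothetical: assumes your compiled runner accepts a test name
        proc = subprocess.Popen(["./runner", name],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        return proc.returncode, out

    class CxxError(Exception):
        """one cxxtest function failed"""

    class CxxFile(py.test.collect.File):
        def collect(self):
            for name in list_test_functions(self.fspath):
                yield CxxItem(name, parent=self)

    class CxxItem(py.test.collect.Item):
        def runtest(self):
            exitcode, output = run_single(self.fspath, self.name)
            if exitcode != 0:
                raise CxxError(output)

        def repr_failure(self, excinfo):
            # show the raw cxxtest output instead of a python traceback
            if excinfo.errisinstance(CxxError):
                return excinfo.value.args[0]
            return super(CxxItem, self).repr_failure(excinfo)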

Otherwise I guess you can only represent the whole cxxtest file as
one test item.  Other scenarios have me wondering how/if py.test
could better support this situation too, btw.
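
In that case a one-item fallback could reuse parse_cxxtest_log() from
the sketch above, roughly like this (run_all() again being a made-up
helper that runs the whole binary once and returns the log path):

    class CxxFileItem(py.test.collect.Item):
        # fallback: the whole cxxtest file is a single py.test item
        def runtest(self):
            logpath = run_all(self.fspath)   # hypothetical helper
            passed, failed = parse_cxxtest_log(logpath)
            if failed:
                py.test.fail("%d of %d asserts failed" % (
                    len(failed), len(failed) + len(passed)))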

cheers,
holger