On Mon, Feb 15, 2010 at 08:30 +0100, Pontus Åström wrote:
> Ronny Pfannschmidt wrote:
> > On Fri, 2010-02-12 at 22:23 +0100, holger krekel wrote:
> >
> > I have some basic ideas about structuring that kind of test.
> >
> > A) stepped reporting, so each test reports the current step
> > B) collection of dependent test items, having each test item as a `step`
> >
> > A requires extending the py.test reporting (but might be easy)
> > B requires extending the py.test test execution
>
> Could you just elaborate a bit on the above items and give the
> rationale for each approach? I currently have some difficulty
> understanding what you mean.
Regarding approach B, I recommend checking out the fine docs that Ronny
pointed to, in this file:

    http://bitbucket.org/aafshar/glashammer-testing/src/tip/glashammer/utils/testing.py

Regarding approach A: currently test items are collected, executed and
reported. They are the basic unit of testing, and they are meant to be
independent and isolated from each other, although they may share (and
often do share) fixture code. Domain-specific acceptance tests may run
longer, and they may involve "logical" steps that need to happen in a
certain order. Mapping those steps to test items conflicts with the
isolation above. One idea is to simply make reporting more flexible and
signal "step" results to the terminal (or other) reporters, and otherwise
keep the isolation. This is approach A.

B works now, and I'd help to make A work if there are concrete use cases
and no better solution. Rough sketches of both approaches follow below.

HTH,
holger
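A minimal sketch of approach B, not the glashammer code itself but the
well-known "incremental" conftest pattern that approximates it with stock
hooks: tests in a marked class act as ordered steps, and once one step
fails the remaining ones are expected-failed instead of running blindly.
Written with the modern `pytest` spelling; in 2010 the entry point was
still `py.test`.

    # conftest.py -- approximate approach B: treat tests in a class
    # marked "incremental" as dependent, ordered steps.
    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords and call.excinfo is not None:
            # remember the failing step on the containing class/module
            item.parent._previousfailed = item

    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous step failed (%s)" % previousfailed.name)

Used like this, test_pay is reported as xfail once test_add_item fails:

    import pytest

    @pytest.mark.incremental
    class TestCheckout:
        def test_create_cart(self):
            pass

        def test_add_item(self):
            assert False  # this step fails ...

        def test_pay(self):
            pass  # ... so this step is xfailed rather than run in isolation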
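And a minimal sketch of what approach A might look like. To be clear,
none of this is an existing py.test feature: the `step` fixture and its
output format are made up for illustration. The point is that the test
stays a single isolated item and merely signals step results through the
terminal reporter.

    # conftest.py -- hypothetical sketch of approach A: one isolated
    # test item that reports named steps as it executes them.
    import pytest

    @pytest.fixture
    def step(request):
        reporter = request.config.pluginmanager.get_plugin("terminalreporter")

        def report(name):
            # write a step line through the terminal reporter, tagged
            # with the name of the test item currently running
            if reporter is not None:
                reporter.write_line("STEP [%s] %s" % (request.node.name, name))
        return report

    def test_order_flow(step):
        step("create order")
        order = {"items": []}
        step("add item")
        order["items"].append("book")
        step("pay")
        assert order["items"]

With `pytest -s` the step lines show up as the test runs (otherwise they
interleave with the progress output); a real implementation would
presumably add a proper reporting hook so that non-terminal reporters
can pick up step results too.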