On Fri, Feb 01, 2008 at 01:16:05AM -0800, Dan Price wrote:

> On Thu 31 Jan 2008 at 09:06PM, Danek Duvall wrote:
> > Looks nice.  A couple things that come to mind:
> > 
> >   - You don't seem to allow for tests whose expected failure can be
> >     suppressed.  I know we don't want to do that much, but when we're
> >     coming up to a release, it'd be nice to have a "clean" run once we've
> >     suppressed all non-stopper bugs.
> 
> I'm not aware of a way to do that in python's unittest framework.

I haven't looked at it at all, so maybe this makes no sense whatsoever, but
could you subclass something to trap the failures, note that it's an
expected one, and print success instead of failure?

> >   - There's no way to check to see that something happened correctly on the
> >     filesystem that can't be seen in return codes or a second command
> 
> I don't really understand-- wouldn't we simply call os.whatever() from
> the test case?

Yes, of course.  I was just in the mode of everything being done from your
interface, rather than using arbitrary python code.  :)
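For the record, checking the filesystem from inside a test case is as simple as mixing os calls into the assertions. A tiny sketch (the directory name and the mkdir stand-in for the real operation under test are invented for illustration):

```python
import os
import tempfile
import unittest

class FilesystemCheck(unittest.TestCase):
    """Sketch: assert on filesystem state directly with os calls,
    rather than only on return codes or a second command's output."""

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()

    def tearDown(self):
        # remove the scratch directory created in the test
        os.rmdir(os.path.join(self.tmpdir, "etc"))
        os.rmdir(self.tmpdir)

    def test_creates_directory(self):
        # stand-in for the operation under test
        os.mkdir(os.path.join(self.tmpdir, "etc"))
        # verify the effect on disk, not just a return code
        self.assertTrue(os.path.isdir(os.path.join(self.tmpdir, "etc")))
```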

> >   - It might be useful to ensure that network operations are closed
> >     forcefully if they appear to be hung.
> 
> Ok.  As far as I know, there is no timeout mechanism, another issue I
> see with unittest.  I'm not yet clear if I could add some of these
> features with a subclass of unittest, and/or if the time investment
> would be worth it...

Probably not worth it for now, but it'll be nice to know that automated
test runs will always complete, especially given that we have known
problems with client/server deadlock hangs.
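If it ever does become worth the time, one Unix-only approach is a SIGALRM-based timeout wrapper around individual tests (for sockets specifically, `socket.setdefaulttimeout()` is another option). A hedged sketch, assuming the test body doesn't use SIGALRM itself and runs in the main thread:

```python
import signal

class TestTimeout(Exception):
    """Raised when a wrapped test exceeds its time budget."""
    pass

def with_timeout(seconds):
    """Decorator sketch: interrupt a hung test via SIGALRM after
    the given number of seconds (Unix-only, main thread only)."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            def handler(signum, frame):
                raise TestTimeout("test exceeded %d seconds" % seconds)
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(seconds)
            try:
                return func(*args, **kwargs)
            finally:
                # always cancel the alarm and restore the old handler
                signal.alarm(0)
                signal.signal(signal.SIGALRM, old)
        return wrapper
    return decorate
```

A test that deadlocks then dies with `TestTimeout` instead of hanging the whole automated run.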

Danek
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss