Hi,

I've just landed a change that uses the great testtools result-attachment
feature (addDetail()) to attach any generated oopses to the test result.

I did this after I broke XMLRPC and had several tests that just failed
with an OOPS id on the xmlrpc client side.

On Tue, 4 May 2010 10:38:11 +0100, Jonathan Lange <[email protected]> wrote:
> Perhaps we should go a step further and fail tests if they generate
> any oopses? It would be easy enough to provide a method that flushes
> out the stored oopses list.

I think it would be good to do this as well. I'm unsure about a method
to clear self.oopses though, as it could mask issues.

How about

  def clearOopses(self, count=1):
      """Assert that exactly `count` oopses were generated, then clear them."""
      self.assertEqual(
          count, len(self.oopses),
          "An unexpected number of oopses was generated by this test: "
          "expected %d, got %d." % (count, len(self.oopses)))
      self.oopses = []

This will ensure that if you manage to trigger two oopses when you expect
only one, you will get a failure, and the code I just landed will make
sure that you can see them.

Thanks,

James

_______________________________________________
Mailing list: https://launchpad.net/~launchpad-dev
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~launchpad-dev
More help   : https://help.launchpad.net/ListHelp