On 11/24/2014 01:02 PM, pcrews wrote:
On 11/24/2014 09:40 AM, Ben Nemec wrote:
On 11/24/2014 08:50 AM, Matthew Gilliard wrote:
1/ assertFalse() vs assertEqual(x, False) - these are semantically
different because of Python's notion of truthiness, so I don't think
we ought to make this a rule.
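A quick sketch of the difference (any falsy value satisfies
assertFalse(), but only something that compares equal to False
satisfies assertEqual(x, False)):

    import unittest

    class TruthinessExample(unittest.TestCase):
        def test_truthiness(self):
            # assertFalse() only checks truthiness, so any falsy value passes:
            self.assertFalse([])          # passes: an empty list is falsy
            self.assertFalse(0)           # passes: 0 is falsy
            # assertEqual(x, False) compares with ==, which is stricter:
            self.assertEqual(0, False)    # still passes: 0 == False in Python
            self.assertEqual([], False)   # fails: [] does not compare equal to False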

2/ expected/actual - incorrect failure messages have cost me more time
than I should admit to. I don't see any reason not to try to improve
in this area, even if it's difficult to automate.

Personally I'd rather kill the expected, actual ordering and just have
first, second or something that doesn't imply which value is which.
Because it can't be automatically enforced, we'll _never_ fix all of the
expected, actual mistakes (and will continually introduce new ones), so
I'd prefer to eliminate the confusion by not requiring a specific
ordering.

++.  It should be a part of review to ensure that the test (including
error messages) makes sense.  Simply having a (seemingly costly to
implement and enforce) rule stating that something must adhere to a
pattern does not guarantee that.

So, as a proponent of the (expected, actual) parameter order thing, I'll
just say that I actually agree with Patrick and Ben about NOT having a
hacking rule for this. The reason is the one Ben noted: there's really
no way to programmatically check for this.

assertEqual(expected, actual, msg="nom nom nom cookie cookie yum")
matches the pattern, but the message still doesn't necessarily provide
much value.

Focusing on making tests informative and clear about what is thought to
be broken on failure seems to be the better target (imo).

Agreed. And for me, I think pointing out that the default failure message for testtools.TestCase.assertEqual() uses the terms "reference" (expected) and "actual" is a reason why reviewers *should* ask patch submitters to use (expected, actual) ordering. I just don't think it's something that can be hacking-rule-tested for...
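To make that concrete, here's a rough sketch (inside a testtools
TestCase method; instance.status is just a made-up attribute, and the
exact failure text depends on the testtools version):

    # Say the code under test wrongly returns 'ACTIVE' when we expect 'ERROR'.
    # With the arguments swapped, testtools labels the first argument as the
    # reference (expected) value, so the failure message reads as though
    # 'ACTIVE' was expected and 'ERROR' was observed, exactly backwards:
    self.assertEqual(instance.status, 'ERROR')

    # With (expected, actual) ordering, the labels line up with reality:
    self.assertEqual('ERROR', instance.status)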

Best,
-jay

Alternatively, I suppose we could require kwargs for expected and actual
in assertEqual.  That would at least make it more obvious when someone
has gotten it backward, but again that's a ton of code churn for minimal
gain IMHO.
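Roughly, that would look something like the sketch below. (Caveat: the
keyword names depend on the implementation; testtools, as far as I
recall, uses 'expected' and 'observed', while stdlib unittest uses
'first' and 'second', so 'actual' as a keyword would be new.)

    # Explicit keywords would make a swapped call stand out in review.
    # 'result' here is just a placeholder for whatever the test computed:
    self.assertEqual(expected='ERROR', observed=result)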


3/ warn{ing} -
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L322
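For anyone curious what such a check looks like: hacking checks are
small functions run over each logical line of source that yield an
(offset, message) pair for violations. A rough sketch in that style,
assuming point 3/ is about LOG.warn vs LOG.warning; this is not the
actual nova code at that link, and N3xx is a placeholder code:

    import re

    log_warn_re = re.compile(r"\bLOG\.warn\(")

    def check_no_log_warn(logical_line):
        """N3xx: use LOG.warning() rather than the deprecated LOG.warn()."""
        if log_warn_re.search(logical_line):
            yield (0, "N3xx: Use LOG.warning() instead of LOG.warn()")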


On the overarching point: There is no way to get started with
OpenStack, other than starting small.  My first ever patch (a tidy-up)
was rejected for being trivial, and that was confusing and
disheartening. Nova has a lot on its plate, sure, and plenty of
pending code reviews.  But there is also a lot of inconsistency and
unloved code which *is* worth fixing, because a tidy codebase is a joy
to work with, *and* these changes are ideal to bring new reviewers and
developers into the project.

Linus' post on this from the LKML is almost a decade old (!) but
worth reading.
https://lkml.org/lkml/2004/12/20/255

   MG

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
