On Fri, May 06, 2016 at 01:28:49PM -0700, Joe Stringer wrote:
> Is the below testsuite failure on anyone's radar? It seems to be
> failing maybe 30% of the time on Travis. Travis is known to run the
> tests on heavily loaded systems and as such is likely to randomly
> reorder thread execution which increases the likelihood of race
> conditions causing testsuite failures; perhaps this could be
> reproduced locally by running significantly more tests at a time than
> you have cores.
>
> 2027: ovn.at:2203  ovn -- 1 HVs, 2 LSs, 1 lport/LS, 1 LR
>
> https://travis-ci.org/openvswitch/ovs/jobs/128351921#L7594
>
> The test output is like this:
>
> ../../tests/ovn.at:2321: cat received.packets
> --- expout  2016-05-05 00:00:35.843273515 +0000
> +++ /home/travis/build/openvswitch/ovs/openvswitch-2.5.90/_build/tests/testsuite.dir/at-groups/2027/stdout  2016-05-05 00:00:35.843273515 +0000
> @@ -1 +1,2 @@
>  f0000001020400000001020408004500001c000000003f110100c0a80102ac100102003511110008
> +f0000001020400000001020408004500001c000000003f110100c0a80102ac100102003511110008
>
> Seems like we receive one more copy of the packet than the test
> expects. Is there a way we could use OVS_WAIT_UNTIL or something to
> address this race?
The most common races I see in the OVN tests would be addressed by the
idea I proposed here:
    http://openvswitch.org/pipermail/dev/2016-April/070041.html
(please see the remainder of the thread for refinements).  I think that
Ryan Moats (CCed) is planning to work on that.

However, it's not obvious to me how a lack of flows would cause *extra*
packets, so there might be another issue here too.
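As an aside, the kind of wait Joe asks about would look roughly like the
sketch below.  This is only an illustration, not the actual ovn.at code:
it assumes the test's expected output lives in a file named "expout" (as
the diff above suggests), and it would only help with packets that
arrive late; it cannot hide the extra copy seen here.

    dnl Hypothetical sketch: wait until at least as many packets have
    dnl been captured as the test expects before comparing, instead of
    dnl racing against packet delivery.
    OVS_WAIT_UNTIL([test $(wc -l < received.packets) -ge $(wc -l < expout)])
    AT_CHECK([cat received.packets], [0], [expout])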