Because?

On Tue, Jul 28, 2015 at 12:54 AM, Bernd Mathiske <be...@mesosphere.io> wrote:
> IMHO we would be better off with exception-based asserts, checks, and
> expects.
>
> Bernd
>
> > On Jul 28, 2015, at 7:53 AM, Paul Brett <pbr...@twitter.com.INVALID> wrote:
> >
> > Michael
> >
> > I think Ben's suggestion of using Try<> is just what we want for common
> > functions.
> >
> > In regards to ASSERTs, they can cause tests to be skipped within
> > instantiations of the fixtures or test case as expected.
> >
> > For example, if you look at tests such as
> > SlaveRecoveryTest::ReconnectExecutor, it has 9 ASSERTs in a single test
> > case. The first 5 are in setup code and seem pretty reasonable, but the
> > last 4 are:
> >
> > 489   // Executor should inform about the unacknowledged update.
> > 490   ASSERT_EQ(1, reregister.updates_size());
> > 491   const StatusUpdate& update = reregister.updates(0);
> > 492   ASSERT_EQ(task.task_id(), update.status().task_id());
> > 493   ASSERT_EQ(TASK_RUNNING, update.status().state());
> > 494
> > 495   // Scheduler should receive the recovered update.
> > 496   AWAIT_READY(status);
> > 497   ASSERT_EQ(TASK_RUNNING, status.get().state());
> >
> > Looking at this code, I suspect that lines 492 and 493 might be better
> > as EXPECT. What about 497? What follows afterwards is only cleanup code,
> > so either it is not necessary and we can omit it, or 497 should be an
> > EXPECT.
> >
> > Looking through the tests directory, this appears to be a common pattern.
> > Of course, it is all harmless while the code is passing the tests, but
> > when a change breaks things, the scope of the breakage can be obscured
> > because of the skipped test conditions.
> >
> > Given the restrictions you point out on the use of ASSERT, combined with
> > the ability to hide failing tests, I believe we should have a strong
> > preference for EXPECT over ASSERT unless it is clear that every
> > subsequent test in the test case is dependent on the result of this test.
> >
> > Just my 5c worth
> >
> > @paul_b
> >
> > On Mon, Jul 27, 2015 at 7:34 PM, Michael Park <mcyp...@gmail.com> wrote:
> >
> >> Paul,
> >>
> >>> With ASSERT, I completely agree with you about the perils of using
> >>> ASSERT that you list above, but additionally we have cases where
> >>> ASSERT exits a test fixture skipping later tests that might or might
> >>> not have failed.
> >>
> >> We should only be using ASSERT_* in cases where it doesn't make sense
> >> to proceed with the rest of the test if the condition fails, so exiting
> >> the test case seems like it's exactly what we would want. If you're
> >> saying that we currently use it incorrectly, then yeah, we should
> >> perhaps write a guide to help with how to use it correctly. But it
> >> sounds like you're saying we shouldn't use it at all?
> >>
> >>> Since the CHECK macro aborts the test harness, a single test failure
> >>> prevents tests from running on all the remaining tests. Particularly
> >>> annoying for anyone running automated regression tests.
> >>
> >> Perhaps my suggestion of resorting to CHECK_* was not a good one, but I
> >> still don't think EXPECT_* is what we want. If we have a condition in
> >> which it doesn't make sense to proceed with the rest of the test, we
> >> should stop. Perhaps the helper function should return a Try as Ben
> >> suggested, followed by an ASSERT_* on the result within the test case,
> >> or something like that.
> >>
> >> I mainly wanted to inform folks of this limitation and the
> >> corresponding confusing error message that follows.
> >>
> >> On 27 July 2015 at 18:42, Benjamin Mahler <benjamin.mah...@gmail.com> wrote:
> >>
> >>> Michael, note that we've avoided having ASSERT_ or EXPECT_ inside test
> >>> helper methods because they print out the line number of the helper
> >>> method, rather than the line number where the helper method was called
> >>> from the test.
> >>> This is why we've been pretty careful when adding helpers and have
> >>> tried to push assertions into the test itself (e.g. the helper returns
> >>> a Try instead of internally asserting).
> >>>
> >>> Paul, are you saying that ASSERT within one case in a fixture will
> >>> stop running all other cases for the fixture? Do you have a pointer to
> >>> this? Sounds surprising.
> >>>
> >>> On Mon, Jul 27, 2015 at 3:04 PM, Paul Brett <pbr...@twitter.com.invalid> wrote:
> >>>
> >>>> Mike
> >>>>
> >>>> I would suggest that we want to avoid both ASSERT and CHECK macros in
> >>>> tests.
> >>>>
> >>>> With ASSERT, I completely agree with you about the perils of using
> >>>> ASSERT that you list above, but additionally we have cases where
> >>>> ASSERT exits a test fixture skipping later tests that might or might
> >>>> not have failed.
> >>>>
> >>>> Since the CHECK macro aborts the test harness, a single test failure
> >>>> prevents tests from running on all the remaining tests. Particularly
> >>>> annoying for anyone running automated regression tests.
> >>>>
> >>>> We should add this to either the style guide or mesos-testing-patterns.
> >>>>
> >>>> -- @paul_b
> >>>>
> >>>> On Mon, Jul 27, 2015 at 2:28 PM, Michael Park <mcyp...@gmail.com> wrote:
> >>>>
> >>>>> I've had at least 3 individuals run into the issue of ASSERT_* macro
> >>>>> placement, and since the generated error message is less than
> >>>>> useless, I would like to share with you what the issue is.
> >>>>>
> >>>>> The source of the issue is that ASSERT_* macros from gtest can only
> >>>>> be placed in functions that return void, as described in Assertion
> >>>>> Placement
> >>>>> <https://code.google.com/p/googletest/wiki/AdvancedGuide#Assertion_Placement>.
> >>>>>
> >>>>> By placing one in a non-void function, you get useless error
> >>>>> messages like this:
> >>>>>
> >>>>> From GCC: "error: void value not ignored as it ought to be"
> >>>>>
> >>>>> From Clang: "error: could not convert
> >>>>> ‘testing::internal::AssertHelper((testing::TestPartResult::Type)2u,
> >>>>> ((const char*)"../../src/tests/containerizer/port_mapping_tests.cpp"),
> >>>>> 320,
> >>>>> gtest_ar.testing::AssertionResult::failure_message()).testing::internal::AssertHelper::operator=(testing::Message())’
> >>>>> from ‘void’ to ‘int’"
> >>>>>
> >>>>> I think the following are typical situations that result in this mess:
> >>>>>
> >>>>> - Using them in constructors/destructors. For example, it would be
> >>>>>   really confusing if you're simply translating a SetUp/TearDown of
> >>>>>   a test fixture into a constructor/destructor instead.
> >>>>> - Using them in main, since it returns an int.
> >>>>> - Refactoring a chunk of logic from a test case into a helper
> >>>>>   function that doesn't return void. For example, if we were to
> >>>>>   factor out the following code inside of a test case:
> >>>>>
> >>>>>     AWAIT_READY(offers);
> >>>>>     ASSERT_EQ(1u, offers.get().size());
> >>>>>     offer = offers.get()[0];
> >>>>>
> >>>>>   into a function like this:
> >>>>>
> >>>>>     Offer getOffer(Future<vector<Offer>>& offers)
> >>>>>     {
> >>>>>       AWAIT_READY(offers);
> >>>>>       ASSERT_EQ(1u, offers.get().size());
> >>>>>       return offers.get()[0];
> >>>>>     }
> >>>>>
> >>>>>   this innocent-looking transformation would trigger the obscure
> >>>>>   error message, and you'll be upset once you figure out why.
> >>>>>
> >>>>> If you encounter this case, prefer to resort to CHECK_* from glog,
> >>>>> rather than EXPECT_*, since CHECK_* has closer semantics.
> >>>>>
> >>>>> I hope this will help folks save time and also reduce the amount of
> >>>>> head banging!
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> MPark.
> >>>>
> >>>> --
> >>>> -- Paul Brett
> >
> > --
> > @paul_b