+1, but…

If we are going to touch all our tests anyway, then IMHO while at it we might 
as well make a jump forward to something better than the current approach of 
aborting tests in macros with a bare local return from a void function.

If we used exceptions instead, it should be easy to catch them in a wrapper 
somewhere in the test class. Then we could install some test-class-specific 
or test-superclass-specific or test-specific, but NOT macro-specific (!), extra 
code that prints out extra diagnostic info iff the test has failed. For 
example, it could dump the contents of the sandbox.
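
Here is an untested sketch of what I have in mind. gtest's default exception 
handling catches the throw, and TearDown() still runs afterwards, which is 
where the hook can live. All names (TestFailedException, FAIL_AND_ABORT, 
SandboxTest, logSandbox) are made up for illustration:

#include <stdexcept>
#include <string>

#include <gtest/gtest.h>

struct TestFailedException : std::runtime_error
{
  explicit TestFailedException(const std::string& message)
    : std::runtime_error(message) {}
};

// Records a gtest failure, then aborts the test body by throwing. Because
// it throws instead of returning, it also works inside helper functions
// with non-void return types.
#define FAIL_AND_ABORT(message)           \
  do {                                    \
    ADD_FAILURE() << (message);           \
    throw TestFailedException(message);   \
  } while (false)

class SandboxTest : public ::testing::Test
{
protected:
  void TearDown() override
  {
    // Test-class-specific failure hook: runs after every test, even when
    // the body was aborted by an exception, and fires only on failure.
    if (::testing::Test::HasFailure()) {
      logSandbox();
    }
  }

  void logSandbox() {} // Placeholder: would dump the sandbox contents.
};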

If you really don’t like exceptions (?), we could make tests return a value 
instead of void and have the macros indicate failure through that return 
value. Then we could also have a failure hook in the wrapper. (This is 
somewhat inferior, because it still does not support using the macros in 
nested methods/functions with a different return type. So I prefer exceptions. 
Not saying I want them everywhere in Mesos. Just in tests! 2c)
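
For comparison, a minimal sketch of the return-value variant (again, all 
names are made up; note that the macro only compiles inside functions 
returning bool):

#include <gtest/gtest.h>

// Records a failure and aborts the enclosing function by returning false.
// Limitation: only usable in functions that return bool.
#define EXPECT_OR_RETURN(expression)    \
  if (!(expression)) {                  \
    ADD_FAILURE() << #expression;       \
    return false;                       \
  }

static void logSandbox() {} // Placeholder failure hook.

static bool testBody()
{
  EXPECT_OR_RETURN(1 + 1 == 2);
  return true;
}

TEST(ReturnValueStyle, Example)
{
  // The wrapper is where the failure hook lives.
  if (!testBody()) {
    logSandbox();
  }
}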

I have started to modify one individual AWAIT macro for the above purpose, 
because I need more info when an unreproducible, flaky fetcher cache test 
failure happens. I need the extra info dumped then and there, when it 
happens, because I cannot get to it later. The problem with this approach is 
that it instruments only one macro. Here is what it looks like (ugly!):

// gtest appends "(gtest_ar.failure_message())" to the last macro argument,
// so the trailing "return <lambda>" ends up invoking the lambda with the
// failure message: run the hook first, then report the fatal failure.
#define AWAIT_READY_FOR_WITH_FAILURE_HOOK(actual, duration, onFailureHook) \
  GTEST_PRED_FORMAT2_( \
      AwaitAssertReady, \
      actual, \
      duration, \
      return [=](const char* message) { \
        onFailureHook(); \
        GTEST_FATAL_FAILURE_(message); \
      })

#define AWAIT_READY_WITH_FAILURE_HOOK(actual, onFailureHook) \
  AWAIT_READY_FOR_WITH_FAILURE_HOOK(actual, Seconds(15), onFailureHook)

Then I use it this way:

    AWAIT_READY_WITH_FAILURE_HOOK(
        someFuture,
        [=]() { logSandbox(); }); // action on failure by timeout

Before you ask: the following does not work, i.e. the output on failure does 
not happen, presumably because a timeout leaves the future pending, so 
onFailed never fires:

    AWAIT_READY(someFuture
      .onFailed([=]() { logSandbox(); }));

---

Always printing out all the info for every test would IMHO be prohibitively 
verbose.

An alternative would be to keep two log files: one at a lower log level that 
suffices when a test succeeds, and one at the highest log level, which 
buildbot prints only in case something bad happens. We could overwrite the 
latter after each individual test. This approach would mean extra work on the 
logging system.
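
Alternatively, glog's LogSink API might spare us the second file: buffer all 
verbose log lines per test and replay them only on failure. Untested sketch; 
BufferingSink and BufferedLogTest are made-up names:

#include <iostream>
#include <string>
#include <vector>

#include <glog/logging.h>
#include <gtest/gtest.h>

// Captures every log line in memory instead of writing it out immediately.
class BufferingSink : public google::LogSink
{
public:
  void send(
      google::LogSeverity /*severity*/,
      const char* /*full_filename*/,
      const char* base_filename,
      int line,
      const struct ::tm* /*tm_time*/,
      const char* message,
      size_t message_len) override
  {
    lines.push_back(
        std::string(base_filename) + ":" + std::to_string(line) + " " +
        std::string(message, message_len));
  }

  std::vector<std::string> lines;
};

class BufferedLogTest : public ::testing::Test
{
protected:
  void SetUp() override { google::AddLogSink(&sink); }

  void TearDown() override
  {
    google::RemoveLogSink(&sink);

    // Replay the buffered high-verbosity log only if the test failed;
    // successful tests discard it, keeping the normal output small.
    if (::testing::Test::HasFailure()) {
      for (const std::string& line : sink.lines) {
        std::cerr << line << std::endl;
      }
    }
  }

  BufferingSink sink;
};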

Opinions?

Of course, Paul’s proposal should be tackled in any case. :-)

Bernd

> On Aug 15, 2015, at 1:24 AM, Marco Massenzio <ma...@mesosphere.io> wrote:
> 
> +1
> 
> *Marco Massenzio*
> *Distributed Systems Engineer*
> http://codetrips.com
> 
> On Fri, Aug 14, 2015 at 3:46 PM, Paul Brett <pbr...@twitter.com.invalid>
> wrote:
> 
>> We are currently using the Google log CHECK macros (CHECK_SOME,
>> CHECK_NOTNULL etc) in the test harness, usually to verify test setup.  When
>> these checks fail, it causes the test harness to abort rather than simply
>> move onto the next test. The abort prevents any subsequent tests from
>> running, hiding errors and preventing the generation of the XML test
>> report.
>> 
>> I would like to propose that we eliminate the use of CHECK in the test
>> harness and replace it with the appropriate Google test macros to fail the
>> test case.
>> I am not proposing that we change the use of CHECK outside the test harness
>> (although CHECK calls in master and slave can also kill the test harness).
>> 
>> For void functions, CHECK can easily be replaced with the corresponding
>> ASSERT equivalent.
>> 
>> For non-void functions, ASSERT cannot be used because it does not return the
>> correct data type and hence we need to use a combination of ADD_FAILURE()
>> and return.
>> 
>> For example:
>> 
>>    CHECK(foo)
>> 
>> would become:
>> 
>>    if(!foo) {
>>        ADD_FAILURE();
>>        return anything;
>>    }
>> 
>> If there is general agreement, I will raise tickets to update the Mesos
>> testing patterns document and each of the test cases.
>> 
>> Thanks
>> 
>> -- Paul Brett
>> 
