-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/63589/#review190748
-----------------------------------------------------------
PASS: Mesos patch 63589 was successfully built and tested.

Reviews applied: `['63589']`

All the build artifacts are available at:
http://dcos-win.westus.cloudapp.azure.com/mesos-build/review/63589

- Mesos Reviewbot Windows


On Nov. 6, 2017, 6:41 p.m., Andrei Budnik wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/63589/
> -----------------------------------------------------------
> 
> (Updated Nov. 6, 2017, 6:41 p.m.)
> 
> 
> Review request for mesos, Alexander Rukletsov, Benno Evers, and Gilbert Song.
> 
> 
> Bugs: MESOS-7506
>     https://issues.apache.org/jira/browse/MESOS-7506
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Previously, some tests tried to advance the clock until the task status
> update was sent, while the task's container was being destroyed.
> Container destruction consists of multiple steps, some of which have a
> timeout, e.g. `cgroups::DESTROY_TIMEOUT`. This created a race between
> the container destruction process and the loop that advanced the clock,
> with two possible outcomes:
> 
> (1) The container is destroyed before the advancing clock reaches the
>     timeout.
> 
> (2) The advancing clock triggers the timeout before container
>     destruction completes. This leaves orphaned containers behind,
>     which are detected by the `Slave` destructor in
>     `tests/cluster.cpp`, so the test fails.
> 
> This change gets rid of the loop and resumes the clock after a single
> clock advance.
> 
> 
> Diffs
> -----
> 
>   src/tests/slave_recovery_tests.cpp 64bba047c6eaee563126a8bd1c6fa048f18172e1
>   src/tests/slave_tests.cpp cf2fbac4cc53d632c385eb72adb0d80ef942e8a6
> 
> 
> Diff: https://reviews.apache.org/r/63589/diff/2/
> 
> 
> Testing
> -------
> 
> 1. make check
> 2. internal ci (5x)
> 
> 
> Thanks,
> 
> Andrei Budnik
