> On Aug. 22, 2013, 6:39 p.m., Benjamin Hindman wrote:
> > src/master/master.cpp, line 212
> > <https://reviews.apache.org/r/13746/diff/1/?file=343857#file343857line212>
> >
> > What about adding a boolean to removeFramework which indicates what
> > kind of removal to do, i.e., whether or not to send messages to schedulers
> > or slaves?
This is certainly achievable, but I'm afraid not in a clean way. It would require boolean flags on both removeFramework and removeSlave, each of which does three things:

1) Send multiple messages.
2) Update data structures for continued operation.
3) Free up resources.

We only need the third step in the destruction process, which means the first two, which account for about 80% of the lines, would have to be wrapped in conditional branches, reducing the readability of the logic flow in these methods. Thoughts?

- Jiang Yan


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/13746/#review25420
-----------------------------------------------------------


On Aug. 22, 2013, 6:34 p.m., Jiang Yan Xu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/13746/
> -----------------------------------------------------------
> 
> (Updated Aug. 22, 2013, 6:34 p.m.)
> 
> 
> Review request for mesos, Benjamin Hindman, Ben Mahler, Ian Downes, Jie Yu,
> and Vinod Kone.
> 
> 
> Bugs: MESOS-655
>     https://issues.apache.org/jira/browse/MESOS-655
> 
> 
> Repository: mesos-git
> 
> 
> Description
> -------
> 
> - In production, Masters exit by either LOG(FATAL) or exit().
> - Also fixed tests that incorrectly relied on the messages sent by
>   Master::~Master().
> 
> 
> Diffs
> -----
> 
>   src/master/master.cpp d53b8bb97da45834790cca6e04b70b969a8d3453
>   src/tests/allocator_zookeeper_tests.cpp b84fa86274c80cc85e2b2eeadd6eb08da34433db
> 
> Diff: https://reviews.apache.org/r/13746/diff/
> 
> 
> Testing
> -------
> 
> make check on Linux and OSX
> 
> 
> Thanks,
> 
> Jiang Yan Xu
> 
>
