[jira] [Created] (MESOS-9544) SLRP does not clean up destroyed persistent volumes.

2019-01-30 Thread Chun-Hung Hsiao (JIRA)
Chun-Hung Hsiao created MESOS-9544:
--

 Summary: SLRP does not clean up destroyed persistent volumes.
 Key: MESOS-9544
 URL: https://issues.apache.org/jira/browse/MESOS-9544
 Project: Mesos
  Issue Type: Bug
  Components: storage
Affects Versions: 1.7.1, 1.7.0, 1.6.1, 1.6.0, 1.5.2, 1.5.1, 1.5.0
Reporter: Chun-Hung Hsiao
Assignee: Chun-Hung Hsiao


When a persistent volume created on a {{ROOT}} disk is destroyed, the agent 
will clean up its data: 
https://github.com/apache/mesos/blob/f44535bca811720fc272c9abad2bc78652d61fe3/src/slave/slave.cpp#L4397
However, this is not the case for PVs on SLRP disks. The agent relies on the 
SLRP to do the cleanup:
https://github.com/apache/mesos/blob/f44535bca811720fc272c9abad2bc78652d61fe3/src/slave/slave.cpp#L4472
But the SLRP simply updates its metadata and does nothing:
https://github.com/apache/mesos/blob/f44535bca811720fc272c9abad2bc78652d61fe3/src/resource_provider/storage/provider.cpp#L2805

This would lead to data leakage if the framework does not call {{DESTROY_DISK}} 
but just unreserves the disk.
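For comparison, the cleanup the agent performs for {{ROOT}} disks amounts to deleting the destroyed volume's data while keeping the directory itself. A minimal sketch of the analogous SLRP-side step (the function, path layout, and names are hypothetical illustrations, not the actual Mesos code):

```python
import os
import shutil

def cleanup_destroyed_volume(csi_mount_root, volume_id):
    """Remove the data left behind by a destroyed persistent volume.

    `csi_mount_root` and `volume_id` are hypothetical stand-ins for the
    SLRP's mount directory layout; the real provider would derive the
    path from its checkpointed state.
    """
    volume_path = os.path.join(csi_mount_root, volume_id)
    if os.path.exists(volume_path):
        # Delete the contents but keep the (possibly still mounted)
        # directory itself, mirroring what the agent does for ROOT disks.
        for entry in os.listdir(volume_path):
            entry_path = os.path.join(volume_path, entry)
            if os.path.isdir(entry_path) and not os.path.islink(entry_path):
                shutil.rmtree(entry_path)
            else:
                os.remove(entry_path)
```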



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (MESOS-9219) StorageLocalResourceProviderTest.ProfileDisappeared is flaky

2019-01-30 Thread Chun-Hung Hsiao (JIRA)


[ 
https://issues.apache.org/jira/browse/MESOS-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756636#comment-16756636
 ] 

Chun-Hung Hsiao commented on MESOS-9219:


[~bbannier] Do you have the full log? I'm touching this test, so I was wondering 
whether this will be fixed along with https://reviews.apache.org/r/69866/.

> StorageLocalResourceProviderTest.ProfileDisappeared is flaky
> 
>
> Key: MESOS-9219
> URL: https://issues.apache.org/jira/browse/MESOS-9219
> Project: Mesos
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.8.0
> Environment: ubuntu-16.04-cmake-default
>Reporter: Benjamin Bannier
>Priority: Major
>  Labels: flaky, storage
>
> We observed a test flake in 
> {{StorageLocalResourceProviderTest.ProfileDisappeared}} in our CI while 
> testing master (aka c9c2be8982d3585994493dab9916c89deb61ec48). I have not 
> confirmed whether this also affects 1.7.x.
> The test failed with
> {noformat}
> I/home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-ubuntu-16.04/mesos/src/tests/storage_local_resource_provider_tests.cpp:1217:
>  Failure
> Failed to wait 15secs for createVolumeStatus
> 0907 11:27:24.638801  3720 provider.cpp:3050] Applying conversion from 
> 'disk(allocated: storage)(reservations: 
> [(DYNAMIC,storage)])[RAW(,test1)]:2048' to 'disk(allocated: 
> storage)(reservations: 
> [(DYNAMIC,storage)])[MOUNT(8dc0abf1-ddec-44c9-a67b-4c4428a26312,test1)]:2048' 
> for operation (uuid: 8dc0abf1-ddec-44c9-a67b-4c4428a26312)
> /home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-ubuntu-16.04/mesos/src/tests/storage_local_resource_provider_tests.cpp:1157:
>  Failure
> Actual function call count doesn't match EXPECT_CALL(sched, 
> resourceOffers(&driver, OffersHaveAnyResource( 
> &Resources::hasResourceProvider)))...
>  Expected: to be called at least 4 times
>Actual: called once - unsatisfied and active
> {noformat}
> and
> {noformat}
> /home/ubuntu/workspace/mesos/Mesos_CI-build/FLAG/CMake/label/mesos-ec2-ubuntu-16.04/mesos/3rdparty/libprocess/src/../include/process/gmock.hpp:504:
>  Failure
> Actual function call count doesn't match EXPECT_CALL(filter->mock, filter(to, 
> testing::A()))...
> Expected args: message matcher (32-byte object <10-99 62-06 00-00 00-00 
> 2B-00 00-00 00-00 00-00 2B-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00>, 
> 1-byte object <10>)
>  Expected: to be called once
>Actual: never called - unsatisfied and active
> {noformat}





[jira] [Created] (MESOS-9543) Consider improving benchmark result output

2019-01-30 Thread Benjamin Bannier (JIRA)
Benjamin Bannier created MESOS-9543:
---

 Summary: Consider improving benchmark result output
 Key: MESOS-9543
 URL: https://issues.apache.org/jira/browse/MESOS-9543
 Project: Mesos
  Issue Type: Improvement
  Components: test
Reporter: Benjamin Bannier


We should consider improving how benchmarks report their results.

As an example, consider 
{{SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.DeclineOffers/1}}.
 It logs lines like
{noformat}
[==] Running 10 tests from 1 test case.
[--] Global test environment set-up.
[--] 10 tests from 
SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test
[ RUN  ] 
SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.DeclineOffers/0
Using 1000 agents and 1 frameworks
Added 1 frameworks in 526091ns
Added 1000 agents in 61.116343ms
round 0 allocate() took 14.70722ms to make 0 offers after filtering 1000 offers
round 1 allocate() took 15.055396ms to make 0 offers after filtering 1000 offers
[   OK ] 
SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.DeclineOffers/0 
(135 ms)
[ RUN  ] 
SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test.DeclineOffers/1
{noformat}
I believe there are a number of usability issues with this output format:
 * lines with benchmark data need to be {{grep}}'d from the test log using some 
test-dependent format
 * test parameters need to be manually inferred from the test name
 * no consistent time unit is used throughout; instead, {{Duration}} values 
are just pretty-printed

This makes it hard to consume these results in a generic way (e.g., for 
plotting or comparison), since doing so likely requires implementing a custom 
log parser for each test.

We should consider introducing a generic way to log results from tests that 
requires minimal per-test intervention.

One possible output format is JSON, as it allows combining heterogeneous data 
as in the above example (which might be harder to do in CSV). A number of 
standard tools exist which can filter JSON data; it can also be read by many 
data analysis tools (e.g., {{pandas}}). Example for the above data:
{noformat}
{
    "case": "SlaveAndFrameworkCount/HierarchicalAllocator_BENCHMARK_Test",
    "test": "DeclineOffers/0",
    "parameters": [1000, 1],
    "benchmarks": {
        "add_agents": [61.116343],
        "add_frameworks": [0.526091],
        "allocate": [
            {"round": 0, "time": 14.70722, "offers": 0, "filtering": 1000},
            {"round": 1, "time": 15.055396, "offers": 0, "filtering": 1000}
        ]
    }
}
{noformat}
Such data could be logged at the end of the test execution with a clear prefix, 
so that results from many benchmark runs in a single log file can be aggregated 
with tools like {{grep}}. We could provide this in addition to what is already 
logged (which might be generated by the same tool).
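A sketch of how such prefixed result lines could be aggregated from a test log (the {{BENCHMARK_JSON:}} prefix and the helper are hypothetical, just one way to realize the proposal):

```python
import json

# Hypothetical marker a benchmark test would emit before each result record.
PREFIX = "BENCHMARK_JSON: "

def collect_benchmark_results(log_lines):
    """Extract every prefixed JSON result record from a test log.

    All other log lines (gtest banners, per-round prose, etc.) are
    ignored, so no per-test parser is needed.
    """
    results = []
    for line in log_lines:
        if line.startswith(PREFIX):
            results.append(json.loads(line[len(PREFIX):]))
    return results
```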





[jira] [Assigned] (MESOS-9507) Agent could not recover due to empty docker volume checkpointed files.

2019-01-30 Thread Gilbert Song (JIRA)


 [ 
https://issues.apache.org/jira/browse/MESOS-9507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilbert Song reassigned MESOS-9507:
---

Assignee: Gilbert Song
  Sprint: Containerization RI10 Spr 39
Story Points: 5

> Agent could not recover due to empty docker volume checkpointed files.
> --
>
> Key: MESOS-9507
> URL: https://issues.apache.org/jira/browse/MESOS-9507
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
>Reporter: Gilbert Song
>Assignee: Gilbert Song
>Priority: Critical
>  Labels: containerizer
>
> Agent could not recover due to empty docker volume checkpointed files. Please 
> see logs:
> {noformat}
> Nov 12 17:12:00 guppy mesos-agent[38960]: E1112 17:12:00.978682 38969 
> slave.cpp:6279] EXIT with status 1: Failed to perform recovery: Collect 
> failed: Collect failed: Failed to recover docker volumes for orphan container 
> e1b04051-1e4a-47a9-b866-1d625cda1d22: JSON parse failed: syntax error at line 
> 1 near:
> Nov 12 17:12:00 guppy mesos-agent[38960]: To remedy this do as follows: 
> Nov 12 17:12:00 guppy mesos-agent[38960]: Step 1: rm -f 
> /var/lib/mesos/slave/meta/slaves/latest
> Nov 12 17:12:00 guppy mesos-agent[38960]: This ensures agent doesn't recover 
> old live executors.
> Nov 12 17:12:00 guppy mesos-agent[38960]: Step 2: Restart the agent. 
> Nov 12 17:12:00 guppy systemd[1]: dcos-mesos-slave.service: main process 
> exited, code=exited, status=1/FAILURE
> Nov 12 17:12:00 guppy systemd[1]: Unit dcos-mesos-slave.service entered 
> failed state.
> Nov 12 17:12:00 guppy systemd[1]: dcos-mesos-slave.service failed.
> {noformat}
> This is caused by agent recovery after the volume state file is created but 
> before checkpointing finishes. Basically the docker volume is not mounted 
> yet, so the docker volume isolator should skip recovering this volume.
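The proposed fix amounts to treating an empty (or unparsable) checkpoint file as "volume never mounted" and skipping it instead of aborting recovery. A hedged sketch of that recovery policy (function and file layout are illustrative, not the actual isolator code):

```python
import json
import os

def recover_volume_states(checkpoint_paths):
    """Recover docker volume state files, skipping empty or corrupt ones.

    An empty file means the agent crashed after creating the state file
    but before checkpointing finished, i.e. the volume was never mounted,
    so it is safe to skip it rather than fail the whole recovery.
    """
    recovered, skipped = [], []
    for path in checkpoint_paths:
        if os.path.getsize(path) == 0:
            skipped.append(path)
            continue
        try:
            with open(path) as f:
                recovered.append(json.load(f))
        except json.JSONDecodeError:
            # A torn/partial write; also not safe to treat as mounted.
            skipped.append(path)
    return recovered, skipped
```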





[jira] [Commented] (MESOS-9533) CniIsolatorTest.ROOT_CleanupAfterReboot is flaky.

2019-01-30 Thread Gilbert Song (JIRA)


[ 
https://issues.apache.org/jira/browse/MESOS-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756440#comment-16756440
 ] 

Gilbert Song commented on MESOS-9533:
-

Yeah, I did not realize the test was backported. I will backport the fix now.

> CniIsolatorTest.ROOT_CleanupAfterReboot is flaky.
> -
>
> Key: MESOS-9533
> URL: https://issues.apache.org/jira/browse/MESOS-9533
> Project: Mesos
>  Issue Type: Bug
>  Components: cni, containerization
>Affects Versions: 1.8.0
> Environment: centos-6 with SSL enabled
>Reporter: Gilbert Song
>Assignee: Gilbert Song
>Priority: Major
>  Labels: cni, flaky-test
> Fix For: 1.4.3, 1.5.3, 1.6.2, 1.7.2, 1.8.0
>
>
> {noformat}
> Error Message
> ../../src/tests/containerizer/cni_isolator_tests.cpp:2685
> Mock function called more times than expected - returning directly.
> Function call: statusUpdate(0x7fffc7c05aa0, @0x7fe637918430 136-byte 
> object <80-24 29-45 E6-7F 00-00 00-00 00-00 00-00 00-00 3E-E8 00-00 00-00 
> 00-00 00-B8 0E-20 F0-55 00-00 C0-03 07-18 E6-7F 00-00 20-17 05-18 E6-7F 00-00 
> 10-50 05-18 E6-7F 00-00 50-D1 04-18 E6-7F 00-00 ... 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 F0-89 16-E9 58-2B D7-41 00-00 00-00 01-00 00-00 18-00 00-00 
> 0B-00 00-00>)
>  Expected: to be called 3 times
>Actual: called 4 times - over-saturated and active
> Stacktrace
> ../../src/tests/containerizer/cni_isolator_tests.cpp:2685
> Mock function called more times than expected - returning directly.
> Function call: statusUpdate(0x7fffc7c05aa0, @0x7fe637918430 136-byte 
> object <80-24 29-45 E6-7F 00-00 00-00 00-00 00-00 00-00 3E-E8 00-00 00-00 
> 00-00 00-B8 0E-20 F0-55 00-00 C0-03 07-18 E6-7F 00-00 20-17 05-18 E6-7F 00-00 
> 10-50 05-18 E6-7F 00-00 50-D1 04-18 E6-7F 00-00 ... 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 F0-89 16-E9 58-2B D7-41 00-00 00-00 01-00 00-00 18-00 00-00 
> 0B-00 00-00>)
>  Expected: to be called 3 times
>Actual: called 4 times - over-saturated and active
> {noformat}
> It was from this commit 
> https://github.com/apache/mesos/commit/c338f5ada0123c0558658c6452ac3402d9fbec29





[jira] [Assigned] (MESOS-9542) Hierarchical allocator check failure when an operation on a shutdown framework finishes

2019-01-30 Thread Greg Mann (JIRA)


 [ 
https://issues.apache.org/jira/browse/MESOS-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Mann reassigned MESOS-9542:


Assignee: Joseph Wu

> Hierarchical allocator check failure when an operation on a shutdown 
> framework finishes
> ---
>
> Key: MESOS-9542
> URL: https://issues.apache.org/jira/browse/MESOS-9542
> Project: Mesos
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.7.0, 1.7.1, 1.8.0
>Reporter: Benjamin Bannier
>Assignee: Joseph Wu
>Priority: Blocker
>  Labels: foundations, mesosphere, mesosphere-dss-ga, 
> operation-feedback
>
> When a non-speculated operation like {{CREATE_DISK}} becomes terminal 
> after the originating framework was torn down, we run into an assertion 
> failure in the allocator.
> {noformat}
> I0129 11:55:35.764394 57857 master.cpp:11373] Updating the state of operation 
> 'operation' (uuid: 10a782bd-9e60-42da-90d6-c00997a25645) for framework 
> a4d0499b-c0d3-4abf-8458-73e595d061ce- (latest state: OPERATION_PENDING, 
> status update state: OPERATION_FINISHED)
> F0129 11:55:35.764744 57925 hierarchical.cpp:834] Check failed: 
> frameworks.contains(frameworkId){noformat}
> With non-speculated operations like {{CREATE_DISK}}, it became possible 
> for operations to outlive their originating framework. This was not possible 
> with speculated operations like {{RESERVE}}, which were always applied 
> immediately by the master.
> The master does not take this into account; instead it unconditionally calls 
> {{Allocator::updateAllocation}}, which asserts that the framework is still 
> known to the allocator.
> Reproducer:
>  * register a framework with the master.
>  * add an agent with a resource provider.
>  * let the framework trigger a non-speculated operation like {{CREATE_DISK.}}
>  * tear down the framework before a terminal operation status update reaches 
> the master; this causes the master to e.g., remove the framework from the 
> allocator.
>  * let a terminal, successful operation status update reach the master
>  * 💥 
> To solve this we should clean up the lifetimes of operations. Since operations 
> can outlive their framework (unlike e.g., tasks), we probably need a 
> different approach here.
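One possible guard on the master side, sketched in Python with hypothetical method names (the real fix additionally needs to reason about operation lifetimes, as the ticket notes):

```python
def handle_operation_update(allocator, framework_id, operation, conversions):
    """Apply a terminal operation update, tolerating torn-down frameworks.

    Unconditionally calling update_allocation() trips the
    frameworks.contains(frameworkId) check when the framework was already
    removed, so the master must check for the framework first.
    All names here are illustrative stand-ins, not the Mesos API.
    """
    if allocator.contains_framework(framework_id):
        allocator.update_allocation(framework_id, operation, conversions)
    else:
        # The framework was torn down before the update arrived; the
        # converted resources still have to be returned to the allocator
        # rather than silently dropped.
        allocator.update_available(operation.slave_id, conversions)
```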





[jira] [Assigned] (MESOS-9477) Documentation for operation feedback

2019-01-30 Thread Greg Mann (JIRA)


 [ 
https://issues.apache.org/jira/browse/MESOS-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Mann reassigned MESOS-9477:


Assignee: Greg Mann

> Documentation for operation feedback
> 
>
> Key: MESOS-9477
> URL: https://issues.apache.org/jira/browse/MESOS-9477
> Project: Mesos
>  Issue Type: Task
>  Components: documentation
>Reporter: Greg Mann
>Assignee: Greg Mann
>Priority: Major
>  Labels: documentation, foundations, mesosphere
>






[jira] [Assigned] (MESOS-9473) Add end to end tests for operations on agent default resources.

2019-01-30 Thread Greg Mann (JIRA)


 [ 
https://issues.apache.org/jira/browse/MESOS-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Mann reassigned MESOS-9473:


Assignee: Gastón Kleiman

> Add end to end tests for operations on agent default resources.
> ---
>
> Key: MESOS-9473
> URL: https://issues.apache.org/jira/browse/MESOS-9473
> Project: Mesos
>  Issue Type: Task
>  Components: master
>Reporter: Gastón Kleiman
>Assignee: Gastón Kleiman
>Priority: Major
>  Labels: foundations, mesosphere, operation-feedback
>
> Making note of particular cases we need to test:
> * Verify that frameworks will receive OPERATION_GONE_BY_OPERATOR for 
> operations on agent default resources when an agent is marked gone
> * Verify that frameworks will receive OPERATION_GONE_BY_OPERATOR when they 
> reconcile operations on agents which have been marked gone





[jira] [Commented] (MESOS-9210) Mesos v1 scheduler library does not properly handle SUBSCRIBE retries

2019-01-30 Thread Till Toenshoff (JIRA)


[ 
https://issues.apache.org/jira/browse/MESOS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756207#comment-16756207
 ] 

Till Toenshoff commented on MESOS-9210:
---

1.5.x:
{noformat}
commit 5f21540e5f45d05e7b5c4450f5074bc30d254fa1
Author: Till Toenshoff 
Date: Wed Jan 30 13:51:34 2019 +0100

Fixed scheduler library on multiple SUBSCRIBE requests per connection.

The HTTP scheduler API dictates that on a single connection, the
scheduler may only send a single SUBSCRIBE request. Due to recent
authentication related changes, this contract got broken. This patch
restores the contract.

Review: https://reviews.apache.org/r/69839/

(cherry picked from commit a5b9fcafbdf8663707e19c818be8a2da1eff8622){noformat}
1.6.x:
{noformat}
commit d97fa5cd0d3c72d7f89b443603f255d57571b07c
Author: Till Toenshoff 
Date: Wed Jan 30 13:51:34 2019 +0100

Fixed scheduler library on multiple SUBSCRIBE requests per connection.

The HTTP scheduler API dictates that on a single connection, the
scheduler may only send a single SUBSCRIBE request. Due to recent
authentication related changes, this contract got broken. This patch
restores the contract.

Review: https://reviews.apache.org/r/69839/

(cherry picked from commit a5b9fcafbdf8663707e19c818be8a2da1eff8622){noformat}
1.7.x:
{noformat}
commit 4018683beeebadf6722a722ed5d9d8a7e396de8e
Author: Till Toenshoff 
Date: Wed Jan 30 05:37:17 2019 +0100

Fixed scheduler library on multiple SUBSCRIBE requests per connection.

The HTTP scheduler API dictates that on a single connection, the
scheduler may only send a single SUBSCRIBE request. Due to recent
authentication related changes, this contract got broken. This patch
restores the contract and adds a test validating that the library is
enforcing it.

Review: https://reviews.apache.org/r/69839/

(cherry picked from commit a5b9fcafbdf8663707e19c818be8a2da1eff8622){noformat}

> Mesos v1 scheduler library does not properly handle SUBSCRIBE retries
> -
>
> Key: MESOS-9210
> URL: https://issues.apache.org/jira/browse/MESOS-9210
> Project: Mesos
>  Issue Type: Bug
>Affects Versions: 1.5.1, 1.6.1, 1.7.0
>Reporter: Vinod Kone
>Assignee: Till Toenshoff
>Priority: Major
>  Labels: integration, mesosphere
>
> After the authentication-related refactor done as part of 
> https://reviews.apache.org/r/62594/, the state of the scheduler is checked 
> in `send` 
> (https://github.com/apache/mesos/blob/master/src/scheduler/scheduler.cpp#L234)
> but it is changed in `_send` 
> (https://github.com/apache/mesos/blob/master/src/scheduler/scheduler.cpp#L234).
> As a result, we can have 2 SUBSCRIBE calls in flight at the same time on the 
> same connection! This is not spec-compliant for an HTTP client that is 
> expecting a streaming response.
> We need to fix the library to either drop the retried SUBSCRIBE call if one 
> is in progress (as it was before the refactor), or close the old connection 
> and start a new connection to send the retried SUBSCRIBE call.
>  
>  
>  
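The contract the patch restores can be sketched as a small state machine: at most one SUBSCRIBE may be outstanding per connection, so a retried SUBSCRIBE is dropped while one is in flight. A simplified illustration, not the library's actual interface:

```python
class SchedulerConnection:
    """Tracks whether a SUBSCRIBE is already in flight on this connection.

    Hypothetical model of the fixed behavior: the HTTP scheduler API
    allows only one SUBSCRIBE per connection, so retries are dropped
    while the original is still awaiting its streaming response.
    """

    def __init__(self):
        self.subscribe_in_flight = False
        self.sent = []

    def send(self, call_type):
        if call_type == "SUBSCRIBE":
            if self.subscribe_in_flight:
                return False  # retried SUBSCRIBE dropped, contract kept
            self.subscribe_in_flight = True
        self.sent.append(call_type)
        return True
```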





[jira] [Commented] (MESOS-9533) CniIsolatorTest.ROOT_CleanupAfterReboot is flaky.

2019-01-30 Thread Alexander Rukletsov (JIRA)


[ 
https://issues.apache.org/jira/browse/MESOS-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16755853#comment-16755853
 ] 

Alexander Rukletsov commented on MESOS-9533:


I've reopened this because I have observed the same failure on the {{1.7.x}} 
branch. I've also set the fix versions to match those in MESOS-9518, since I 
suppose those are the branches where the test has been reintroduced.

> CniIsolatorTest.ROOT_CleanupAfterReboot is flaky.
> -
>
> Key: MESOS-9533
> URL: https://issues.apache.org/jira/browse/MESOS-9533
> Project: Mesos
>  Issue Type: Bug
>  Components: cni, containerization
>Affects Versions: 1.8.0
> Environment: centos-6 with SSL enabled
>Reporter: Gilbert Song
>Assignee: Gilbert Song
>Priority: Major
>  Labels: cni, flaky-test
> Fix For: 1.4.3, 1.5.3, 1.6.2, 1.7.2, 1.8.0
>
>
> {noformat}
> Error Message
> ../../src/tests/containerizer/cni_isolator_tests.cpp:2685
> Mock function called more times than expected - returning directly.
> Function call: statusUpdate(0x7fffc7c05aa0, @0x7fe637918430 136-byte 
> object <80-24 29-45 E6-7F 00-00 00-00 00-00 00-00 00-00 3E-E8 00-00 00-00 
> 00-00 00-B8 0E-20 F0-55 00-00 C0-03 07-18 E6-7F 00-00 20-17 05-18 E6-7F 00-00 
> 10-50 05-18 E6-7F 00-00 50-D1 04-18 E6-7F 00-00 ... 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 F0-89 16-E9 58-2B D7-41 00-00 00-00 01-00 00-00 18-00 00-00 
> 0B-00 00-00>)
>  Expected: to be called 3 times
>Actual: called 4 times - over-saturated and active
> Stacktrace
> ../../src/tests/containerizer/cni_isolator_tests.cpp:2685
> Mock function called more times than expected - returning directly.
> Function call: statusUpdate(0x7fffc7c05aa0, @0x7fe637918430 136-byte 
> object <80-24 29-45 E6-7F 00-00 00-00 00-00 00-00 00-00 3E-E8 00-00 00-00 
> 00-00 00-B8 0E-20 F0-55 00-00 C0-03 07-18 E6-7F 00-00 20-17 05-18 E6-7F 00-00 
> 10-50 05-18 E6-7F 00-00 50-D1 04-18 E6-7F 00-00 ... 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 
> 00-00 00-00 00-00 F0-89 16-E9 58-2B D7-41 00-00 00-00 01-00 00-00 18-00 00-00 
> 0B-00 00-00>)
>  Expected: to be called 3 times
>Actual: called 4 times - over-saturated and active
> {noformat}
> It was from this commit 
> https://github.com/apache/mesos/commit/c338f5ada0123c0558658c6452ac3402d9fbec29


