Re: Review Request 70609: Added missing onDiscard handler in timeout case for `cgroups::destroy`.

2019-05-09 Thread Gilbert Song


> On May 8, 2019, 5:25 p.m., Gilbert Song wrote:
> > src/linux/cgroups.cpp
> > Lines 1605-1616 (original), 1605-1617 (patched)
> > 
> >
> > Seems like the commit description's `onDiscarded` does not align with the 
> > implementation's `onDiscard`. If we call `onDiscard` here, it will shortcut 
> > the `onAny` below. Neither seems to make sense here.
> > 
> > I do not see any `hasDiscard` handler or `onDiscard` callback 
> > associated with this future. Probably we should consider removing 
> > `_destroy()`?

OK, please ignore my proposal above.

Still, the patch seems to be neither a fix nor a workaround. We cannot simulate 
the stuck issue that way.

There are two cases:
1. freezer cgroup - it is fine to call `future.discard()` and `.onAny` because 
the destructor of the `internal::destroyer` will discard the promise.
2. systemd (our case in this issue) - `cgroups::remove()` is a blocking call, 
which means the `.after()` 1-minute timeout had not been triggered yet. We were 
stuck at `cgroups::remove()`. 
https://github.com/apache/mesos/blob/master/src/linux/cgroups.cpp#L1574

We probably need to run more experiments to understand why the systemd 
hierarchy gets stuck at our `cgroups::remove()`.
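
To illustrate the second case, here is a minimal standalone C++ sketch (not the
Mesos code; names and control flow are illustrative only) of why a timeout
guard cannot help once the blocking call has started:

```
#include <chrono>
#include <iostream>
#include <thread>

// Stand-in for cgroups::remove() hanging on a systemd hierarchy.
void blockingRemove()
{
  std::this_thread::sleep_for(std::chrono::hours(24));
}

int main()
{
  std::cout << "destroying cgroup..." << std::endl;

  blockingRemove(); // Execution never gets past this line...

  // ...so the code that would arm the 1-minute timeout on the returned
  // future (the `.after()` in `cgroups::destroy`) is never reached.
  std::cout << "arming timeout (unreachable in practice)" << std::endl;
}
```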


- Gilbert


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70609/#review215143
---


On May 8, 2019, 6:28 a.m., Andrei Budnik wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70609/
> ---
> 
> (Updated May 8, 2019, 6:28 a.m.)
> 
> 
> Review request for mesos, Gilbert Song, Jie Yu, and Qian Zhang.
> 
> 
> Bugs: MESOS-9306
> https://issues.apache.org/jira/browse/MESOS-9306
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> Previously, when cgroup destruction took longer than the given timeout,
> we called discard on the future. However, only the `onAny` callback was
> subscribed on this future, so the DISCARDED state was not handled.
> This patch adds the missing `onDiscarded` handler.
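> 
> A minimal sketch of the resulting pattern (assuming libprocess's `Future`
> API; this is not the exact patch):
> 
> ```
> #include <glog/logging.h>
> 
> #include <process/future.hpp>
> #include <process/owned.hpp>
> 
> #include <stout/nothing.hpp>
> 
> using process::Future;
> using process::Owned;
> using process::Promise;
> 
> // The destroy future may be discarded by the `.after()` timeout, so the
> // DISCARDED state needs an explicit handler.
> Future<Nothing> destroyWithDiscardHandler()
> {
>   Owned<Promise<Nothing>> promise(new Promise<Nothing>());
> 
>   Future<Nothing> future = promise->future();
> 
>   future.onDiscarded([]() {
>     // Previously only `onAny` was subscribed, and the DISCARDED
>     // transition fell through unhandled.
>     LOG(WARNING) << "Cgroup destruction discarded after timeout";
>   });
> 
>   return future;
> }
> ```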
> 
> 
> Diffs
> -
> 
>   src/linux/cgroups.cpp 73646c9eb39948192acedb67e3d2fb13acb14b30 
> 
> 
> Diff: https://reviews.apache.org/r/70609/diff/1/
> 
> 
> Testing
> ---
> 
> 1. In order to imitate a hanging `cgroups::destroy`, the following code has 
> been added to the beginning of the `destroy()` function:
> ```
> Future<Nothing> destroy(const string& hierarchy, const string& cgroup)
> {
>   Owned<Promise<Nothing>> promise(new Promise<Nothing>());
>   return promise->future()
>   ...
> ```
> 
> Observed behaviour _without_ this patch applied:
>  Container is stuck in the `DESTROYING` state; the last message in the logs is:
> ```
> I0508 09:12:29.14 21426 linux_launcher.cpp:618] Destroying cgroup 
> '/sys/fs/cgroup/freezer/mesos/e2066055-69b4-4272-8b8d-352e308aaaca'
> ```
> 
> Observed behaviour _with_ this patch applied:
>  Container finishes deinitialization after the 1-minute timeout:
> ```
> E0508 09:13:29.143476 21423 slave.cpp:6591] Termination of executor 'a' of 
> framework f7d6437a-beae-45eb-80ab-ef92e839f352- failed: Failed to kill 
> all processes in the container: Timed out after 1mins
> ```
> 
> 2. sudo make check
> 
> 
> Thanks,
> 
> Andrei Budnik
> 
>



Re: Review Request 70549: Added authorization for `UpdateQuota` call in the master.

2019-05-09 Thread Alexander Rukletsov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70549/#review215160
---




include/mesos/authorizer/authorizer.proto
Lines 138-140 (original), 139-144 (patched)


If my understanding is correct, both `UPDATE_QUOTA` and 
`UPDATE_QUOTA_CONFIG` are per role but use different objects in the payload. 
Can we convert `QuotaConfig` to `QuotaInfo` for the purpose of authorization 
and spare the extra action altogether? I see that the authorization 
implementation is identical and does not rely on differences between 
`QuotaConfig` and `QuotaInfo`.
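
A hedged sketch of the conversion suggested above; since authorization here is
per role and ignores the payload differences, only the role would need to
carry over (field names are assumptions based on this discussion):

```
// Hypothetical helper: map a `QuotaConfig` onto the legacy `QuotaInfo`
// object so the existing UPDATE_QUOTA action can authorize both calls.
mesos::quota::QuotaInfo convertForAuthorization(
    const mesos::quota::QuotaConfig& config)
{
  mesos::quota::QuotaInfo info;
  info.set_role(config.role());
  return info;
}
```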



include/mesos/authorizer/authorizer.proto
Lines 143 (patched)


If you do plan to keep this action, the naming convention in this file 
suggests: `UPDATE_QUOTA_WITH_CONFIG` or `UPDATE_QUOTA_WITH_QUOTA_CONFIG`.


- Alexander Rukletsov


On May 6, 2019, 11 p.m., Meng Zhu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70549/
> ---
> 
> (Updated May 6, 2019, 11 p.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Bugs: MESOS-9640
> https://issues.apache.org/jira/browse/MESOS-9640
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> A new authorizable action `update_quota_config` is added.
> This disambiguates it from the old action `update_quota`,
> which is used for the old `SetQuota` and
> `RemoveQuota` calls. The `update_quota` action requires
> `QuotaInfo` as the object, while the new `UpdateQuota` call
> uses `QuotaConfig`. To keep this compatible with any external
> authorization modules, a new action `update_quota_config`
> is introduced.
> 
> 
> Diffs
> -
> 
>   include/mesos/authorizer/authorizer.proto 
> e2740c402732bb37db991ec92b9301e58b33215b 
>   src/master/master.hpp 7d9732f1e432f6f0290d234242864cbdbf381fa8 
>   src/master/quota_handler.cpp 5d449e6f027a69ccaa0ac3473ea4cf57441601f3 
> 
> 
> Diff: https://reviews.apache.org/r/70549/diff/2/
> 
> 
> Testing
> ---
> 
> make check
> 
> 
> Thanks,
> 
> Meng Zhu
> 
>



Re: Review Request 70591: Added `struct RoleInfo` to track role reservations and framework IDs.

2019-05-09 Thread Andrei Sekretenko


> On May 6, 2019, 7:38 p.m., Meng Zhu wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Line 2119 (original), 2122 (patched)
> > 
> >
> > Shouldn't this be:
> > 
> > ```
> > if (!roles.contains(role) || roles.at(role).frameworks.empty()) {
> >   
> > }
> > ```
> 
> Andrei Sekretenko wrote:
> Why should it? 
> After this check the code loops through frameworkSorters, which are empty 
> if there are no frameworks, so the `offerable` will be empty and no offers 
> will be made.
> 
> And checking for the absence of any tracked frameworks would now require 
> looping through the `roles` hashmap, which is too expensive.
> 
> Meng Zhu wrote:
> OK, that makes sense. I was just looking at the comment above it. Given 
> the change, maybe we should remove the comment.

I'm thinking about removing this check. 

It was initially introduced here: https://reviews.apache.org/r/37177/diff/8
and then modified here: https://reviews.apache.org/r/41075/diff/13
and I cannot see a rationale behind either version of this check.


The only functional difference introduced by this check is suppressing the log 
message at the end of `deallocate()`:
`VLOG(2) << "No inverse offers to send out!";`
Suppressing this message when there are no frameworks makes some sense (fewer 
red herrings in the log), but this (or something better) can be done in another way.


- Andrei


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70591/#review215068
---


On May 7, 2019, 1:58 p.m., Andrei Sekretenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70591/
> ---
> 
> (Updated May 7, 2019, 1:58 p.m.)
> 
> 
> Review request for mesos, Benjamin Mahler and Meng Zhu.
> 
> 
> Bugs: MESOS-9701
> https://issues.apache.org/jira/browse/MESOS-9701
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> This patch introduces `struct RoleInfo`, which contains the framework IDs and
> reservations tracked under a role. Two separate hashmaps previously used for
> this purpose are replaced by a single hashmap, removing the need to keep the
> two consistent with each other. This simplifies the role tracking logic in
> the allocator. The patch introduces minimal to no performance impact.
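> 
> A rough sketch of the shape described (member names inferred from the review
> discussion, e.g. `roles.at(role).frameworks`; not the exact patch):
> 
> ```
> struct RoleInfo
> {
>   // Frameworks subscribed to this role.
>   hashset<FrameworkID> frameworks;
> 
>   // Resources reserved to this role across agents.
>   Resources reservations;
> };
> 
> // A single hashmap replaces the two that previously had to be kept
> // consistent with each other.
> hashmap<std::string, RoleInfo> roles;
> ```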
> 
> 
> Diffs
> -
> 
>   src/master/allocator/mesos/hierarchical.hpp 
> c2058baca5159da4cdcab77afd5de3c0d5ae6c48 
>   src/master/allocator/mesos/hierarchical.cpp 
> 64a076ddd29711437d539a06bb0470755828cc87 
> 
> 
> Diff: https://reviews.apache.org/r/70591/diff/2/
> 
> 
> Testing
> ---
> 
> make check
> 
> Benchmarking: 5 runs of 
> `BENCHMARK_HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/5` with 
> the optimized build.
> 
> Performance impact in this benchmark seems to be negligible.
> 
> BEFORE:
> Added 3000 agents in 51.553929ms
> Added 3000 frameworks in 15.174748344secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.400805171secs
> Made 0 allocation in 12.5850238secs
> 
> Added 3000 agents in 55.739336ms
> Added 3000 frameworks in 14.730404769secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.563439682secs
> Made 0 allocation in 13.063555055secs
> 
> Added 3000 agents in 54.414733ms
> Added 3000 frameworks in 15.10136842secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 13.501664915secs
> Made 0 allocation in 12.89034382secs
> 
> Added 3000 agents in 52.58252ms
> Added 3000 frameworks in 14.048350298secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.299952145secs
> Made 0 allocation in 11.888248811secs
> 
> Added 3000 agents in 52.821439ms
> Added 3000 frameworks in 15.344450583secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.63425secs
> Made 0 allocation in 12.427171541secs
> 
> 
> AFTER:
> 
> Added 3000 agents in 69.716648ms
> Added 3000 frameworks in 15.249001979secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.860494226secs
> Made 0 allocation in 12.228866329secs
> 
> Added 3000 agents in 52.639388ms
> Added 3000 frameworks in 15.207895482secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.504777266secs
> Made 0 allocation in 12.70388062secs
> 
> Added 3000 agents in 56.865794ms
> Added 3000 frameworks in 15.284003915secs
> 

Re: Review Request 70589: Logged when `/__processes__` returns.

2019-05-09 Thread Alexander Rukletsov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70589/
---

(Updated May 9, 2019, 1:09 p.m.)


Review request for mesos, Benjamin Mahler and Till Toenshoff.


Bugs: MESOS-9766
https://issues.apache.org/jira/browse/MESOS-9766


Repository: mesos


Description
---

Adds a log entry when a response generated by `/__processes__`
is about to be returned to the client.
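
A hedged sketch of the kind of log line added (placement and exact wording are
assumptions; the testing output below shows the real result):

```
Stopwatch stopwatch;
stopwatch.start();

// ... generate the `/__processes__` response ...

LOG(INFO) << "HTTP GET for /__processes__: '" << response.status
          << "' after " << stopwatch.elapsed();
```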


Diffs
-

  3rdparty/libprocess/src/process.cpp 124836472313721a5dbfe4b1ca55f0da3cecd66b 


Diff: https://reviews.apache.org/r/70589/diff/1/


Testing
---

built & run on Mac OS 10.13.6

```
I0503 13:13:00.885133 59219968 process.cpp:3599] Handling HTTP event for 
process '__processes__' with path: '/__processes__'
I0503 13:13:00.886037 59219968 process.cpp:3412] HTTP GET for /__processes__ 
from 192.168.178.47:59548: '200 OK' after 1.06214ms
```


Thanks,

Alexander Rukletsov



Re: Review Request 70581: Add flag ignoring docker manifest config metadata.

2019-05-09 Thread James Peach

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70581/#review215172
---




docs/configuration/agent.md
Lines 648 (patched)


I think that the way this is phrased, an operator will reasonably expect 
that just specifying this flag will be enough to get the desired result. It's 
not clear that you also have to remove the `docker/runtime` isolator.

So, I suggest that we add a flag check in `MesosContainerizer::create` and 
erase `docker/runtime` from the isolator set if this flag is specified.

Then we can rename this flag to `--ignore_docker_runtime`, which is more 
consistent with the rest of the Mesos configuration.

Finally, we should also update the documentation at 
`docs/isolators/docker-runtime.md` to mention the flag.
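
A hedged sketch of the suggested check (the flag name follows the proposed
rename and the `--isolation` parsing is simplified; the real change would live
in `MesosContainerizer::create`):

```
#include <set>
#include <string>
#include <vector>

#include <stout/strings.hpp>

// Parse the configured isolators.
std::vector<std::string> tokens = strings::tokenize(flags.isolation, ",");
std::set<std::string> isolations(tokens.begin(), tokens.end());

if (flags.ignore_docker_runtime) {
  // Make the flag self-sufficient: the operator should not also have to
  // remove `docker/runtime` from `--isolation` by hand.
  isolations.erase("docker/runtime");
}
```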



src/slave/containerizer/mesos/provisioner/store.cpp
Lines 79 (patched)


Better to just add this to the existing conditional (in the middle).


- James Peach


On May 9, 2019, 7 p.m., Jacob Janco wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70581/
> ---
> 
> (Updated May 9, 2019, 7 p.m.)
> 
> 
> Review request for mesos, Gilbert Song and James Peach.
> 
> 
> Bugs: MESOS-9760
> https://issues.apache.org/jira/browse/MESOS-9760
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> The Docker runtime isolator propagates manifest configuration metadata.
> This is not always desirable, e.g. a customer may want to ignore the
> WORKDIR/ENV defined in the manifest.
> 
> https://issues.apache.org/jira/browse/MESOS-9760
> 
> 
> Diffs
> -
> 
>   docs/configuration/agent.md 04d7114e92c189afd04a8889b3ab853ade9deec6 
>   docs/upgrades.md ece8462b9950626166c832d3b554ad51f68f42e1 
>   src/slave/containerizer/mesos/provisioner/store.cpp 
> 11fce0eb47e9e6dfce6289afe04a1d58a0c4461a 
>   src/slave/flags.hpp 09921cb6172202b5c1d2f8d03f9ccaeb3d0e8c94 
>   src/slave/flags.cpp 49a350f9e13409493fa9612c53d9d58b62122371 
> 
> 
> Diff: https://reviews.apache.org/r/70581/diff/2/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jacob Janco
> 
>



Re: Review Request 70581: Add flag ignoring docker manifest config metadata.

2019-05-09 Thread Jacob Janco

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70581/
---

(Updated May 9, 2019, 7 p.m.)


Review request for mesos, Gilbert Song and James Peach.


Bugs: MESOS-9760
https://issues.apache.org/jira/browse/MESOS-9760


Repository: mesos


Description
---

The Docker runtime isolator propagates manifest configuration metadata.
This is not always desirable, e.g. a customer may want to ignore the
WORKDIR/ENV defined in the manifest.

https://issues.apache.org/jira/browse/MESOS-9760


Diffs (updated)
-

  docs/configuration/agent.md 04d7114e92c189afd04a8889b3ab853ade9deec6 
  docs/upgrades.md ece8462b9950626166c832d3b554ad51f68f42e1 
  src/slave/containerizer/mesos/provisioner/store.cpp 
11fce0eb47e9e6dfce6289afe04a1d58a0c4461a 
  src/slave/flags.hpp 09921cb6172202b5c1d2f8d03f9ccaeb3d0e8c94 
  src/slave/flags.cpp 49a350f9e13409493fa9612c53d9d58b62122371 


Diff: https://reviews.apache.org/r/70581/diff/2/

Changes: https://reviews.apache.org/r/70581/diff/1-2/


Testing
---


Thanks,

Jacob Janco



Re: Review Request 70591: Added `struct RoleInfo` to track role reservations and framework IDs.

2019-05-09 Thread Meng Zhu


> On May 6, 2019, 12:38 p.m., Meng Zhu wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Line 2119 (original), 2122 (patched)
> > 
> >
> > Shouldn't this be:
> > 
> > ```
> > if (!roles.contains(role) || roles.at(role).frameworks.empty()) {
> >   
> > }
> > ```
> 
> Andrei Sekretenko wrote:
> Why should it? 
> After this check the code loops through frameworkSorters, which are empty 
> if there are no frameworks, so the `offerable` will be empty and no offers 
> will be made.
> 
> And checking for the absence of any tracked frameworks would now require 
> looping through the `roles` hashmap, which is too expensive.
> 
> Meng Zhu wrote:
> OK, that makes sense. I was just looking at the comment above it. Given 
> the change, maybe we should remove the comment.
> 
> Andrei Sekretenko wrote:
> I'm thinking about removing this check. 
> 
> It was initially introduced here: 
> https://reviews.apache.org/r/37177/diff/8
> and then modified here: https://reviews.apache.org/r/41075/diff/13
> and I cannot see a rationale behind either version of this check.
> 
> 
> The only functional difference introduced by this check is suppressing 
> the log message at the end of `deallocate()`:
> `VLOG(2) << "No inverse offers to send out!";`
> Suppressing this message when there are no frameworks makes some sense (fewer 
> red herrings in the log), but this (or something better) can be done in 
> another way.

Sounds good to me. I am OK with removing that.


- Meng


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70591/#review215068
---


On May 7, 2019, 6:58 a.m., Andrei Sekretenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70591/
> ---
> 
> (Updated May 7, 2019, 6:58 a.m.)
> 
> 
> Review request for mesos, Benjamin Mahler and Meng Zhu.
> 
> 
> Bugs: MESOS-9701
> https://issues.apache.org/jira/browse/MESOS-9701
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> This patch introduces `struct RoleInfo`, which contains the framework IDs and
> reservations tracked under a role. Two separate hashmaps previously used for
> this purpose are replaced by a single hashmap, removing the need to keep the
> two consistent with each other. This simplifies the role tracking logic in
> the allocator. The patch introduces minimal to no performance impact.
> 
> 
> Diffs
> -
> 
>   src/master/allocator/mesos/hierarchical.hpp 
> c2058baca5159da4cdcab77afd5de3c0d5ae6c48 
>   src/master/allocator/mesos/hierarchical.cpp 
> 64a076ddd29711437d539a06bb0470755828cc87 
> 
> 
> Diff: https://reviews.apache.org/r/70591/diff/2/
> 
> 
> Testing
> ---
> 
> make check
> 
> Benchmarking: 5 runs of 
> `BENCHMARK_HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/5` with 
> the optimized build.
> 
> Performance impact in this benchmark seems to be negligible.
> 
> BEFORE:
> Added 3000 agents in 51.553929ms
> Added 3000 frameworks in 15.174748344secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.400805171secs
> Made 0 allocation in 12.5850238secs
> 
> Added 3000 agents in 55.739336ms
> Added 3000 frameworks in 14.730404769secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.563439682secs
> Made 0 allocation in 13.063555055secs
> 
> Added 3000 agents in 54.414733ms
> Added 3000 frameworks in 15.10136842secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 13.501664915secs
> Made 0 allocation in 12.89034382secs
> 
> Added 3000 agents in 52.58252ms
> Added 3000 frameworks in 14.048350298secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.299952145secs
> Made 0 allocation in 11.888248811secs
> 
> Added 3000 agents in 52.821439ms
> Added 3000 frameworks in 15.344450583secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.63425secs
> Made 0 allocation in 12.427171541secs
> 
> 
> AFTER:
> 
> Added 3000 agents in 69.716648ms
> Added 3000 frameworks in 15.249001979secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.860494226secs
> Made 0 allocation in 12.228866329secs
> 
> Added 3000 agents in 52.639388ms
> Added 3000 frameworks in 15.207895482secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 

Re: Review Request 70613: Randomized the agents in the second allocation stage.

2019-05-09 Thread Benjamin Mahler

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70613/#review215169
---


Fix it, then Ship it!




Can you file a ticket for this?


src/master/allocator/mesos/hierarchical.cpp
Lines 2008-2009 (patched)


"eliminate effect" seems vague, maybe something like "spread out" the 
effect of the first stage, which tends to allocate from the front of the agent 
list more so than the back?


- Benjamin Mahler


On May 9, 2019, 12:14 a.m., Meng Zhu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70613/
> ---
> 
> (Updated May 9, 2019, 12:14 a.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> Before this patch, agents are randomized before the 1st
> allocation stage (the quota allocation stage) but not before
> the 2nd stage. One perceived issue is that resources on
> the agents at the front of the queue are likely to be mostly
> allocated in the 1st stage, leaving only slices of resources
> available for the second stage. Thus we may see consistently
> low-quality offers for roles/frameworks that get allocated first
> in the 2nd stage.
> 
> This patch randomizes the agents again before the 2nd stage to
> eliminate any effect left by the 1st stage allocation.
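> 
> A minimal sketch of the idea (assuming the allocator keeps the agents in a
> vector such as `slaveIds`; not the exact patch):
> 
> ```
> #include <algorithm>
> #include <random>
> 
> // Re-randomize the agent order before the 2nd stage so that leftovers
> // from the 1st stage are not concentrated at the front of the list.
> std::random_device device;
> std::mt19937 generator(device());
> std::shuffle(slaveIds.begin(), slaveIds.end(), generator);
> ```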
> 
> 
> Diffs
> -
> 
>   src/master/allocator/mesos/hierarchical.cpp 
> 64a076ddd29711437d539a06bb0470755828cc87 
> 
> 
> Diff: https://reviews.apache.org/r/70613/diff/1/
> 
> 
> Testing
> ---
> 
> make check
> 
> 
> Thanks,
> 
> Meng Zhu
> 
>



Re: Review Request 70570: Logged headroom related info in the allocator.

2019-05-09 Thread Benjamin Mahler

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70570/#review215168
---


Fix it, then Ship it!





src/master/allocator/mesos/hierarchical.cpp
Lines 2012-2015 (patched)


I don't think we need to quote out the variables like this, as it tends to 
lead to a more verbose comment than necessary, e.g.:

```
For logging purposes, we track the number of agents that had
resources held back for quota headroom, as well as how many
resources in total were held back.
```



src/master/allocator/mesos/hierarchical.cpp
Lines 2119 (patched)


more importantly, we do not track it there? right? (the logging seems 
secondary to the fact that we don't track the information in the first stage)

probably this comment about how we track it should be up above the 
variables rather than in the logging of it?



src/master/allocator/mesos/hierarchical.cpp
Lines 2128 (patched)


is this accurate?

it can actually be a combination of: filtering, limits being reached, and 
suppressed roles?

Maybe don't include this for now and just log the available headroom?


- Benjamin Mahler


On May 9, 2019, 12:22 a.m., Meng Zhu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70570/
> ---
> 
> (Updated May 9, 2019, 12:22 a.m.)
> 
> 
> Review request for mesos and Benjamin Mahler.
> 
> 
> Bugs: MESOS-9759
> https://issues.apache.org/jira/browse/MESOS-9759
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> This patch logs `requiredHeadroom` and `availableHeadroom`
> before each allocation cycle, and logs `requiredHeadroom`,
> the resources and number of agents held back for quota headroom,
> as well as the resources filtered, at the end of each allocation
> cycle.
> 
> While we also hold resources back for quota headroom in the
> first stage, we do not log that there. This is because in the
> second stage, we try to allocate all resources (including the
> ones held back in the first stage). Thus only resources held
> back in the second stage are truly held back for the whole
> allocation cycle.
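> 
> A hedged sketch of the kind of logging described (variable names are from
> the description above; the exact wording is an assumption):
> 
> ```
> VLOG(1) << "Starting allocation cycle: required quota headroom is "
>         << requiredHeadroom << ", available headroom is "
>         << availableHeadroom;
> ```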
> 
> 
> Diffs
> -
> 
>   src/master/allocator/mesos/hierarchical.cpp 
> 64a076ddd29711437d539a06bb0470755828cc87 
> 
> 
> Diff: https://reviews.apache.org/r/70570/diff/5/
> 
> 
> Testing
> ---
> 
> N/A
> 
> 
> Thanks,
> 
> Meng Zhu
> 
>



Re: Review Request 70581: Add flag ignoring docker manifest config metadata.

2019-05-09 Thread Jacob Janco


> On May 2, 2019, 10:16 p.m., James Peach wrote:
> > The new flag should also be documented in the 
> > [upgrade](https://github.com/apache/mesos/blob/master/docs/upgrades.md) 
> > doc. You'll need to add a new 1.9.x section.

Done.


- Jacob


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70581/#review215004
---


On May 9, 2019, 7 p.m., Jacob Janco wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70581/
> ---
> 
> (Updated May 9, 2019, 7 p.m.)
> 
> 
> Review request for mesos, Gilbert Song and James Peach.
> 
> 
> Bugs: MESOS-9760
> https://issues.apache.org/jira/browse/MESOS-9760
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> The Docker runtime isolator propagates manifest configuration metadata.
> This is not always desirable, e.g. a customer may want to ignore the
> WORKDIR/ENV defined in the manifest.
> 
> https://issues.apache.org/jira/browse/MESOS-9760
> 
> 
> Diffs
> -
> 
>   docs/configuration/agent.md 04d7114e92c189afd04a8889b3ab853ade9deec6 
>   docs/upgrades.md ece8462b9950626166c832d3b554ad51f68f42e1 
>   src/slave/containerizer/mesos/provisioner/store.cpp 
> 11fce0eb47e9e6dfce6289afe04a1d58a0c4461a 
>   src/slave/flags.hpp 09921cb6172202b5c1d2f8d03f9ccaeb3d0e8c94 
>   src/slave/flags.cpp 49a350f9e13409493fa9612c53d9d58b62122371 
> 
> 
> Diff: https://reviews.apache.org/r/70581/diff/2/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jacob Janco
> 
>



Re: Review Request 70583: Eliminated copying 'suppressedRoles' and changed its type.

2019-05-09 Thread Benjamin Mahler

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70583/#review215170
---


Fix it, then Ship it!




The description links to itself? I assume the link should be to: 
https://reviews.apache.org/r/70531 ?


src/master/master.cpp
Line 2872 (original), 2874 (patched)


shouldn't we std::move it here?


- Benjamin Mahler


On May 7, 2019, 11:09 a.m., Andrei Sekretenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70583/
> ---
> 
> (Updated May 7, 2019, 11:09 a.m.)
> 
> 
> Review request for mesos and Benjamin Mahler.
> 
> 
> Bugs: MESOS-7258
> https://issues.apache.org/jira/browse/MESOS-7258
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> This patch changes the type of the `suppressedRoles` argument of the
> `Master::_subscribe()` method from `const set<string>&` to
> `RepeatedPtrField<string>&&` and uses move semantics to eliminate copying.
> 
> The change from `set<string>` to `RepeatedPtrField<string>` is required due
> to the need to pass the suppressed roles into
> `Master::validateFrameworkSubscription()` once again. This need arises in
> the dependent patch: https://reviews.apache.org/r/70583/.
> 
> Move semantics are employed to avoid copying `suppressedRoles`
> in the deferred call.
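> 
> A small self-contained sketch of the technique (the signature here is an
> assumption, not the actual `Master::_subscribe()` declaration):
> 
> ```
> #include <string>
> #include <utility>
> 
> #include <google/protobuf/repeated_field.h>
> 
> using google::protobuf::RepeatedPtrField;
> 
> // Accept the roles by rvalue reference so a deferred call can hand the
> // field over without copying each string.
> void subscribe(RepeatedPtrField<std::string>&& suppressedRoles)
> {
>   RepeatedPtrField<std::string> roles(std::move(suppressedRoles));
> }
> ```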
> 
> 
> Diffs
> -
> 
>   src/master/http.cpp 765bbf1442f26dcc08c5404f714b6a2ef6616ed8 
>   src/master/master.hpp 7d9732f1e432f6f0290d234242864cbdbf381fa8 
>   src/master/master.cpp a8ee6297e1587c160a47b806914d3c3aa4f35cd7 
> 
> 
> Diff: https://reviews.apache.org/r/70583/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Andrei Sekretenko
> 
>



Re: Review Request 70531: Made using validateFrameworkSubscription() in UPDATE_FRAMEWORK possible.

2019-05-09 Thread Benjamin Mahler

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70531/#review215171
---


Fix it, then Ship it!




Looks like the description didn't run through the git commit hooks?

```
$ ./support/apply-reviews.py -r 70531
2019-05-09 17:58:38 URL:https://reviews.apache.org/r/70531/diff/raw/ 
[2632/2632] -> "70531.patch" [1]
Checking 2 C++ files
Done processing src/master/master.cpp
Done processing src/master/master.hpp
Error: No line in the commit message summary may exceed 72 characters.
```


src/master/master.hpp
Lines 1148 (patched)


paren on the previous line


- Benjamin Mahler


On May 7, 2019, 11:15 a.m., Andrei Sekretenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70531/
> ---
> 
> (Updated May 7, 2019, 11:15 a.m.)
> 
> 
> Review request for mesos and Benjamin Mahler.
> 
> 
> Bugs: MESOS-7258
> https://issues.apache.org/jira/browse/MESOS-7258
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> This patch replaces the `Call::Subscribe` message in the arguments of
> `Master::validateFrameworkSubscription()` with its constituents:
> the `FrameworkInfo` and the list of suppressed roles.
> 
> The motivation behind this change is to help keep the new UPDATE_FRAMEWORK call
> (being introduced in MESOS-7258) consistent with re-subscribing the framework,
> by using the same `Master::validateFrameworkSubscription()` method to validate
> both SUBSCRIBE and UPDATE_FRAMEWORK calls.
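> 
> A hedged sketch of the resulting signature (parameter types are assumptions
> based on the description and the dependent patch):
> 
> ```
> Option<Error> validateFrameworkSubscription(
>     const FrameworkInfo& frameworkInfo,
>     const google::protobuf::RepeatedPtrField<std::string>& suppressedRoles);
> ```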
> 
> 
> Diffs
> -
> 
>   src/master/master.hpp 7d9732f1e432f6f0290d234242864cbdbf381fa8 
>   src/master/master.cpp a8ee6297e1587c160a47b806914d3c3aa4f35cd7 
> 
> 
> Diff: https://reviews.apache.org/r/70531/diff/3/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Andrei Sekretenko
> 
>



Re: Review Request 70591: Added `struct RoleInfo` to track role reservations and framework IDs.

2019-05-09 Thread Meng Zhu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70591/#review215166
---




src/master/allocator/mesos/hierarchical.hpp
Lines 111-114 (patched)


I find `roleInfo` an OK name, especially since it is a value indexed by a role 
name.

It seems odd to me that we leave a `TODO` for such a task here (obviously, we 
can either do it now or, due to the lack of good choices, it is unlikely to 
happen in the future).



src/master/allocator/mesos/hierarchical.hpp
Lines 115 (patched)


add space before `{`:

struct RoleInfo {



src/master/allocator/mesos/hierarchical.hpp
Lines 118 (patched)


New line above this.



src/master/allocator/mesos/hierarchical.cpp
Lines 2547-2572 (original), 2549-2577 (patched)


There are a lot of `roles.at(role)` calls here; let's keep it in a variable?

```
RoleInfo& roleInfo = roles.at(role);

```



src/master/allocator/mesos/hierarchical.cpp
Lines 2609-2623 (original), 2614-2624 (patched)


Thanks for the refactoring! Let's pull this refactor out into a separate 
patch in front of this one.


- Meng Zhu


On May 7, 2019, 6:58 a.m., Andrei Sekretenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70591/
> ---
> 
> (Updated May 7, 2019, 6:58 a.m.)
> 
> 
> Review request for mesos, Benjamin Mahler and Meng Zhu.
> 
> 
> Bugs: MESOS-9701
> https://issues.apache.org/jira/browse/MESOS-9701
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> This patch introduces `struct RoleInfo`, which contains the framework IDs and
> reservations tracked under a role. Two separate hashmaps previously used for
> this purpose are replaced by a single hashmap, removing the need to keep the
> two consistent with each other. This simplifies the role tracking logic in
> the allocator. The patch introduces minimal to no performance impact.
> 
> 
> Diffs
> -
> 
>   src/master/allocator/mesos/hierarchical.hpp 
> c2058baca5159da4cdcab77afd5de3c0d5ae6c48 
>   src/master/allocator/mesos/hierarchical.cpp 
> 64a076ddd29711437d539a06bb0470755828cc87 
> 
> 
> Diff: https://reviews.apache.org/r/70591/diff/2/
> 
> 
> Testing
> ---
> 
> make check
> 
> Benchmarking: 5 runs of 
> `BENCHMARK_HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/5` with 
> the optimized build.
> 
> Performance impact in this benchmark seems to be negligible.
> 
> BEFORE:
> Added 3000 agents in 51.553929ms
> Added 3000 frameworks in 15.174748344secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.400805171secs
> Made 0 allocation in 12.5850238secs
> 
> Added 3000 agents in 55.739336ms
> Added 3000 frameworks in 14.730404769secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.563439682secs
> Made 0 allocation in 13.063555055secs
> 
> Added 3000 agents in 54.414733ms
> Added 3000 frameworks in 15.10136842secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 13.501664915secs
> Made 0 allocation in 12.89034382secs
> 
> Added 3000 agents in 52.58252ms
> Added 3000 frameworks in 14.048350298secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.299952145secs
> Made 0 allocation in 11.888248811secs
> 
> Added 3000 agents in 52.821439ms
> Added 3000 frameworks in 15.344450583secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.63425secs
> Made 0 allocation in 12.427171541secs
> 
> 
> AFTER:
> 
> Added 3000 agents in 69.716648ms
> Added 3000 frameworks in 15.249001979secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.860494226secs
> Made 0 allocation in 12.228866329secs
> 
> Added 3000 agents in 52.639388ms
> Added 3000 frameworks in 15.207895482secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 12.504777266secs
> Made 0 allocation in 12.70388062secs
> 
> Added 3000 agents in 56.865794ms
> Added 3000 frameworks in 15.284003915secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
> Made 3856 allocations in 13.86859815secs
> Made 0 allocation in 12.538958231secs
> 
> Added 3000 agents in 56.028013ms
> Added 3000 frameworks in 

Review Request 70618: Encapsulate a framework sorter inside of a RoleInfo.

2019-05-09 Thread Andrei Sekretenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70618/
---

Review request for mesos, Benjamin Mahler and Meng Zhu.


Bugs: MESOS-9701
https://issues.apache.org/jira/browse/MESOS-9701


Repository: mesos


Description
---

This patch consolidates into `RoleInfo` the logic of tracking the frameworks
tied to a specific role.

To do this properly, `RoleInfo` is turned into a class which wraps 
access to the framework sorter so that no other entity is capable
of adding/removing clients (the frameworks) to/from the framework sorter.

The patch introduces almost no performance impact.
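
A rough sketch of the encapsulation described (member and method names are
assumptions; the point is that only `RoleInfo` itself touches the framework
sorter's client list):

```
class RoleInfo
{
public:
  void addFramework(const FrameworkID& frameworkId)
  {
    frameworks.insert(frameworkId);
    sorter->add(stringify(frameworkId)); // Client management stays private.
  }

  void removeFramework(const FrameworkID& frameworkId)
  {
    sorter->remove(stringify(frameworkId));
    frameworks.erase(frameworkId);
  }

private:
  hashset<FrameworkID> frameworks;
  process::Owned<Sorter> sorter; // The role's framework sorter.
};
```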


Diffs
-

  src/master/allocator/mesos/hierarchical.hpp 
c2058baca5159da4cdcab77afd5de3c0d5ae6c48 
  src/master/allocator/mesos/hierarchical.cpp 
64a076ddd29711437d539a06bb0470755828cc87 


Diff: https://reviews.apache.org/r/70618/diff/1/


Testing
---

make check

Benchmarking: 5 runs of 
BENCHMARK_HierarchicalAllocator_WithQuotaParam.LargeAndSmallQuota/5 with the 
optimized build.

BEFORE (master):

Added 3000 agents in 52.729305ms
Added 3000 frameworks in 13.799200231secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.560201008secs
Made 0 allocation in 12.143722849secs

Added 3000 agents in 64.789364ms
Added 3000 frameworks in 14.175436362secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.505877513secs
Made 0 allocation in 12.424587206secs

Added 3000 agents in 50.942631ms
Added 3000 frameworks in 14.26206239secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.220950624secs
Made 0 allocation in 12.495832704secs

Added 3000 agents in 50.660372ms
Added 3000 frameworks in 14.246438788secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.505903032secs
Made 0 allocation in 12.334731087secs

Added 3000 agents in 50.292757ms
Added 3000 frameworks in 14.236187327secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.531212952secs
Made 0 allocation in 12.282740343secs

---
AFTER (master + previous patch https://reviews.apache.org/r/70591/ + this 
patch):

Added 3000 agents in 51.465368ms
Added 3000 frameworks in 13.599017611secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.272279231secs
Made 0 allocation in 12.026509432secs

Added 3000 agents in 52.5567ms
Added 3000 frameworks in 13.67345101secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.447551363secs
Made 0 allocation in 12.045692187secs

Added 3000 agents in 52.455703ms
Added 3000 frameworks in 13.344338641secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 11.988632162secs
Made 0 allocation in 11.558150131secs

Added 3000 agents in 53.579201ms
Added 3000 frameworks in 13.681435728secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 11.966754231secs
Made 0 allocation in 12.720889223secs

Added 3000 agents in 52.08713ms
Added 3000 frameworks in 13.562008608secs
Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks, with random sorter
Made 3856 allocations in 12.19201724secs
Made 0 allocation in 11.727037034secs


Thanks,

Andrei Sekretenko



Re: Review Request 70596: Launched tasks with more memory in SLRP unit tests.

2019-05-09 Thread Chun-Hung Hsiao


> On May 6, 2019, 11:55 a.m., Benjamin Bannier wrote:
> > src/tests/storage_local_resource_provider_tests.cpp
> > Line 1795 (original), 1795 (patched)
> > 
> >
> > Could we move this into a test class-specific constant?
> > 
> > Here and below.

Made this a function that takes `persistentVolumes` as additional resources and 
adds cpus and mem to them.


- Chun-Hung


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70596/#review215058
---


On May 3, 2019, 9:09 p.m., Chun-Hung Hsiao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70596/
> ---
> 
> (Updated May 3, 2019, 9:09 p.m.)
> 
> 
> Review request for mesos, Benjamin Bannier and Jan Schlicht.
> 
> 
> Bugs: MESOS-9765
> https://issues.apache.org/jira/browse/MESOS-9765
> 
> 
> Repository: mesos
> 
> 
> Description
> ---
> 
> Raised the task memory to 128MB (which is the value used in most persistent
> volume tests) in all SLRP tests that launch tasks, to avoid OOM.
> 
> 
> Diffs
> -
> 
>   src/tests/storage_local_resource_provider_tests.cpp 
> ecd7aeef1cb3d1a5b4b3419dfd912d41a8c6 
> 
> 
> Diff: https://reviews.apache.org/r/70596/diff/1/
> 
> 
> Testing
> ---
> 
> sudo make check
> 
> 
> Thanks,
> 
> Chun-Hung Hsiao
> 
>



Review Request 70620: Made SLRP allow changes in volume context.

2019-05-09 Thread Chun-Hung Hsiao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70620/
---

Review request for mesos, Benjamin Bannier and James DeFelice.


Bugs: MESOS-9395
https://issues.apache.org/jira/browse/MESOS-9395


Repository: mesos


Description
---

To make SLRP more robust against non-conforming CSI plugins that change
volume contexts, the `getExistingVolumes` method returns a list of resource
conversions: one for converting old volume contexts to new volume
contexts, and one to remove missing volumes and add new volumes.

To make the interfaces consistent, `getStoragePools` now also returns a
list of resource conversions consisting of one conversion.
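
A hedged sketch of the return shape described (using the `ResourceConversion`
consumed/converted pair idiom; variable names are assumptions):

```
std::vector<ResourceConversion> conversions;

// 1. Convert the resources whose volume contexts changed.
conversions.emplace_back(oldContextVolumes, updatedContextVolumes);

// 2. Remove volumes that disappeared and add newly reported ones.
conversions.emplace_back(missingVolumes, newVolumes);

return conversions;
```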


Diffs
-

  include/mesos/mesos.proto dc6a87f14864cb4d46b9856f251c2946a475162d 
  include/mesos/v1/mesos.proto e8086e0f70cac73876a8ae31db3365b5059b5c44 
  src/resource_provider/storage/provider.cpp 
999fe95bb6f38f5a25068accd854b37788b24028 


Diff: https://reviews.apache.org/r/70620/diff/1/


Testing
---

sudo make check
more testing done later in chain


Thanks,

Chun-Hung Hsiao



Review Request 70622: Added a unit test to verify if SLRP allows changes in volume context.

2019-05-09 Thread Chun-Hung Hsiao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70622/
---

Review request for mesos and Benjamin Bannier.


Bugs: MESOS-9395
https://issues.apache.org/jira/browse/MESOS-9395


Repository: mesos


Description
---

Added a unit test to verify that SLRP allows changes in the volume context.


Diffs
-

  src/examples/test_csi_plugin.cpp 03f782ead136a79c4c5b099496072933f6737598 
  src/tests/storage_local_resource_provider_tests.cpp 
ecd7aeef1cb3d1a5b4b3419dfd912d41a8c6 


Diff: https://reviews.apache.org/r/70622/diff/1/


Testing
---

make check


Thanks,

Chun-Hung Hsiao



Re: Review Request 70579: Added a Task Scheduler to simplify testing.

2019-05-09 Thread Chun-Hung Hsiao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70579/#review215174
---



Thanks for your work on this test scheduler!


src/scheduler/scheduler.cpp
Lines 1010 (patched)


Instead of introducing this, we could change the `TestMesos` template in 
`src/tests/mesos.hpp` to the following:
```
namespace scheduler {

template <typename Mesos, typename Event>
class TestMesos : public Mesos
{
   ...
};

} // namespace scheduler

namespace v1 {
namespace scheduler {

using TestMesos = tests::scheduler::TestMesos<
    mesos::v1::scheduler::Mesos,
    MockHTTPScheduler<
        mesos::v1::scheduler::Mesos,
        mesos::v1::scheduler::Event>>;

} // namespace scheduler
} // namespace v1
```

Then we can use `tests::scheduler::TestMesos` in `TaskScheduler`.



src/tests/utils/task_scheduler.hpp
Lines 17 (patched)


Would it be more consistent if we put this file at 
`src/tests/scheduler.hpp`, and call the scheduler `TestScheduler`, just like 
what we have in `src/tests/allocator.hpp`?

BTW we already have `src/tests/utils.hpp`, which has the same prefix 
`src/tests/utils`, not sure if we want to avoid this.



src/tests/utils/task_scheduler.cpp
Lines 70 (patched)


How about the following:

```
struct Launchable
{
  virtual v1::Resources resources() const = 0;
  virtual v1::Offer::Operation operation(...) = 0;
};

struct Task : public Launchable
{
  ...
};

struct TaskGroup : public Launchable
{
  ...
};

std::queue<Owned<Launchable>> launchQueue;
```



src/tests/utils/task_scheduler.cpp
Lines 86 (patched)


If we take the `TestMesos` suggestion I proposed above, we won't need to 
store the master pid.



src/tests/utils/task_scheduler.cpp
Lines 90 (patched)


According to the style guide, this should be `subscribed_`.



src/tests/utils/task_scheduler.cpp
Lines 114 (patched)


It seems to me that the `TaskScheduler` prefix is redundant since glog will 
prepend the log message with the file name.

Ditto for all logging below.



src/tests/utils/task_scheduler.cpp
Lines 133 (patched)


Would it make sense to just use `DEFAULT_FRAMEWORK_INFO`?



src/tests/utils/task_scheduler.cpp
Lines 144 (patched)


`createCallSubscribe(frameworkInfo)`.



src/tests/utils/task_scheduler.cpp
Lines 158 (patched)


How about `frameworkInfo.roles(0)`? Ditto below for all `allocate("*")`.



src/tests/utils/task_scheduler.cpp
Lines 265-268 (patched)


```
case ...::HEARTBEAT:
case ...::UPDATE_OPERATION_STATUS:
case ...::INVERSE_OFFERS:
case ...::RESCIND_INVERSE_OFFERS:
  break;
```
Also let's move this to the end and merge with the other two events.

If you prefer to keep the order and use multiple `break`s, let's just leave 
one space between the colon and `break`.



src/tests/utils/task_scheduler.cpp
Lines 286 (patched)


break the line apart



src/tests/utils/task_scheduler.cpp
Lines 289 (patched)


```
offers_.erase(std::remove_if(...), offers_.end());
```
Although the effect is the same since there will be only one element 
removed, pairing `std::remove_if` with `erase` and `end()` is more idiomatic.



src/tests/utils/task_scheduler.cpp
Lines 306 (patched)


`createCallAcknowledge(frameworkId, status.agent_id(), event.update())`



src/tests/utils/task_scheduler.cpp
Lines 332 (patched)


How about just `JSON::protobuf(event.failure())`?



src/tests/utils/task_scheduler.cpp
Lines 381 (patched)


Unfortunately there is no `createCallAccept` helper that takes a list of 
offers. How about adding one in `src/tests/mesos.hpp` then using it here?



src/tests/utils/task_scheduler.cpp
Lines 409 (patched)


Do you want to support launching more than one item within a single offer?



src/tests/utils/task_scheduler.cpp
Lines 456 (patched)


Instead of doing the wiring, we could instead make 

Review Request 70621: Used full paths as volume IDs for the test CSI plugin.

2019-05-09 Thread Chun-Hung Hsiao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70621/
---

Review request for mesos and Benjamin Bannier.


Bugs: MESOS-9395
https://issues.apache.org/jira/browse/MESOS-9395


Repository: mesos


Description
---

The full paths of simulated volumes are now stored in their IDs instead of in
their contextual information. This simplifies SLRP tests, and makes it cleaner
if we want to customize the contextual information in the future.


Diffs
-

  src/examples/test_csi_plugin.cpp 03f782ead136a79c4c5b099496072933f6737598 
  src/tests/storage_local_resource_provider_tests.cpp 
ecd7aeef1cb3d1a5b4b3419dfd912d41a8c6 


Diff: https://reviews.apache.org/r/70621/diff/1/


Testing
---

make check


Thanks,

Chun-Hung Hsiao