-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/62643/#review187639
-----------------------------------------------------------




include/mesos/agent/agent.proto
Lines 347 (patched)
<https://reviews.apache.org/r/62643/#comment264705>

    Can you elaborate a bit on why `repeated Resource resources = 
4;` is wrapped in a new message `TaskResourceLimitation`?
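
    The two layouts being contrasted could be sketched as follows (field
    names and numbers are illustrative only, not taken from the patch):

```protobuf
// Option A: embed the resources directly in the enclosing message:
//
//   repeated Resource resources = 4;

// Option B: wrap them in a dedicated message, which leaves room to
// attach more context (e.g. a reason) later without touching the
// enclosing message again.
message TaskResourceLimitation {
  repeated Resource resources = 1;
}

// ... and in the enclosing message:
//
//   optional TaskResourceLimitation limitation = 4;
```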



src/slave/containerizer/mesos/containerizer.cpp
Lines 2698 (patched)
<https://reviews.apache.org/r/62643/#comment264699>

    Kill this blank line.


- Qian Zhang


On Oct. 10, 2017, 8:08 a.m., James Peach wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/62643/
> -----------------------------------------------------------
> 
> (Updated Oct. 10, 2017, 8:08 a.m.)
> 
> 
> Review request for mesos, Jie Yu and Qian Zhang.
> 
> 
> Bugs: MESOS-7963
>     https://issues.apache.org/jira/browse/MESOS-7963
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Updated the agent API so that we can propagate information
> from the container termination up to the `WaitNestedContainer`
> response. We now propagate resources all the way from the
> container limitation to the `WaitNestedContainer` response so
> that an executor can know specifically which resource limit
> violation caused the container termination.
> 
> 
> Diffs
> -----
> 
>   include/mesos/agent/agent.proto 7c8c8a7d8298e91e4e002327b3b27d4c74b5cbae 
>   include/mesos/slave/containerizer.proto 
> 84f9ca765fe6e29ddd2f7956ba0976e10b21d685 
>   include/mesos/v1/agent/agent.proto 3e199124b23fa027232790d99370fe2f33660096 
>   src/slave/containerizer/mesos/containerizer.cpp 
> 4d5dc13f363f5d8886983d7dd06a5cecc177c345 
>   src/slave/http.cpp f4c3e6b5ec2943806c78ba3ca524588a655c5bb7 
> 
> 
> Diff: https://reviews.apache.org/r/62643/diff/2/
> 
> 
> Testing
> -------
> 
> make check (Fedora 26)
> 
> 
> Thanks,
> 
> James Peach
> 
>
