-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/68400/#review207790
-----------------------------------------------------------




src/tests/containerizer/xfs_quota_tests.cpp
Line 299 (original), 302-303 (patched)
<https://reviews.apache.org/r/68400/#comment291289>

    Since these checks are not relevant to the test itself, would it be better
    to have a separate unit test that verifies `getDeviceForPath`?
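
    For illustration only, a separate test could look roughly like the sketch
    below. The fixture name, the `mountPoint`/`loopDevice` members, and the
    assumption that `getDeviceForPath` returns a `Try<std::string>` are all
    placeholders, not necessarily what this patch implements:

        TEST_F(ROOT_XFS_QuotaTest, GetDeviceForPath)
        {
          // The mount point should resolve to the loop device backing it.
          Try<std::string> device = getDeviceForPath(mountPoint);
          ASSERT_SOME(device);
          EXPECT_EQ(loopDevice, device.get());

          // A nonexistent path should yield an error rather than a device.
          EXPECT_ERROR(getDeviceForPath(path::join(mountPoint, "absent")));
        }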



src/tests/containerizer/xfs_quota_tests.cpp
Lines 343 (patched)
<https://reviews.apache.org/r/68400/#comment291293>

    I'm a bit confused here. We created a 10MB `file` and partially created a
    ~1MB `file2`, so wouldn't the total usage (`info->used`) be ~11MB? Why is
    `used >= info->used` true?
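
    If the partial write to `file2` is also charged against the project, I
    would have expected the assertion to go the other way, e.g. (a sketch
    only; `root` and `projectId` are placeholder names, and I'm assuming the
    existing `getProjectQuota` helper):

        Result<QuotaInfo> info = getProjectQuota(root, projectId);
        ASSERT_SOME(info);

        // Both the 10MB `file` and the partial ~1MB `file2` count against
        // the project, so usage should be at least 10MB.
        EXPECT_LE(Megabytes(10), info->used);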


- Chun-Hung Hsiao


On Aug. 16, 2018, 11:52 p.m., James Peach wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/68400/
> -----------------------------------------------------------
> 
> (Updated Aug. 16, 2018, 11:52 p.m.)
> 
> 
> Review request for mesos, Chun-Hung Hsiao, Ilya Pronin, Jie Yu, Joseph Wu, 
> and Jiang Yan Xu.
> 
> 
> Bugs: MESOS-5158
>     https://issues.apache.org/jira/browse/MESOS-5158
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> To manage persistent volumes in the `disk/xfs` isolator we need
> to keep track of the block device that hosts a given filesystem
> path. Expose the `getDeviceForPath` helper API to directly return
> this information.
> 
> 
> Diffs
> -----
> 
>   src/slave/containerizer/mesos/isolators/xfs/utils.hpp 
> e269eb5489ceed8937775a30b3420f7960ab4cd4 
>   src/slave/containerizer/mesos/isolators/xfs/utils.cpp 
> b691c76e94f686e1a866380dca99ef0fa18e2f49 
>   src/tests/containerizer/xfs_quota_tests.cpp 
> 59ec182c1c3af3978156044f03d9e3d784d51fce 
> 
> 
> Diff: https://reviews.apache.org/r/68400/diff/1/
> 
> 
> Testing
> -------
> 
> sudo make check (Fedora 28)
> 
> 
> Thanks,
> 
> James Peach
> 
>
