This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/44948/

Mostly LGTM. Just a few additional comments.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 21 - 22)

    Not used anymore.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 26)

    This is not used.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 28 - 30)

    One blank line above and below as per convention.
    And I don't think any of the three are still used?

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 31)

    This is not used either.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 38 - 39)

    Not used anymore.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 63 - 66)

    There is some awkwardness with the -1: we don't use the last element in the 
range, so the check needs to be `ranges.range(i).end() - 1` as well.
    However, I wonder if we can just use inclusive (closed) right bounds. It 
seems to me the open right bound exists only because of the empty interval set 
check; see my comments below.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 69)

    s/open/closed/ if we agree on the comment on `nextProjectId()`.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 116)

    We should always add a CHECK if we expect it to never be none. In fact 
it'll never be an error either, because it simply returns `::getuid()`.
    The CHECK just makes it clear that we intentionally don't check other 
conditions: `CHECK_SOME(uid);`. Kinda like "This page is intentionally left 
blank".
    At this point there seems to be not much value in using `os::getuid()` 
rather than `::getuid()`, but let's keep the use of `os::getuid()` with this 
CHECK for the sake of "There should be one-- and preferably only one --obvious 
way" to get the uid.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 199)

    Otherwise it sounds like it encompasses both alive and known orphans and we 
should merge it with `orphans`. Keeping them separate is more explicit, so we 
should keep `alive` IMO.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 214 - 217)

    Unless there is a UUID collision this should never happen. I trust UUIDs, 
but we can do a CHECK here instead. :)
    CHECK(!infos.contains(containerId)) << "ContainerIDs should never collide";

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 221)

    Add a comment here. Something along the lines of:
    // We fail the isolator recovery upon failure in any container because
    // failing to get the project ID usually suggests some fatal issue
    // on the host.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 234)

    s/message/call later/ because it's not a 'message'.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 236)

    // Note that we don't wait for the result of the cleanups as we don't 
    // want to block agent recovery for unknown orphans.

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 406 - 413)

    We make a best effort to clean up, but we should still propagate the 
failure to the containerizer so it can do some containerizer-wide logging and 
metrics.
    How about:
    if (quotaStatus.isError() || projectStatus.isError()) {
      return Failure("Failed to cleanup '" + info->directory + "'");
    } else {
      return Nothing();
    }

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (line 407)

    Should we assume that freeProjectId already doesn't include 

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 419 - 430)

    This seems better:
    if (freeProjectIds.empty()) {
      return None();
    }

    prid_t projectId = freeProjectIds.begin()->lower();
    freeProjectIds -= projectId;
    return projectId;

src/slave/containerizer/mesos/isolators/xfs/disk.cpp (lines 420 - 422)

    Took another look at this: it seems that this won't happen. A few tests I 
wrote (plus the ones already in interval_tests.cpp) show that empty intervals 
get eliminated automatically.

src/slave/flags.cpp (line 777)

    s/range/ranges/ since this is what we actually support.

- Jiang Yan Xu

On April 6, 2016, 11:43 a.m., James Peach wrote:
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/44948/
> -----------------------------------------------------------
> (Updated April 6, 2016, 11:43 a.m.)
> Review request for mesos, Jie Yu and Jiang Yan Xu.
> Bugs: MESOS-4828
>     https://issues.apache.org/jira/browse/MESOS-4828
> Repository: mesos
> Description
> -------
> Track sandbox directory usage by dynamically assigning XFS project
> quotas. We track a range of XFS project IDs, assigning a project ID
> and a project quota to each sandbox as it is created. When the task
> reaches the quota, writes will fail with EDQUOT, and the task will have
> an opportunity to handle that.
> Quotas are not applied to volume resources since the isolator interface
> has no insight into the volume lifecycle. Thus it is not currently
> possible to accurately assign and reclaim project IDs.
> If LOW is the lower bound of the project ID range and HIGH is the upper
> bound, you can show the currently allocated project quotas using the
> xfs_quota command:
>   $ xfs_quota -x -c "report -a -n -L LOW -U HIGH"
> To show the project ID assigned to the file PATH, use the xfs_io command:
>   $ xfs_io -r -c stat PATH
> Diffs
> -----
>   src/Makefile.am 55d3b341361bed25f3aa966d77060c88be29e5b0 
>   src/slave/containerizer/mesos/containerizer.cpp 
> a5dd22380066aa85de04d485052084e2629681c0 
>   src/slave/containerizer/mesos/isolators/xfs/disk.hpp PRE-CREATION 
>   src/slave/containerizer/mesos/isolators/xfs/disk.cpp PRE-CREATION 
>   src/slave/containerizer/mesos/isolators/xfs/utils.hpp PRE-CREATION 
>   src/slave/containerizer/mesos/isolators/xfs/utils.cpp PRE-CREATION 
>   src/slave/flags.hpp 69e1b01e09d2a15bee5e0745b751f47aaefe3fbe 
>   src/slave/flags.cpp 315cf47d268bce0a0255a061d64e414c736c8125 
> Diff: https://reviews.apache.org/r/44948/diff/
> Testing
> -------
> Make check. Manual testing. Tests in subsequent patches.
> Thanks,
> James Peach