-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30546/#review75951
-----------------------------------------------------------


This is looking good!


include/mesos/mesos.proto
<https://reviews.apache.org/r/30546/#comment123294>

    Can you add a comment about those counters?



src/slave/containerizer/isolators/cgroups/mem.hpp
<https://reviews.apache.org/r/30546/#comment123362>

    Not your fault, but please remove the `virtual` modifier.
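    
    For context, a standalone sketch (hypothetical `Isolator`/`MemIsolator` names, not the real ones) showing that `override` alone is sufficient, since it already implies the function is virtual:
    
    ```
    #include <iostream>
    
    struct Isolator {
      virtual ~Isolator() = default;
      virtual int usage() { return 0; }
    };
    
    // `override` alone is enough; repeating `virtual` here is redundant
    // because `override` already implies the function is virtual.
    struct MemIsolator : Isolator {
      int usage() override { return 1; }
    };
    
    int main() {
      Isolator* isolator = new MemIsolator();
      std::cout << isolator->usage() << std::endl;  // dispatches to MemIsolator
      delete isolator;
      return 0;
    }
    ```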



src/slave/containerizer/isolators/cgroups/mem.hpp
<https://reviews.apache.org/r/30546/#comment123299>

    Can we merge these three using a hashmap here:
    
    ```
    hashmap<pressure::Level, Owned<pressure::Listener>> pressureListeners;
    ```



src/slave/containerizer/isolators/cgroups/mem.hpp
<https://reviews.apache.org/r/30546/#comment123359>

    Please add a comment about this function.



src/slave/containerizer/isolators/cgroups/mem.hpp
<https://reviews.apache.org/r/30546/#comment123361>

    As I mentioned below, this function is a continuation of `usage`, so it should be named `_usage`.
    
    Also, move the declaration right above `_cleanup`.



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123364>

    ```
    return await(
        (info->pressureListener[LOW].isSome() ?
         info->pressureListener[LOW].get()->counter() :
         Failure("LOW pressure listener unavailable")),
        (info->pressureListener[MEDIUM].isSome() ?
         info->pressureListener[MEDIUM].get()->counter() :
         Failure("MEDIUM pressure listener unavailable")),
        (info->pressureListener[CRITICAL].isSome() ?
         info->pressureListener[CRITICAL].get()->counter() :
         Failure("CRITICAL pressure listener unavailable")))
      .then(defer(PID<CgroupsMemIsolatorProcess>(this),
                  &CgroupsMemIsolatorProcess::_usage,
                  containerId,
                  result,
                  lambda::_1));
    ```



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123308>

    s/pressureWaited/_usage/
    
    This is a continuation of `usage`.



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123302>

    You can use a loop here:
    
    ```
    vector<pressure::Level> levels = {
      pressure::LOW,
      pressure::MEDIUM,
      pressure::CRITICAL
    };
    
    foreach (pressure::Level level, levels) {
      listener = Listener::create(..., level);
      info->pressureListeners[level] = listener.get();
    }
    ```
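    
    For illustration, a self-contained version of this pattern, with `std::map` and `std::unique_ptr` standing in for `hashmap` and `Owned`, and a dummy `Listener` in place of the real pressure listener:
    
    ```
    #include <iostream>
    #include <map>
    #include <memory>
    
    // Stand-ins for pressure::Level and pressure::Listener, purely
    // illustrative; the real types live in the cgroups pressure code.
    enum class Level { LOW, MEDIUM, CRITICAL };
    
    struct Listener {
      explicit Listener(Level level) : level(level) {}
      Level level;
    };
    
    int main() {
      // One container keyed by level instead of three separate members.
      std::map<Level, std::unique_ptr<Listener>> pressureListeners;
    
      for (Level level : {Level::LOW, Level::MEDIUM, Level::CRITICAL}) {
        pressureListeners[level] = std::make_unique<Listener>(level);
      }
    
      std::cout << pressureListeners.size() << std::endl;  // one entry per level
      return 0;
    }
    ```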



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123305>

    Instead of doing this, can you add a LOG(INFO) for each pressure listener creation?



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123365>

    Here, you can use `tuples::get<0>`, `tuples::get<1>` and `tuples::get<2>`.
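    
    The same indexed access with standard tuples, as a self-contained illustration (`std::get` standing in for `tuples::get`):
    
    ```
    #include <iostream>
    #include <tuple>
    
    int main() {
      // Illustrative stand-in: three counters, like the low/medium/critical
      // results that come back from await().
      std::tuple<int, int, int> counters(1, 2, 3);
    
      std::cout << std::get<0>(counters) << " "
                << std::get<1>(counters) << " "
                << std::get<2>(counters) << std::endl;
      return 0;
    }
    ```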



src/slave/containerizer/isolators/cgroups/mem.cpp
<https://reviews.apache.org/r/30546/#comment123366>

    ```
    Future<uint64_t> low = tuples::get<0>(future.get());
    if (low.isReady()) {
      result.set_mem_low_pressure_counter(low.get());
    } else {
      if (info->pressureListener[LOW].isSome()) {
        LOG(ERROR) << "Failed to listen on low pressure events "
                   << "for container " << containerId << ": "
                   << (low.isFailed() ? low.failure() : "discarded");
                   
        info->pressureListener[LOW] = None();
      }
    }
    
    ...
    ```



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123373>

    I suggest moving this test to cgroups_isolator_tests.cpp, or even creating a cgroups_memory_isolator_tests.cpp and moving tests accordingly.
    
    I don't think we should collect all slave-recovery-related tests in a single file, especially when this test exercises slave recovery for one specific isolator.



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123367>

    No need to use `this->`



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123368>

    s/> >/>>/



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123369>

    Why do you need a temp? Can you just write to the sandbox, which will get GCed?



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123370>

    No need to create a vector here; you can just do:
    
    ```
    driver.launchTasks(offers1.get()[0].id(), {tasks1});
    ```



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123371>

    Add a new line above.



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123372>

    Just do:
    
    ```
    ASSERT_LE(waited, Seconds(5));
    ```



src/tests/slave_recovery_tests.cpp
<https://reviews.apache.org/r/30546/#comment123374>

    Can you test the slave recovery in a separate test?


- Jie Yu


On Feb. 28, 2015, 1:43 a.m., Chi Zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30546/
> -----------------------------------------------------------
> 
> (Updated Feb. 28, 2015, 1:43 a.m.)
> 
> 
> Review request for mesos, Dominic Hamon, Ian Downes, and Jie Yu.
> 
> 
> Bugs: MESOS-2136
>     https://issues.apache.org/jira/browse/MESOS-2136
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> MemIsolator: expose memory pressures for containers.
> 
> 
> Diffs
> -----
> 
>   include/mesos/mesos.proto 9df972d750ce1e4a81d2e96cc508d6f83cad2fc8 
>   src/slave/containerizer/isolators/cgroups/mem.hpp 
> a00f723de61b9bccd76a2948b6d14fde7a993a8d 
>   src/slave/containerizer/isolators/cgroups/mem.cpp 
> 6299ca4ba01b65daa3d75c64150e2738e32b841e 
>   src/tests/slave_recovery_tests.cpp 53adae0118a26e6d25a9ff20c6374cc8e73275b1 
> 
> Diff: https://reviews.apache.org/r/30546/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Chi Zhang
> 
>
