Re: Review Request 66103: Introduce mesos disk collector

2018-03-23 Thread Aurora ReviewBot

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66103/#review199901
---


Ship it!




Master (f32086d) is green with this patch.
  ./build-support/jenkins/build.sh

I will refresh this build result if you post a review containing "@ReviewBot 
retry"

- Aurora ReviewBot


On March 23, 2018, 5:39 p.m., Reza Motamedi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66103/
> ---
> 
> (Updated March 23, 2018, 5:39 p.m.)
> 
> 
> Review request for Aurora, David McLaughlin, Daniel Knightly, Franck Cuny, 
> Jordan Ly, Santhosh Kumar Shanmugham, and Stephan Erb.
> 
> 
> Repository: aurora
> 
> 
> Description
> ---
> 
> When disk isolation is enabled in a Mesos agent, the agent calculates the 
> disk usage for each container. Thermos Observer also monitors disk usage 
> using `twitter.common.dirutil`, essentially repeating the work already done 
> by the agent. In practice, we see that disk monitoring is one of the most 
> expensive resource monitoring tasks. For instance, when there are deeply 
> nested directories, the CPU utilization of the observer process can easily 
> reach 1.5 CPUs. It would be ideal to delegate the disk monitoring task to 
> the agent and do it only once. With this approach, when disk collection 
> improves in the agent (for instance, by implementing XFS isolation), we 
> simply benefit from it without any code change. More information about the 
> problem is provided in AURORA-1918.
> 
> This patch introduces `MesosDiskCollector`, which queries the agent's API 
> endpoint to look up `disk_used_bytes`. Note that there is also resource 
> monitoring in the Thermos executor; for now, I left the disk collector 
> there using the `du` implementation. That can be changed in a later patch.
> 
> I modified some Vagrant config files, including `aurora-executor.service` 
> and `etc_mesos-slave/isolation`, for testing. They can be left as-is; I 
> included them in this patch to show how this would work end-to-end.
> 
> 
> Diffs
> -
> 
>   3rdparty/python/requirements.txt 4ac242cfa2c1c19cb7447816ab86e748839d3d11 
>   RELEASE-NOTES.md 51ab6c724694244bf616b29e9beace4a4a3f5252 
>   docs/reference/observer-configuration.md 
> 8a443c94f7f37f9454989781f722101a97c99f15 
>   examples/jobs/hello_world.aurora 5401bfebe753b5e53abd08baeac501144ced9b5a 
>   examples/vagrant/mesos_config/etc_mesos-slave/isolation 
> 1a7028ffc70116b104ef3ad22b7388f637707a0f 
>   examples/vagrant/systemd/thermos.service 
> 01925bcd2ae44f100df511f3c3951c3f5a1a72aa 
>   src/main/python/apache/aurora/tools/thermos_observer.py 
> dd9f0c46ceac9e939b1b763073314161de0ea614 
>   src/main/python/apache/thermos/monitoring/BUILD 
> 65ba7088f65e7baa5d30744736ba456b46a55e86 
>   src/main/python/apache/thermos/monitoring/disk.py 
> 986d33a5000f8d5db15cb639c81f8b1d756ffa05 
>   src/main/python/apache/thermos/monitoring/resource.py 
> adcdc751c03460dc801a18278faa96d6bd64722b 
>   src/main/python/apache/thermos/observer/task_observer.py 
> a6870d48bddf2a2ccede7bb68195f2baae1d0e47 
>   
> src/test/python/apache/aurora/executor/common/test_resource_manager_integration.py
>  fe74bd1d3ecd89fca1b5b2251202cbbc0f24 
>   src/test/python/apache/thermos/monitoring/BUILD 
> 8f2b39336dce6c7b580e6ba0009f60afdcb89179 
>   src/test/python/apache/thermos/monitoring/test_disk.py 
> 362393bfd1facf3198e2d438d0596b16700b72b8 
>   src/test/python/apache/thermos/monitoring/test_resource.py 
> e577e552d4ee1807096a15401851bb9fd95fa426 
> 
> 
> Diff: https://reviews.apache.org/r/66103/diff/10/
> 
> 
> Testing
> ---
> 
> - I added unit tests.
> - Tested in Vagrant and it works as intended.
> - I also built and deployed it in our test environment. To measure the 
> performance improvement, I created jobs with nested folders and observed at 
> least a 60% reduction in CPU utilization of the Observer process (from 1.5 
> CPU cores to 0.4 CPU cores).
> 
> Here is one specific test setup: on two hosts I created two tasks. Each 
> task creates identical nested directory structures and files in them. The 
> overall size is 30GB. test_host_1 runs the current version of the Observer 
> and test_host_2 runs the Observer with this patch and also has 
> mesos_disk_collection enabled. The results are as follows:
> 
> ```
> rezam[7]TEST_HOST_1 ~ $ while true; do echo `date`; curl localhost:1338/vars 
> -s | grep cpu; sleep 10; done
> Thu Mar 22 04:36:17 UTC 2018
> observer.observer_cpu 108.9
> Thu Mar 22 04:36:27 UTC 2018
> observer.observer_cpu 123.2
> Thu Mar 22 04:36:38 UTC 2018
> observer.observer_cpu 123.2
> Thu Mar 22 04:36:48 UTC 2018
> observer.observer_cpu 123.2
> Thu Mar 22 04:36:58 UTC 2018
> observer.observer_cpu 111.0
> Thu Mar 22 04:37:08 UTC 2018
> ```

Re: Review Request 66103: Introduce mesos disk collector

2018-03-23 Thread Reza Motamedi

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66103/
---

(Updated March 23, 2018, 5:39 p.m.)


Review request for Aurora, David McLaughlin, Daniel Knightly, Franck Cuny, 
Jordan Ly, Santhosh Kumar Shanmugham, and Stephan Erb.


Changes
---

Updating the BUILD file to include the dependencies of the `requests` library.


Repository: aurora


Description
---

When disk isolation is enabled in a Mesos agent, the agent calculates the 
disk usage for each container. Thermos Observer also monitors disk usage 
using `twitter.common.dirutil`, essentially repeating the work already done 
by the agent. In practice, we see that disk monitoring is one of the most 
expensive resource monitoring tasks. For instance, when there are deeply 
nested directories, the CPU utilization of the observer process can easily 
reach 1.5 CPUs. It would be ideal to delegate the disk monitoring task to 
the agent and do it only once. With this approach, when disk collection 
improves in the agent (for instance, by implementing XFS isolation), we 
simply benefit from it without any code change. More information about the 
problem is provided in AURORA-1918.

This patch introduces `MesosDiskCollector`, which queries the agent's API 
endpoint to look up `disk_used_bytes`. Note that there is also resource 
monitoring in the Thermos executor; for now, I left the disk collector there 
using the `du` implementation. That can be changed in a later patch.
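
For reference, a minimal sketch of the kind of lookup this collector performs 
against the agent's `/containers` endpoint (the function name and error 
handling here are illustrative, not the patch's actual code):

```python
import requests

def mesos_disk_used_bytes(agent_url, container_id):
  # Ask the agent for per-container resource statistics. With the
  # disk/du isolator enabled, each entry carries `disk_used_bytes`.
  response = requests.get('%s/containers' % agent_url, timeout=5)
  response.raise_for_status()
  for container in response.json():
    if container.get('container_id') == container_id:
      return container.get('statistics', {}).get('disk_used_bytes', 0)
  return 0

# Example: mesos_disk_used_bytes('http://localhost:5051', 'abc-123')
```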

I modified some Vagrant config files, including `aurora-executor.service` and 
`etc_mesos-slave/isolation`, for testing. They can be left as-is; I included 
them in this patch to show how this would work end-to-end.


Diffs (updated)
-

  3rdparty/python/requirements.txt 4ac242cfa2c1c19cb7447816ab86e748839d3d11 
  RELEASE-NOTES.md 51ab6c724694244bf616b29e9beace4a4a3f5252 
  docs/reference/observer-configuration.md 
8a443c94f7f37f9454989781f722101a97c99f15 
  examples/jobs/hello_world.aurora 5401bfebe753b5e53abd08baeac501144ced9b5a 
  examples/vagrant/mesos_config/etc_mesos-slave/isolation 
1a7028ffc70116b104ef3ad22b7388f637707a0f 
  examples/vagrant/systemd/thermos.service 
01925bcd2ae44f100df511f3c3951c3f5a1a72aa 
  src/main/python/apache/aurora/tools/thermos_observer.py 
dd9f0c46ceac9e939b1b763073314161de0ea614 
  src/main/python/apache/thermos/monitoring/BUILD 
65ba7088f65e7baa5d30744736ba456b46a55e86 
  src/main/python/apache/thermos/monitoring/disk.py 
986d33a5000f8d5db15cb639c81f8b1d756ffa05 
  src/main/python/apache/thermos/monitoring/resource.py 
adcdc751c03460dc801a18278faa96d6bd64722b 
  src/main/python/apache/thermos/observer/task_observer.py 
a6870d48bddf2a2ccede7bb68195f2baae1d0e47 
  
src/test/python/apache/aurora/executor/common/test_resource_manager_integration.py
 fe74bd1d3ecd89fca1b5b2251202cbbc0f24 
  src/test/python/apache/thermos/monitoring/BUILD 
8f2b39336dce6c7b580e6ba0009f60afdcb89179 
  src/test/python/apache/thermos/monitoring/test_disk.py 
362393bfd1facf3198e2d438d0596b16700b72b8 
  src/test/python/apache/thermos/monitoring/test_resource.py 
e577e552d4ee1807096a15401851bb9fd95fa426 


Diff: https://reviews.apache.org/r/66103/diff/10/

Changes: https://reviews.apache.org/r/66103/diff/9-10/


Testing
---

- I added unit tests.
- Tested in Vagrant and it works as intended.
- I also built and deployed it in our test environment. To measure the 
performance improvement, I created jobs with nested folders and observed at 
least a 60% reduction in CPU utilization of the Observer process (from 1.5 
CPU cores to 0.4 CPU cores).

Here is one specific test setup: on two hosts I created two tasks. Each task 
creates identical nested directory structures and files in them. The overall 
size is 30GB. test_host_1 runs the current version of the Observer and 
test_host_2 runs the Observer with this patch and also has 
mesos_disk_collection enabled. The results are as follows:

```
rezam[7]TEST_HOST_1 ~ $ while true; do echo `date`; curl localhost:1338/vars -s 
| grep cpu; sleep 10; done
Thu Mar 22 04:36:17 UTC 2018
observer.observer_cpu 108.9
Thu Mar 22 04:36:27 UTC 2018
observer.observer_cpu 123.2
Thu Mar 22 04:36:38 UTC 2018
observer.observer_cpu 123.2
Thu Mar 22 04:36:48 UTC 2018
observer.observer_cpu 123.2
Thu Mar 22 04:36:58 UTC 2018
observer.observer_cpu 111.0
Thu Mar 22 04:37:08 UTC 2018
observer.observer_cpu 111.0
Thu Mar 22 04:37:18 UTC 2018
observer.observer_cpu 111.0


rezam[7]TEST_HOST_2 ~ $ while true; do echo `date`; curl localhost:1338/vars -s 
| grep cpu; sleep 10; done
Thu Mar 22 04:36:20 UTC 2018
observer.observer_cpu 1.3
Thu Mar 22 04:36:30 UTC 2018
observer.observer_cpu 1.3
Thu Mar 22 04:36:40 UTC 2018
observer.observer_cpu 1.3
Thu Mar 22 04:36:50 UTC 2018
observer.observer_cpu 1.2
Thu Mar 22 04:37:00 UTC 2018
observer.observer_cpu 1.2
Thu Mar 22 04:37:10 UTC 2018
observer.observer_cpu 1.2
```

Re: Review Request 66192: [WIP] Variable group size updates

2018-03-23 Thread Stephan Erb


> On March 21, 2018, 10:33 p.m., Jordan Ly wrote:
> > I am mostly concerned about the UX. Users will be able to specify both a 
> > batch size and a variable batch size, and must know that variable batch 
> > sizing takes precedence over other strategies.
> > 
> > Is it worth making a larger investment in the Thrift interface to avoid 
> > the ambiguity? Or refactoring the current batching strategy to use the 
> > new variable codepath (a single batch size specified to the variable 
> > strategy should behave the same as the current implementation)?
> 
> Santhosh Kumar Shanmugham wrote:
> +1 I think it should be an either-or. There should be logic in the API to 
> clearly message this case.
> 
> Renan DelValle wrote:
> I've thought about refactoring the batching strategy, and I'm willing to 
> travel down that path: the current batch strategy is a specific case (a 
> single step, repeated over and over, as Jordan pointed out) of the 
> functionality I'm trying to achieve with this patch. I'll have to do 
> something like wrap around a single-item list, which might look funky, but 
> I'll give it a shot.
> 
> Changing the Thrift interface is always tricky because it almost always 
> results in some yak shaving, but if the consensus is that this is the 
> better, clearer approach, then I'm game to implement it that way.
> 
> Appreciate all the feedback!

Just an idea: Would it help to cap `step` at the length of `maxActive`, so that 
the last batch size is used for all remaining instances? This would also give 
us a somewhat nice deprecation path for the existing fixed batch size.

In addition, I believe you will need to account for that case anyway: a user 
might accidentally provide a list of batches whose sum is smaller than the 
instance count of the update. We need to make sure that the update request 
either gets rejected at submission or runs through properly. The proposal 
above would allow the update to run through.
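
To illustrate the clamping idea in Python pseudocode (the real strategy would 
be Java in `VariableBatchStrategy.java`; this helper and its name are 
hypothetical):

```python
def group_sizes(batch_sizes, instance_count):
  # Yield update group sizes until all instances are covered, reusing
  # the last configured batch size once the list is exhausted.
  covered, i = 0, 0
  while covered < instance_count:
    size = batch_sizes[min(i, len(batch_sizes) - 1)]
    yield min(size, instance_count - covered)
    covered += size
    i += 1

# list(group_sizes([1, 5, 10], 30)) -> [1, 5, 10, 10, 4]
# A single-element list degenerates to today's fixed batch behavior:
# list(group_sizes([3], 10)) -> [3, 3, 3, 1]
```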


- Stephan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66192/#review199707
---


On March 21, 2018, 3:10 a.m., Renan DelValle wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66192/
> ---
> 
> (Updated March 21, 2018, 3:10 a.m.)
> 
> 
> Review request for Aurora, David McLaughlin, Santhosh Kumar Shanmugham, and 
> Stephan Erb.
> 
> 
> Repository: aurora
> 
> 
> Description
> ---
> 
> Adding support for variable group sizes when executing an update.
> 
> Design doc for this change is here: 
> https://docs.google.com/document/d/1xGk4ueH8YlmJCk6hQJh85u4to4M1VQD0l630IOchvgY/edit#heading=h.lg3hty82f5cz
> 
> I opted for the path of least resistance with regard to the Thrift changes, 
> as I didn't see any benefit in making the larger changes required to make 
> the interfaces a bit more flexible.
> 
> Requesting feedback on these changes and the approach from the community 
> before I proceed.
> 
> Tests will be added after the community approves of the direction and 
> approach.
> 
> Note to reviewers: the changes in ActiveLimitedStrategy.java move towards 
> getting rid of FluentIterable. I figured since I was touching that code, it 
> wouldn't hurt to test the Java 8 equivalent of it. I can drop that change 
> here and make it in a separate patch if desired.
> 
> 
> Diffs
> -
> 
>   api/src/main/thrift/org/apache/aurora/gen/api.thrift 
> ef754e32172e7490a47a13e7b526f243ffa3efeb 
>   api/src/main/thrift/org/apache/aurora/gen/storage.thrift 
> b79e2045ccda05d5058565f81988dfe33feea8f1 
>   src/main/java/org/apache/aurora/scheduler/storage/JobUpdateStore.java 
> b25f3831cecc58c90375c90b16142421f8f09e38 
>   src/main/java/org/apache/aurora/scheduler/storage/durability/Loader.java 
> 10864f122eff5027c88d835baae6de483d960218 
>   
> src/main/java/org/apache/aurora/scheduler/storage/durability/WriteRecorder.java
>  8d70cae35289a9e36142bab288cf0c9398ebd2d4 
>   
> src/main/java/org/apache/aurora/scheduler/storage/mem/MemJobUpdateStore.java 
> 9e86b9e276ea90a249284a824705b5bbf19dcbce 
>   
> src/main/java/org/apache/aurora/scheduler/thrift/SchedulerThriftInterface.java
>  9fc0416086dd3eb2e2f4e8f659da59fcdea2b22b 
>   
> src/main/java/org/apache/aurora/scheduler/updater/JobUpdateControllerImpl.java
>  f8be8058f3a80a18b999d2666e2adb33e1e55fef 
>   src/main/java/org/apache/aurora/scheduler/updater/UpdateFactory.java 
> 3992aa77fc305adc390a4aaeb1d3939d6241ddbd 
>   
> src/main/java/org/apache/aurora/scheduler/updater/strategy/ActiveLimitedStrategy.java
>  855ea9c20788b51695b7eff5ac0970f0d52a9546 
>   
> src/main/java/org/apache/aurora/scheduler/updater/strategy/VariableBatchStrategy.java
>  PRE-CREATION 
> 
> 
> Diff: 

Re: Review Request 66103: Introduce mesos disk collector

2018-03-23 Thread Stephan Erb


> On March 22, 2018, 11:31 p.m., Stephan Erb wrote:
> > src/test/python/apache/thermos/monitoring/BUILD
> > Lines 21 (patched)
> > 
> >
> > Requests has a few dependencies. I believe you need to list those here 
> > as well in order to ensure pants pulls in the correct versions.
> 
> Reza Motamedi wrote:
> I have built the observer with this patch a couple of times on Jenkins 
> boxes as well. 
> 
> 
> Are you referring to requests-kerberos and requests-mock? I will include 
> them here as well.

Please see this review: https://reviews.apache.org/r/64382/. The `requests` 
package depends on `certifi`, `chardet`, `idna`, and `urllib3`, and for 
proper pinning of transitive dependencies we must list those as well.
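
For illustration, the corresponding pins in `3rdparty/python/requirements.txt` 
would look something like this (version numbers are hypothetical; the real 
ones must satisfy the ranges `requests` declares):

```
requests==2.18.4
certifi==2018.1.18
chardet==3.0.4
idna==2.6
urllib3==1.22
```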


- Stephan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66103/#review199812
---


On March 23, 2018, 2:11 a.m., Reza Motamedi wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66103/
> ---
> 
> (Updated March 23, 2018, 2:11 a.m.)
> 
> 
> Review request for Aurora, David McLaughlin, Daniel Knightly, Franck Cuny, 
> Jordan Ly, Santhosh Kumar Shanmugham, and Stephan Erb.
> 
> 
> Repository: aurora
> 
> 
> Description
> ---
> 
> When disk isolation is enabled in a Mesos agent, the agent calculates the 
> disk usage for each container. Thermos Observer also monitors disk usage 
> using `twitter.common.dirutil`, essentially repeating the work already done 
> by the agent. In practice, we see that disk monitoring is one of the most 
> expensive resource monitoring tasks. For instance, when there are deeply 
> nested directories, the CPU utilization of the observer process can easily 
> reach 1.5 CPUs. It would be ideal to delegate the disk monitoring task to 
> the agent and do it only once. With this approach, when disk collection 
> improves in the agent (for instance, by implementing XFS isolation), we 
> simply benefit from it without any code change. More information about the 
> problem is provided in AURORA-1918.
> 
> This patch introduces `MesosDiskCollector`, which queries the agent's API 
> endpoint to look up `disk_used_bytes`. Note that there is also resource 
> monitoring in the Thermos executor; for now, I left the disk collector 
> there using the `du` implementation. That can be changed in a later patch.
> 
> I modified some Vagrant config files, including `aurora-executor.service` 
> and `etc_mesos-slave/isolation`, for testing. They can be left as-is; I 
> included them in this patch to show how this would work end-to-end.
> 
> 
> Diffs
> -
> 
>   3rdparty/python/requirements.txt 4ac242cfa2c1c19cb7447816ab86e748839d3d11 
>   RELEASE-NOTES.md 51ab6c724694244bf616b29e9beace4a4a3f5252 
>   docs/reference/observer-configuration.md 
> 8a443c94f7f37f9454989781f722101a97c99f15 
>   examples/jobs/hello_world.aurora 5401bfebe753b5e53abd08baeac501144ced9b5a 
>   examples/vagrant/mesos_config/etc_mesos-slave/isolation 
> 1a7028ffc70116b104ef3ad22b7388f637707a0f 
>   examples/vagrant/systemd/thermos.service 
> 01925bcd2ae44f100df511f3c3951c3f5a1a72aa 
>   src/main/python/apache/aurora/tools/thermos_observer.py 
> dd9f0c46ceac9e939b1b763073314161de0ea614 
>   src/main/python/apache/thermos/monitoring/BUILD 
> 65ba7088f65e7baa5d30744736ba456b46a55e86 
>   src/main/python/apache/thermos/monitoring/disk.py 
> 986d33a5000f8d5db15cb639c81f8b1d756ffa05 
>   src/main/python/apache/thermos/monitoring/resource.py 
> adcdc751c03460dc801a18278faa96d6bd64722b 
>   src/main/python/apache/thermos/observer/task_observer.py 
> a6870d48bddf2a2ccede7bb68195f2baae1d0e47 
>   
> src/test/python/apache/aurora/executor/common/test_resource_manager_integration.py
>  fe74bd1d3ecd89fca1b5b2251202cbbc0f24 
>   src/test/python/apache/thermos/monitoring/BUILD 
> 8f2b39336dce6c7b580e6ba0009f60afdcb89179 
>   src/test/python/apache/thermos/monitoring/test_disk.py 
> 362393bfd1facf3198e2d438d0596b16700b72b8 
>   src/test/python/apache/thermos/monitoring/test_resource.py 
> e577e552d4ee1807096a15401851bb9fd95fa426 
> 
> 
> Diff: https://reviews.apache.org/r/66103/diff/9/
> 
> 
> Testing
> ---
> 
> - I added unit tests.
> - Tested in Vagrant and it works as intended.
> - I also built and deployed it in our test environment. To measure the 
> performance improvement, I created jobs with nested folders and observed at 
> least a 60% reduction in CPU utilization of the Observer process (from 1.5 
> CPU cores to 0.4 CPU cores).
> 
> Here is one specific test setup: on two hosts I created two tasks. Each 
> task creates identical nested directory structures and files in them. The 
> overall size is 30GB.