-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56797/#review166014
-----------------------------------------------------------


Ship it!




This brings up the discussion we had around `TaskHistoryPruner` design 
alternatives ([rb](https://reviews.apache.org/r/56575/)):
1. Load all expired tasks at once, filter and delete.
2. Load in smaller batch sizes (perhaps per job), filter, and delete (maybe 
also add a `Thread.sleep()` pause).

The takeaway lesson here is that converting tasks from `IScheduledTask` to 
`TaskStatus` in smaller batches, with delays in between, relieves heap 
pressure. By the same logic, I would expect pruning expired tasks in batches 
(option 2 above) to produce less heap pressure (even though it is not as 
efficient).

- Mehrdad Nurolahzade


On Feb. 17, 2017, 4:13 p.m., David McLaughlin wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56797/
> -----------------------------------------------------------
> 
> (Updated Feb. 17, 2017, 4:13 p.m.)
> 
> 
> Review request for Aurora, Mehrdad Nurolahzade and Zameer Manji.
> 
> 
> Repository: aurora
> 
> 
> Description
> -------
> 
> This is a small change to relieve GC pressure while explicit reconciliation 
> runs. It moves the IScheduledTask -> TaskStatus conversion into the batch 
> processing closure so that any object allocation and collection overhead is 
> delayed until the batch is actually processed. It has a noticeable effect on 
> GC for large numbers of RUNNING tasks.
> 
> 
> Diffs
> -----
> 
>   
> src/main/java/org/apache/aurora/scheduler/reconciliation/TaskReconciler.java 
> ec7ccafcd360c00beceb067963bc430b6b8d8256 
> 
> Diff: https://reviews.apache.org/r/56797/diff/
> 
> 
> Testing
> -------
> 
> This is running in prod at Twitter. With this change, our post-snapshot 
> stop-the-world GC hit is reduced dramatically, roughly 80% of the time.
> 
> 
> Thanks,
> 
> David McLaughlin
> 
>
