[ 
https://issues.apache.org/jira/browse/KUDU-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918119#comment-16918119
 ] 

Andrew Wong commented on KUDU-2929:
-----------------------------------

That's an interesting idea. It's worth considering that maintenance ops can 
consume a large amount of memory, e.g. Todd mentioned in KUDU-2383 that he saw 
each maintenance thread using hundreds of MBs of RAM, so I'd be a little 
hesitant about always having threads that might perform memory-intensive ops. 
That said, I do like the idea of always performing the "correct" operation.

 

Another thought: it'd be great if we could formalize the "cost" of these 
operations (at least compactions) and the expected gain, and take those into 
account when deciding, similar to how we measure anchored memory. [~adar], do 
you have any thoughts on this? I think you were working in this area with the 
merge iterator work you did.
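As a rough illustration of the cost/benefit idea above, here is a minimal sketch of scoring ops by expected gain discounted by their cost. All names (MaintenanceOp, ram_anchored_mb, io_cost_mb, perf_gain, score) are hypothetical, not Kudu's actual maintenance manager API:

```python
# Hypothetical sketch: score maintenance ops by expected benefit vs. cost.
# Field names and weights are illustrative assumptions, not Kudu's API.
from dataclasses import dataclass

@dataclass
class MaintenanceOp:
    name: str
    ram_anchored_mb: float  # memory the op would release when run
    io_cost_mb: float       # estimated I/O required to perform the op
    perf_gain: float        # estimated long-term benefit (e.g. fewer rowsets)

def score(op: MaintenanceOp, memory_pressure: bool) -> float:
    """Combine freed memory and perf gain, discounted by the op's I/O cost."""
    benefit = op.perf_gain + (op.ram_anchored_mb if memory_pressure else 0.0)
    return benefit / (1.0 + op.io_cost_mb)

def pick_op(ops, memory_pressure):
    """Schedule the op with the best benefit-to-cost ratio."""
    return max(ops, key=lambda op: score(op, memory_pressure))
```

Under memory pressure a flush that anchors a lot of memory would still win, but a cheap high-gain compaction can outscore doing nothing; the exact weighting would need tuning.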

> Don't starve compactions under memory pressure
> ----------------------------------------------
>
>                 Key: KUDU-2929
>                 URL: https://issues.apache.org/jira/browse/KUDU-2929
>             Project: Kudu
>          Issue Type: Improvement
>          Components: perf, tablet
>            Reporter: Andrew Wong
>            Priority: Major
>
> When a server is under memory pressure, the maintenance manager will 
> exclusively look for the maintenance op that frees up the most memory. Some 
> operations, like compactions, do not register any amount of "anchored 
> memory" and effectively don't qualify for consideration.
> This means that when a tablet server is under memory pressure, compactions 
> will never be scheduled, even though compacting may actually reduce memory 
> usage (e.g. by combining many rowsets' worth of CFileReaders into a single 
> rowset). While it makes sense to prefer flushes over compactions, it 
> probably doesn't make sense to do nothing rather than compact.
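The policy the ticket argues for could be sketched as follows. The Op fields and select_op function are illustrative assumptions, not Kudu's actual MaintenanceOp interface:

```python
# Minimal sketch of a non-starving selection policy: under memory pressure,
# prefer the op that frees the most memory, but fall back to the best
# compaction instead of scheduling nothing. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Op:
    name: str
    ram_anchored_mb: float  # memory the op would free when run
    perf_gain: float        # estimated performance benefit (e.g. compaction)

def select_op(ops, under_memory_pressure: bool) -> Optional[Op]:
    if under_memory_pressure:
        freeing = [op for op in ops if op.ram_anchored_mb > 0]
        if freeing:
            # Flushes and other memory-anchoring ops still take priority.
            return max(freeing, key=lambda op: op.ram_anchored_mb)
    # No memory-freeing op available (or no pressure): run the op with the
    # best expected gain rather than doing nothing.
    useful = [op for op in ops if op.perf_gain > 0]
    return max(useful, key=lambda op: op.perf_gain) if useful else None
```

The key difference from the current behavior described above is the fallback branch: when nothing registers anchored memory, a compaction still gets scheduled.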



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
