[
https://issues.apache.org/jira/browse/HBASE-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924717#action_12924717
]
stack commented on HBASE-2462:
------------------------------
hbase.regionserver.hlog.blocksize == fs default block size. Better to use the
fs default block size rather than a separate hlog setting.
What's the rationale for rule 4? Do you instead mean the compaction threshold here?
Sorry, what's max(files)? The largest file? And is sum(files) over all files or
just some subset (you keep adding to the subset until you are > 150% of the
biggest?)
So, you think this algorithm will make for fewer compactions yet keep the count
of files low?
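For concreteness, here is one possible reading of the rule being questioned, as a standalone sketch: starting from the largest store file, keep adding files to the candidate subset until the subset's total size exceeds 150% of the biggest file. The class and method names, and the 1.5 ratio, are assumptions drawn from this comment, not the committed HBase algorithm:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CompactionSelectionSketch {

    // Hypothetical selection rule: sort store file sizes largest-first,
    // then accumulate files into the subset until sum(subset) > ratio * max(files).
    static List<Long> selectForCompaction(List<Long> fileSizes, double ratio) {
        List<Long> sorted = new ArrayList<>(fileSizes);
        sorted.sort(Collections.reverseOrder());           // largest file first
        long max = sorted.isEmpty() ? 0 : sorted.get(0);   // max(files)
        List<Long> subset = new ArrayList<>();
        long sum = 0;
        for (long size : sorted) {
            subset.add(size);
            sum += size;
            if (sum > ratio * max) {                       // e.g. > 150% of the biggest
                break;
            }
        }
        return subset;
    }

    public static void main(String[] args) {
        // Sizes 100, 60, 30, 10: 100 alone is not > 150, but 100 + 60 = 160 is,
        // so only the two largest files are selected.
        System.out.println(selectForCompaction(List.of(100L, 60L, 30L, 10L), 1.5));
    }
}
```

Under this reading, sum(files) is taken over the selected subset only, which is what the question above is trying to pin down.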
> Review compaction heuristic and move compaction code out so standalone and
> independently testable
> -------------------------------------------------------------------------------------------------
>
> Key: HBASE-2462
> URL: https://issues.apache.org/jira/browse/HBASE-2462
> Project: HBase
> Issue Type: Improvement
> Reporter: stack
> Assignee: Jonathan Gray
> Priority: Critical
>
> Anything that improves our i/o profile makes hbase run smoother. Over in
> HBASE-2457, good work has been done already describing the tension between
> minimizing compactions and minimizing the count of store files. This issue is
> about following on from what has been done in 2457 and also breaking the
> hard-to-read compaction code out of Store.java into a standalone class that
> can be more easily tested (and more easily analyzed for its performance
> characteristics).
> If possible, in the refactor, we'd allow specification of alternate merge
> sort implementations.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.