[
https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279809#comment-16279809
]
Jingyun Tian commented on HBASE-19358:
--------------------------------------
[~carp84] here are my test results:
Splitting one 512MB HLog on a single regionserver:
!https://issues.apache.org/jira/secure/attachment/12900819/split-1-log.png!
We can see that in most situations the new logic performs better than the old
one.
The motivation for this improvement is that when a cluster has to restart, if
there are too many regions per regionserver, the restart is prone to failure
and we have to split one HLog at a time to avoid errors. So I tested how much
throughput a whole-cluster restart can reach with different thread counts.
Throughput when we restart a cluster with 18 regionservers and 18 datanodes:
!https://issues.apache.org/jira/secure/attachment/12900818/split_test_result.png!
The blue series represents the throughput of a cluster with 20000 regions
(1111 regions per rs), the red series 40000 regions (2222 per rs), and the
orange series 80000 regions (4444 per rs).
Here is the same data as a table, in case the chart is not clear:
!https://issues.apache.org/jira/secure/attachment/12900821/split-table.png!
Based on this chart, I think the time cost of restarting the whole cluster is
not related to the thread count. The more regions the HLog contains, the more
time it costs to split.
> Improve the stability of splitting log when do fail over
> --------------------------------------------------------
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
> Issue Type: Improvement
> Components: MTTR
> Affects Versions: 0.98.24
> Reporter: Jingyun Tian
> Assignee: Jingyun Tian
> Attachments: newLogic.jpg, previousLogic.jpg, split-1-log.png,
> split-table.png, split_test_result.png
>
>
> The way we split logs now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12899558/previousLogic.jpg!
> The problem is that the OutputSink writes the recovered edits during log
> splitting, which means it creates one WriterAndPath for each region. If the
> cluster is small and the number of regions per rs is large, it creates too
> many HDFS streams at the same time, and splitting is then prone to failure
> since each datanode needs to handle too many streams.
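> A minimal sketch of the old behaviour (aside from OutputSink and
> WriterAndPath, the names here are hypothetical, for illustration only):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> // Hypothetical stand-in for the real WriterAndPath: one open HDFS stream
> // plus the path of the recovered-edits file it writes to.
> class WriterAndPath {
>   final String path;
>   WriterAndPath(String region) { this.path = "recovered.edits/" + region; }
>   void write(byte[] edit) { /* append the edit to the open stream */ }
> }
>
> // Simplified model of the old OutputSink: one writer per region.
> class OldOutputSink {
>   private final Map<String, WriterAndPath> writers = new HashMap<>();
>
>   void append(String region, byte[] edit) {
>     // A stream is opened the first time each region is seen, so an HLog
>     // touching N regions keeps N streams open at the same time.
>     writers.computeIfAbsent(region, WriterAndPath::new).write(edit);
>   }
> }
> {code}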
> Thus I came up with a new way to split logs:
> !https://issues.apache.org/jira/secure/attachment/12899557/newLogic.jpg!
> We cache the recovered edits until they exceed the memory limit we set or we
> reach the end of the log, then a thread pool does the rest: writes them to
> files and moves them to their destinations.
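> A minimal sketch of the new logic, assuming a simple in-memory cache keyed
> by region (the names are hypothetical, not the actual patch):
> {code:java}
> import java.util.ArrayList;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> // Simplified model of the new logic: cache edits in memory and hand full
> // batches to a fixed-size writer pool, so the number of open streams is
> // bounded by the pool size instead of by the number of regions.
> class BufferedOutputSink {
>   private final Map<String, List<byte[]>> cache = new HashMap<>();
>   private final ExecutorService writerPool;
>   private final long memoryLimit;
>   private long cachedBytes = 0;
>
>   BufferedOutputSink(int writerThreads, long memoryLimit) {
>     this.writerPool = Executors.newFixedThreadPool(writerThreads);
>     this.memoryLimit = memoryLimit;
>   }
>
>   synchronized void append(String region, byte[] edit) {
>     cache.computeIfAbsent(region, r -> new ArrayList<>()).add(edit);
>     cachedBytes += edit.length;
>     if (cachedBytes >= memoryLimit) {
>       flush(); // memory limit reached, drain the cache to the pool
>     }
>   }
>
>   // Called when the memory limit is hit and once more at the end of the log.
>   synchronized void flush() {
>     for (List<byte[]> batch : cache.values()) {
>       writerPool.submit(() -> {
>         // Each task opens one stream, writes its batch to a temporary
>         // file, then moves the file into the region's recovered.edits dir.
>       });
>     }
>     cache.clear();
>     cachedBytes = 0;
>   }
> }
> {code}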
> The biggest benefit is that we can control the number of streams created
> during log splitting: it will not exceed
> *_hbase.regionserver.wal.max.splitters *
> hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was
> *_hbase.regionserver.wal.max.splitters * the number of regions the hlog
> contains_*.
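> For example, assuming the default values (2 for
> hbase.regionserver.wal.max.splitters and 3 for
> hbase.regionserver.hlog.splitlog.writer.threads, if I read the configs
> right), the new logic opens at most 2 * 3 = 6 streams per regionserver,
> while the old logic splitting an hlog that touches 4444 regions could open
> up to 2 * 4444 = 8888 streams.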