[
https://issues.apache.org/jira/browse/HBASE-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134282#comment-14134282
]
Hadoop QA commented on HBASE-2821:
----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12668787/HBASE-2821.patch
against trunk revision .
ATTACHMENT ID: 12668787
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 16 new
or modified tests.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:red}-1 findbugs{color}. The patch appears to introduce 1 new
Findbugs (version 2.0.3) warning.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines
longer than 100 characters.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.hadoop.hbase.regionserver.TestHRegion
{color:red}-1 core zombie tests{color}. There is 1 zombie test:
Test results:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//testReport/
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output:
https://builds.apache.org/job/PreCommit-HBASE-Build/10891//console
This message is automatically generated.
> Keep young storefiles at lower replication
> ------------------------------------------
>
> Key: HBASE-2821
> URL: https://issues.apache.org/jira/browse/HBASE-2821
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: Todd Lipcon
> Assignee: Andrew Purtell
> Fix For: 2.0.0
>
> Attachments: HBASE-2821.patch, HBASE-2821.patch, HBASE-2821.patch,
> lifetime-distribution.png, storefile_age.pl
>
>
> jgray and I were brainstorming some ideas about this:
> In a typical heavy-write scenario, many store files do not last very long.
> They're flushed and then within a small number of seconds a compaction runs
> and they get deleted. For these "short lifetime" store files, it's less
> likely that a failure will occur during the window in which they're valid.
> So, I think we can consider some optimizations like the following:
> - Flush files at replication count 2. Scan once a minute for any store files
> in the region that are older than 2 minutes. If they're found, increase their
> replication to 3. (alternatively, queue them to avoid scanning)
> - More dangerous: flush files at replication count 1, but don't count them
> when figuring log expiration. So, if they get lost, we force log splitting to
> recover.
> The performance gain here is that we avoid the network and disk transfer of
> writing the third replica for a file that we're just about to delete anyway.
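The first idea above (flush at replication 2, then periodically bump files older than 2 minutes up to 3) can be sketched roughly as follows. This is a minimal, self-contained simulation with hypothetical names ({{StoreFileInfo}}, {{scan}}); a real implementation inside the regionserver would instead call HDFS's {{FileSystem.setReplication(path, (short) 3)}} on each qualifying store file path:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the periodic "bump young store files to full replication" scan.
// StoreFileInfo and scan() are hypothetical stand-ins for illustration; the
// real code would track actual store file paths and use HDFS setReplication.
public class YoungFileReplicationScanner {
    // Files older than this are considered "settled" and get full replication.
    static final long AGE_THRESHOLD_MS = 2 * 60 * 1000; // 2 minutes

    static class StoreFileInfo {
        final String path;
        final long creationTimeMs;
        short replication;

        StoreFileInfo(String path, long creationTimeMs, short replication) {
            this.path = path;
            this.creationTimeMs = creationTimeMs;
            this.replication = replication;
        }
    }

    /** Runs once a minute; returns the files whose replication was raised. */
    static List<StoreFileInfo> scan(List<StoreFileInfo> storeFiles, long nowMs) {
        List<StoreFileInfo> bumped = new ArrayList<>();
        for (StoreFileInfo f : storeFiles) {
            if (f.replication < 3 && nowMs - f.creationTimeMs >= AGE_THRESHOLD_MS) {
                // In HDFS: fs.setReplication(new Path(f.path), (short) 3);
                f.replication = 3;
                bumped.add(f);
            }
        }
        return bumped;
    }
}
```

A queue of flush times, as the parenthetical suggests, would avoid rescanning the whole store file list on every pass, since files become eligible in flush order.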
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)