[
https://issues.apache.org/jira/browse/HBASE-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13911096#comment-13911096
]
Hadoop QA commented on HBASE-10591:
-----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12630838/hbase-10591_v3.patch
against trunk revision .
ATTACHMENT ID: 12630838
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 12 new
or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop
1.0 profile.
{color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop
1.1 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 lineLengths{color}. The patch introduces the following lines
longer than 100:
+ + "\"hbase.hregion.memstore.flush.size\" ("+ flushSize + ") is too small, which might cause "
+ + " must be between 1K and 16MB Set " + CONF_KEY + " to false at conf or table descriptor "
+ throw new DoNotRetryIOException("Replication scope for column family " + hcd.getNameAsString()
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.hadoop.hbase.client.TestFromClientSide
org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
Test results:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//testReport/
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output:
https://builds.apache.org/job/PreCommit-HBASE-Build/8792//console
This message is automatically generated.
> Sanity check table configuration in createTable
> -----------------------------------------------
>
> Key: HBASE-10591
> URL: https://issues.apache.org/jira/browse/HBASE-10591
> Project: HBase
> Issue Type: Improvement
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 0.99.0
>
> Attachments: hbase-10591_v1.patch, hbase-10591_v2.patch,
> hbase-10591_v3.patch, hbase-10591_v4.patch
>
>
> We had a cluster become completely inoperable because a couple of tables
> were erroneously created with MAX_FILESIZE set to 4K, which resulted in 180K
> regions in a short interval and brought the master down due to HBASE-4246.
> We can do some sanity checking in master.createTable() and reject such
> requests. We already check compression there, so it seems a good place.
> Alter table should check for this as well.
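The proposed check could be sketched roughly as below. This is an illustrative assumption, not HBase's actual implementation: the class, method name, and the 2MB lower bound are all hypothetical, chosen only to show the idea of rejecting a dangerously small MAX_FILESIZE (like the 4K value from the incident) before the table is created.

```java
import java.io.IOException;

/** Minimal sketch of a createTable-time sanity check (hypothetical, not HBase's code). */
public class TableSanityCheckSketch {

    // Hypothetical lower bound: a max file size below this would cause
    // excessive region splitting, as seen in the incident described above.
    static final long MIN_MAX_FILESIZE = 2L * 1024 * 1024; // 2MB

    /** Rejects a table descriptor whose MAX_FILESIZE is dangerously small. */
    static void checkMaxFileSize(long maxFileSize) throws IOException {
        // A negative value means "unset, use the cluster default", so only
        // explicitly configured values are validated here.
        if (maxFileSize >= 0 && maxFileSize < MIN_MAX_FILESIZE) {
            throw new IOException("MAX_FILESIZE (" + maxFileSize
                + ") is too small, which might cause over-splitting into tiny regions");
        }
    }

    public static void main(String[] args) throws IOException {
        checkMaxFileSize(10L * 1024 * 1024); // 10MB: accepted
        boolean rejected = false;
        try {
            checkMaxFileSize(4L * 1024);     // 4K, the value from the incident
        } catch (IOException e) {
            rejected = true;
        }
        System.out.println(rejected ? "rejected" : "accepted");
    }
}
```

The same guard would run from alter-table handling as well, so a bad value can neither create nor later corrupt a table's configuration.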
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)