[ 
https://issues.apache.org/jira/browse/HBASE-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13910022#comment-13910022
 ] 

Lars Hofhansl commented on HBASE-10591:
---------------------------------------

The limits look fine to me. The only place where we might violate them is our 
own tests (small region sizes for many splits, etc.).

Should we limit the valid BLOCKSIZE too? I've heard quite often of folks mixing 
up the HFile BLOCKSIZE and the HDFS block size. Should we limit BLOCKSIZE to 
1KB <= BLOCKSIZE <= 1MB? Or maybe 1KB <= BLOCKSIZE <= 10MB?
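A check along those lines could be a simple bounds test at table creation time. This is only an illustrative sketch, not the actual HBase code: the class, method, and constant names are hypothetical, and the 1KB/10MB bounds are the ones floated above.

```java
// Hypothetical sketch of a BLOCKSIZE sanity check as discussed above.
// Names and bounds are illustrative, not from the HBase codebase.
public class TableConfigSanityCheck {

    // Assumed bounds from the discussion: 1KB <= BLOCKSIZE <= 10MB.
    private static final long MIN_BLOCKSIZE = 1024L;               // 1 KB
    private static final long MAX_BLOCKSIZE = 10L * 1024 * 1024;   // 10 MB

    /** Rejects an HFile BLOCKSIZE outside the allowed range. */
    static void checkBlockSize(long blockSize) {
        if (blockSize < MIN_BLOCKSIZE || blockSize > MAX_BLOCKSIZE) {
            throw new IllegalArgumentException(
                "BLOCKSIZE " + blockSize + " must be between "
                + MIN_BLOCKSIZE + " and " + MAX_BLOCKSIZE
                + " -- did you confuse HFile BLOCKSIZE with HDFS block size?");
        }
    }

    public static void main(String[] args) {
        // A typical 64KB HFile block passes.
        checkBlockSize(64 * 1024);
        // A 128MB HDFS-style block size is rejected.
        try {
            checkBlockSize(128L * 1024 * 1024);
            System.out.println("not rejected");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point of the upper bound is exactly the mix-up mentioned above: an HDFS-style value like 128MB would be caught at createTable/alter time instead of silently producing oversized HFile blocks.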


> Sanity check table configuration in createTable
> -----------------------------------------------
>
>                 Key: HBASE-10591
>                 URL: https://issues.apache.org/jira/browse/HBASE-10591
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>             Fix For: 0.99.0
>
>         Attachments: hbase-10591_v1.patch
>
>
> We had a cluster become completely inoperable because a couple of tables 
> were erroneously created with MAX_FILESIZE set to 4K, which resulted in 180K 
> regions in a short interval and brought the master down due to HBASE-4246.
> We can do some sanity checking in master.createTable() and reject such 
> requests. We already check the compression there, so it seems a good place. 
> Alter table should check for this as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)