[
https://issues.apache.org/jira/browse/HBASE-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13910048#comment-13910048
]
Enis Soztutar commented on HBASE-10591:
---------------------------------------
bq. Should we limit the valid BLOCKSIZE too? I heard quite often that folks mix
HFile BLOCKSIZE and HDFS block size. Should we limit the BLOCKSIZE to 1KB <=
BLOCKSIZE <= 1MB? Or maybe 1KB <= BLOCKSIZE <= 10MB?
Good idea. 1KB <= BLOCKSIZE <= 10MB seems reasonable.
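A check along these lines could enforce the proposed bounds (a minimal sketch only, not the actual HBASE-10591 patch; the class and method names are illustrative, and the MAX_FILESIZE floor is a hypothetical value chosen for the example):

```java
// Illustrative sketch of a table-configuration sanity check.
// The BLOCKSIZE bounds match the values discussed above; the
// MAX_FILESIZE floor is a hypothetical value, not from the patch.
public final class TableConfigSanityChecker {

    private static final long MIN_BLOCKSIZE = 1024L;               // 1KB
    private static final long MAX_BLOCKSIZE = 10L * 1024 * 1024;   // 10MB
    private static final long MIN_MAX_FILESIZE = 2L * 1024 * 1024; // hypothetical 2MB floor

    /** Rejects HFile BLOCKSIZE values outside 1KB..10MB. */
    public static void checkBlockSize(long blockSize) {
        if (blockSize < MIN_BLOCKSIZE || blockSize > MAX_BLOCKSIZE) {
            throw new IllegalArgumentException("BLOCKSIZE " + blockSize
                + " out of range [" + MIN_BLOCKSIZE + ", " + MAX_BLOCKSIZE + "]");
        }
    }

    /** Rejects MAX_FILESIZE values small enough to cause a region explosion. */
    public static void checkMaxFileSize(long maxFileSize) {
        if (maxFileSize < MIN_MAX_FILESIZE) {
            throw new IllegalArgumentException("MAX_FILESIZE " + maxFileSize
                + " is below minimum " + MIN_MAX_FILESIZE);
        }
    }
}
```

With a check like this, the 4K MAX_FILESIZE described below would be rejected at createTable/alterTable time instead of silently producing hundreds of thousands of regions.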
> Sanity check table configuration in createTable
> -----------------------------------------------
>
> Key: HBASE-10591
> URL: https://issues.apache.org/jira/browse/HBASE-10591
> Project: HBase
> Issue Type: Improvement
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 0.99.0
>
> Attachments: hbase-10591_v1.patch, hbase-10591_v2.patch
>
>
> We had a cluster become completely inoperable because a couple of tables
> were erroneously created with MAX_FILESIZE set to 4K, which resulted in 180K
> regions in a short interval, bringing the master down due to HBASE-4246.
> We can do some sanity checking in master.createTable() and reject such
> requests. We already check the compression there, so it seems a good place.
> Alter table should check for this as well.
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)