[
https://issues.apache.org/jira/browse/HBASE-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14180596#comment-14180596
]
Lars Hofhansl commented on HBASE-12320:
---------------------------------------
Interesting. I assume the same happens with a large row key...?
128KB column names are unlikely, but 128KB row keys can happen. In either case,
this would be a bad problem.
[~qian wang] any chance you could formulate this as a unit test? Maybe it could
be added to TestHRegion.
If not, I can spend some time on this this week.
> hfile index can't flush to disk in memstore when key portion of kv is larger
> than 128kb and hfile generated two level index above
> ---------------------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-12320
> URL: https://issues.apache.org/jira/browse/HBASE-12320
> Project: HBase
> Issue Type: Bug
> Components: HFile
> Affects Versions: 0.94.6, 0.98.1
> Environment: cdh4.5.0
> cdh5.1.0
> Reporter: qian wang
> Attachments: TestLongIndex.java
>
>
> When you put a KeyValue with a large key portion (a big rowkey, a big family,
> or a big qualifier), that is, when kv.getKeyLength() is larger than 128KB and
> the KV is the first one in its data block, and you then keep writing new data
> blocks so that the HFile needs a two-level index, the memstore can't flush.
> The regionserver hosting the corresponding region stays in the flushing state
> and keeps writing the new HFile in the region's .tmp directory forever; you
> can't stop it unless you kill the regionserver.
> In my view, the HFile's index generation logic causes the bug: when an
> intermediate index block grows larger than 128KB, the writer generates the
> next index level, but a first key above 128KB makes every level exceed the
> limit, so it loops endlessly generating next-level indexes.
> I attached my test code; you can try it on an empty table in a test cluster.
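The loop the reporter describes can be illustrated with a toy model. This is not HBase's actual index writer; it is a minimal sketch under the assumption that each parent index level is built by chunking the level below at a 128KB threshold, with each parent entry carrying its chunk's first key. A key larger than the threshold then lands alone in its own chunk and propagates upward unchanged, so no level ever fits and the roll-up never converges. The class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class IndexLoopSketch {

    /**
     * Rolls up parent index levels until the top level fits in maxChunk
     * bytes. Returns the number of levels built, or -1 once maxLevels is
     * reached, which is how this sketch detects the non-terminating case.
     */
    static int buildLevels(List<Integer> keySizes, int maxChunk, int maxLevels) {
        List<Integer> level = keySizes;
        int levels = 1;
        while (totalSize(level) > maxChunk) {
            if (levels >= maxLevels) {
                return -1; // the buggy version would spin here forever
            }
            List<Integer> parent = new ArrayList<>();
            int chunkSize = 0;
            int firstKey = -1;
            for (int k : level) {
                if (firstKey < 0) {
                    firstKey = k;
                }
                chunkSize += k;
                if (chunkSize >= maxChunk) {
                    // Close the chunk; the parent entry carries the chunk's
                    // first key, so an oversized key propagates upward intact.
                    parent.add(firstKey);
                    chunkSize = 0;
                    firstKey = -1;
                }
            }
            if (firstKey >= 0) {
                parent.add(firstKey); // leftover partial chunk
            }
            level = parent;
            levels++;
        }
        return levels;
    }

    static int totalSize(List<Integer> sizes) {
        int sum = 0;
        for (int s : sizes) {
            sum += s;
        }
        return sum;
    }

    public static void main(String[] args) {
        int max = 128 * 1024;
        // Normal keys: the index converges after one roll-up (prints 2).
        System.out.println(buildLevels(Collections.nCopies(100, 10 * 1024), max, 64));
        // Two keys of 200KB each: every level stays too big (prints -1).
        System.out.println(buildLevels(Arrays.asList(200 * 1024, 200 * 1024), max, 64));
    }
}
```

In this model the guard on maxLevels is what the reporter's analysis suggests the real writer lacks: without some check that a roll-up actually made progress, a single key above the index block size limit keeps the loop alive indefinitely.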
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)