[
https://issues.apache.org/jira/browse/HBASE-15133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15108156#comment-15108156
]
ramkrishna.s.vasudevan commented on HBASE-15133:
------------------------------------------------
Then should values like storeLimit and storeOffset also be changed to long? Because
bq. this.storeLimit = scan.getMaxResultsPerColumnFamily();
in your type of use case you could set Integer.MAX_VALUE, or even more than that, for getMaxResultsPerColumnFamily().
Just out of curiosity: how long does it take to scan such a wide row?
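For illustration, a minimal sketch (not the actual StoreScanner code, just a simplified stand-in for the counter) of how an int counter silently wraps once a row has more than Integer.MAX_VALUE cells, while a long keeps counting:

```java
public class CountPerRowOverflow {
    public static void main(String[] args) {
        // Hypothetical simplified counter; the real StoreScanner logic differs.
        long cellsSeen = (long) Integer.MAX_VALUE + 1; // one cell past the int range

        int countPerRowInt = (int) cellsSeen;   // narrowing conversion wraps around
        long countPerRowLong = cellsSeen;       // counts correctly

        System.out.println(countPerRowInt);     // prints -2147483648
        System.out.println(countPerRowLong);    // prints 2147483648
    }
}
```

Once the counter goes negative, any comparison of the form countPerRow > storeLimit stops behaving as intended, which would be consistent with cells being lost during compaction.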
> Data loss after compaction when a row has more than Integer.MAX_VALUE columns
> -----------------------------------------------------------------------------
>
> Key: HBASE-15133
> URL: https://issues.apache.org/jira/browse/HBASE-15133
> Project: HBase
> Issue Type: Bug
> Components: Compaction
> Reporter: Toshihiro Suzuki
> Assignee: Toshihiro Suzuki
> Attachments: HBASE-15133.patch, master.patch
>
>
> We have lost the data in our development environment when a row has more than
> Integer.MAX_VALUE columns after compaction.
> I think the reason is that the type of StoreScanner's countPerRow is int.
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java#L67
> After changing the type to long, it seems to be fixed.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)