[
https://issues.apache.org/jira/browse/HBASE-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15345865#comment-15345865
]
Yu Li commented on HBASE-15496:
-------------------------------
Mind talking more about this "internal compaction", sir? [~stack]
Recently we found a scenario in our system that can produce a large row: if
there's a long-running scan (a scan can now run for quite a long time with the
scan heartbeat feature) and, after getScanner, there are lots of updates on the
row, all the new versions get flushed and won't be removed before the scan
closes, even if compaction chimes in. Any thoughts on how to avoid such a case?
Or please correct me if I've misunderstood the mechanism in my statement.
Thanks.
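The retention behavior described above can be sketched roughly as follows. This is an illustrative simplification, not HBase's actual code: compaction computes the smallest MVCC read point across open scanners, and any cell version written after that point (higher sequence id) must be retained so the open scanner's snapshot stays consistent. The class and method names here are hypothetical.

```java
import java.util.Collections;
import java.util.List;

public class ReadPointSketch {
    /** Smallest MVCC read point among open scanners; Long.MAX_VALUE if none are open. */
    static long smallestReadPoint(List<Long> openScannerReadPoints) {
        return openScannerReadPoints.isEmpty()
            ? Long.MAX_VALUE
            : Collections.min(openScannerReadPoints);
    }

    /**
     * A cell version may be dropped by compaction only if its sequence id is at or
     * below the smallest read point, i.e. no open scanner's snapshot could need it.
     */
    static boolean compactionMayDrop(long cellSequenceId, List<Long> openScannerReadPoints) {
        return cellSequenceId <= smallestReadPoint(openScannerReadPoints);
    }

    public static void main(String[] args) {
        // A scanner opened at read point 10 blocks removal of versions written later:
        System.out.println(compactionMayDrop(15L, List.of(10L))); // false: must retain
        System.out.println(compactionMayDrop(5L, List.of(10L)));  // true: may drop
    }
}
```

This is why, in the scenario above, updates arriving after getScanner accumulate until the scan closes: their sequence ids sit above the long-running scanner's read point.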
> Throw RowTooBigException only for user scan/get
> -----------------------------------------------
>
> Key: HBASE-15496
> URL: https://issues.apache.org/jira/browse/HBASE-15496
> Project: HBase
> Issue Type: Improvement
> Components: Scanners
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15496.patch, HBASE-15496.patch
>
>
> When hbase.table.max.rowsize is configured, StoreScanner may throw
> RowTooBigException. But region flush/compact should not fail on it; the
> exception should be thrown only for user scans.
> Exceptions:
> org.apache.hadoop.hbase.regionserver.RowTooBigException: Max row size allowed: 10485760, but row is bigger than that
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:355)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:276)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:238)
> at org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:403)
> at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:95)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:131)
> at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1211)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1952)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1774)
> or
> org.apache.hadoop.hbase.regionserver.RowTooBigException: Max row size allowed: 10485760, but the row is bigger than that.
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:576)
> at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:132)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:880)
> at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2155)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2454)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2193)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2162)
> at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2053)
> at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1979)
> at org.apache.hadoop.hbase.regionserver.TestRowTooBig.testScannersSeekOnFewLargeCells(TestRowTooBig.java:101)
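The improvement the issue proposes can be sketched as follows. This is a hedged illustration, not the actual HBASE-15496 patch: the row-size limit is enforced only when the scanner serves a user scan/get, while flush and compaction scanners pass oversized rows through so they can still rewrite them. The class names and the `ScanType` values here are simplified stand-ins for the real HBase types.

```java
public class RowSizeCheckSketch {
    // Simplified stand-in for HBase's scanner-purpose enum.
    enum ScanType { USER_SCAN, COMPACT_RETAIN_DELETES, COMPACT_DROP_DELETES }

    // Stand-in for org.apache.hadoop.hbase.regionserver.RowTooBigException.
    static class RowTooBigException extends RuntimeException {
        RowTooBigException(String msg) { super(msg); }
    }

    // Default value of hbase.table.max.rowsize: 10 MB.
    static final long MAX_ROW_SIZE = 10485760L;

    /** Throws only when a user-issued scan/get hits an oversized row. */
    static void checkRowSize(ScanType scanType, long rowSizeBytes) {
        if (scanType == ScanType.USER_SCAN && rowSizeBytes > MAX_ROW_SIZE) {
            throw new RowTooBigException(
                "Max row size allowed: " + MAX_ROW_SIZE + ", but the row is bigger than that");
        }
        // Flush and compaction scanners fall through: failing here would abort
        // the region operation instead of letting it rewrite the oversized row.
    }

    public static void main(String[] args) {
        // Internal compaction scan: no exception even for an oversized row.
        checkRowSize(ScanType.COMPACT_DROP_DELETES, MAX_ROW_SIZE + 1);
        // User scan: throws.
        boolean threw = false;
        try {
            checkRowSize(ScanType.USER_SCAN, MAX_ROW_SIZE + 1);
        } catch (RowTooBigException e) {
            threw = true;
        }
        System.out.println(threw); // prints "true"
    }
}
```

Under this guard, both stack traces above (one from compact, one from flush) would no longer occur, while user scans keep the protection the config is meant to provide.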
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)