[
https://issues.apache.org/jira/browse/HBASE-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15204348#comment-15204348
]
Ted Yu commented on HBASE-15496:
--------------------------------
{code}
public boolean isUserScan() {
{code}
The above can be package private.
With the patch, is it possible that compaction produces an OOME on the server side?
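For illustration, a minimal sketch of the change under review, assuming the method sits on StoreScanner and that the scanner keeps the ScanType it was constructed with (the scanType, totalBytesRead and getTableMaxRowSize names are assumptions here, not necessarily what the patch does):
{code}
// Sketch only, not the actual patch: package-private is sufficient because
// only regionserver-internal callers need the check.
boolean isUserScan() {
  return this.scanType == ScanType.USER_SCAN;
}

// The row-size limit would then apply only on the user read path, so the
// flush/compaction scanners in the stack traces below never throw it:
if (isUserScan() && totalBytesRead > scanInfo.getTableMaxRowSize()) {
  throw new RowTooBigException("Max row size allowed: "
      + scanInfo.getTableMaxRowSize() + ", but the row is bigger than that.");
}
{code}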
> Throw RowTooBigException only for user scan/get
> -----------------------------------------------
>
> Key: HBASE-15496
> URL: https://issues.apache.org/jira/browse/HBASE-15496
> Project: HBase
> Issue Type: Improvement
> Components: Scanners
> Reporter: Guanghao Zhang
> Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15496.patch
>
>
> When hbase.table.max.rowsize is configured, a RowTooBigException may be thrown by
> StoreScanner. But region flush/compaction should not fail on it; the exception
> should be thrown only for user scans/gets.
> Exceptions:
> org.apache.hadoop.hbase.regionserver.RowTooBigException: Max row size allowed: 10485760, but row is bigger than that
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:355)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:276)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:238)
> at org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:403)
> at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:95)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:131)
> at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1211)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1952)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1774)
> or
> org.apache.hadoop.hbase.regionserver.RowTooBigException: Max row size allowed: 10485760, but the row is bigger than that.
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:576)
> at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:132)
> at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:880)
> at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2155)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2454)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2193)
> at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2162)
> at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2053)
> at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1979)
> at org.apache.hadoop.hbase.regionserver.TestRowTooBig.testScannersSeekOnFewLargeCells(TestRowTooBig.java:101)