chenxu commented on HBASE-23158:

I think [HBASE-23063|https://github.com/apache/hbase/pull/656] can also resolve 
this issue, since it restricts the number of returned rows instead.

IMHO, limiting the number of blocks each Multi can use is not really suitable, 
since MSLAB chunks and the BucketCache's ByteBuffers are memory shared among different RPCs.

> If KVs are in memstore, small batch get can come across 
> MultiActionResultTooLarge
> ----------------------------------------------------------------------------------
>                 Key: HBASE-23158
>                 URL: https://issues.apache.org/jira/browse/HBASE-23158
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver, rpc
>         Environment: [^TestMultiRespectsLimitsMemstore.patch]
>            Reporter: junfei liang
>            Priority: Minor
>         Attachments: TestMultiRespectsLimitsMemstore.patch
> To protect against big scans, we set hbase.server.scanner.max.result.size = 
> 10MB in our customer HBase cluster. However, our clients can hit 
> MultiActionResultTooLarge even on a small batch get (e.g. a batch of 15 gets, 
> with a row size of about 5KB).
> After [HBASE-14978|https://issues.apache.org/jira/browse/HBASE-14978], HBase 
> takes the retained data block references into account. But the block size is 64KB 
> (the default value), so even if all cells come from different blocks, the retained 
> block size is less than 1MB. So what is the problem?
> Finally, I found that HBASE-14978 also counts cells in the memstore. Since 
> MSLAB is enabled by default, a cell from the memstore can have a large backing 
> array (2MB by default), so even a small batch can hit this error. 
> Is this reasonable?
> Plus: when MultiActionResultTooLarge is thrown, the HBase client should retry 
> regardless of the RPC retry count; however, if the retry count is set to zero, 
> the client fails without retrying in this case.
> See the attachment TestMultiRespectsLimitsMemstore for details.
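
The size accounting described in the quoted report can be sketched roughly as follows. This is a simplified model, not the actual RSRpcServices code; the class and method names are illustrative. The idea (per HBASE-14978) is that each cell contributes the full capacity of its backing buffer the first time that buffer is seen, which is why 15 cells backed by distinct 2MB MSLAB chunks blow past a 10MB quota while the same cells backed by 64KB HFile blocks do not:

```java
import java.util.HashSet;
import java.util.Set;

public class BlockSizeQuota {
    // Sketch of the per-RPC retained-size accounting: count each distinct
    // backing buffer's full capacity once, regardless of cell size.
    static long retainedSize(long[] backingBufferIds, long bufferCapacity) {
        Set<Long> seen = new HashSet<>();
        long total = 0;
        for (long id : backingBufferIds) {
            if (seen.add(id)) {          // first cell from this buffer
                total += bufferCapacity; // charge the whole buffer
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long quota      = 10L * 1024 * 1024; // hbase.server.scanner.max.result.size = 10MB
        long hfileBlock = 64L * 1024;        // default HFile block size
        long mslabChunk = 2L * 1024 * 1024;  // default MSLAB chunk size

        // 15 gets, each cell from a distinct backing buffer
        long[] ids = new long[15];
        for (int i = 0; i < 15; i++) ids[i] = i;

        long fromHFile    = retainedSize(ids, hfileBlock); // 15 * 64KB < 1MB
        long fromMemstore = retainedSize(ids, mslabChunk); // 15 * 2MB = 30MB

        System.out.println(fromHFile <= quota);   // stays under quota
        System.out.println(fromMemstore > quota); // exceeds quota -> MultiActionResultTooLarge
    }
}
```

Under this model, limiting the number of blocks per Multi would not help with memstore cells: the retained size is dominated by the shared chunk capacity, not the block count.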

This message was sent by Atlassian Jira