[ https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069807#comment-14069807 ]

Enis Soztutar commented on HBASE-11544:
---------------------------------------

bq. Scan#setMaxResultSize is 1/2 of my #1. When the results do not fit into 
that size the client will deliver partial rows to the caller, which the caller 
then has to deal with
This, and getMaxResultsPerColumnFamily(), might actually break the atomicity of 
edit visibility today. We do not send the mvcc read point to the client, so a 
scan can end up partially observing single-row atomic updates. I am just raising 
this because we have to design for client-side read point tracking if we end up 
doing streaming, etc. 
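
For illustration, a minimal sketch (not from this issue) of how a batched scan 
already hands one wide row back as several Result objects that the caller must 
stitch together; Scan#setBatch is used here as the concrete row-splitting knob, 
and the table name "t", the 100-cell batch, and the class name are made up:

{code:java}
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PartialRowSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    Scan scan = new Scan();
    scan.setBatch(100); // at most 100 cells per Result, so wide rows come back in pieces
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("t"));
         ResultScanner scanner = table.getScanner(scan)) {
      byte[] lastRow = null;
      for (Result piece : scanner) {
        // With a batch limit, consecutive Results can be pieces of the same row
        // that the caller has to reassemble.
        boolean continuesPreviousRow =
            lastRow != null && Arrays.equals(lastRow, piece.getRow());
        lastRow = piece.getRow();
        // The pieces may arrive in separate RPCs. Since no mvcc read point is
        // carried between them, a concurrent single-row Put can commit in
        // between, and the reassembled row may mix cells from before and after
        // that atomic update, which is the visibility concern raised above.
        System.out.println(Bytes.toStringBinary(piece.getRow())
            + (continuesPreviousRow ? " (continuation of previous row)" : ""));
      }
    }
  }
}
{code}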

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-11544
>                 URL: https://issues.apache.org/jira/browse/HBASE-11544
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>              Labels: noob
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass a certain size threshold, rather 
> than keep accumulating till we OOME.
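
As a reference point for the size-threshold idea above, a minimal sketch of the 
byte-based caps the client API already exposes via Scan#setMaxResultSize and its 
configuration counterpart; the 2 MB figure and the class name are illustrative 
only, not a recommendation from this issue:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

public class BoundedScanSketch {
  // Per-scan: keep the row-count knob from the report, but also cap how many
  // bytes the server will accumulate before answering, so 1000 large-cell rows
  // cannot be forced into a single response.
  public static Scan boundedScan() {
    Scan scan = new Scan();
    scan.setCaching(1000);                    // rows per RPC (the setting from the report)
    scan.setMaxResultSize(2L * 1024 * 1024);  // return once roughly 2 MB has accumulated
    return scan;
  }

  // Client-wide: the equivalent configuration properties.
  public static Configuration boundedConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.client.scanner.caching", 1000);
    conf.setLong("hbase.client.scanner.max.result.size", 2L * 1024 * 1024);
    return conf;
  }
}
{code}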



--
This message was sent by Atlassian JIRA
(v6.2#6252)
