[ https://issues.apache.org/jira/browse/PHOENIX-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020012#comment-16020012 ]

James Taylor commented on PHOENIX-3784:
---------------------------------------

This seems like an important one to get into 4.11.0 and a natural extension of 
the nice work [~gjacoby] did for PHOENIX-514. FYI, [~lhofhansl] [~apurtell].

> Chunk commit data using lower of byte-based and row-count limits
> ----------------------------------------------------------------
>
>                 Key: PHOENIX-3784
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3784
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>             Fix For: 4.11.0
>
>
> We have a byte-based limit that determines how much data we send to the 
> server at a time when a commit occurs (PHOENIX-541), but we should also have 
> a row-count limit. We could check both limits and cap each batch at whichever 
> is reached first, so that every batch satisfies both constraints. This would 
> help prevent too many rows from being submitted to the server at one time and 
> decrease the likelihood of conflicting rows among batches. 
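
The batching described above could be sketched as follows. This is a minimal illustration only, not Phoenix's actual MutationState code: the CommitChunker class, the chunk method, and its parameter names are hypothetical, and rows are reduced to their serialized byte sizes. A batch is flushed whenever adding the next row would exceed the byte cap or the batch already holds the maximum row count, so each emitted batch meets both constraints.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of chunking commit data by the lower of a
// byte-based limit and a row-count limit (names are illustrative,
// not Phoenix's real API).
public class CommitChunker {
    // rowSizes: serialized byte size of each pending row, in commit order.
    // Returns batches where each batch has at most maxRows rows and at most
    // maxBytes total bytes; a single row larger than maxBytes still gets
    // its own batch rather than being dropped.
    public static List<List<Integer>> chunk(List<Integer> rowSizes,
                                            long maxBytes, int maxRows) {
        List<List<Integer>> batches = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        long currentBytes = 0;
        for (int size : rowSizes) {
            boolean overBytes = !current.isEmpty() && currentBytes + size > maxBytes;
            boolean overRows = current.size() >= maxRows;
            if (overBytes || overRows) {
                // Either limit reached: flush the current batch first.
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(size);
            currentBytes += size;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

For example, five 100-byte rows with a 250-byte cap split into three batches on bytes alone, while four small rows with a 3-row cap split into two batches on row count alone; each limit binds whenever it is the tighter one.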



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
