[ 
https://issues.apache.org/jira/browse/PHOENIX-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-3784:
------------------------------------
    Attachment: PHOENIX-3784.patch

[~jamestaylor]
Can you please review?

> Chunk commit data using lower of byte-based and row-count limits
> ----------------------------------------------------------------
>
>                 Key: PHOENIX-3784
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3784
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Thomas D'Silva
>             Fix For: 4.11.0
>
>         Attachments: PHOENIX-3784.patch
>
>
> We have a byte-based limit that determines how much data we send over at a 
> time when a commit occurs (PHOENIX-541), but we should also have a row-count 
> limit. Each batch should satisfy both constraints, i.e. be capped at 
> whichever limit is reached first. This would help prevent too many rows 
> from being submitted to the server at one time and decrease the 
> likelihood of conflicting rows among batches. 
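The chunking described above can be sketched as follows. This is an illustrative standalone example, not Phoenix's actual commit path: rows are represented only by their estimated serialized sizes, and the class and method names are hypothetical. A new batch is started whenever adding the next row would exceed either the byte limit or the row-count limit.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {
    // Split rows (represented by their estimated serialized sizes in bytes)
    // into batches, starting a new batch whenever adding the next row would
    // exceed either the byte limit or the row-count limit. A single row
    // larger than maxBytes still gets its own batch rather than being dropped.
    static List<List<Long>> chunk(List<Long> rowSizes, long maxBytes, int maxRows) {
        List<List<Long>> batches = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentBytes = 0;
        for (long size : rowSizes) {
            boolean overBytes = !current.isEmpty() && currentBytes + size > maxBytes;
            boolean overRows = current.size() >= maxRows;
            if (overBytes || overRows) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(size);
            currentBytes += size;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Long> rows = List.of(40L, 40L, 40L, 40L, 40L);
        // Byte limit of 100 allows at most two 40-byte rows per batch.
        System.out.println(chunk(rows, 100, 10).size()); // 3
        // A row-count limit of 1 dominates even when bytes would allow more.
        System.out.println(chunk(rows, 1000, 1).size()); // 5
    }
}
```

With both checks in place, the effective batch size is whichever of the two limits is reached first, which is the behavior the issue asks for.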



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
