[ https://issues.apache.org/jira/browse/HBASE-15669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249581#comment-15249581 ]
Anoop Sam John commented on HBASE-15669:
----------------------------------------
bq. Shall I do that as part of another jira, as I know there are many places
where we can do such kind of optimization for bulk loaded data replication
Yes, that would be better; handle it in all places.
> HFile size is not considered correctly in a replication request
> ---------------------------------------------------------------
>
> Key: HBASE-15669
> URL: https://issues.apache.org/jira/browse/HBASE-15669
> Project: HBase
> Issue Type: Bug
> Components: Replication
> Affects Versions: 1.3.0
> Reporter: Ashish Singhi
> Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15669.patch
>
>
> In a single replication request a RS in the source cluster can send at most
> {{replication.source.size.capacity}} bytes of data or
> {{replication.source.nb.capacity}} entries.
> The request size is calculated from the size of the cells in each entry, which
> is wrong for bulk loaded data replication: in that case we need to consider
> the size of the HFiles being replicated, not the size of the cells.
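To illustrate the description above, here is a minimal, self-contained sketch of the kind of size accounting being asked for. The types and method names ({{ReplicationEntry}}, {{getCellsSizeBytes}}, {{isBulkLoadEntry}}, {{getBulkLoadedHFileSizesBytes}}, {{calculateBatchSize}}) are hypothetical stand-ins for illustration only, not the actual HBase classes or the committed patch.
{code:java}
import java.util.List;

/**
 * Sketch of a batch-size calculation for a single replication request.
 * For a bulk load entry the cells only describe the load, so counting
 * cell sizes under-reports the data that will actually be shipped;
 * the HFile sizes are what matter in that case.
 */
public class ReplicationBatchSizer {

  /** Hypothetical view of a single WAL entry queued for replication. */
  public interface ReplicationEntry {
    /** Serialized size of the cells in this entry, in bytes. */
    long getCellsSizeBytes();
    /** True if this entry describes a bulk load rather than normal edits. */
    boolean isBulkLoadEntry();
    /** Sizes of the bulk-loaded HFiles referenced by this entry, in bytes. */
    List<Long> getBulkLoadedHFileSizesBytes();
  }

  /**
   * Returns the number of bytes a batch of entries would contribute to one
   * replication request: HFile sizes for bulk load entries, cell sizes
   * for ordinary edits.
   */
  public static long calculateBatchSize(List<ReplicationEntry> entries) {
    long totalBytes = 0;
    for (ReplicationEntry entry : entries) {
      if (entry.isBulkLoadEntry()) {
        for (long hfileSize : entry.getBulkLoadedHFileSizesBytes()) {
          totalBytes += hfileSize;
        }
      } else {
        totalBytes += entry.getCellsSizeBytes();
      }
    }
    return totalBytes;
  }
}
{code}
With this kind of accounting, the {{replication.source.size.capacity}} limit would be checked against the amount of data the peer will actually receive, so a single request does not end up shipping far more HFile data than the configured capacity intends.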
--