[ https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071779#comment-15071779 ]

Anoop Sam John commented on HBASE-14938:
----------------------------------------

bq. No, for example when we have totalNoOfFiles=6000 we will execute both the 
loops.
No, what I was suggesting is that having a single loop, irrespective of the 
total number of files, would have been better. Anyway, that was just a 
suggestion/query.
Something like totalNoOfRequests = 0 reads badly.. Maybe just rename this 
var? fullBatches =? (see the sketch below)
Also add some one-liner code comments describing what each loop does.
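For illustration, a rough sketch (hypothetical names, not the actual patch code) of what a single loop splitting the hfile list into fixed-size batches could look like; the last batch simply ends up smaller than batchSize:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the patch code: one loop splits the hfile list
// into batches of at most batchSize entries, irrespective of the total count.
// "fullBatches" (the suggested rename of totalNoOfRequests) would simply be
// hfiles.size() / batchSize; the remainder forms the last, smaller batch.
public final class HFileBatcher {
  public static List<List<String>> batch(List<String> hfiles, int batchSize) {
    List<List<String>> batches = new ArrayList<>();
    for (int start = 0; start < hfiles.size(); start += batchSize) {
      int end = Math.min(start + batchSize, hfiles.size());
      batches.add(hfiles.subList(start, end));
    }
    return batches;
  }
}
{code}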

bq. Mostly not, this is only to control the max data size limit allowed in a 
ZK request (as mentioned in description).
Yes, I was just wondering whether/when all this comes into play. Ok, it is 
good to have the limit anyway.
But the retrieval part is the main area.. As you stated, we need the ZK JIRA 
issue to be fixed for that. Please file a jira to track that part. If 
replication is not happening for a while, this list under hfileref can grow big.
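On the data size part, just to make the idea concrete: a rough sketch (plain ZooKeeper client, hypothetical helper, not the patch code) of creating the hfileref znodes in multi() batches while keeping each request's estimated payload under the 1 MB limit:

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: create one znode per hfile reference, flushing the
// multi() request whenever its estimated payload approaches the 1 MB request
// limit (jute.maxbuffer default).
public final class HFileRefCreator {
  private static final int MAX_REQUEST_BYTES = 900 * 1024; // stay under 1 MB

  public static void createRefs(ZooKeeper zk, String parent, List<String> hfiles)
      throws KeeperException, InterruptedException {
    List<Op> ops = new ArrayList<>();
    int estimatedBytes = 0;
    for (String hfile : hfiles) {
      String path = parent + "/" + hfile;
      ops.add(Op.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
          CreateMode.PERSISTENT));
      estimatedBytes += path.length() + 64; // rough per-op overhead estimate
      if (estimatedBytes >= MAX_REQUEST_BYTES) {
        zk.multi(ops);   // send this batch as one request
        ops.clear();
        estimatedBytes = 0;
      }
    }
    if (!ops.isEmpty()) {
      zk.multi(ops);     // final partial batch
    }
  }
}
{code}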


> Limit the number of znodes for ZK in bulk loaded hfile replication
> ------------------------------------------------------------------
>
>                 Key: HBASE-14938
>                 URL: https://issues.apache.org/jira/browse/HBASE-14938
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Ashish Singhi
>            Assignee: Ashish Singhi
>             Fix For: 2.0.0, 1.3.0
>
>         Attachments: HBASE-14938(1).patch, HBASE-14938-v1.patch, 
> HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> The approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK that are 
> yet to be replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
