[ https://issues.apache.org/jira/browse/HBASE-3871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063718#comment-13063718 ]

[email protected] commented on HBASE-3871:
------------------------------------------------------



bq.  On 2011-07-12 04:49:26, Michael Stack wrote:
bq.  > Patch looks fine to me but are you addressing Andrew's comment that perhaps futures not needed?  Good stuff.

The CountDownLatch ctor must be passed the total number of items (HFiles, in our case) up front. But tryLoad() decides at runtime which HFiles need to be split, so the number of items is dynamic. That is why I didn't use a CountDownLatch.

With patch v2, we shouldn't spend much time waiting for any single HFile to finish splitting.
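The distinction above can be sketched with a small futures-based drain loop. This is a hypothetical simplification, not the actual LoadIncrementalHFiles code: Item, REGION_LIMIT, and this tryLoad() are stand-ins. The point it illustrates is that a split produces new half-files that must themselves be retried, so the total number of work items is only known once the queue empties; a CountDownLatch's count is fixed at construction, while a deque of Futures can keep growing as new tasks are submitted.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSplitSketch {
    // Hypothetical stand-in for an HFile; size decides whether it fits a region.
    record Item(int size) {}

    static final int REGION_LIMIT = 10;

    // Returns null if the item fits a single region, else two halves to retry.
    static Item[] tryLoad(Item item) {
        if (item.size() <= REGION_LIMIT) return null;
        int half = item.size() / 2;
        return new Item[] { new Item(half), new Item(item.size() - half) };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Deque<Future<Item[]>> futures = new ArrayDeque<>();

        // Initial work items; splitting may enqueue more later.
        for (int s : new int[] { 25, 8, 13 }) {
            Item item = new Item(s);
            futures.add(pool.submit(() -> tryLoad(item)));
        }

        int loaded = 0;
        // Drain the queue. Each completed split submits two new tasks, so
        // the total item count is dynamic -- a fixed-count latch cannot model this.
        while (!futures.isEmpty()) {
            Item[] halves = futures.poll().get();
            if (halves == null) {
                loaded++;                       // fit in one region, done
            } else {
                for (Item half : halves) {
                    futures.add(pool.submit(() -> tryLoad(half)));
                }
            }
        }
        pool.shutdown();
        System.out.println("loaded=" + loaded);
    }
}
```

Here 25 splits twice and 13 once, so the three initial items end up as seven loads even though only three tasks were submitted up front.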


- Ted


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/704/#review1033
-----------------------------------------------------------


On 2011-07-09 01:54:46, Ted Yu wrote:
bq.  
bq.  -----------------------------------------------------------
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/704/
bq.  -----------------------------------------------------------
bq.  
bq.  (Updated 2011-07-09 01:54:46)
bq.  
bq.  
bq.  Review request for hbase and Michael Stack.
bq.  
bq.  
bq.  Summary
bq.  -------
bq.  
bq.  This JIRA complements HBASE-3721 by parallelizing HFile splitting which was done in the main thread.
bq.  
bq.  From Adam w.r.t. HFile splitting:
bq.  There's actually a good number of messages of that type (HFile no longer fits inside a single region); unfortunately I didn't take a timestamp on just when I was running with the patched jars vs the regular ones, but from the logs I can say that this is occurring fairly regularly on this system. The cluster I tested this on is our backup cluster: the mapreduce jobs on our production cluster output HFiles which are copied to the backup and then loaded into HBase on both. Since the regions may be somewhat different on the backup cluster, I would expect it to have to split somewhat regularly.
bq.  
bq.  
bq.  This addresses bug HBASE-3871.
bq.      https://issues.apache.org/jira/browse/HBASE-3871
bq.  
bq.  
bq.  Diffs
bq.  -----
bq.  
bq.    
/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java 
1144493 
bq.  
bq.  Diff: https://reviews.apache.org/r/704/diff
bq.  
bq.  
bq.  Testing
bq.  -------
bq.  
bq.  TestHFileOutputFormat and TestLoadIncrementalHFiles passed with this patch.
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Ted
bq.  
bq.



> Speedup LoadIncrementalHFiles by parallelizing HFile splitting
> --------------------------------------------------------------
>
>                 Key: HBASE-3871
>                 URL: https://issues.apache.org/jira/browse/HBASE-3871
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>    Affects Versions: 0.90.2
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>         Attachments: 3871.patch
>
>
> From Adam w.r.t. HFile splitting:
> There's actually a good number of messages of that type (HFile no longer fits
> inside a single region); unfortunately I didn't take a timestamp on just when
> I was running with the patched jars vs the regular ones, but from the logs I
> can say that this is occurring fairly regularly on this system. The cluster I
> tested this on is our backup cluster: the mapreduce jobs on our production
> cluster output HFiles which are copied to the backup and then loaded into
> HBase on both. Since the regions may be somewhat different on the backup
> cluster, I would expect it to have to split somewhat regularly.
> This JIRA complements HBASE-3721 by parallelizing HFile splitting, which is
> done in the main thread.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
