[ https://issues.apache.org/jira/browse/HADOOP-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564981#action_12564981 ]

Bryan Duxbury commented on HADOOP-2731:
---------------------------------------

I've applied the latest patch and tested again. My import job finished with a 
minimum of errors, and all my mapfiles are significantly smaller. On top of 
that, splits are happening much more frequently - 2-4x as often, I would say.

There may still be other issues lurking around this area of functionality, but 
I would commit this patch. 

+1

> [hbase] Under load, regions become extremely large and eventually cause 
> region servers to become unresponsive
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-2731
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2731
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/hbase
>            Reporter: Bryan Duxbury
>            Assignee: stack
>         Attachments: split-v10.patch, split-v11.patch, split-v12.patch, 
> split-v8.patch, split-v9.patch, split.patch
>
>
> When attempting to write to HBase as fast as possible, HBase accepts puts at 
> a reasonably high rate for a while, and then the rate begins to drop off, 
> ultimately culminating in exceptions reaching client code. In my testing, I 
> was able to write about 370 10KB records a second to HBase until I reached 
> around 1 million rows written. At that point, a moderate to large number of 
> exceptions - NotServingRegionException, WrongRegionException, region offline, 
> etc. - begin reaching the client code. This appears to be because the 
> retry-and-wait logic in HTable runs out of retries and fails. 
> Looking at mapfiles for the regions from the command line shows that some of 
> the mapfiles are between 1 and 2 GB in size, much more than the stated file 
> size limit. From talking with Stack, one possible explanation for this is 
> that the RegionServer is not choosing to compact files often enough, leading 
> to many small mapfiles which, once they are finally compacted together, 
> produce a few overlarge mapfiles. Then, when 
> the time comes to do a split or "major" compaction, it takes an unexpectedly 
> long time to complete these operations. This translates into errors for the 
> client application.
> If I back off the import process and give the cluster some quiet time, some 
> splits and compactions clearly do take place, because the number of regions 
> goes up and the number of mapfiles/region goes down. I can then begin writing 
> again in earnest for a short period of time until the problem begins again.
> Both Marc Harris and I have seen this behavior.
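
For concreteness, here is a minimal sketch of the kind of sustained-write 
workload the quoted description reports (roughly 10KB records written as fast 
as possible until around 1 million rows). It assumes the contrib/hbase client 
API of that era (HTable.startUpdate/put/commit); the table name, column name, 
and row keys are made up for illustration and do not come from this issue.

{code}
// Illustrative load sketch only - not code from this issue or its patches.
// Assumes the contrib/hbase HTable API (startUpdate/put/commit); the table
// "testtable", column "contents:data", and row keys are hypothetical.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTable;
import org.apache.hadoop.io.Text;

public class WriteLoad {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), new Text("testtable"));
    byte[] value = new byte[10 * 1024];      // ~10KB per record, as in the report
    for (long i = 0; i < 1000000; i++) {     // errors were observed near 1M rows
      long lockid = table.startUpdate(new Text("row" + i));
      table.put(lockid, new Text("contents:data"), value);
      // commit() is where NotServingRegionException, WrongRegionException,
      // etc. surface once HTable's internal retry-and-wait logic gives up.
      table.commit(lockid);
    }
  }
}
{code}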
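
Likewise, the "looking at mapfiles for the regions from the command line" check 
mentioned above can be approximated programmatically. This is a rough 
diagnostic sketch against the generic Hadoop FileSystem API 
(listStatus/getLen); the /hbase root path and the 256MB threshold are 
assumptions for the example, not values taken from this issue or its patches.

{code}
// Rough diagnostic sketch - walks an HBase root directory in HDFS and prints
// any store file larger than an example threshold. The "/hbase" default path
// and the 256MB threshold are assumptions, not values from this issue.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverlargeMapFiles {
  private static final long THRESHOLD = 256L * 1024 * 1024;  // example limit

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    walk(fs, new Path(args.length > 0 ? args[0] : "/hbase"));
  }

  private static void walk(FileSystem fs, Path dir) throws Exception {
    for (FileStatus stat : fs.listStatus(dir)) {
      if (stat.isDir()) {
        walk(fs, stat.getPath());          // recurse into region/column dirs
      } else if (stat.getLen() > THRESHOLD) {
        System.out.println(stat.getPath() + "\t" + stat.getLen() + " bytes");
      }
    }
  }
}
{code}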

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
