Yeah, I'm comparing apples-to-apples here.

This cluster is so messed up though that I'm not going to lose sleep.
It's almost certainly an environmental thing, not an HBase thing :)

-B

On Wed, Dec 22, 2010 at 2:49 PM, Lars George <[email protected]> wrote:
> If this is up on EC2, then you may know that write performance is an order of
> magnitude slower than on a comparable dedicated cluster! Most EC2 clusters I have
> tested (with and without EBS, various instance sizes, etc.) only did about 2-3MB/s.
> Taking this into account, can you do the math to see whether they are doing even less right now?
>
> On Dec 22, 2010, at 21:27, Bradford Stephens <[email protected]> 
> wrote:
>
>> Very good points. I'm thinking it's environmental as well, but I
>> wanted to do a 'sanity check'. I don't really have any control over
>> this cluster. Upgrading to 0.89 reduced the load time from 24 hours to
>> 8, but I was expecting 2 hours based on past tests. I can live with it
>> since it's an initial bulk import, but I want HBase to look awesome
>> for these customers (since they have a lot of pull).
>>
>> Another cluster with nearly identical setup for HBase is blazing fast
>> (for EC2).
>>
>> (I'm not much of a sysadmin).
>>
>> Cheers,
>> B
>>
>>
>> On Wed, Dec 22, 2010 at 11:43 AM, Stack <[email protected]> wrote:
>>> I took a look at your regionserver log.  As per your above comment,
>>> boring.  More importantly, no blocking going on.
>>>
>>> You have 4 column families going on.  Do you have to have that many
>>> CFs?  This might explain some of the slowdown.
>>>
>>> If it was faster last week and it's slow this week even though
>>> 'nothing' has changed, it smells environmental.
>>>
>>> It looks like you have hooked your Map to TableOutputFormat (TOF)... so
>>> you should have a nice little write buffer in HTable going on (you
>>> might check).
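>>> Something like this is what I mean (just a rough sketch from memory of
>>> the 0.89-era client API; the table name and the 8MB buffer below are
>>> placeholders, not necessarily what your job uses):
>>>
>>>   import org.apache.hadoop.conf.Configuration;
>>>   import org.apache.hadoop.hbase.HBaseConfiguration;
>>>   import org.apache.hadoop.hbase.client.HTable;
>>>
>>>   public class WriteBufferCheck {
>>>     public static void main(String[] args) throws Exception {
>>>       // create() on newer clients; older ones used new HBaseConfiguration()
>>>       Configuration conf = HBaseConfiguration.create();
>>>       // TableOutputFormat's HTable picks its buffer size up from this
>>>       // property (default is ~2MB), so set it in the job conf as well
>>>       conf.setLong("hbase.client.write.buffer", 8 * 1024 * 1024);
>>>
>>>       HTable table = new HTable(conf, "your_table");  // placeholder table name
>>>       table.setAutoFlush(false);                      // buffer Puts client-side
>>>       System.out.println(table.getWriteBufferSize()); // confirm it took effect
>>>       table.flushCommits();
>>>       table.close();
>>>     }
>>>   }
>>>
>>> TOF should already be doing the setAutoFlush(false) for you; the main
>>> thing to eyeball is hbase.client.write.buffer in the job configuration.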
>>>
>>> For sure, you are not swapping?  Do you have any monitoring of this
>>> cluster going on?  Setting swappiness (vm.swappiness) to zero from 60 is
>>> probably a bit radical.  You want some swap available if there is memory
>>> pressure; 60 is too loose.  If you look at those killed map tasks... why
>>> did they die?  Were the processes killed by the kernel (the OOM killer)?
>>>
>>> St.Ack
>>>
>>>
>>> On Tue, Dec 21, 2010 at 10:10 PM, Bradford Stephens
>>> <[email protected]> wrote:
>>>> Unfortunately, changing swappiness didn't seem to help.
>>>>
>>>> On Tue, Dec 21, 2010 at 4:28 PM, Andrew Purtell <[email protected]> 
>>>> wrote:
>>>>>> Yes, a good point. Swappiness is set to 60 -- I suppose I should set
>>>>>> it to 0?
>>>>>
>>>>> Yes.
>>>>>
>>>>> Best regards,
>>>>>
>>>>>    - Andy
>>>>>
>>>>> Problems worthy of attack prove their worth by hitting back.
>>>>>  - Piet Hein (via Tom White)
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Bradford Stephens,
>>>> Founder, Drawn to Scale
>>>> drawntoscalehq.com
>>>> 727.697.7528
>>>>
>>>> http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
>>>> solution. Process, store, query, search, and serve all your data.
>>>>
>>>> http://www.roadtofailure.com -- The Fringes of Scalability, Social
>>>> Media, and Computer Science
>>>>
>>>
>>
>>
>>
>> --
>> Bradford Stephens,
>> Founder, Drawn to Scale
>> drawntoscalehq.com
>> 727.697.7528
>>
>> http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
>> solution. Process, store, query, search, and serve all your data.
>>
>> http://www.roadtofailure.com -- The Fringes of Scalability, Social
>> Media, and Computer Science
>



-- 
Bradford Stephens,
Founder, Drawn to Scale
drawntoscalehq.com
727.697.7528

http://www.drawntoscalehq.com --  The intuitive, cloud-scale data
solution. Process, store, query, search, and serve all your data.

http://www.roadtofailure.com -- The Fringes of Scalability, Social
Media, and Computer Science
