No.
What tuning did you do?
Why such a small cluster?

Sorry, but when you start off with a bad hardware configuration, you can get 
Hadoop/HBase to work, but performance will always be sub-optimal.




On Feb 1, 2012, at 6:52 AM, "Tim Robertson" <[email protected]> wrote:

> Hi all,
> 
> We have a 3 node cluster (CDH3u2) with the following hardware:
> 
> RegionServers (+DN + TT)
>  CPU: 2x Intel(R) Xeon(R) CPU E5630 @ 2.53GHz (quad)
>  Disks: 6x250G SATA 5.4K
>  Memory: 24GB
> 
> Master (+ZK, JT, NN)
>  CPU: Intel(R) Xeon(R) CPU X3363 @ 2.83GHz, 2x6MB (quad)
>  Disks: 2x500G SATA 7.2K
>  Memory: 8GB
> 
> Memory wise, we have:
> Master:
>  NN: 1GB
>  JT: 1GB
>  HBase master: 6GB
>  ZK: 1GB
> RegionServers:
>  RegionServer: 6GB
>  TaskTracker: 1GB
>  11 Mappers @ 1GB each
>  7 Reducers @ 1GB each
> 
> HDFS was empty, and I ran randomWrite and scan, both with the number
> of clients set to 50 (it seemed to spawn 500 Mappers, though...)
> 
> randomWrite:
> 12/02/01 13:27:47 INFO mapred.JobClient:     ROWS=52428500
> 12/02/01 13:27:47 INFO mapred.JobClient:     ELAPSED_TIME=84504886
> 
> scan:
> 12/02/01 13:42:52 INFO mapred.JobClient:     ROWS=52428500
> 12/02/01 13:42:52 INFO mapred.JobClient:     ELAPSED_TIME=8158664
> 
> Would I be correct in thinking that this is way below what is to be
> expected of this hardware?
> We're setting up ganglia now to start debugging, but any suggestions
> on how to diagnose this would be greatly appreciated.
> 
> Thanks!
> Tim
