Hi,

This is far too little RAM, and underpowered on CPU as well. The rule of thumb is 
1GB RAM for the system, 1GB RAM for each Hadoop daemon (HDFS, jobtracker, 
tasktracker, etc.), 1GB RAM for ZooKeeper, and 1GB RAM (more if you want 
performance/caching) for HBase region servers; plus one hardware core for each 
concurrent daemon process. You won't go wrong with dual quad core. If you are 
running other processes colocated alongside the Hadoop and HBase daemons, you 
need to account for their heap in RAM and the added CPU load as well. With too 
few resources, heap swapped out to disk will cause long GC pauses, or threads 
will be starved for CPU, and you'll see no end of trouble. For perspective, a 
typical production deployment of this system involves a dedicated 2N+1 ZooKeeper 
ensemble (N ~= 1...3) and a Hadoop+HBase stack on 10s or even 100s of nodes. 
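To make the arithmetic concrete, here is a quick back-of-envelope sketch in Python applying that rule of thumb to the master node described below (pc1). The daemon list is my assumption from the quoted setup (NameNode, JobTracker, ZooKeeper, HBase master all on pc1); adjust it to match what actually runs there.

```python
# Rule-of-thumb RAM estimate: 1 GB for the system plus 1 GB per daemon.
# Daemon list is assumed from the cluster description; edit as needed.
daemons = ["namenode", "jobtracker", "zookeeper", "hbase-master"]

system_gb = 1
per_daemon_gb = 1
needed_gb = system_gb + per_daemon_gb * len(daemons)

print(f"pc1 needs ~{needed_gb} GB RAM by the rule of thumb; it has 1 GB")
# -> pc1 needs ~5 GB RAM by the rule of thumb; it has 1 GB
```

So even before HBase caching is considered, the master is roughly 4GB short, which is why swapping and long GC pauses show up.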

   - Andy

________________________________
From: "[email protected]" <[email protected]>
To: [email protected]
Sent: Friday, August 21, 2009 3:45:11 PM
Subject: Re: HBase-0.20.0 multi read


I have a 3 PC cluster (pc1, pc2, pc3).
Hadoop master (pc1), 2 slaves (pc2, pc3).

HBase and ZK running on pc1, two region servers (pc2, pc3).

pc1: Intel Core 2, 2.4GHz, RAM 1G
pc2: Intel Core 2, 2.4GHz, RAM 1G
pc3: Intel Core 2, 1.86GHz, RAM 2G

[...]
