It is typical to install HBase overlapping a Hadoop/HDFS installation. So you would do best to run:
- master node runs:
  -- namenode, hbase master, zookeeper, jobtracker (map reduce master)
- slave nodes run:
  -- datanode, regionserver, tasktracker

It is better to do this with 1 master and 7 slaves than to segregate the hosts. You end up sharing resources better and more evenly.

-ryan

On Mon, May 31, 2010 at 7:49 PM, Anthony Ikeda <[email protected]> wrote:
> I'm in the process of configuring our machines for an HBase deployment.
> Based upon the documentation I've read so far, a ZooKeeper quorum is
> required with Hadoop running (of course).
>
> However, to what degree do I need to separate the servers?
>
> At this point I have a total of 12 servers with the possible configuration:
>
> 4 x Hadoop (1 Master, 3 Slaves)
> 4 x HBase
> 4 x ZooKeeper
>
> Should HBase be installed with the Hadoop instances?
> i.e.:
> 8 x Hadoop and HBase (giving me 8 instances of Hadoop and HBase as opposed
> to 4 of each)
> 4 x ZooKeeper
>
> Or is it typical practice for HBase to be installed on an environment
> separate from Hadoop?
>
> Anthony Ikeda
> Java Analyst/Programmer
> Cardlink Services Limited
> Level 4, 3 Rider Boulevard
> Rhodes NSW 2138
>
> Web: www.cardlink.com.au | Tel: +61 2 9646 9221 | Fax: +61 2 9646 9283
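
For what it's worth, a colocated 1-master/7-slave layout along those lines mostly comes down to the conf files. The hostnames below (master, slave1..slave7) and the HDFS port are placeholders for illustration, not anything from this thread; treat this as a rough sketch, not a definitive setup:

```
# conf/slaves (Hadoop) and conf/regionservers (HBase) -- typically the
# same list, one slave hostname per line, so datanode + tasktracker +
# regionserver all land on the same hosts:
slave1
slave2
slave3
slave4
slave5
slave6
slave7

# conf/hbase-site.xml (excerpt) -- point HBase at the HDFS namenode on
# the master, and run the ZooKeeper quorum there too:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master</value>
</property>
```

If you let HBase manage ZooKeeper itself (HBASE_MANAGES_ZK=true in hbase-env.sh), the master's ZooKeeper instance is started and stopped along with the HBase daemons.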
