[ https://issues.apache.org/jira/browse/HBASE-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12777346#action_12777346 ]
Andrew Purtell commented on HBASE-1961:
---------------------------------------

This is what you should see out of launch-hbase-cluster:

{noformat}
$ ./launch-hbase-cluster cluster-0 4 3
Creating/checking security groups
Security group cluster-0-master exists, ok
Security group cluster-0 exists, ok
Security group cluster-0-zookeeper exists, ok
Starting ZooKeeper quorum ensemble.
Starting an AMI with ID ami-c644a7af (arch i386) in group cluster-0-zookeeper
Waiting for instance i-bf75efd7 to start: .............
Started ZooKeeper instance i-bf75efd7 as ip-10-212-154-223.ec2.internal
Public DNS name is ec2-174-129-186-94.compute-1.amazonaws.com.
Starting an AMI with ID ami-c644a7af (arch i386) in group cluster-0-zookeeper
Waiting for instance i-6d6af005 to start: .............
Started ZooKeeper instance i-6d6af005 as ip-10-212-154-34.ec2.internal
Public DNS name is ec2-67-202-48-84.compute-1.amazonaws.com.
Starting an AMI with ID ami-c644a7af (arch i386) in group cluster-0-zookeeper
Waiting for instance i-076af06f to start: ...........
Started ZooKeeper instance i-076af06f as ip-10-212-154-160.ec2.internal
Public DNS name is ec2-174-129-153-78.compute-1.amazonaws.com.
ZooKeeper quorum is ip-10-212-154-223.ec2.internal,ip-10-212-154-34.ec2.internal,ip-10-212-154-160.ec2.internal.
Initializing the ZooKeeper quorum ensemble.
ec2-174-129-186-94.compute-1.amazonaws.com
hbase-ec2-init-zookeeper-remote.sh            100% 1201     1.2KB/s   00:00
starting zookeeper, logging to /mnt/hbase/logs/hbase-root-zookeeper-ip-10-212-154-223.out
ec2-67-202-48-84.compute-1.amazonaws.com
hbase-ec2-init-zookeeper-remote.sh            100% 1201     1.2KB/s   00:00
starting zookeeper, logging to /mnt/hbase/logs/hbase-root-zookeeper-ip-10-212-154-34.out
ec2-174-129-153-78.compute-1.amazonaws.com
hbase-ec2-init-zookeeper-remote.sh            100% 1201     1.2KB/s   00:00
starting zookeeper, logging to /mnt/hbase/logs/hbase-root-zookeeper-ip-10-212-154-160.out
Testing for existing master in group: cluster-0
Starting master with AMI ami-f244a79b (arch x86_64)
Waiting for instance i-bf6af0d7 to start...............
Started as ip-10-245-101-219.ec2.internal
Master is ec2-72-44-33-230.compute-1.amazonaws.com, ip is 72.44.33.230, zone is us-east-1d.
Starting 4 AMI(s) with ID ami-f244a79b (arch x86_64) in group cluster-0 in zone us-east-1d
i-3f6bf157
i-316bf159
i-336bf15b
i-356bf15d
{noformat}
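Before logging on to the master you can confirm that the slave instances actually reached the running state. This is not part of the scripts' output above, just a minimal sketch using the stock EC2 API tools, assuming they are installed and your EC2 credentials are configured in the environment:

{noformat}
# Query only the instance IDs the launch script printed; each INSTANCE line
# should report the instance as "running" once it has come up.
$ ec2-describe-instances i-3f6bf157 i-316bf159 i-336bf15b i-356bf15d | grep INSTANCE
{noformat}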
And then if you log on to the master a few minutes later:

{noformat}
$ ssh -i id_rsa_root r...@ec2-72-44-33-230.compute-1.amazonaws.com

       __|  __|_  )  Fedora 8
       _|  (     /   64-bit
      ___|\___|___|

 Welcome to an EC2 Public Image
                       :-)
    Base

 --[ see /etc/ec2/release-notes ]--

[r...@ip-10-245-101-219 ~]# jps -l
1358 org.apache.hadoop.hdfs.server.namenode.NameNode
1567 org.apache.hadoop.hbase.master.HMaster
1820 sun.tools.jps.Jps
1434 org.apache.hadoop.hdfs.server.datanode.DataNode
[r...@ip-10-245-101-219 ~]# hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Version: 0.20.1, r822817, Wed Oct 7 11:55:42 PDT 2009
hbase(main):001:0> status 'simple'
4 live servers
    ip-10-242-133-139.ec2.internal:60020 1258079538311
        requests=0, regions=2, usedHeap=26, maxHeap=987
    ip-10-242-97-203.ec2.internal:60020 1258079557660
        requests=0, regions=0, usedHeap=37, maxHeap=987
    ip-10-245-101-187.ec2.internal:60020 1258079561915
        requests=0, regions=0, usedHeap=37, maxHeap=987
    ip-10-245-111-47.ec2.internal:60020 1258079556528
        requests=0, regions=0, usedHeap=37, maxHeap=987
0 dead servers
{noformat}

> HBase EC2 scripts
> -----------------
>
>          Key: HBASE-1961
>          URL: https://issues.apache.org/jira/browse/HBASE-1961
>      Project: Hadoop HBase
>   Issue Type: New Feature
>  Environment: Amazon AWS EC2
>     Reporter: Andrew Purtell
>     Assignee: Andrew Purtell
>     Priority: Minor
>      Fix For: 0.21.0, 0.20.3
>
>  Attachments: ec2-contrib.tar.gz
>
>
> Attached tarball is a clone of the Hadoop EC2 scripts, modified significantly to start up an HBase storage-only cluster on top of HDFS backed by instance storage.
> Tested with the HBase 0.20 branch but should work with trunk also. Only the AMI create and launch scripts are tested. They will bring up a functioning HBase cluster.
> Do "create-hbase-image c1.xlarge" to create an x86_64 AMI, or "create-hbase-image c1.medium" to create an i386 AMI. Public Hadoop/HBase 0.20.1 AMIs are available:
>    i386: ami-c644a7af
>    x86_64: ami-f244a79b
> launch-hbase-cluster brings up the cluster: first, a small dedicated ZK quorum, specifiable in size, default of 3; then the DFS namenode (formatting on first boot), one datanode, and the HBase master; then a specifiable number of slaves, instances running DFS datanodes and HBase region servers. For example:
> {noformat}
> launch-hbase-cluster testcluster 100 5
> {noformat}
> would bring up a cluster with 100 slaves supported by a 5 node ZK ensemble.
> We must colocate a datanode with the namenode because currently the master won't tolerate a brand new DFS with only a namenode and no datanodes up yet. See HBASE-1960.
> By default the launch scripts provision ZooKeeper as c1.medium and the HBase master and region servers as c1.xlarge. The result is an HBase cluster supported by a ZooKeeper ensemble. ZK ensembles are not dynamic, but HBase clusters can be grown by simply starting up more slaves, just like Hadoop.
> hbase-ec2-init-remote.sh can be trivially edited to bring up a jobtracker on the master node and task trackers on the slaves.
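As a rough illustration of that last point, the additions to hbase-ec2-init-remote.sh would look something like the lines below. This is only a sketch: the variable name ($HADOOP_HOME) and the exact master/slave branch structure inside the script are assumptions, not the script's actual contents; bin/hadoop-daemon.sh is part of a stock Hadoop 0.20 install.

{noformat}
# In the part of hbase-ec2-init-remote.sh that runs on the master instance,
# after the namenode/HMaster startup commands (paths illustrative):
"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker

# In the part that runs on slave instances, after the datanode/region server
# startup commands:
"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker
{noformat}

The generated Hadoop configuration would also need mapred.job.tracker pointed at the master for the task trackers to find the jobtracker.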