[ https://issues.apache.org/jira/browse/HBASE-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12779051#action_12779051 ]

stack commented on HBASE-1961:
------------------------------

Andrew, that's amazing that you have the assent to the Java license inline... and
how it installs everything: Java, Hadoop, HBase, and... Ganglia included.

I got this:

{code}
Unable to read instance meta-data for product-codes
Creating bundle manifest...
ec2-bundle-vol complete.
ERROR: Error talking to S3: Server.AccessDenied(403): Only the bucket owner can access this property
Done
Client.InvalidManifest: HTTP 403 (Forbidden) response for URL http://s3.amazonaws.com:80/hbase-images/hbase-0.20.1-x86_64.manifest.xml: check your S3 ACLs are correct.
Terminate with: ec2-terminate-instances i-971a79ff
{code}

It seems I should have changed S3_BUCKET in hbase-ec2-env so it's a bucket
I have access to. That's no prob.
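For anyone else hitting the 403, the fix presumably amounts to pointing the scripts at a bucket you own. A sketch of the relevant edit in hbase-ec2-env (the variable name comes from the comment above; the bucket name here is hypothetical, not a real default):

```shell
# In hbase-ec2-env: bundle/upload the AMI to a bucket your AWS account owns,
# otherwise ec2-upload-bundle and ec2-run-instances hit S3 AccessDenied (403).
# "my-hbase-images" is a placeholder bucket name, not part of the scripts.
S3_BUCKET=my-hbase-images
```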

I tried running $ ./bin/hbase-ec2 launch-cluster stackcluster 3 3... and all
went well until the ZK nodes came up:

{code}
Starting ZooKeeper quorum ensemble.
Starting an AMI with ID ami-c644a7af (arch i386) in group stackcluster-zookeeper
Waiting for instance i-6b1c7f03 to start: ................. Started ZooKeeper instance i-6b1c7f03 as ip-10-245-59-97.ec2.internal
    Public DNS name is ec2-72-44-33-220.compute-1.amazonaws.com.
Starting an AMI with ID ami-c644a7af (arch i386) in group stackcluster-zookeeper
Waiting for instance i-471c7f2f to start: ....................... Started ZooKeeper instance i-471c7f2f as ip-10-245-58-191.ec2.internal
    Public DNS name is ec2-67-202-46-119.compute-1.amazonaws.com.
Starting an AMI with ID ami-c644a7af (arch i386) in group stackcluster-zookeeper
Waiting for instance i-c51c7fad to start: ..................... Started ZooKeeper instance i-c51c7fad as ip-10-244-206-65.ec2.internal
    Public DNS name is ec2-174-129-119-249.compute-1.amazonaws.com.
ZooKeeper quorum is ip-10-245-59-97.ec2.internal,ip-10-245-58-191.ec2.internal,ip-10-244-206-65.ec2.internal.
Initializing the ZooKeeper quorum ensemble.
    ec2-72-44-33-220.compute-1.amazonaws.com
lost connection
    ec2-67-202-46-119.compute-1.amazonaws.com
lost connection
    ec2-174-129-119-249.compute-1.amazonaws.com
lost connection
...
{code}

They seem to be up in the console, but the above seems to have stopped the
script from going on to start the regionservers?

I tried it twice and got the same lost connection both times.
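For what it's worth, a "lost connection" from scp/ssh right after an instance reports started is often just sshd not accepting connections yet. A hedged sketch of a retry wrapper one could put around the init-script copy step (this helper is my assumption, not something in the attached scripts; the host name in the usage comment is taken from the log above):

```shell
#!/bin/sh
# Retry a command a fixed number of times with a short pause between
# attempts; returns 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1   # interval is illustrative; waiting longer per attempt is saner on EC2
  done
  return 1
}

# Hypothetical usage while the ZK instances finish booting, e.g.:
#   retry 10 ssh -o ConnectTimeout=10 root@ec2-72-44-33-220.compute-1.amazonaws.com true
```

Wrapping the scp/ssh calls in the launch script this way would let it ride out the boot window instead of giving up on the first refused connection.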

Terminate-cluster is sweet, the way it asks you if you want to shut it all down.




> HBase EC2 scripts
> -----------------
>
>                 Key: HBASE-1961
>                 URL: https://issues.apache.org/jira/browse/HBASE-1961
>             Project: Hadoop HBase
>          Issue Type: New Feature
>         Environment: Amazon AWS EC2
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>            Priority: Minor
>             Fix For: 0.21.0, 0.20.3
>
>         Attachments: ec2-contrib.tar.gz
>
>
> Attached tarball is a clone of the Hadoop EC2 scripts, modified significantly 
> to start up an HBase storage-only cluster on top of HDFS backed by instance 
> storage. 
> Tested with the HBase 0.20 branch but should work with trunk also. Only the 
> AMI create and launch scripts are tested. Will bring up a functioning HBase 
> cluster. 
> Do "create-hbase-image c1.xlarge" to create an x86_64 AMI, or 
> "create-hbase-image c1.medium" to create an i386 AMI.  Public Hadoop/HBase 
> 0.20.1 AMIs are available:
>     i386: ami-c644a7af
>     x86_64: ami-f244a79b
> launch-hbase-cluster brings up the cluster: First, a small dedicated ZK 
> quorum, specifiable in size, default of 3. Then, the DFS namenode (formatting 
> on first boot) and one datanode and the HBase master. Then, a specifiable 
> number of slaves, instances running DFS datanodes and HBase region servers.  
> For example:
> {noformat}
>     launch-hbase-cluster testcluster 100 5
> {noformat}
> would bring up a cluster with 100 slaves supported by a 5 node ZK ensemble.
> We must colocate a datanode with the namenode because currently the master 
> won't tolerate a brand new DFS with only namenode and no datanodes up yet. 
> See HBASE-1960. By default the launch scripts provision ZooKeeper as 
> c1.medium and the HBase master and region servers as c1.xlarge. The result is 
> an HBase cluster supported by a ZooKeeper ensemble. ZK ensembles are not 
> dynamic, but HBase clusters can be grown by simply starting up more slaves, 
> just like Hadoop. 
> hbase-ec2-init-remote.sh can be trivially edited to bring up a jobtracker on 
> the master node and task trackers on the slaves.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.