[
https://issues.apache.org/jira/browse/HBASE-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778484#action_12778484
]
Andrew Purtell commented on HBASE-1961:
---------------------------------------
bq. Has hardcoded JAVA_VERSION in hbase-ec2-env.sh. Is that intentional?
Yes. That's used only when building the AMI. Also, you can see that the
location of the HBase and JVM packages is hardcoded, and is currently a bucket
of mine in S3. We should host HBase tarballs somewhere on ASF systems instead.
I put the JVM packages up in S3 because Sun's Java download URLs are not
stable (or at least have not been in the past).
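For reference, the relevant part of hbase-ec2-env.sh looks roughly like this;
the variable names, version numbers, bucket, and URL layout here are
illustrative placeholders, not quoted from the patch:
{noformat}
# Used only when building the AMI (illustrative values):
JAVA_VERSION=1.6.0_17
HBASE_VERSION=0.20.1

# Package locations are hardcoded and currently point at a personal S3
# bucket; they should eventually be ASF-hosted tarballs instead.
# (Variable names and URL layout here are hypothetical.)
JAVA_BINARY_URL=http://example-bucket.s3.amazonaws.com/jdk-${JAVA_VERSION}-linux-x86_64.bin
HBASE_URL=http://example-bucket.s3.amazonaws.com/hbase-${HBASE_VERSION}.tar.gz
{noformat}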
bq. The -h is not passed to ec2-describe-instances
Argument handling in the scripts is basically as-is from the parent Hadoop EC2
scripts.
bq. So it seems like we require people to fill in keys into the hbase-ec2-env.sh
The requirements are (sketched below):
* Fill in AWS_ACCOUNT_ID
* Fill in AWS_ACCESS_KEY_ID
* Fill in AWS_SECRET_ACCESS_KEY
* Fill in KEY_NAME
* Make sure a file named id_rsa_${KEY_NAME} exists in EC2_KEYDIR
* Make sure EC2_PRIVATE_KEY is defined in the environment. This is usually
done when setting up the API tools. We should probably add a line for that in
hbase-ec2-env.sh to call it out.
The user must also put their EC2 private key into EC2_KEYDIR as pk*.pem and
their certificate in there as cert*.pem.
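A minimal sketch of that setup, with placeholder values:
{noformat}
# In hbase-ec2-env.sh:
AWS_ACCOUNT_ID=123456789012
AWS_ACCESS_KEY_ID=<your access key id>
AWS_SECRET_ACCESS_KEY=<your secret access key>
KEY_NAME=mykey
EC2_KEYDIR=$HOME/.ec2

# Files expected in EC2_KEYDIR:
#   id_rsa_mykey      (private half of the keypair named by KEY_NAME)
#   pk-XXXXXXXX.pem   (EC2 private key)
#   cert-XXXXXXXX.pem (EC2 certificate)

# In the environment, usually set when installing the EC2 API tools:
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-XXXXXXXX.pem
{noformat}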
All of this should go into the README and up on the wiki. I'll add it to the
README now.
> HBase EC2 scripts
> -----------------
>
> Key: HBASE-1961
> URL: https://issues.apache.org/jira/browse/HBASE-1961
> Project: Hadoop HBase
> Issue Type: New Feature
> Environment: Amazon AWS EC2
> Reporter: Andrew Purtell
> Assignee: Andrew Purtell
> Priority: Minor
> Fix For: 0.21.0, 0.20.3
>
> Attachments: ec2-contrib.tar.gz
>
>
> Attached tarball is a clone of the Hadoop EC2 scripts, modified significantly
> to start up an HBase storage-only cluster on top of HDFS backed by instance
> storage.
> Tested with the HBase 0.20 branch but should work with trunk also. Only the
> AMI creation and launch scripts are tested; they will bring up a functioning
> HBase cluster.
> Do "create-hbase-image c1.xlarge" to create an x86_64 AMI, or
> "create-hbase-image c1.medium" to create an i386 AMI. Public Hadoop/HBase
> 0.20.1 AMIs are available:
> i386: ami-c644a7af
> x86_64: ami-f244a79b
> launch-hbase-cluster brings up the cluster: first, a small dedicated ZK
> quorum of specifiable size (default 3); then the DFS namenode (formatting on
> first boot), one datanode, and the HBase master; then a specifiable number of
> slaves, i.e. instances running DFS datanodes and HBase region servers.
> For example:
> {noformat}
> launch-hbase-cluster testcluster 100 5
> {noformat}
> would bring up a cluster with 100 slaves supported by a 5-node ZK ensemble
> (the arguments are cluster name, slave count, and ZK ensemble size).
> We must colocate a datanode with the namenode because currently the master
> won't tolerate a brand-new DFS with only a namenode and no datanodes up yet;
> see HBASE-1960. By default the launch scripts provision ZooKeeper as
> c1.medium and the HBase master and region servers as c1.xlarge. The result is
> an HBase cluster supported by a ZooKeeper ensemble. ZK ensembles are not
> dynamic, but HBase clusters can be grown by simply starting up more slaves,
> just like Hadoop.
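> The instance type defaults live in hbase-ec2-env.sh; the variable names
> below are a sketch of the idea, not necessarily the names in the patch:
> {noformat}
> # Hypothetical names -- check hbase-ec2-env.sh for the real ones.
> ZOO_INSTANCE_TYPE=c1.medium     # ZooKeeper ensemble members
> MASTER_INSTANCE_TYPE=c1.xlarge  # namenode + HBase master (+ one datanode)
> SLAVE_INSTANCE_TYPE=c1.xlarge   # datanode + region server slaves
> {noformat}
> Growing the cluster would then be a matter of launching more slaves against
> the same cluster name, presumably via a launch-hbase-slaves script mirroring
> the parent's launch-hadoop-slaves.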
> hbase-ec2-init-remote.sh can be trivially edited to bring up a jobtracker on
> the master node and tasktrackers on the slaves, as sketched below.
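> A minimal sketch of such an edit, assuming the stock Hadoop 0.20 daemon
> scripts are present on the image (exact placement inside
> hbase-ec2-init-remote.sh will vary):
> {noformat}
> # In the section of the init script that runs on the master:
> "$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker
>
> # In the section that runs on the slaves:
> "$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker
> {noformat}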