Thanks Andrei for the quick reply!

I followed the instructions on:
http://code.google.com/speed/public-dns/docs/using.html

Same result though, I'm afraid.
Tim

On Fri, Jan 27, 2012 at 4:11 PM, Andrei Savu <[email protected]> wrote:
> Hi Tim,
>
> And welcome to Apache Whirr! Let me give you some advice that could
> help you get this working.
>
> I think the bootstrap fails for you because Whirr fails to do
> reverse DNS resolution for Amazon public IPs. Can you try
> switching to the Google Public DNS servers (8.8.8.8 & 8.8.4.4)?
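>
> For example, on the machine running Whirr you could point it at
> Google Public DNS and then check that a reverse lookup of an EC2
> public IP actually resolves. This is only a sketch: the resolv.conf
> edit is distro-specific (adjust for your OS), and the IP is just the
> master from your log:
>
>   echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
>   echo "nameserver 8.8.4.4" | sudo tee -a /etc/resolv.conf
>
>   # a reverse lookup should come back with an amazonaws.com name
>   dig -x 75.101.188.54 +short
>   host 75.101.188.54
>
> If the reverse lookup times out or returns nothing, that would match
> the bootstrap failure.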
>
> PS: That article is a bit old, but you've figured out all the needed changes!
>
> PPS: You can also find me on IRC in #whirr (asavu) for a more
> interactive discussion.
>
> -- Andrei Savu / andreisavu.ro
>
> On Fri, Jan 27, 2012 at 4:57 PM, Tim Robertson
> <[email protected]> wrote:
>> Hi all,
>>
>> I am trying to follow the instructions on:
>>  http://www.bigfastblog.com/run-the-latest-whirr-and-deploy-hbase-in-minutes
>>
>> I took Whirr from here today (note this is different from the
>> instructions, which seem to point at a non-existent incubator path):
>>  http://svn.apache.org/repos/asf/whirr/trunk/
>>
>> It almost works for me, but the NameNode, JobTracker and HBase
>> master don't seem to start (ZooKeeper does).
>> On the slaves I correctly have DataNode, TaskTracker and RegionServer
>> services running.
>>
>> I suspect something is going on with my key file or permissions, but
>> I am too naive to work out what is happening, so I'm hoping for some
>> guidance.
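>>
>> For what it's worth, I was going to sanity-check the ownership and
>> key permissions along these lines (not sure these are even the right
>> places to look):
>>
>>   # who owns the log/pid directories the start scripts complain about?
>>   ls -ld /var/log/hadoop/logs /var/run/hadoop
>>
>>   # is the private key ssh is using readable only by me?
>>   ls -l ~/.ssh/id_rsa ~/.ssh/authorized_keys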
>>
>> SSHing onto the master and trying to start Hadoop manually by
>> running start-all.sh as my user, I get the following:
>> tim@domU-12-31-39-0C-90-D1:~$ /usr/local/hadoop-0.20.205.0/bin/start-all.sh
>> chown: changing ownership of `/var/log/hadoop/logs': Operation not permitted
>> starting namenode, logging to
>> /var/log/hadoop/logs/hadoop-tim-namenode-domU-12-31-39-0C-90-D1.out
>> /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line 136:
>> /var/run/hadoop/hadoop-tim-namenode.pid: Permission denied
>> /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line 135:
>> /var/log/hadoop/logs/hadoop-tim-namenode-domU-12-31-39-0C-90-D1.out:
>> Permission denied
>> head: cannot open
>> `/var/log/hadoop/logs/hadoop-tim-namenode-domU-12-31-39-0C-90-D1.out'
>> for reading: No such file or directory
>> localhost: Warning: Permanently added 'localhost' (RSA) to the list of
>> known hosts.
>> localhost: chown: changing ownership of `/var/log/hadoop/logs':
>> Operation not permitted
>> localhost: starting datanode, logging to
>> /var/log/hadoop/logs/hadoop-tim-datanode-domU-12-31-39-0C-90-D1.out
>> localhost: /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line
>> 136: /var/run/hadoop/hadoop-tim-datanode.pid: Permission denied
>> localhost: /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line
>> 135: /var/log/hadoop/logs/hadoop-tim-datanode-domU-12-31-39-0C-90-D1.out:
>> Permission denied
>> localhost: head: cannot open
>> `/var/log/hadoop/logs/hadoop-tim-datanode-domU-12-31-39-0C-90-D1.out'
>> for reading: No such file or directory
>> localhost: chown: changing ownership of `/var/log/hadoop/logs':
>> Operation not permitted
>> localhost: starting secondarynamenode, logging to
>> /var/log/hadoop/logs/hadoop-tim-secondarynamenode-domU-12-31-39-0C-90-D1.out
>> localhost: /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line
>> 136: /var/run/hadoop/hadoop-tim-secondarynamenode.pid: Permission
>> denied
>> localhost: /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line
>> 135: 
>> /var/log/hadoop/logs/hadoop-tim-secondarynamenode-domU-12-31-39-0C-90-D1.out:
>> Permission denied
>> localhost: head: cannot open
>> `/var/log/hadoop/logs/hadoop-tim-secondarynamenode-domU-12-31-39-0C-90-D1.out'
>> for reading: No such file or directory
>> chown: changing ownership of `/var/log/hadoop/logs': Operation not permitted
>> starting jobtracker, logging to
>> /var/log/hadoop/logs/hadoop-tim-jobtracker-domU-12-31-39-0C-90-D1.out
>> /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line 136:
>> /var/run/hadoop/hadoop-tim-jobtracker.pid: Permission denied
>> /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line 135:
>> /var/log/hadoop/logs/hadoop-tim-jobtracker-domU-12-31-39-0C-90-D1.out:
>> Permission denied
>> head: cannot open
>> `/var/log/hadoop/logs/hadoop-tim-jobtracker-domU-12-31-39-0C-90-D1.out'
>> for reading: No such file or directory
>> localhost: chown: changing ownership of `/var/log/hadoop/logs':
>> Operation not permitted
>> localhost: starting tasktracker, logging to
>> /var/log/hadoop/logs/hadoop-tim-tasktracker-domU-12-31-39-0C-90-D1.out
>> localhost: /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line
>> 136: /var/run/hadoop/hadoop-tim-tasktracker.pid: Permission denied
>> localhost: /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh: line
>> 135: /var/log/hadoop/logs/hadoop-tim-tasktracker-domU-12-31-39-0C-90-D1.out:
>> Permission denied
>> localhost: head: cannot open
>> `/var/log/hadoop/logs/hadoop-tim-tasktracker-domU-12-31-39-0C-90-D1.out'
>> for reading: No such file or directory
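>>
>> (Given all the "Permission denied" errors on files owned by another
>> user, I wondered whether the daemons are meant to be started as a
>> dedicated hadoop user instead; the hadoop-hadoop-* log file names
>> further down suggest a "hadoop" user, but that's a guess on my part.
>> Something like:
>>
>>   sudo -u hadoop /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh start namenode
>>   sudo -u hadoop /usr/local/hadoop-0.20.205.0/bin/hadoop-daemon.sh start jobtracker
>>
>> might be closer to what Whirr itself does.)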
>>
>> Running the same under sudo, I get:
>>
>> tim@domU-12-31-39-0C-90-D1:~$ sudo
>> /usr/local/hadoop-0.20.205.0/bin/start-all.sh
>> starting namenode, logging to
>> /var/log/hadoop/logs/hadoop-root-namenode-domU-12-31-39-0C-90-D1.out
>> Warning: $HADOOP_HOME is deprecated.
>>
>> Error: JAVA_HOME is not set.
>> localhost: Warning: Permanently added 'localhost' (RSA) to the list of
>> known hosts.
>> localhost: Permission denied (publickey).
>> localhost: Permission denied (publickey).
>> starting jobtracker, logging to
>> /var/log/hadoop/logs/hadoop-root-jobtracker-domU-12-31-39-0C-90-D1.out
>> Warning: $HADOOP_HOME is deprecated.
>>
>> Error: JAVA_HOME is not set.
>> localhost: Permission denied (publickey).
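>>
>> (The JAVA_HOME error presumably means root's environment doesn't set
>> it; I'd guess the usual fix is to export it in the Hadoop env file,
>> something like the line below, though the exact JVM path on this
>> Ubuntu 10.04 AMI is a guess on my part:
>>
>>   # in /usr/local/hadoop-0.20.205.0/conf/hadoop-env.sh
>>   export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
>>
>> The "Permission denied (publickey)" lines presumably mean root has
>> no authorized key for localhost, which looks like a separate issue.)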
>>
>> Below is the full log of the startup.
>>
>> Any help greatly appreciated!
>> Tim
>>
>>
>>
>>
>>
>> $ bin/whirr launch-cluster --config hbase-ec2.properties
>> Bootstrapping cluster
>> Configuring template
>> Configuring template
>> Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker,
>> hbase-regionserver]
>> Starting 1 node(s) with roles [zookeeper, hadoop-namenode,
>> hadoop-jobtracker, hbase-master]
>> Nodes started: [[id=us-east-1/i-45c80920, providerId=i-45c80920,
>> group=hbase, name=hbase-45c80920, location=[id=us-east-1b, scope=ZONE,
>> description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-0E-49-D1,
>> privateAddresses=[10.192.74.31], publicAddresses=[107.22.17.222],
>> hardware=[id=c1.xlarge, providerId=c1.xlarge, name=null,
>> processors=[[cores=8.0, speed=2.5]], ram=7168, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false,
>> isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb,
>> durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sdc, durable=false, isBootDevice=false], [id=null,
>> type=LOCAL, size=420.0, device=/dev/sdd, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sde, durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={Name=hbase-45c80920},
>> tags=[]]]
>> Nodes started: [[id=us-east-1/i-7bc8091e, providerId=i-7bc8091e,
>> group=hbase, name=hbase-7bc8091e, location=[id=us-east-1b, scope=ZONE,
>> description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-0C-90-D1,
>> privateAddresses=[10.215.147.31], publicAddresses=[75.101.188.54],
>> hardware=[id=c1.xlarge, providerId=c1.xlarge, name=null,
>> processors=[[cores=8.0, speed=2.5]], ram=7168, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false,
>> isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb,
>> durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sdc, durable=false, isBootDevice=false], [id=null,
>> type=LOCAL, size=420.0, device=/dev/sdd, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sde, durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={Name=hbase-7bc8091e},
>> tags=[]]]
>> Wrote instances file /Users/tim/.whirr/hbase/instances
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports [2181]
>> for [192.38.28.12/32]
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports
>> [50070] for [192.38.28.12/32]
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports [8020,
>> 8021] for [75.101.188.54/32]
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports
>> [50030] for [192.38.28.12/32]
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports [8021]
>> for [75.101.188.54/32]
>> The permission '75.101.188.54/32-1-8021-8021' has already been
>> authorized on the specified group
>> Authorizing firewall
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports
>> [60010, 60000] for [192.38.28.12/32]
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports
>> [50030] for [192.38.28.12/32]
>> The permission '192.38.28.12/32-1-50030-50030' has already been
>> authorized on the specified group
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports [8021]
>> for [75.101.188.54/32]
>> The permission '75.101.188.54/32-1-8021-8021' has already been
>> authorized on the specified group
>> Authorizing firewall ingress to [us-east-1/i-7bc8091e] on ports
>> [60030, 60020] for [192.38.28.12/32]
>> Starting to run scripts on cluster for phase configure on instances:
>> us-east-1/i-45c80920
>> Starting to run scripts on cluster for phase configure on instances:
>> us-east-1/i-7bc8091e
>> Running configure phase script on: us-east-1/i-45c80920
>> Running configure phase script on: us-east-1/i-7bc8091e
>> configure phase script run completed on: us-east-1/i-45c80920
>> Successfully executed configure script: [output=starting datanode,
>> logging to 
>> /var/log/hadoop/logs/hadoop-hadoop-datanode-domU-12-31-39-0E-49-D1.out
>> Warning: $HADOOP_HOME is deprecated.
>>
>> No directory, logging in with HOME=/
>> starting tasktracker, logging to
>> /var/log/hadoop/logs/hadoop-hadoop-tasktracker-domU-12-31-39-0E-49-D1.out
>> Warning: $HADOOP_HOME is deprecated.
>>
>> No directory, logging in with HOME=/
>> starting regionserver, logging to
>> /var/log/hbase/logs/hbase-hadoop-regionserver-domU-12-31-39-0E-49-D1.out
>> No directory, logging in with HOME=/
>> , error=, exitCode=0]
>> configure phase script run completed on: us-east-1/i-7bc8091e
>> Successfully executed configure script: [output=No directory, logging
>> in with HOME=/
>> No directory, logging in with HOME=/
>> No directory, logging in with HOME=/
>> No directory, logging in with HOME=/
>> starting jobtracker, logging to
>> /var/log/hadoop/logs/hadoop-hadoop-jobtracker-domU-12-31-39-0C-90-D1.out
>> Warning: $HADOOP_HOME is deprecated.
>>
>> No directory, logging in with HOME=/
>> starting master, logging to
>> /var/log/hbase/logs/hbase-hadoop-master-domU-12-31-39-0C-90-D1.out
>> No directory, logging in with HOME=/
>> , error=12/01/27 14:43:35 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 1 time(s).
>> 12/01/27 14:43:36 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 2 time(s).
>> 12/01/27 14:43:37 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 3 time(s).
>> 12/01/27 14:43:38 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 4 time(s).
>> 12/01/27 14:43:39 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 5 time(s).
>> 12/01/27 14:43:40 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 6 time(s).
>> 12/01/27 14:43:41 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 7 time(s).
>> 12/01/27 14:43:42 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 8 time(s).
>> 12/01/27 14:43:43 INFO ipc.Client: Retrying connect to server:
>> 75.101.188.54/75.101.188.54:8020. Already tried 9 time(s).
>> Bad connection to FS. command aborted. exception: Call to
>> 75.101.188.54/75.101.188.54:8020 failed on connection exception:
>> java.net.ConnectException: Connection refused
>> , exitCode=0]
>> Finished running configure phase scripts on all cluster instances
>> Completed configuration of hbase
>> Hosts: 75.101.188.54:2181
>> Completed configuration of hbase role hadoop-namenode
>> Namenode web UI available at http://75.101.188.54:50070
>> Wrote Hadoop site file /Users/tim/.whirr/hbase/hadoop-site.xml
>> Wrote Hadoop proxy script /Users/tim/.whirr/hbase/hadoop-proxy.sh
>> Completed configuration of hbase role hadoop-jobtracker
>> Jobtracker web UI available at http://75.101.188.54:50030
>> Completed configuration of hbase
>> Web UI available at http://75.101.188.54
>> Wrote HBase site file /Users/tim/.whirr/hbase/hbase-site.xml
>> Wrote HBase proxy script /Users/tim/.whirr/hbase/hbase-proxy.sh
>> Completed configuration of hbase role hadoop-datanode
>> Completed configuration of hbase role hadoop-tasktracker
>> Starting to run scripts on cluster for phase start on instances:
>> us-east-1/i-7bc8091e
>> Running start phase script on: us-east-1/i-7bc8091e
>> start phase script run completed on: us-east-1/i-7bc8091e
>> Successfully executed start script: [output=, error=, exitCode=0]
>> Finished running start phase scripts on all cluster instances
>> Started cluster of 2 instances
>> Cluster{instances=[Instance{roles=[hadoop-datanode,
>> hadoop-tasktracker, hbase-regionserver], publicIp=107.22.17.222,
>> privateIp=10.192.74.31, id=us-east-1/i-45c80920,
>> nodeMetadata=[id=us-east-1/i-45c80920, providerId=i-45c80920,
>> group=hbase, name=hbase-45c80920, location=[id=us-east-1b, scope=ZONE,
>> description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-0E-49-D1,
>> privateAddresses=[10.192.74.31], publicAddresses=[107.22.17.222],
>> hardware=[id=c1.xlarge, providerId=c1.xlarge, name=null,
>> processors=[[cores=8.0, speed=2.5]], ram=7168, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false,
>> isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb,
>> durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sdc, durable=false, isBootDevice=false], [id=null,
>> type=LOCAL, size=420.0, device=/dev/sdd, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sde, durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={Name=hbase-45c80920},
>> tags=[]]}, Instance{roles=[zookeeper, hadoop-namenode,
>> hadoop-jobtracker, hbase-master], publicIp=75.101.188.54,
>> privateIp=10.215.147.31, id=us-east-1/i-7bc8091e,
>> nodeMetadata=[id=us-east-1/i-7bc8091e, providerId=i-7bc8091e,
>> group=hbase, name=hbase-7bc8091e, location=[id=us-east-1b, scope=ZONE,
>> description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-0C-90-D1,
>> privateAddresses=[10.215.147.31], publicAddresses=[75.101.188.54],
>> hardware=[id=c1.xlarge, providerId=c1.xlarge, name=null,
>> processors=[[cores=8.0, speed=2.5]], ram=7168, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false,
>> isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb,
>> durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sdc, durable=false, isBootDevice=false], [id=null,
>> type=LOCAL, size=420.0, device=/dev/sdd, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>> device=/dev/sde, durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={Name=hbase-7bc8091e},
>> tags=[]]}], configuration={hbase.zookeeper.quorum=75.101.188.54:2181,
>> hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.SocksSocketFactory,
>> hadoop.socks.server=localhost:6666,
>> hbase.zookeeper.property.clientPort=2181}}
>> You can log into instances using the following ssh commands:
>> 'ssh -i /Users/tim/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o
>> StrictHostKeyChecking=no [email protected]'
>> 'ssh -i /Users/tim/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o
>> StrictHostKeyChecking=no [email protected]'
