On 7 May 2012 03:30, Deb Ghosh <[email protected]> wrote:
> Hi
>
> After running the Whirr 0.7.1 launch-cluster command on Amazon EC2 I get an
> error that I don't know how to resolve.
>
> This is the output:
>
>
> Unable to find service hadoop, using default.
> Bootstrapping cluster
> Configuring template
> Configuring template
> Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
> Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]
> Nodes started: [[id=us-east-1/i-b167a5d7, providerId=i-b167a5d7,
> group=myhadoopcluster, name=myhadoopcluster-b167a5d7,
> location=[id=us-east-1a, scope=ZONE, description=us-east-1a,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-ad36fbc4, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=true,
> description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-amd64-server-20110930],
> state=RUNNING, loginPort=22, hostname=ip-10-77-18-166,
> privateAddresses=[10.77.18.166], publicAddresses=[23.20.149.118],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
> isBootDevice=false], [id=vol-209ac44f, type=SAN, size=null,
> device=/dev/sda1, durable=true, isBootDevice=true]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-b167a5d7},
> tags=[]]]
> Nodes started: [[id=us-east-1/i-b367a5d5, providerId=i-b367a5d5,
> group=myhadoopcluster, name=myhadoopcluster-b367a5d5,
> location=[id=us-east-1a, scope=ZONE, description=us-east-1a,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-ad36fbc4, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=true,
> description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-amd64-server-20110930],
> state=RUNNING, loginPort=22, hostname=ip-10-202-45-93,
> privateAddresses=[10.202.45.93], publicAddresses=[23.20.230.14],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
> isBootDevice=false], [id=vol-249ac44b, type=SAN, size=null,
> device=/dev/sda1, durable=true, isBootDevice=true]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-b367a5d5},
> tags=[]]]
>
> Unable to start the cluster. Terminating all nodes.
> org.apache.whirr.net.DnsException: java.net.ConnectException: Connection refused
>     at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
>     at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
>     at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
>     at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
>     at org.apache.whirr.service.hadoop.HadoopCluster.getNamenodePublicAddress(HadoopCluster.java:35)
>     at org.apache.whirr.service.hadoop.HadoopJobTrackerClusterActionHandler.doBeforeConfigure(HadoopJobTrackerClusterActionHandler.java:51)
>     at org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
>     at org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
>     at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
>     at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
>     at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
>     at org.apache.whirr.cli.Main.run(Main.java:64)
>     at org.apache.whirr.cli.Main.main(Main.java:97)
> Caused by: java.net.ConnectException: Connection refused
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>     at org.xbill.DNS.TCPClient.connect(TCPClient.java:30)
>     at org.xbill.DNS.TCPClient.sendrecv(TCPClient.java:118)
>     at org.xbill.DNS.SimpleResolver.send(SimpleResolver.java:254)
>     at org.xbill.DNS.ExtendedResolver$Resolution.start(ExtendedResolver.java:95)
>     at org.xbill.DNS.ExtendedResolver.send(ExtendedResolver.java:358)
>     at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:69)
>     ... 12 more
> Unable to load cluster state, assuming it has no running nodes.
> java.io.FileNotFoundException: /home/debashisg/.whirr/myhadoopcluster/instances (No such file or directory)
>     at java.io.FileInputStream.open(Native Method)
>     at java.io.FileInputStream.<init>(FileInputStream.java:120)
>     at com.google.common.io.Files$1.getInput(Files.java:100)
>     at com.google.common.io.Files$1.getInput(Files.java:97)
>     at com.google.common.io.CharStreams$2.getInput(CharStreams.java:91)
>     at com.google.common.io.CharStreams$2.getInput(CharStreams.java:88)
>     at com.google.common.io.CharStreams.readLines(CharStreams.java:306)
>     at com.google.common.io.Files.readLines(Files.java:580)
>     at org.apache.whirr.state.FileClusterStateStore.load(FileClusterStateStore.java:54)
>     at org.apache.whirr.state.ClusterStateStore.tryLoadOrEmpty(ClusterStateStore.java:58)
>     at org.apache.whirr.ClusterController.destroyCluster(ClusterController.java:143)
>     at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:118)
>     at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
>     at org.apache.whirr.cli.Main.run(Main.java:64)
>     at org.apache.whirr.cli.Main.main(Main.java:97)
> Starting to run scripts on cluster for phase destroyinstances:
> Starting to run scripts on cluster for phase destroyinstances:
> Finished running destroy phase scripts on all cluster instances
> Destroying myhadoopcluster cluster
>
>
> Thanks
> Debashis


Again, see last month's thread on this list, where the same DnsException
was discussed:

http://mail-archives.apache.org/mod_mbox/whirr-user/201204.mbox/browser

In short: the "Connection refused" happens on your local machine, not on
EC2. Both nodes reached RUNNING; the launch then died when Whirr's
FastDnsResolver did a reverse DNS lookup of the namenode's public address
and could not reach the name server your workstation is configured to use.
The FileNotFoundException afterwards is just fallout - the launch failed
before ~/.whirr/myhadoopcluster/instances was written, so the cleanup step
found no state file to load.
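An easy way to confirm is to reproduce the lookup yourself. Here is a
minimal sketch using dnsjava (the org.xbill.DNS library from your stack
trace, which is already bundled in Whirr's lib/ directory); the class name
is my own, and the IP is just one of the public addresses from your log:

    import java.net.InetAddress;
    import org.xbill.DNS.Lookup;
    import org.xbill.DNS.Name;
    import org.xbill.DNS.ReverseMap;
    import org.xbill.DNS.Type;

    public class ReverseDnsCheck {
        public static void main(String[] args) throws Exception {
            // One of the public addresses from your launch log; any
            // routable IP will do, since the point is only to see
            // whether your local resolver answers at all.
            InetAddress addr = InetAddress.getByName("23.20.149.118");

            // Build the in-addr.arpa name and run a PTR query - in
            // essence what FastDnsResolver does for each node.
            Name ptrName = ReverseMap.fromAddress(addr);
            Lookup lookup = new Lookup(ptrName, Type.PTR);
            lookup.run();

            // "network error" here would match the ConnectException
            // in your trace.
            System.out.println("status: " + lookup.getErrorString());
        }
    }

If this fails from your workstation in the same way, the problem is local
DNS, not your Whirr recipe.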

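If it does fail, check which name servers dnsjava picked up; on Linux it
reads /etc/resolv.conf. Another small sketch (again, the class name is my
own):

    import org.xbill.DNS.ResolverConfig;

    public class ShowResolvers {
        public static void main(String[] args) {
            // The servers the lookup above will try; the first one
            // listed must accept connections on port 53.
            String[] servers = ResolverConfig.getCurrentConfig().servers();
            if (servers == null) {
                System.out.println("no name servers configured");
            } else {
                for (String server : servers) {
                    System.out.println("name server: " + server);
                }
            }
        }
    }

A resolv.conf pointing at 127.0.0.1 with nothing listening on port 53
there (a half-configured dnsmasq, for example) would produce exactly this
symptom; pointing it at a reachable name server and re-running
launch-cluster is the first thing I would try.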