Thanks Paul. I think I figured out the problem. If you look at my configuration, I haven't specified the AMI.
When the hardware-id is m1.small, Whirr picks the AMI ami-5154b838, which is a 32-bit AMI. I looked at /tmp/*/stderr.log - the Hadoop install itself is failing because apt is not able to find the packages. I then set the parameter whirr.image-id=us-east-1/ami-35de095c (a 64-bit AMI), and that worked fine even with the m1.small setting. Should we create a bug for this, or document it somewhere so it's there for someone to google?

On Sun, Aug 26, 2012 at 11:21 PM, Paul Baclace <[email protected]> wrote:

> Compare the files /tmp/*/std*.out on your master node for failing and
> succeeding differences. You probably have something requiring huge amounts
> of memory before /etc/hadoop/ is created, so it works for large nodes. Or
> something else...
>
> Paul
>
> On 20120826 19:16, Prabhuram Mohan wrote:
>
>> The /etc/hadoop folder is missing when I launch a CDH Hadoop cluster with
>> m1.small or m1.medium on the name node.
>> However, the /etc/hadoop folder is present when using m1.large.
>>
>> Am I missing something? Does anybody know why this happens? Here is my
>> hadoop.properties file:
>>
>> whirr.cluster-name=cdh002
>> whirr.cluster-user=${sys:user.name}
>>
>> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker+ganglia-metad,2 hadoop-datanode+hadoop-tasktracker+ganglia-monitor
>>
>> whirr.hadoop.install-function=install_cdh_hadoop
>> whirr.hadoop.configure-function=configure_cdh_hadoop
>> whirr.provider=aws-ec2
>> whirr.identity=${env:WHIRR_AWS_ACCESS_KEY_ID}
>> whirr.credential=${env:WHIRR_AWS_SECRET_ACCESS_KEY}
>>
>> whirr.hardware-id=m1.small
>>
>> whirr.location-id=us-east-1
>>
>> whirr.aws-ec2-spot-price=0.06
>>
>> whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
>> whirr.public-key-file=${whirr.private-key-file}.pub
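For the archives, here are the relevant lines of the hadoop.properties file above with the fix applied (everything else unchanged). This is just a sketch of what worked for me; ami-35de095c is the only 64-bit AMI I have tested, so other 64-bit us-east-1 AMIs are untried:

    # m1.small defaults to a 32-bit AMI (ami-5154b838), where the CDH
    # install fails because apt cannot find the packages.
    whirr.hardware-id=m1.small

    # Pin a 64-bit AMI explicitly so Whirr does not fall back to the
    # 32-bit default for this instance size.
    whirr.image-id=us-east-1/ami-35de095c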
