Compare the files /tmp/*/std*.out on your master node between the failing
and succeeding launches. You probably have something that requires a large
amount of memory before /etc/hadoop/ is created, so it only works on large
nodes. Or something else...
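A minimal sketch of that comparison, assuming the logs from a failing and a succeeding node have been pulled side by side (the directory names and log lines below are placeholders, not real Whirr output):

```shell
# Placeholder setup standing in for logs copied from the two nodes;
# on a real cluster you would diff the actual /tmp/*/std*.out files.
workdir=$(mktemp -d)
mkdir -p "$workdir/small" "$workdir/large"
printf 'configure_cdh_hadoop: started\n' > "$workdir/small/stderr.out"
printf 'configure_cdh_hadoop: started\nconfigure_cdh_hadoop: done\n' > "$workdir/large/stderr.out"

# Show what the failing node's log is missing relative to the succeeding one.
# diff exits non-zero when the files differ, so || true keeps the shell happy.
diff "$workdir/small/stderr.out" "$workdir/large/stderr.out" || true
rm -rf "$workdir"
```

The point where the failing node's log stops short of the succeeding one's is usually where the setup script died.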
Paul
On 20120826 19:16, Prabhuram Mohan wrote:
The /etc/hadoop folder is missing on the name node when I launch a CDH
Hadoop cluster with m1.small or m1.medium.
However, the /etc/hadoop folder is present when using m1.large.
Am I missing something? Does anybody know why this happens? Here is my
hadoop.properties file:
whirr.cluster-name=cdh002
whirr.cluster-user=${sys:user.name}
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker+ganglia-metad,2 hadoop-datanode+hadoop-tasktracker+ganglia-monitor
whirr.hadoop.install-function=install_cdh_hadoop
whirr.hadoop.configure-function=configure_cdh_hadoop
whirr.provider=aws-ec2
whirr.identity=${env:WHIRR_AWS_ACCESS_KEY_ID}
whirr.credential=${env:WHIRR_AWS_SECRET_ACCESS_KEY}
whirr.hardware-id=m1.small
whirr.location-id=us-east-1
whirr.aws-ec2-spot-price=0.06
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${whirr.private-key-file}.pub
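Since m1.large is reported to work, one way to isolate the instance size as the variable is to relaunch with a copy of this file where only the hardware id differs. A sketch (the filenames here are placeholders):

```shell
# Copy the properties file, changing only whirr.hardware-id to m1.large,
# so any difference between the two launches is down to instance size.
sed 's/^whirr.hardware-id=.*/whirr.hardware-id=m1.large/' \
    hadoop.properties > hadoop-large.properties
```

If the cluster launched from the modified file comes up with /etc/hadoop present, the failure is tied to the smaller instance types (m1.small has far less memory than m1.large), which fits Paul's memory theory.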