Hi,

When I try to create a cluster using the m2.xlarge instance type, the instance 
storage doesn't get mapped to HDFS. What AMI am I supposed to use for a cluster 
like this? Am I doing something wrong? Here's a stripped-down example of what's breaking for me:

$ cat m2xlargeclust.properties
whirr.cluster-name=testm2xlarge
whirr.provider=aws-ec2
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa_whirr
whirr.public-key-file=${whirr.private-key-file}.pub
whirr.cluster-user=${sys:user.name}
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
whirr.templates.hadoop-namenode+hadoop-jobtracker.hardware-id=m1.large
whirr.templates.hadoop-namenode+hadoop-jobtracker.image-id=us-east-1/ami-bffa6fd6
whirr.hardware-id=m2.xlarge
whirr.image-id=us-east-1/ami-bffa6fd6
whirr.aws-ec2-spot-price=0.41
whirr.hadoop.version=0.20.2
whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
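
(For what it's worth, my understanding is that whirr.hardware-id and whirr.image-id act as the defaults for any instance template that doesn't have its own override, so the datanode+tasktracker instance should come up as an m2.xlarge from the same AMI, equivalent to spelling it out as:

whirr.templates.hadoop-datanode+hadoop-tasktracker.hardware-id=m2.xlarge
whirr.templates.hadoop-datanode+hadoop-tasktracker.image-id=us-east-1/ami-bffa6fd6

and the df output below does look like an m2.xlarge, roughly 17 GB of RAM and a 420 GB ephemeral disk, so the hardware override itself seems to be taking effect.)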

When I launch it I see this go by, which I think is probably part of the 
problem:

+ prepare_all_disks ''
++ echo ''
++ tr ';' '\n'
+ '[' '!' -e /data0 ']'
+ '[' -e /data ']'
+ mkdir /data0
+ ln -s /data0 /data
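
That empty string handed to prepare_all_disks looks like the heart of the problem. Judging from the tr ';' '\n' it normally gets a semicolon-separated list of disk mappings, so I'd have expected something more like this to go by (the exact format is just my guess, using the /dev/sdb that shows up in df below):

+ prepare_all_disks '/mnt,/dev/sdb'

Instead the list is empty, so /data0 just gets created on the root filesystem and /data symlinked to it.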

Once it starts up, the namenode UI reports only the root volume's capacity (roughly 10 GB minus the 1 GB dfs.datanode.du.reserved), not the 420 GB of instance storage:

Configured Capacity      :       8.84 GB

If I ssh into the datanode, I can see this:

$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             10321208   1040064   8756868  11% /
none                   8675532       112   8675420   1% /dev
none                   8970348         0   8970348   0% /dev/shm
none                   8970348        56   8970292   1% /var/run
none                   8970348         0   8970348   0% /var/lock
none                   8970348         0   8970348   0% /lib/init/rw
/dev/sdb             423135208    203084 401438084   1% /mnt
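
So the 420 GB ephemeral volume is attached and mounted at /mnt; it just isn't being given to HDFS. (It should also be visible as ephemeral0 in the instance metadata, e.g.:

$ curl http://169.254.169.254/latest/meta-data/block-device-mapping/

so the disk itself is clearly not the issue.)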

Meanwhile, hdfs-site.xml was generated with empty values for all of the directory properties:

$ cat /etc/hadoop/conf/hdfs-site.xml 
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value></value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value></value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value></value>
  </property>
</configuration>
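
Those empty dfs.data.dir, dfs.name.dir and fs.checkpoint.dir values look like the same symptom as the empty argument to prepare_all_disks above. I'd have expected them to end up pointing somewhere under the /data symlink, along the lines of (the exact path is only my guess at what Whirr normally generates):

  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/hdfs/data</value>
  </property>

I suppose I could try forcing the directories onto /mnt myself, assuming Whirr passes hadoop-hdfs.* properties from the cluster config through to hdfs-site.xml (I'm not sure it does), e.g.:

hadoop-hdfs.dfs.data.dir=/mnt/hadoop/hdfs/data
hadoop-hdfs.dfs.name.dir=/mnt/hadoop/hdfs/name

but I'd rather understand why the disk mapping comes up empty for m2.xlarge in the first place.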

Thanks in advance for any advice,

Chris
