On 06/28/2012 10:46 AM, David Rosenstrauch wrote:
On 06/27/2012 11:32 PM, Ben Kim wrote:
Hi
I got my topology script from
http://wiki.apache.org/hadoop/topology_rack_awareness_scripts
I checked that the script works correctly.

But in the Hadoop cluster, all my servers get assigned to the default
rack.
I'm using Hadoop 1.0.3, but I experienced the same problem with version
1.0.0.


Yunhong reported the same problem in the past, without any resolution:
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200807.mbox/%3cpine.lnx.4.64.0807031453070.28...@bert.cs.uic.edu%3E


*Benjamin Kim*
*benkimkimben at gmail*

We've used this script for rack awareness:

#!/bin/bash
# Note: this needs bash, not sh -- the "ar=( $line )" array syntax below
# is a bashism and fails under a strict POSIX shell such as dash, which
# by itself makes every node fall back to the default rack.
HADOOP_CONF=/etc/hadoop/conf
while [ $# -gt 0 ] ; do
   # Look up each node name/IP the NameNode passes us in topology.data.
   nodeArg=$1
   exec< "${HADOOP_CONF}/topology.data"
   result=""
   while read line ; do
     ar=( $line )
     if [ "${ar[0]}" = "$nodeArg" ] ; then
       result="${ar[1]}"
     fi
   done
   shift
   if [ -z "$result" ] ; then
     echo -n "/default/rack "
   else
     echo -n "$result "
   fi
done


The topology.data file (one whitespace-separated "node rack" pair per line) looks like this:

192.168.8.50    /dc1/rack1
192.168.8.70    /dc1/rack2
192.168.8.90    /dc1/rack3
...

HTH,

DR

By the way, don't forget that you have to set the topology.script.file.name property in core-site.xml on your NameNode (and then restart the NameNode), i.e.:

<property>
 <name>topology.script.file.name</name>
 <value>/etc/hadoop/conf/topology.sh</value>
</property>
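One more gotcha worth checking when every node still lands in the default rack: as far as I know, the NameNode falls back to the default rack (with only a logged warning) if the configured script is missing or not executable, so the file named by topology.script.file.name needs its execute bit set. A tiny illustration of that check, using a temp file as a stand-in for the real script path:

```shell
#!/bin/bash
# Sketch: check the execute bit the way you'd check the real script.
# The temp file below stands in for /etc/hadoop/conf/topology.sh.
script=$(mktemp)
check() { [ -x "$1" ] && echo "executable" || echo "not executable" ; }
before=$(check "$script")   # a freshly created file is not executable
chmod +x "$script"
after=$(check "$script")    # after chmod +x it passes
echo "$before -> $after"
rm -f "$script"
```

On the real cluster, the equivalent fix is simply chmod +x on the script you pointed the property at.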

HTH,

DR
