A couple of things...

Does your script have a default rack defined, so that if it can't find your 
machine it falls back to something like rack_default?
(You could use rack0 as the default, but then you have a problem: how will you 
know what's really in rack0 and what's just falling back to the default value?)
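
To give you an idea, here's a rough sketch (the map file path and rack names 
are just placeholders, not what I actually run):

#!/bin/bash
# Rough sketch of a topology script with an explicit default rack.
# Hadoop may pass one or more hostnames/IPs as arguments and expects
# one rack name per argument, space-separated, on stdout.
DEFAULT_RACK="/rack_default"
MAP_FILE="/etc/hadoop/topology.data"   # made-up "<host-or-ip> <rack>" file

while [ $# -gt 0 ]; do
  rack=$(awk -v host="$1" '$1 == host { print $2 }' "$MAP_FILE")
  echo -n "${rack:-$DEFAULT_RACK} "
  shift
done
echo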

The other issue is that you may have to handle the short machine name, the 
fully qualified name, and the IP address.
I'm not sure which one actually gets passed in, so I maintain all three lists 
in the script -- something like the snippet below.
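
For example (the hosts, IPs and racks here are made up):

# Match all three forms for each box right in the script.
resolve_rack() {
  case "$1" in
    node1|node1.example.com|192.168.1.101) echo "/rack1" ;;
    node2|node2.example.com|192.168.1.102) echo "/rack2" ;;
    *)                                     echo "/rack_default" ;;
  esac
}

for arg in "$@"; do
  echo -n "$(resolve_rack "$arg") "
done
echo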

HTH

-Mike


> Date: Fri, 2 Jul 2010 12:50:26 +1000
> Subject: problem with rack-awareness
> From: [email protected]
> To: [email protected]
> 
> hello,
> 
> I am trying to separate my 6 nodes onto 2 different racks.
> For test purposes, I wrote a bash script which simply returns "rack0" all the
> time, and I added the property "topology.script.file.name" to core-site.xml.
> 
> When I restart with start-dfs.sh, the namenode cannot find any datanodes at
> all; all the datanodes are lost somehow. If I remove "topology.script.file.name"
> from the conf, things go back to normal, i.e. all datanodes end up under
> "default-rack".
> 
> I don't know why the datanodes can't register with the namenode when the rack
> script is used. Any ideas?
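
For reference, the core-site.xml entry for that property generally looks like 
this (the script path is just a placeholder -- point it at wherever your script 
actually lives):

<property>
  <name>topology.script.file.name</name>
  <value>/path/to/rack-topology.sh</value>
</property>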
                                          
