What you really want is a backup namenode for the DFS, which is not supported at this time.
Multiple overlapping DFS clusters are possible, but they'll be totally independent, and each must be configured with different locations for name data, block data, temp data, etc. - not what you want.

Yoram

> -----Original Message-----
> From: Jagadeesh [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 25, 2006 1:50 AM
> To: [email protected]
> Subject: Overlapping DFS clusters
>
> Hi,
>
> Is there any way I can run overlapping DFS clusters in Hadoop?
>
> I would like to run multiple namenodes with the same set of slaves,
> so that even if one of the namenodes fails, the other one will still
> be available.
>
> Thanks
> Jugs
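To make the "totally independent" point concrete, here is a rough sketch of what the per-cluster config separation would have to look like. This is only an illustration - the hostnames, ports, and paths are made up, and the property names are from the Hadoop configuration of that era (hadoop-site.xml); check your own version's defaults before relying on them:

```xml
<!-- hadoop-site.xml for cluster A (hypothetical values) -->
<configuration>
  <!-- Each cluster needs its own namenode address -->
  <property>
    <name>fs.default.name</name>
    <value>namenode-a.example.com:9000</value>
  </property>
  <!-- Namenode metadata directory, must not be shared with cluster B -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/cluster-a/name</value>
  </property>
  <!-- Datanode block storage, again distinct per cluster -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/cluster-a/blocks</value>
  </property>
  <!-- Temp space also has to be separated -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/cluster-a</value>
  </property>
</configuration>
```

Cluster B would need a second copy of this file with every path and address changed. Since the two namenodes never share metadata, a file written through one is invisible to the other - which is why this setup gives you two clusters, not failover.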
