There is also some work underway to add HA and failover to the NameNode.  
You might get more success if you send your note to hdfs-dev instead of 
common-dev.  One other thing that can sometimes get a discussion going is to 
just file a JIRA for it.  People interested in it are likely to start watching 
it, and you can often have a good conversation there about it.

--Bobby Evans

On 9/28/11 8:27 AM, "Ravi Prakash" <ravihad...@gmail.com> wrote:

Hi Mirko,

It seems like a great idea to me! The architects and senior developers
might have more insight on this, though.

I think part of the reason the community might be slow to implement this is
that the NameNode being a single point of failure is usually regarded as FUD.
There are simple tricks (like writing the fsimage and edits log to NFS) that
can guard against some failure scenarios, and I think most Hadoop users are
satisfied with that.
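For context, the NFS trick Ravi mentions relies on the NameNode accepting a
comma-separated list of metadata directories and writing the fsimage and edits
log to all of them. A sketch of the relevant hdfs-site.xml entry, where
/mnt/namenode-nfs is an assumed NFS mount point (the paths are illustrative):

```xml
<!-- Sketch only: dfs.name.dir takes a comma-separated list of directories;
     the NameNode mirrors its metadata (fsimage + edits) into each one.
     /data/1/dfs/nn is local disk; /mnt/namenode-nfs is an assumed NFS mount. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/mnt/namenode-nfs/dfs/nn</value>
</property>
```

If the NameNode host dies, a replacement can be started against the NFS copy,
which is why this guards against some (but not all) failure scenarios.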

I wouldn't be too surprised if there is already a JIRA for this. But if you
could come up with a patch, I'm hopeful the community would be interested in
it.

Cheers
Ravi

2011/9/27 Mirko Kämpf <mirko.kae...@googlemail.com>

> Hi,
> during the Cloudera Developer Training in Berlin I came up with an idea
> regarding a lost name-node.
> In this case all data blocks are lost. The solution could be to have a
> table which relates filenames and block_ids on each node, which can be
> scanned after a name-node is lost. Or every block could carry a kind of
> backlink to the filename, plus the total number of blocks and/or an
> attached hash sum.
> This would make it easy to recover with minimal overhead.
>
> Now I would like to ask the developer community whether there is any good
> reason not to do this, before I start figuring out where to begin
> implementing such a feature.
>
> Thanks,
> Mirko
>
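To make Mirko's proposal concrete, the recovery path could look roughly like
the sketch below: each block carries a small backlink record (filename, block
index, total block count, checksum), and after a name-node loss the namespace
is rebuilt by scanning those records. All names and structures here are
hypothetical illustrations, not HDFS code:

```python
# Hypothetical sketch of the backlink idea from the mail above.
# Each block stores (filename, index, total_blocks, checksum) so the
# filename -> ordered-block mapping can be rebuilt by scanning data nodes.
from collections import defaultdict
from dataclasses import dataclass
import hashlib


@dataclass(frozen=True)
class BlockBacklink:
    block_id: int
    filename: str      # file this block belongs to
    index: int         # position of the block within the file
    total_blocks: int  # total number of blocks in the file
    checksum: str      # hash of the block's payload


def make_backlink(block_id, filename, index, total_blocks, payload):
    """Build the backlink record that would be stored alongside a block."""
    return BlockBacklink(block_id, filename, index, total_blocks,
                         hashlib.sha1(payload).hexdigest())


def recover_namespace(backlinks):
    """Rebuild {filename: [block_id, ...]} from scanned backlink records.

    Missing blocks show up as None, so partial recovery is detectable.
    """
    blocks_by_file = defaultdict(dict)
    totals = {}
    for bl in backlinks:
        blocks_by_file[bl.filename][bl.index] = bl.block_id
        totals[bl.filename] = bl.total_blocks
    return {name: [found.get(i) for i in range(totals[name])]
            for name, found in blocks_by_file.items()}
```

The per-block overhead is a short fixed-size record, which matches the
"minimal overhead" goal in the mail; the open question the thread raises is
whether keeping this metadata consistent on every replica is worth it given
the NFS-mirroring workaround.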
