Hi.

By default, the SecondaryNameNode checkpoints the metadata (merges the edit
log into the fsimage) every 60 minutes, correct?

So in your setup, the SNN checkpoints every 5 minutes? How reliable is that
in practice?
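
For reference, on the 0.20-era releases I believe the interval is controlled
by fs.checkpoint.period (in seconds, default 3600, i.e. 60 minutes). A
minimal sketch of the 5-minute setup, with placeholder paths for the
checkpoint directories:

  <!-- core-site.xml (0.20-style property names; paths are placeholders) -->
  <property>
    <name>fs.checkpoint.period</name>
    <!-- seconds between checkpoints; default 3600 = 60 minutes -->
    <value>300</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <!-- comma-separated list; the checkpoint image is written to each -->
    <value>/data1/hdfs/namesecondary,/data2/hdfs/namesecondary</value>
  </property>

If I recall correctly, fs.checkpoint.size can also trigger an early
checkpoint once the edit log grows past that size, independent of the period.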

Regards.


2009/10/1 Jason Venner <jason.had...@gmail.com>

> If you are looking for moment-by-moment recovery, you need multiple
> directories, preferably on several devices, for your Namenode edit log
> (which is modified on every metadata change), and also multiple directories
> for the FS image, which is updated every few minutes by the secondary
> Namenode.
>
> Having one of your directories on NFS will slow your Namenode down somewhat,
> as all writes to all devices have to complete before a metadata operation is
> finished. I seem to recall that the writes are done in parallel. It does,
> however, give you fast failover.
>
> The secondary Namenode is a nice repository of 5+ minute-old data in the
> event of a catastrophic failure, or of a catastrophic user error such as a
> mass file removal.
>
>
> On Thu, Oct 1, 2009 at 6:15 AM, Stas Oskin <stas.os...@gmail.com> wrote:
>
> > Hi.
> >
> > I'm looking to spread the metadata writing across several disks,
> > including NFS, to provide greater survivability.
> >
> > What makes more sense - to write the NameNode metadata to NFS, to write
> > the SecondaryNameNode metadata to NFS, or a combination of the two?
> >
> > Thanks.
> >
>
>
>
> --
> Pro Hadoop, a book to guide you from beginner to hadoop mastery,
> http://www.amazon.com/dp/1430219424?tag=jewlerymall
> www.prohadoopbook.com a community for Hadoop Professionals
>
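
For reference, the multiple metadata directories described above are, as far
as I understand it, just a comma-separated dfs.name.dir list in hdfs-site.xml.
A minimal sketch with placeholder paths (one local disk plus one NFS mount):

  <!-- hdfs-site.xml (paths are placeholders) -->
  <property>
    <name>dfs.name.dir</name>
    <!-- the fsimage is replicated to every directory in the list -->
    <value>/data1/hdfs/name,/mnt/nfs/hadoop/name</value>
  </property>

I believe dfs.name.edits.dir defaults to the same list, so the edit log
follows these directories unless it is pointed elsewhere.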
