hey patrick
i wanted to configure my cluster to write namenode metadata to multiple
directories as well:
<property>
<name>dfs.name.dir</name>
<value>/hadoop/var/name,/mnt/hadoop/var/name</value>
</property>
in my case, /hadoop/var/name is a local directory and /mnt/hadoop/var/name is an
NFS volume. i took down the cluster first, copied over the files from
/hadoop/var/name to /mnt/hadoop/var/name, and then tried to start up the
cluster. but the cluster won't start up properly...
here's the namenode log: http://pastebin.com/gmu0B7yd
any ideas why it wouldn't start up?
thx
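for reference, the copy step i did looks roughly like the sketch below (illustrative Python with throwaway stand-in directories; `mirror_name_dir` is a hypothetical helper, not a Hadoop tool, and in real life the cluster must be down and ownership/permissions preserved, e.g. with `cp -a` or `rsync -a`):

```python
import filecmp, os, shutil, tempfile

def mirror_name_dir(src, dst):
    """Copy a namenode metadata directory and verify the mirror matches.
    Sketch only: stop the cluster first and preserve ownership and
    permissions; the namenode will not start cleanly if the dfs.name.dir
    copies are inconsistent."""
    shutil.copytree(src, dst)
    cmp = filecmp.dircmp(src, dst)
    # top-level check: no files unique to either side, no differing files
    assert not (cmp.left_only or cmp.right_only or cmp.diff_files)
    return dst

# Demo with throwaway directories standing in for
# /hadoop/var/name and /mnt/hadoop/var/name:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "name")
os.makedirs(os.path.join(src, "current"))
with open(os.path.join(src, "current", "VERSION"), "w") as f:
    f.write("layoutVersion=-18\n")
mirror_name_dir(src, os.path.join(tmp, "name_copy"))
print("mirror matches")
```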
On Thu, Oct 6, 2011 at 6:58 PM, patrick sang <[email protected]> wrote:
> I would say your namenode writes metadata to the local fs (where your secondary
> namenode will pull files from), and to the NFS mount.
>
> <property>
> <name>dfs.name.dir</name>
> <value>/hadoop/name,/hadoop/nfs_server_name</value>
> </property>
>
>
> my 0.02$
> P
>
> On Thu, Oct 6, 2011 at 12:04 AM, shanmuganathan.r <
> [email protected]> wrote:
>
> > Hi Kai,
> >
> > There is no data stored in the secondary namenode related to the
> > Hadoop cluster. Am I correct?
> > If that is correct, and we run the secondary namenode on a separate machine,
> > then the fetch, merge and transfer time increases when the cluster
> > has a large namenode fsimage file. If a failover occurs at that time,
> > how can we recover the up-to-one-hour of recent changes to the HDFS
> > files? (the default checkpoint period is one hour)
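If the one-hour window is a concern, the checkpoint interval is configurable. In 0.20-era Hadoop the property is fs.checkpoint.period (in seconds, default 3600); for example, a ten-minute interval would be:

```xml
<property>
<name>fs.checkpoint.period</name>
<value>600</value>
</property>
```

Note also that the edits written since the last checkpoint still live in the namenode's dfs.name.dir, which is why mirroring that directory (e.g. to an NFS mount) is what protects the most recent changes, not the checkpoint alone.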
> >
> > Thanks R.Shanmuganathan
> >
> >
> >
> >
> >
> >
> > ---- On Thu, 06 Oct 2011 12:20:28 +0530 Kai Voigt <[email protected]> wrote
> > ----
> >
> >
> > Hi,
> >
> > the secondary namenode only fetches the two files when a checkpoint is
> > needed.
> >
> > Kai
> >
> > On 06.10.2011 at 08:45, shanmuganathan.r wrote:
> >
> > > Hi Kai,
> > >
> > > In the Second part I meant
> > >
> > >
> > > Does the secondary namenode also contain the FSImage file, or are the
> > > two files (FSImage and EditLog) transferred from the namenode at
> > > checkpoint time?
> > >
> > >
> > > Thanks
> > > Shanmuganathan
> > >
> > >
> > >
> > >
> > >
> > > ---- On Thu, 06 Oct 2011 11:37:50 +0530 Kai Voigt <[email protected]>
> > > wrote ----
> > >
> > >
> > > Hi,
> > >
> > > you're correct when saying the namenode hosts the fsimage file and the
> > > edits log file.
> > >
> > > The fsimage file contains a snapshot of the HDFS metadata (a filename
> > > to blocks list mapping). Whenever there is a change to HDFS, it will be
> > > appended to the edits file. Think of it as a database transaction log,
> > > where changes will not be applied to the datafile, but appended to a log.
> > >
> > > To prevent the edits file growing infinitely, the secondary namenode
> > > periodically pulls these two files, and the namenode starts writing
> > > changes to a new edits file. Then, the secondary namenode merges the
> > > changes from the edits file with the old snapshot from the fsimage file
> > > and creates an updated fsimage file. This updated fsimage file is then
> > > copied to the namenode.
> > >
> > > Then, the entire cycle starts again. To answer your question: The
> > > namenode has both files, even if the secondary namenode is running on a
> > > different machine.
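The fetch/merge/roll cycle described above can be sketched as a toy model (illustrative Python only; real HDFS keeps fsimage and edits as binary files on disk, this just mirrors the logic):

```python
# Toy model of the namenode checkpoint cycle: changes go to an append-only
# edits log, and a checkpoint merges them into a fresh fsimage snapshot.

class NameNode:
    def __init__(self):
        self.fsimage = {}   # snapshot: filename -> block list
        self.edits = []     # append-only log of changes since the snapshot

    def create_file(self, name, blocks):
        # Changes are appended to the edits log, not applied to fsimage.
        self.edits.append(("create", name, blocks))

    def delete_file(self, name):
        self.edits.append(("delete", name, None))

def checkpoint(nn):
    """What the secondary namenode does: merge the old edits into the old
    snapshot to produce an updated fsimage, while the namenode starts a
    fresh edits log."""
    old_edits, nn.edits = nn.edits, []   # namenode rolls to a new edits file
    merged = dict(nn.fsimage)            # start from the old snapshot
    for op, name, blocks in old_edits:   # replay the log against it
        if op == "create":
            merged[name] = blocks
        elif op == "delete":
            merged.pop(name, None)
    nn.fsimage = merged                  # updated image copied back

nn = NameNode()
nn.create_file("/a.txt", ["blk_1"])
nn.create_file("/b.txt", ["blk_2"])
nn.delete_file("/a.txt")
checkpoint(nn)
print(nn.fsimage)   # {'/b.txt': ['blk_2']}
print(nn.edits)     # [] -- fresh log after the checkpoint
```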
> > >
> > > Kai
> > >
> > > On 06.10.2011 at 07:57, shanmuganathan.r wrote:
> > >
> > > >
> > > > Hi All,
> > > >
> > > > I have a doubt about the hadoop secondary namenode concept. Please
> > > > correct me if the following statements are wrong.
> > > >
> > > > The namenode hosts the fsimage and edit log files. The
> > > > secondary namenode hosts the fsimage file only. At checkpoint time
> > > > the edit log file is transferred to the secondary namenode and the two
> > > > files are merged; then the updated fsimage file is transferred to the
> > > > namenode. Is that correct?
> > > >
> > > > If we run the secondary namenode on a separate machine, then
> > > > both machines contain the fsimage file and the namenode alone contains
> > > > the editlog file. Is that true?
> > > >
> > > > Thanks R.Shanmuganathan
> > > >
> > >
> > > --
> > > Kai Voigt
> > > [email protected]
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> >
> > --
> > Kai Voigt
> > [email protected]
> >
> >
> >
> >
> >
> >
> >
>