There is. Every task runs in its own temporary working directory, but in
general that output is cleaned up after the task completes. If you want to
save "side data" you have to use a workaround. This page should give you a
few pointers:
http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Task+Side-Effect+Files
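As a sketch of that side-effect-file pattern: each task attempt writes into its own work directory, and the file is promoted to the job output only if the attempt succeeds. In the 0.20 mapred API the per-attempt directory comes from FileOutputFormat.getWorkOutputPath(conf); the snippet below uses plain java.nio.file in place of the HDFS calls, and all names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Plain-Java sketch of Hadoop's "task side-effect file" pattern.
// In Hadoop itself the work directory would come from
// FileOutputFormat.getWorkOutputPath(conf) and live on HDFS;
// the directory and file names here are made up for illustration.
public class SideEffectFileSketch {

    // Each attempt writes its side data into its own work directory.
    public static Path writeSideFile(Path workDir, String name, String data)
            throws IOException {
        Files.createDirectories(workDir);
        Path sideFile = workDir.resolve(name);
        Files.write(sideFile, data.getBytes("UTF-8"));
        return sideFile;
    }

    // Called only for the successful attempt, like the output committer:
    // the file is moved from the work directory to the job output directory.
    public static Path promote(Path sideFile, Path outputDir) throws IOException {
        Files.createDirectories(outputDir);
        return Files.move(sideFile, outputDir.resolve(sideFile.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path work = Files.createTempDirectory("attempt_0001_m_000000_0");
        Path out = Files.createTempDirectory("job-output");
        Path side = writeSideFile(work, "side-data.txt", "filtered urls\n");
        Path promoted = promote(side, out);
        // After promotion the file exists in the output dir, not the work dir.
        System.out.println(Files.exists(promoted) && !Files.exists(side));
    }
}
```

Because failed or speculative attempts never promote their files, you don't end up with duplicate or partial side data in the output.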

On Fri, May 11, 2012 at 2:36 PM, Vijith <[email protected]> wrote:

> Thanks Ferdy.
> So does this mean that there is no way Nutch can connect to a flat file /
> database etc. while in deploy mode?
>
>
> On Fri, May 11, 2012 at 5:44 PM, Ferdy Galema <[email protected]
> >wrote:
>
> > When running Hadoop in deploy mode the actual tasks are run by the
> > MapReduce framework, so you have to check the MapReduce "user" logs.
> > Either use the JobTracker web interface or check them directly on the
> > nodes in HADOOP_HOME/logs/userlogs or something like that.
> >
> > On Fri, May 11, 2012 at 1:11 PM, Vijith <[email protected]> wrote:
> >
> > > I have tried with a separate logger and a PrintWriter object to do
> > > this. It works in local mode but not in deploy mode.
> > > I am running the Nutch job file. It's running and generating the
> > > Hadoop log without any errors, but the files are not created on any
> > > of the nodes.
> > >
> > > On Fri, May 11, 2012 at 3:07 PM, Vijith <[email protected]> wrote:
> > >
> > > > Hi,
> > > >
> > > > How can I create a separate, project-specific log in addition to
> > > > the existing log?
> > > > I am running Nutch in deploy mode.
> > > > Also I want some URLs filtered by my urlfilter to be stored in an
> > > > external flat file. How can I achieve this?
> > > >
> > > > --
> > > > *Thanks & Regards*
> > > > *Vijith V*
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
>
>
>
>
