No, I did not find any solution or reason.
regards
Sourabh

On Wed, Dec 1, 2010 at 12:40 AM, jitendra rajput <[email protected]> wrote:

> I also faced the same problem when trying to run crawl jobs in parallel
> locally. Even the stack trace was the same.
>
> @Sourabh, did you find any reason/solution for this?
>
> Thanks
> Jitendra
>
> On Tue, Nov 30, 2010 at 11:17 AM, Sourabh Kasliwal <[email protected]> wrote:
>
> > Thanks for replying.
> > The stack trace that I have sent is already at the lowest level; there is
> > no other location that the stack trace points to.
> > regards
> > Sourabh
> >
> > On Mon, Nov 29, 2010 at 11:41 PM, Alex McLintock <[email protected]> wrote:
> >
> > > Does it not say elsewhere in the stack trace? Can you show us more of
> > > the logs?
> > >
> > > On 29 November 2010 17:52, Sourabh Kasliwal <[email protected]>
> > > wrote:
> > >
> > > > Hi,
> > > > I am trying to run multiple Nutch instances in parallel, but sometimes
> > > > during execution one of the jobs fails during injection with the
> > > > exception:
> > > >
> > > > java.io.IOException: Job failed!
> > > > at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
> > > > at org.apache.nutch.crawl.Injector.inject(Injector.java:211)
> > > >
> > > > Or, during generation, at:
> > > > java.io.IOException: Job failed!
> > > > at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
> > > > at org.apache.nutch.crawl.Generator.generate(Generator.java:523)
> > > > at org.apache.nutch.crawl.Generator.generate(Generator.java:430)
> > > >
> > > > Can somebody shed some light on what could cause the job to fail with
> > > > an IOException?
> > > > regards
> > > > Sourabh
> > > >
> > >
> >
>
>
>
> --
> Thanks and regards
>
> Jitendra Singh
>
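
For context on the parallel-inject failure in the original question above, here
is a minimal, speculative sketch in Java of running two local Nutch inject jobs
side by side, assuming (which the thread never confirms) that the IOException
comes from the parallel local job runners sharing the same hadoop.tmp.dir. The
ParallelInject class, the directory paths, and the chosen config keys are
illustrative only; the thread itself ends without an identified cause.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.nutch.crawl.Injector;
import org.apache.nutch.util.NutchConfiguration;

public class ParallelInject {

  public static void main(String[] args) throws Exception {
    // Two independent crawl/url directories, one per instance (hypothetical paths).
    Thread a = injectThread("crawl1", "urls1", "/tmp/hadoop-crawl1");
    Thread b = injectThread("crawl2", "urls2", "/tmp/hadoop-crawl2");
    a.start();
    b.start();
    a.join();
    b.join();
  }

  // Builds a thread that runs one inject job with its own temp/local directories.
  private static Thread injectThread(final String crawlDir, final String urlDir,
                                     final String tmpDir) {
    return new Thread(new Runnable() {
      public void run() {
        try {
          Configuration conf = NutchConfiguration.create();
          // Assumption: giving each parallel instance its own hadoop.tmp.dir
          // (and mapred local/temp dirs) keeps the local job runners from
          // stepping on each other's staging files. Not verified in the thread.
          conf.set("hadoop.tmp.dir", tmpDir);
          conf.set("mapred.local.dir", tmpDir + "/mapred/local");
          conf.set("mapred.temp.dir", tmpDir + "/mapred/temp");
          new Injector(conf).inject(new Path(crawlDir, "crawldb"),
                                    new Path(urlDir));
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    });
  }
}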
