No, we're on v1.0. At some point, once this is working, I will upgrade to
v1.3. But I need to get it working locally on v1.0 before we can make that
move.

Ultimately, for the sake of this discussion, we're using Nutch v1.0.

On 18 December 2011 14:30, Lewis John Mcgibbney
<[email protected]> wrote:

> Hi Dean,
>
> So to clarify things, are you upgrading from 1.2 to 1.3?
>
> On Sun, Dec 18, 2011 at 2:27 PM, Dean Pullen <[email protected]>
> wrote:
> > Hi Lewis,
> >
> > v1 - due to our production servers (this project is already live)
> currently
> > using it.
> >
> > I'm preparing it locally on my dev machine so that we can upgrade to
> > 1.3, but can't get my local dev machine running due to this problem!
> >
> > On 18 December 2011 14:22, Lewis John Mcgibbney
> > <[email protected]> wrote:
> >
> >> Hi Dean,
> >>
> >> What version are you on?
> >>
> >> On Sun, Dec 18, 2011 at 2:20 PM, Dean Pullen <[email protected]>
> >> wrote:
> >> > (can't access work email, so posting via this account!)
> >> >
> >> > I've tried absolutely everything to resolve this issue, and have
> >> > scoured the web over the weekend in an attempt to rectify it.
> >> >
> >> > Can no-one help?!
> >> >
> >> >
> >> > Previous email:
> >> >
> >> > Hi all,
> >> >
> >> > I've seen this on the mailing list archives but not a solution.
> >> >
> >> > When I perform:
> >> > ../bin/nutch inject /opt/nutch/filesystem/data/crawl/crawldb/
> >> > /opt/nutch/filesystem/data/seed/
> >> >
> >> > I'm getting:
> >> > Injector: org.apache.hadoop.mapred.InvalidInputException: Input path
> >> > does not exist:
> >> > hdfs://localhost:9000/opt/nutch/filesystem/temp/inject-temp-496643776
> >> >     at
> >> >
> >>
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:179)
> >> >     at
> >> >
> >>
> org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:39)
> >> >     at
> >> >
> >>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:190)
> >> >     at
> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:797)
> >> >     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1142)
> >> >     at org.apache.nutch.crawl.Injector.inject(Injector.java:220)
> >> >     at org.apache.nutch.crawl.Injector.run(Injector.java:241)
> >> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >> >     at org.apache.nutch.crawl.Injector.main(Injector.java:231)
> >> >
> >> > The temp directory is an HDFS directory, and exists. I have plenty of
> >> > disk space left.
> >> >
> >> > Anyone know the cause? Is it a permission thing?
> >> >
> >> > Thanks in advance,
> >> >
> >> > Dean.
> >>
> >>
> >>
> >> --
> >> Lewis
> >>
>
>
>
> --
> Lewis
>
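For anyone hitting the same error, a minimal diagnostic sketch, assuming a standard `hadoop fs` CLI and the paths from the trace above (the seed file name `urls.txt` is an assumption, not from the thread). When `fs.default.name` points at `hdfs://localhost:9000`, the inject paths are resolved against HDFS, so directories that exist only on the local disk will be reported as missing:

```shell
# Hedged sketch, not from the thread: assumes a standard `hadoop fs` CLI and
# the paths shown in the stack trace above.

# 1. Confirm the seed and temp directories actually exist in HDFS
#    (not just on the local filesystem):
hadoop fs -ls /opt/nutch/filesystem/data/seed/
hadoop fs -ls /opt/nutch/filesystem/temp/

# 2. If the seed directory is only on the local disk, copy it into HDFS.
#    The seed file name "urls.txt" here is an assumption:
hadoop fs -mkdir /opt/nutch/filesystem/data/seed
hadoop fs -put /opt/nutch/filesystem/data/seed/urls.txt /opt/nutch/filesystem/data/seed/

# 3. Check ownership/permissions on the temp parent, since the inject job
#    writes its inject-temp-* directory there as the user running nutch:
hadoop fs -ls /opt/nutch/filesystem/
```

One common explanation on the lists is that the first inject MapReduce job produces no output when the seed directory is missing or empty in HDFS, so the `inject-temp-*` directory never materialises for the second job, which then fails with the `InvalidInputException` shown above.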
