I tried to index my local file system according to the FAQ:
http://wiki.apache.org/nutch/FAQ#head-c721b23b43b15885f5ea7d8da62c1c40a37878e6
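
Following the same FAQ I also changed conf/crawl-urlfilter.txt so that
file: URLs get through; roughly like this (the accepted path prefix is
just an example from my setup):

      # skip file:, ftp:, & mailto: urls  <- commented out per the FAQ
      # -^(file|ftp|mailto):

      # accept file: urls below my data directory (example path)
      +^file:///home/christian/data/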

But when I add the plugin list to the nutch-site.xml file like this:

      <property>
        <name>plugin.includes</name>
        <value>protocol-file|protocol-http|parse-(text|html)|index-basic|query-(basic|site|url)</value>
      </property>

I get this exception:

Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:357)
        at org.apache.nutch.crawl.Injector.inject(Injector.java:138)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:105)
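
For reference, I start the crawl roughly like this (the paths are only
examples; the seed file contains a single file: URL):

        mkdir urls
        echo 'file:///home/christian/data/' > urls/seed.txt
        bin/nutch crawl urls -dir crawl -depth 3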

The reason is probably the bug described in:
 * http://issues.apache.org/jira/browse/NUTCH-384

As a hack, I could use a (local) web server to feed Nutch the files.
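What I mean is something like this (the directory and port are only
examples):

      cd /home/christian/data           # example path
      python -m SimpleHTTPServer 8080   # any simple web server would do

and then seeding the crawl with http://localhost:8080/ instead of
file: URLs.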
But maybe there is a better workaround to index from a local filesystem
with Nutch 0.8.x?
Can you help me?

Additionally, I have another question:
 * Is it possible to use a directory in HDFS as a spool directory to
index from? (See the sketch below for what I mean.)
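
What I am imagining is roughly this (the paths are made up):

      bin/hadoop dfs -put /local/incoming /user/nutch/spool
      bin/hadoop dfs -ls /user/nutch/spool

i.e. documents would be copied into an HDFS directory and Nutch would
fetch and parse them from there, instead of from a local directory or a
web server.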


Thanks

Christian Herta
