The "urls" argument should be a directory containing your seed file, e.g. urls/urls.txt, not the text file itself.
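A minimal sketch of the fix, assuming Nutch is unpacked in the current directory (the seed URL below is just a placeholder):

```shell
# The first argument to `crawl` must be a directory holding seed
# files, not a single text file. Create one and move the seeds in:
mkdir -p urls
echo "http://lucene.apache.org/nutch/" > urls/urls.txt  # placeholder seed URL

# then re-run the crawl against the directory:
# bin/nutch crawl urls -dir crawl.demo -depth 2
```

With the directory in place, the injector finds the seed list and the "No input directories specified" IOException should go away.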

-----Original Message-----
From: tonykingzhao [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 20, 2006 12:27 PM
To: nutch-user
Subject: using nutch-0.8-dev error

 I'm using nutch-0.8-dev on Linux. I run the command bin/nutch crawl urls -dir
crawl.demo -depth 2, where urls is a text file.
  The error is:
060320 181841 parsing file:/root/nutch-0.8-dev/conf/hadoop-site.xml
java.io.IOException: No input directories specified in: Configuration:
defaults: hadoop-default.xml , mapred-default.xml ,
/tmp/hadoop/mapred/local/job_al4odz.xml/localRunnerfinal: hadoop-site.xml
        at
org.apache.hadoop.mapred.InputFormatBase.listFiles(InputFormatBase.java:84)
        at
org.apache.hadoop.mapred.InputFormatBase.getSplits(InputFormatBase.java:94)
        at
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:70)
060320 181842  map 0%  reduce 0%
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:310)
        at org.apache.nutch.crawl.Injector.inject(Injector.java:114)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:104)
 




tonykingzhao
2006-03-20





_______________________________________________
Nutch-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nutch-general
