You can check a couple of things to troubleshoot this.

1. Check the logs/hadoop.log file. Do you see any lines containing the
string "fetching"? Such lines show exactly which URLs were fetched. If
no such lines are present, your crawl did not fetch anything for some
reason. Also, read the rest of the log file carefully; you might find
clues about the problem.
2. Another possibility is that all URLs are being blocked by
conf/crawl-urlfilter.txt. Did you edit this file as described in the
tutorial? If not, this is almost certainly the problem. An easy way to
allow all URLs is to replace the "-." rule at the end with "+."
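For step 2, the tail end of a stock conf/crawl-urlfilter.txt looks roughly like the fragment below (exact comments and domain placeholder may differ in your copy). The final "-." rule rejects every URL not accepted by an earlier rule, so changing that last line to "+." accepts everything instead:

```
# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/

# skip everything else
-.
```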
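A quick way to run the check in step 1 is to grep the log. This is just a sketch; the sample log written to /tmp below is synthetic, and in a real setup you would grep logs/hadoop.log directly:

```shell
# Synthetic sample standing in for logs/hadoop.log, purely for illustration.
printf 'fetching http://example.com/\nfetching http://example.com/about\n' > /tmp/hadoop.log.sample

# Lines containing "fetching" show which URLs the fetcher retrieved.
grep "fetching" /tmp/hadoop.log.sample

# Count the matches; zero would mean the crawl fetched nothing.
grep -c "fetching" /tmp/hadoop.log.sample
```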

Regards,
Susam Pal

On Thu, May 1, 2008 at 2:39 PM, ili chimad <[EMAIL PROTECTED]> wrote:
> Hi, I'm using "nutch 0.9" with "tomcat6" on Windows Vista + cygwin
> (for only 2 days).
>
>  Before sending this mail I read many posts here, but I didn't find this
> problem: after finishing the "crawl" step and deploying the Nutch project,
> I get "no results" (0-0 results). What does this mean?
>
>  I ran: bin/nutch crawl -dir crawl -depth 3 -topN 30
>  The resulting crawl directory size is 1.60 MB.
>  I copy/pasted the config file from the Nutch 0.9 tutorial.
>  Any suggestions please :(
>
>  THANKS !!
>
>
