Hi all,
I have been trying to run a crawl on a couple of different domains using
nutch:
bin/nutch crawl urls -dir crawled -depth 3
Every time I get the response
"Stopping at depth=x - no more URLs to fetch." Sometimes a page or two at the
first level get crawled; in most other cases, nothing does.
Are there any other rules in your filter that precede that one?
(+^http://([a-z0-9]*\.)*blogspot.com/)
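For context: Nutch's RegexURLFilter reads the rules in regex-urlfilter.txt top to bottom and applies the first rule whose pattern matches, so an exclusion rule placed earlier in the file can reject your seed URLs before the blogspot rule is ever reached. Here is a minimal sketch of that first-match behavior in Python; the exclude rule shown is an assumption for illustration, not your actual config:

```python
import re

# Hypothetical rules in the order they might appear in regex-urlfilter.txt.
# Nutch applies the first rule whose pattern matches the URL; order matters.
rules = [
    ("-", r"\.(gif|jpg|png|css|js)$"),              # assumed exclude rule
    ("+", r"^http://([a-z0-9]*\.)*blogspot\.com/"), # the rule from this thread
]

def first_match(url):
    """Return the sign of the first rule matching url; '-' if none match
    (a URL matching no rule is rejected)."""
    for sign, pattern in rules:
        if re.search(pattern, url):
            return sign
    return "-"

print(first_match("http://example.blogspot.com/"))          # '+': accepted
print(first_match("http://example.blogspot.com/logo.png"))  # '-': earlier rule wins
```

If a seed URL returns '-' here, the filter would drop it at injection time, which matches the "no more URLs to fetch" symptom.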
--
View this message in context:
http://old.nabble.com/Stopping-at-depth%3D0---no-more-URLs-to-fetch-tp26310955p26313305.html
Sent from the Nutch - User mailing list archive at Nabble.com.