You can limit the number of pages with the -topN parameter, which caps the number of pages fetched in each round; pages are prioritized by how well-linked they are. The maximum number of pages that can be fetched is thus topN * depth.
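
For example, an invocation like the following (directory names are illustrative) fetches at most 50 pages per round for 3 rounds, i.e. at most 150 pages in total:

  bin/nutch crawl urls -dir crawl -depth 3 -topN 50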

Doug

Olena Medelyan wrote:
Hi,

I'm using the crawl tool in Nutch to crawl the web starting from a set of URL seeds. The crawl normally finishes after the specified depth has been reached. Is it possible to terminate after a pre-defined number of pages has been crawled, or after text data of a pre-defined size (e.g. 500 MB) has been collected? Thank you for any hints!

Regards,
Olena
