You can limit the number of pages fetched by using the -topN parameter.
This caps the number of pages fetched in each round; pages are
prioritized by how well-linked they are. The maximum number of pages
fetched overall is therefore topN * depth.
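For example, a minimal invocation might look like the following
(assuming the usual bin/nutch crawl syntax, with a seed directory named
urls and an output directory named crawl.out; adjust to your setup):

  bin/nutch crawl urls -dir crawl.out -depth 3 -topN 1000

With these settings the crawl runs 3 rounds of at most 1000 pages each,
so at most 3 * 1000 = 3000 pages are fetched.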
Doug
Olena Medelyan wrote:
Hi,
I'm using the crawl tool in nutch to crawl the web starting from a set
of URL seeds. The crawl normally finishes after the specified depth is
reached. Is it possible to terminate after a pre-defined number of pages
has been fetched, or after text data of a pre-defined size (e.g. 500 MB)
has been crawled?
Thank you for any hints!
Regards,
Olena