Hi all,

This may be a basic question, but I could not find a satisfactory answer
in the wiki, or even by studying the source code.
I want to be sure that, in the next fetch job, every URL found in a
previously fetched page will be fetched, regardless of how many URLs were found.

I have a page X that links to every page I need indexed. Given that, I
assume a crawl job with depth 2 should suffice to fetch all of them: the
first cycle would fetch page X, and the second and final cycle would fetch
all the pages whose links were found in X, right?
Instead, the second fetch picks up only a fraction of those links, and I
have to keep crawling deeper to get all the URLs I need.
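
For context, this is roughly how I launch the crawl (a sketch only; the
seed and output directory names and the -topN value below are placeholders
from my setup, using the legacy one-step crawl command):

    # urls/ holds a seed list whose only entry is page X
    bin/nutch crawl urls -dir crawl -depth 2 -topN 1000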

Thanks in advance,
Emmanuel de Castro Santana
