[ http://issues.apache.org/jira/browse/NUTCH-272?page=comments#action_12413959 ]

Matt Kangas commented on NUTCH-272:
-----------------------------------

Thanks Doug, that makes more sense now. Running URLFilters.filter() during 
Generate seems very handy, albeit costly for large crawls. (Should there be an 
option to turn it off?)
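
Roughly what such an off switch could look like, sketched as a standalone
helper (the generate.filter property name and the class itself are invented
for illustration, and the URLFilters calls are from memory, so check against
trunk before trusting the signatures):

import org.apache.hadoop.conf.Configuration;
import org.apache.nutch.net.URLFilterException;
import org.apache.nutch.net.URLFilters;

/** Sketch of a config-gated filtering step for the generate phase. */
public class GenerateFilterSketch {

  private final URLFilters filters;
  private final boolean filterEnabled;

  public GenerateFilterSketch(Configuration conf) {
    // Property name invented for illustration; a large crawl could set it
    // to false to skip the per-URL filtering cost during Generate.
    filterEnabled = conf.getBoolean("generate.filter", true);
    filters = filterEnabled ? new URLFilters(conf) : null;
  }

  /** Returns true if the URL should be kept in the fetchlist. */
  public boolean accept(String url) {
    if (!filterEnabled) {
      return true;                        // filtering switched off: keep everything
    }
    try {
      return filters.filter(url) != null; // null means some URLFilter rejected it
    } catch (URLFilterException e) {
      return false;                       // treat filter errors as a rejection
    }
  }
}

Usage would just be a single accept(url) check before a URL is written to the
fetchlist.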

I also see that URLFilters.filter() is applied in Fetcher (for redirects) and 
in ParseOutputFormat, as well as in other tools.
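
For the redirect case the same pattern applies; a minimal sketch, where
handleRedirect() is an invented wrapper rather than the Fetcher's actual
method:

import org.apache.nutch.net.URLFilterException;
import org.apache.nutch.net.URLFilters;

/** Sketch: vet a redirect target before following or recording it. */
public class RedirectFilterSketch {

  private final URLFilters filters;

  public RedirectFilterSketch(URLFilters filters) {
    this.filters = filters;
  }

  /** Returns the redirect target if it passes the URL filters, else null. */
  public String handleRedirect(String redirectUrl) {
    try {
      return filters.filter(redirectUrl);  // null = rejected by some filter
    } catch (URLFilterException e) {
      return null;                         // treat filter errors as a rejection
    }
  }
}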

Another possible choke-point: CrawlDbMerger.Merger.reduce(). The key is the 
URL, and keys arrive sorted, so you could veto crawldb additions here. Could 
you effectively count URLs per host at that point? (Not sure how that works 
when distributed.) Would it require setting a Partitioner, like 
crawl.PartitionUrlByHost?
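
To make the per-host counting idea concrete, here is a rough sketch (not
actual Nutch or Hadoop code; the class, the db.max.urls.per.host property,
and the pre-generics MapRed signatures are assumptions for illustration):

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Iterator;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

/**
 * Sketch of a per-host cap applied in a URL-keyed reduce. Assumes the job
 * partitions by host (e.g. with crawl.PartitionUrlByHost) so all URLs for a
 * host reach the same reduce task, and relies on sorted URL keys arriving as
 * one consecutive run per host.
 */
public class HostLimitReducerSketch implements Reducer {

  private long maxPerHost;
  private String currentHost;
  private long countForHost;

  public void configure(JobConf job) {
    // Invented property name; not an existing Nutch setting.
    maxPerHost = job.getLong("db.max.urls.per.host", 10000L);
  }

  public void reduce(WritableComparable key, Iterator values,
                     OutputCollector output, Reporter reporter)
      throws IOException {
    String host;
    try {
      host = new URL(key.toString()).getHost();
    } catch (MalformedURLException e) {
      return;                              // drop URLs we cannot parse
    }
    if (!host.equals(currentHost)) {       // instance state persists across
      currentHost = host;                  // reduce() calls within one task
      countForHost = 0;
    }
    if (++countForHost > maxPerHost) {
      return;                              // emergency limit hit: veto the URL
    }
    // Real merge logic elided; emit a single entry for this URL.
    if (values.hasNext()) {
      output.collect(key, (Writable) values.next());
    }
  }

  public void close() throws IOException {
  }
}

The simple counter only holds if the job partitions by host (as
crawl.PartitionUrlByHost does) so that all of a host's URLs reach the same
reduce task, and if they arrive as one consecutive sorted run; otherwise the
count gets split and the cap is not enforced.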

> Max. pages to crawl/fetch per site (emergency limit)
> ----------------------------------------------------
>
>          Key: NUTCH-272
>          URL: http://issues.apache.org/jira/browse/NUTCH-272
>      Project: Nutch
>         Type: Improvement
>     Reporter: Stefan Neufeind
>
> If I'm right, there is currently no way to set an "emergency limit" on the 
> maximum number of pages fetched per site. Is there an "easy" way to 
> implement such a limit, maybe as a plugin?

