crawl-urlfilter.txt is specific to "bin/nutch crawl". If you want to
run each step separately, you are in fact doing the "whole-web
crawling" from the tutorial, so you need to modify
regex-urlfilter.txt instead.
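
For example, a minimal regex-urlfilter.txt restricted to a single
site could look like this (a sketch only - example.com stands in for
your own domain, it is not taken from your setup):

    # skip non-http schemes
    -^(file|ftp|mailto):
    # accept anything on example.com and its subdomains
    +^http://([a-z0-9]*\.)*example.com/
    # reject everything else
    -.

'+' lines accept matching URLs, '-' lines reject them, and the first
rule that matches wins.
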
Regards
Piotr

On 8/22/05, Michael Ji <[EMAIL PROTECTED]> wrote:
> 
> Hi,
> 
> When I use intranet crawling, that is, when I call
> "bin/nutch crawl ...", crawl-urlfilter.txt works: it
> filters out the URLs that do not match the domains I
> included.
> 
> Actually, when I take a look at CrawlTool.java, the
> config files are read into Java Properties by
> 'NutchConf.get().addConfResource("crawl-tool.xml")'.
> 
> But:
> 
> When I call each step explicitly myself, such as:
> 
> Loop
>    generate segment
>    fetch
>    updateDB
> 
> crawl-urlfilter.txt doesn't work.
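
For reference, with the 0.7 whole-web tutorial those steps look
roughly like the following (check the tutorial for your version for
the exact syntax), and the URL-filter plugin they consult reads
regex-urlfilter.txt by default:

    bin/nutch generate db segments
    s1=`ls -d segments/2* | tail -1`
    bin/nutch fetch $s1
    bin/nutch updatedb db $s1
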
> 
> My question is:
> 
> 1) If I want to control the crawler's behavior in the
> second case, should I call 'NutchConf.get()...'
> myself?
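
Yes - that is exactly what CrawlTool does before running the same
steps. A minimal sketch of such a driver (MyCrawlDriver is a made-up
name, the NutchConf import depends on your Nutch version, and only
the addConfResource() call is taken from CrawlTool itself):

    // import org.apache.nutch.util.NutchConf;  // net.nutch.util in pre-Apache releases
    public class MyCrawlDriver {
      public static void main(String[] args) throws Exception {
        // Load crawl-tool.xml; its overrides point the URL-filter
        // plugin at crawl-urlfilter.txt instead of the default
        // regex-urlfilter.txt.
        NutchConf.get().addConfResource("crawl-tool.xml");
        // ... then drive generate / fetch / updatedb as usual ...
      }
    }
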
> 
> 2) Where exactly does the URL filter run? In the
> fetcher? And after being loaded from the .xml and .txt
> files, is all the configuration data kept in Properties
> for the lifetime of the Nutch run?
> 
> thanks,
> 
> Michael Ji
> 

