Here is my crawl-urlfilter.txt file.
Matt

# The url filter file used by the crawl command.

# Better for intranet crawling.
# Be sure to change MY.DOMAIN.NAME to your domain name.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'.  The first matching pattern in the file
# determines whether a URL is included or ignored.  If no pattern
# matches, the URL is ignored.

# skip file:, ftp:, & mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|png|PNG)$

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*corp\.mydomain\.com/

# skip everything else
-.
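
For reference, here is a minimal sketch (plain Python, not part of Nutch) of the first-match '+'/'-' semantics described in the comments above. It assumes the rules are saved as crawl-urlfilter.txt in the current directory, and the test URLs are made up:

#!/usr/bin/env python3
# Minimal sketch (not Nutch code): apply the first-match '+'/'-' semantics
# described above to a few made-up test URLs, assuming the rules are saved
# as crawl-urlfilter.txt in the current directory.
import re

def load_rules(path="crawl-urlfilter.txt"):
    rules = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip comments and blank lines
            sign, pattern = line[0], line[1:]
            rules.append((sign == "+", re.compile(pattern)))
    return rules

def accepted(url, rules):
    for include, pattern in rules:
        if pattern.search(url):               # first matching pattern decides
            return include
    return False                              # no match -> URL is ignored

rules = load_rules()
for url in ("http://wiki.corp.mydomain.com/index.html",
            "http://corp.mydomain.com/logo.gif",
            "ftp://corp.mydomain.com/file.txt"):
    print(url, "->", "include" if accepted(url, rules) else "skip")

Running it should report the wiki URL as included, while the .gif and ftp: URLs are skipped by the earlier '-' rules.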


Stefan Neufeind wrote:

> Matthew Holt wrote:
>
>> Just fyi,.. both of the sites I am trying to crawl are under the same 
>> domain. The sub-domains just differ. Works for one; the other only 
>> appears to fetch 6 or so pages then doesn't fetch anymore. Do you 
>> need any more information to solve the problem? I've tried everything 
>> and haven't had any luck.. Thanks.
>
>
> What does your crawl-urlfilter.txt look like?
>
>  Stefan
>

