This is my regex-urlfilter.txt file:

    -^http://([a-z0-9]*\.)+
    +^http://([0-9]+\.)+
    +.

I want to allow only IP addresses and internal sites to be crawled and fetched. That is:

    http://www.google.com should be ignored
    http://shoppingcenter should be crawled
    http://192.168.101.5 should be crawled

But when I check the logs, I find that http://someone.blogspot.com/ has also been crawled. How is that possible? Is my regex-urlfilter.txt wrong? Are there other URL filters, and if so, in what order are they applied?
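For reference, here is a minimal Python sketch of how I understand these rules to be evaluated (an assumption on my part: rules are tried top to bottom and the first matching rule decides, '+' meaning accept and '-' meaning reject):

```python
import re

# The three rules from my regex-urlfilter.txt, in file order.
RULES = [
    ("-", re.compile(r"^http://([a-z0-9]*\.)+")),
    ("+", re.compile(r"^http://([0-9]+\.)+")),
    ("+", re.compile(r".")),
]

def accepts(url):
    """First matching rule wins; a URL matching no rule is rejected."""
    for sign, pattern in RULES:
        if pattern.search(url):
            return sign == "+"
    return False

for url in ("http://www.google.com",
            "http://shoppingcenter",
            "http://192.168.101.5",
            "http://someone.blogspot.com/"):
    print(url, "->", "crawl" if accepts(url) else "skip")
```

Note that under these semantics the first rule rejects http://someone.blogspot.com/, so I don't see how it got crawled. (It would also reject http://192.168.101.5, since the character class [a-z0-9] matches digits too, so "192." etc. satisfies the deny pattern.)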
