Hi Julien,
On 3/4/11 7:09 PM, Julien Nioche wrote:
Thanks for reporting the problem, Juergen, and sorry that you felt you
were being ignored. The few active developers Nutch has contribute
during their spare time; the reason you did not get any comments on
this is that no one had an instant answer or the time to investigate in
more detail. You definitely raised an important issue which is worth
investigating.
Thanks for taking the time to reply and for checking my settings!
To answer your first email: the JavaScript parser is notoriously noisy
and generates all sorts of monstrosities. It used to be activated by
default, but this will no longer be the case as of the forthcoming 1.3
release.
I see. "Monstrosities" describes it quite well :)
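For anyone else hitting this on an older Nutch: if I understand it
right, the js parser can already be switched off by leaving parse-js
out of the plugin.includes regex in conf/nutch-site.xml, roughly along
these lines (untested guess on my side; the exact plugin list depends
on your setup):

<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-html|index-basic|query-(basic|site|url)</value>
</property>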
I have not been able to reproduce the issue with the dot, though. Is
there any particular URL on your site that you had this problem with?
No, it's not on particular URLs; it happens all over the place.
However, I just checked and it seems to happen with Nutch 0.9 and 1.0.
Here are some examples:
216.24.131.152 - - [26/Feb/2011:00:53:44 +0900] "GET /assignments/tags/advertisement/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:55:03 +0900] "GET /assignments/tags/assignments_design/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:55:56 +0900] "GET /assignments/tags/assignments_commercial-photography/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:56:19 +0900] "GET /assignments/tags/apartment_rental/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:57:09 +0900] "GET /assignments/tags/assignments_church/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:57:26 +0900] "GET /assignments/tags/assignments_corporate/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:57:44 +0900] "GET /assignments/tags/assignments_cd-cover/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:58:16 +0900] "GET /assignments/tags/amateur_assignments/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:58:18 +0900] "GET /assignments/tags/assignments_event/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
216.24.131.152 - - [26/Feb/2011:00:59:16 +0900] "GET /assignments/tags/agent/. HTTP/1.0" 404 820 "-" "Lijit Crawler/Nutch-0.9 (Reports crawler; http://www.lijit.com/robot/crawler; info(a)lijit(d)com)"
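(I pulled these out of the access log with a quick grep, something
like: grep 'GET .*/\. HTTP' access_log)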
By default, Nutch does respect robots.txt, and the community as a
whole encourages server politeness and reasonable use. However, we
can't prevent people from using ridiculous settings (e.g. a high number
of threads per host or a low time gap between calls) or from modifying
the code to bypass the robots check (see my comment below).
Understood.
I have checked your robots.txt and it looks correct. I tried parsing
http://www.shakodo.com with the user-agents you specified; Nutch fully
respected robots.txt and the content was not fetched.
Thanks a lot for the confirmation!
That's indeed a possibility.
And now it's also confirmed. I might add another "Disallow: /badrobot/"
trap to my robots.txt to see if I catch more violations.
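Roughly like this (the path is made up; any hit on it in the access
log would flag a crawler that ignores robots.txt):

User-agent: *
Disallow: /badrobot/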
Doesn't this violate your license?
Not as far as I know. The Apache License allows people to modify the
code; most people do that for positive reasons, and unfortunately we
can't prevent people from bypassing the robots check.
Too bad, but then you can use a hammer to put a nail into a wall
(useful) or into somebody's head (not so useful, with exceptions).
Another option is to check whether the companies you want to block
consistently use the same IP ranges and to configure your servers to
deny access from those IPs. You could also file a complaint with the
company hosting the crawl; I know that Amazon is pretty responsive
about EC2 and would take measures to make sure their users do the
right thing.
They are already blocked on most of the IPs I could find, and I have
reported them to their ISPs, but they seem to have better arguments
(i.e. they pay their ISPs) than I have.
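In case it helps anyone else: the blocking itself is just an ordinary
firewall rule, e.g. with iptables (the range below is only the one from
the logs above; the real list is longer):

iptables -A INPUT -s 216.24.131.0/24 -j DROP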
Anyway, thanks a lot for checking and getting back to me with the info,
very much appreciated! I will not add Nutch 1.3 to my "disallow" rule
set! :)
Thanks,
Juergen
--
Shakodo - The road to profitable photography: http://www.shakodo.com/