In article <[EMAIL PROTECTED]>, Nicolás Lichtmaier wrote:
>>  having google
>> sucking down pretty much the entire contents of the bugzilla database at
>> frequent intervals could well be a significant burden on the bugzilla
>> server(s)...
> 
> You are talking about how you think Google works. Google doesn't do 
> that. It will refetch more often the pages that it thinks are important 
> according to its "page ranking", i.e. only a few. Besides, search 
> engines that care enough to voluntarily skip sites that ask politely 
> also take care to space out their requests so as not to impact the 
> server load too much.

i'll admit i don't know the details, and what i wrote above was probably
an exaggeration... however, if google doesn't update the pages
frequently, then you have the other criticism that it will be out of
date...

> But it isn't so! The whole web is being indexed all the time, and I 
> don't hear a lot of webmasters shouting that their servers are in 
> flames. I think this is very much FUD, with no factual backing.

i haven't experienced it myself (our webserver gets a couple of hundred
hits from google each month, out of a total of over 150,000), but i have
seen webmasters shouting about their servers being in flames, as a result
of search engines sucking on dynamic content which was expensive to
generate...

i can't comment on the position with respect to the bugzilla.mozilla.org
server, i'm just guessing as to why they might have put that robots.txt
in - i'm sure it wasn't for no reason at all...
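for what it's worth, checking how a robots.txt affects a polite crawler is
easy to do from python's standard library -- here's a small sketch (the
rules below are made up for illustration, not the actual
bugzilla.mozilla.org file):

```python
from urllib import robotparser

# hypothetical rules, NOT the real bugzilla.mozilla.org robots.txt --
# just an example of blocking a dynamic, expensive-to-generate page
rules = """\
User-agent: *
Disallow: /show_bug.cgi
"""

# a well-behaved crawler parses the site's robots.txt before fetching
rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "http://example.org/show_bug.cgi?id=1"))  # False
print(rp.can_fetch("*", "http://example.org/index.html"))         # True
```

so a single two-line file is enough to tell every cooperating search
engine to stay away from the bug pages while leaving the static content
crawlable.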

-- 
michael
