Yesterday I finally moved Trac to the new server and enabled it.
The machine quickly started to show a huge load.

With the help of Phillip Pollard we tracked it down to the server
running out of memory and filling up the swap disk due to the many
hits on /trac from search engines.

We have not found a "real" solution yet, as I was too tired and had
to go to sleep, but configuring Apache to Deny access to the /trac
directory for the search engines I noticed proved to eliminate
the problem.
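
For the record, the rule is roughly the following; the user-agent
names here are just examples, not the exact list I blocked:

    # mod_setenvif: flag requests coming from the offending crawlers
    SetEnvIfNoCase User-Agent "msnbot|Slurp|Baiduspider" bad_bot

    <Location /trac>
        # Apache 2.2 style access control: allow everyone,
        # then reject any request flagged as bad_bot
        Order Allow,Deny
        Allow from all
        Deny from env=bad_bot
    </Location>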

Most of those crawlers are no great loss, as they hardly brought in
any visitors, but I'd like to re-enable at least Google at some point.
I set up robots.txt so even Google will only hit the important pages,
but I am not sure whether I can tell Google to re-read robots.txt
before it goes on crawling.
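
The robots.txt looks roughly like this; the disallowed paths are
illustrative, not the exact ones in the live file:

    # everyone else: stay out completely
    User-agent: *
    Disallow: /

    # Googlebot matches this more specific group and ignores the
    # one above, so it may crawl everything except the expensive
    # Trac URLs
    User-agent: Googlebot
    Disallow: /trac/changeset/
    Disallow: /trac/browser/
    Disallow: /trac/log/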

I even had to set Deny on an individual IP address from which an
unrestricted wget -r was trying to fetch the whole site.
That was surprising, but at least I have the IP address.
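
That one is a plain entry in the vhost config; 192.0.2.1 below
stands in for the real address:

    <Location />
        Order Allow,Deny
        Allow from all
        # the address the runaway wget -r came from (placeholder)
        Deny from 192.0.2.1
    </Location>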

There are a few more things I need to set up (e.g. cron jobs), but if
you notice something not working or misbehaving, please let
me know.

Sorry that it took so long.

regards
   Gabor

-- 
Gabor Szabo
http://szabgab.com/