On 27/06/12 07:34, Gabor Szabo wrote:
Yesterday I finally moved Trac to the new server and enabled it.
The machine quickly started to show a huge load.

With the help of Phillip Pollard we tracked it down to the server
running out of memory and filling up the swap disk due to the many
hits on /trac from search engines.

We have not found a "real" solution as I was too tired and had to
go to sleep, but configuring Apache to Deny access to the /trac
directory for the search engines I noticed proved to eliminate
the problem.
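
For reference, a minimal sketch of the kind of Apache 2.2-style
stanza that blocks crawlers by User-Agent (the bot names here are
only illustrative, not the exact list I used):

    <Location /trac>
        # mark requests coming from the offending crawlers
        SetEnvIfNoCase User-Agent "SomeBot" bad_bot
        SetEnvIfNoCase User-Agent "OtherBot" bad_bot
        Order Allow,Deny
        Allow from all
        # refuse anything marked above
        Deny from env=bad_bot
    </Location>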

Most of them are not a great loss as they hardly brought in
any visitors, but I'd like to re-enable at least Google at some point.
I set up robots.txt so even Google will only hit the important pages,
but I am not sure whether I can tell Google to re-read robots.txt
before it continues crawling.
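
Roughly what the robots.txt looks like (the paths are illustrative;
as far as I know Googlebot understands the non-standard Allow
directive, so it can be let into the wiki pages while everything
else under /trac stays off limits):

    # Google may crawl the wiki pages only
    User-agent: Googlebot
    Allow: /trac/wiki
    Disallow: /trac/

    # all other crawlers: keep out of /trac entirely
    User-agent: *
    Disallow: /trac/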

I even had to set Deny on an individual IP address from which an
unrestricted wget -r was trying to fetch the whole site.
That was surprising, but at least I have the IP address.
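
An address-based Deny in Apache 2.2 looks roughly like this (the
path is illustrative and the IP is a placeholder from the
documentation range, not the real offender):

    <Directory /var/www/trac>
        Order Allow,Deny
        Allow from all
        # block the host running the unrestricted mirror
        Deny from 192.0.2.17
    </Directory>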

There are a few more things I need to set up (e.g. cron jobs), but if
you notice something that is not working or is misbehaving, please let
me know.

Sorry that it took so long.

regards
    Gabor

here is something that has been bugging me :)

why don't we have a title/logo on our wiki site, like the Trac site
has? with reference to
http://trac.edgewall.org/wiki/TracInterfaceCustomization,
adding this to trac.ini should do it:

[header_logo]
alt = Padre, Perl Application Development and Refactoring Environment
src = site/padre_logo_text.png
height = 64
width = 300


trac
├── htdocs
│   └── padre_logo_text.png

regards
bowtie

ps: if someone wants to make a better composite of image and text, that's ok by me :)


<<attachment: padre_logo_text.png>>

_______________________________________________
Padre-dev mailing list
Padre-dev@perlide.org
http://mail.perlide.org/mailman/listinfo/padre-dev
