> Hi,
>
> after my installation yesterday I saw today that a web crawler took a look
> on my server:
>
> 66.249.72.234 - [05/Oct/2011 02:54:59 +0200] GET / HTTP/1.1 200 1364
>
> Oh - seems I forgot robots.txt :-)
>
> My idea was to avoid web crawlers from browsing in general - this could be
> implemented with a single file called "robots.txt" in the standard htdocs
> folder (/var/www/htdocs):
>
> User-agent: *
> Disallow: /
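The suggestion above can be sketched as a one-liner. A minimal sketch, assuming a writable document root; Monkey's default htdocs path is given as /var/www/htdocs in the quoted message, but the DOCROOT variable here (defaulting to a temporary directory so the snippet runs without root) is an assumption for illustration:

```shell
# Create a robots.txt that asks all crawlers to stay out of the whole site.
# DOCROOT is a stand-in for the server's htdocs folder (assumption).
DOCROOT="${DOCROOT:-$(mktemp -d)}"
printf 'User-agent: *\nDisallow: /\n' > "$DOCROOT/robots.txt"
cat "$DOCROOT/robots.txt"
```

Note that robots.txt is advisory only: well-behaved crawlers honor it, but nothing in the server enforces it.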
that file should or should not exist depending on the system administrator's
needs; it's not something the Monkey distribution should care about...

regards,

--
Eduardo Silva
http://edsiper.linuxchile.cl
http://www.monkey-project.com

_______________________________________________
Monkey mailing list
[email protected]
http://lists.monkey-project.com/listinfo/monkey
