Probably the best thing would be to point you to Google:
http://www.google.com/webmasters/faq.html#nocrawl
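(The mechanism that FAQ describes is the robots <meta/> tag, which tells crawlers not to index a page, as opposed to the cache-control tags below, which only affect caching. A minimal sketch:)

```html
<!-- Asks compliant crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex,nofollow"/>
```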
On Sunday, Feb 16, 2003, at 14:43 US/Pacific, Matt Sergeant wrote:
On Sunday, Feb 16, 2003, at 20:31 Europe/London, Michael Nachbaur wrote:
I was trying to avoid having to SSH into my webserver on the weekend. ;-)
The following should work on almost all search engines; you could also send HTTP headers ("Pragma: no-cache") or use a robots.txt, but the <meta/> tags are easier to use:
<meta http-equiv="Pragma" content="no-cache"/>
<meta http-equiv="Cache-Control" content="no-cache"/>
Well, AxKit already sets those headers for all XSP pages, which includes all the wiki pages:
$ lwp-request -m HEAD -e http://axkit.org/wiki/view/AxKit/DefaultPage
200 OK
Cache-Control: no-cache
Connection: close
Date: Sun, 16 Feb 2003 22:42:09 GMT
Pragma: no-cache
Server: Apache/1.3.26 (Unix) AxKit/1.6_02 mod_perl/1.27
Vary: Accept-Encoding,User-Agent
Content-Type: text/html; charset=UTF-8
Expires: Sun, 16 Feb 2003 22:42:09 GMT
Client-Date: Sun, 16 Feb 2003 22:42:05 GMT
Client-Response-Num: 1
So I guess the only other option is robots.txt. Can you do wildcard matches in robots.txt?
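(For what it's worth: the original robots.txt standard has no wildcards at all; each Disallow line is a plain path prefix, so a prefix rule is usually enough. A sketch, assuming the wiki lives under /wiki/ as in the URL above:)

```
# robots.txt -- placed at the site root.
# Disallow is prefix matching, so this covers every path
# starting with /wiki/; no wildcard needed.
User-agent: *
Disallow: /wiki/
```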
Matt.
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]