Hi all,

OK, I extended "robots.txt" a little. Now not only "ticker.picolisp.com"
is disallowed, but also everything below /21000/ on "picolisp.com".
Let's see if that persuades Google to stop the traversal.

BTW, I'm handling robots.txt in the following way: "robots.txt" is
actually not a file, but a _directory_, containing a single file named
"default". This file (i.e. "robots.txt/default") looks like this now:

   (prinl "User-Agent: *")

   (prinl "Disallow:"
         ((= *Host '`(chop "ticker.picolisp.com")) " /")
         ((= *Host '`(chop "picolisp.com")) " /21000/") ) )
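
For example, when a robot fetches "robots.txt" on "ticker.picolisp.com",
the script outputs

   User-Agent: *
   Disallow: /

and on "picolisp.com"

   User-Agent: *
   Disallow: /21000/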

It uses the fact that "lib/http.l" picks a file named "default" if a
directory is given in the URL.
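
Since the script compares *Host against chopped strings, its output can
be checked by hand from the REPL by binding *Host accordingly:

   (setq *Host (chop "picolisp.com"))  # as the server would set it
   (load "robots.txt/default")

which prints

   User-Agent: *
   Disallow: /21000/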

Before, "robots.txt/default" was simply:

   (prinl "User-Agent: *")

   (prinl "Disallow:"
      (and (= *Host '`(chop "ticker.picolisp.com")) " /") )
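
On any other host the 'and' returns NIL, which 'prinl' prints as
nothing, so the result is just an empty "Disallow:" line, which in
robots.txt means that everything is allowed:

   User-Agent: *
   Disallow: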

- Alex