On Sun, 15 Aug 2010 21:12:52 +0200, Nic Roets wrote:

> AFAIK, robots.txt only applies to recursive downloads. Given that file
> names follow simple patterns and timestamp files exist, it is really not
> necessary to run recursive spiders. That said, wget and curl can be told
> to ignore robots.txt.
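
(For the record: since the file names are predictable, a plain non-recursive
fetch needs no spidering at all; something like

    wget -N http://example.org/planet/planet-latest.osm.bz2

with a placeholder URL, where -N re-downloads only when the remote copy is
newer. And if someone really does want to recurse, wget only honors
robots.txt in recursive mode, and -e robots=off disables the check. curl
never reads robots.txt at all, since it has no recursive mode.)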

robots.txt can also selectively allow known-good user agents such as wget 
and curl; that would be cleaner than having clients ignore it.
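
For example, an (untested) robots.txt along these lines would let wget
mirror the site while keeping other spiders out:

    User-agent: Wget
    Disallow:

    User-agent: *
    Disallow: /

An empty Disallow means everything is permitted for that agent; wget
identifies itself with a User-Agent starting with "Wget", which is what
the robots.txt matching goes by.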

