Hello Anthony,

AFAIK, robots.txt only applies to recursive downloads; fetching a
single, known URL is unaffected. Since the file names follow simple,
predictable patterns and timestamp/state files exist, there is no need
to run a recursive spider at all. That said, wget can be told to
ignore robots.txt, and curl never consults it in the first place.
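
For example (a sketch; the exact paths and file names under
minute-replicate are from memory, so treat them as assumptions), a
client can read the state file to learn the latest sequence number
and then request each change file directly by name:

  # Single-URL fetch of the replication state (timestamp + sequence
  # number); robots.txt plays no part in this.
  curl -O http://planet.openstreetmap.org/minute-replicate/state.txt

  # Sequence number NNNNNNNNN maps to AAA/BBB/CCC.osc.gz, so each
  # diff can be downloaded directly, again without spidering:
  wget http://planet.openstreetmap.org/minute-replicate/000/123/456.osc.gz

  # And if recursion is genuinely needed, wget can be told to skip
  # robots.txt entirely:
  wget -e robots=off -r -np http://planet.openstreetmap.org/minute-replicate/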

Regards,
Nic

On Sun, Aug 15, 2010 at 6:39 PM, Anthony <[email protected]> wrote:
> I see http://planet.openstreetmap.org/robots.txt now has User-agent: *
> and Disallow: /
>
> Are we allowed to download the minute-replicate files as they become
> available?  If not, what's the point of having them?
>

_______________________________________________
dev mailing list
[email protected]
http://lists.openstreetmap.org/listinfo/dev
