On 26 Aug 2005, at 7:46 pm, Mark Pilgrim wrote:
> 2. If a user gives a feed URL to a program *and then the program finds
> all the URLs in that feed and requests them too*, the program needs to
> support robots.txt exclusions for all the URLs other than the original
> URL it was given.
>
> ...
>
> (And before you say "but my aggregator is nothing but a podcast
> client, and the feeds are nothing but links to enclosures, so it's
> obvious that the publisher wanted me to download them" -- WRONG!  The
> publisher might want that, or they might not ...
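
For reference, the per-URL check being described here is mechanically
simple. A minimal sketch, assuming Python's standard urllib.robotparser;
the user agent string and the URLs are made up:

    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    USER_AGENT = "ExamplePodcatcher/1.0"  # made-up agent string

    def can_fetch(url):
        # Fetch and parse the host's robots.txt, then ask whether this
        # user agent is allowed to retrieve the given URL.
        parts = urlparse(url)
        robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
        robots.read()
        return robots.can_fetch(USER_AGENT, url)

    # The feed URL the user handed over is exempt from the check; every
    # URL discovered *inside* the feed, enclosures included, is not.
    feed_url = "https://example.com/feed.xml"          # user-supplied
    enclosures = ["https://example.com/episode1.mp3"]  # found in the feed
    for url in enclosures:
        if can_fetch(url):
            print("OK to download:", url)
        else:
            print("excluded by robots.txt:", url)

A real client would cache one robots.txt per host rather than refetching
it for every URL, but the division of labour is the same: the feed URL
the user supplied is trusted, and everything discovered inside it is
checked first.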

So you're saying browsers should check robots.txt before downloading images?

Graham
