Dennis Heuer <[EMAIL PROTECTED]> writes:
> I've checked that on a different site and it worked. However: my
> main point (why I called this a (design) bug) is still valid. When I
> target a page and say -Apdf, it is clear that only the PDF links are
> valid choices. The options -rl1 should not be necessary.
Well, it's not, strictly speaking. -r means "download recursively",
and the default maximum recursion depth is five levels, for both HTTP
and FTP. HTML files are special-cased because they are the only
source of the links needed to traverse the site.
To phrase it another way: if you use FTP, you would expect something
like:
wget -r ftp://server/dir/ -A.pdf
to recursively download all PDFs in the directory and the directories
below it. The design decision was to have something like:
wget -r http://server/dir/ -A.pdf
behave the same way. Otherwise, how would you tell Wget to "crawl the
whole site and download only the PDFs"?
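Concretely, the whole-site crawl would look something like the sketch
below (server is a placeholder hostname; -l inf lifts the default
five-level depth limit):

```shell
# Crawl the entire site, following links to any depth, and keep only
# PDFs.  HTML pages are still fetched so their links can be extracted,
# then removed because they don't match the accept list.
#   -r       recursive retrieval
#   -l inf   no recursion depth limit (default is five levels)
#   -A pdf   accept only files with the .pdf suffix
wget -r -l inf -A pdf http://server/dir/
```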