> HTTP does not provide a dirlist command, so wget parses HTML to find other
> files it should download. Note: HTML, not XML. I suspect that is the problem.
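
For context, the link-following described above works roughly like this minimal sketch (using Python's html.parser for illustration, not wget's actual implementation; the sample page is hypothetical): a downloader only discovers new URLs from HTML markup, so a served XML file contributes nothing to the crawl queue.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href/src attributes the way a recursive downloader would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

# Hypothetical index page: an HTML link to the XML plus an image reference.
page = '<html><body><a href="data.xml">data</a><img src="logo.gif"></body></html>'

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # -> ['data.xml', 'logo.gif']
```

Feeding the XML file itself through such a parser yields no `href`/`src` attributes, so recursion stops there even though the file was fetched.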

If wget didn't download the rest at all, I'd agree. But first the directory gets
created and the XML is downloaded (in some other directory some *.gif files
too), so wget clearly "senses" the directory. If I then issue wget -m site/dir,
all of the rest comes down (index.html?D=A and the others too), so wget is able
to get everything, just not in one pass. So there is no technical limitation
preventing wget from doing it in one step. It is either a missing feature
(shall I say a "bug", since wget can't produce the mirror it evidently could)
or I was unable to find the switch that makes it happen at once.

G.

