Hello,
I'm trying to download a robots.txt-protected directory and I'm running into the
following problem:
- wget downloads the files but deletes them after they are downloaded, with
the following message (translated from French):
File destroyed because it must be rejected
How can I prevent this?
I once used the "skip robots" directive in the wgetrc file,
but I can't find it anymore in the wget 1.9.1 documentation.
Did it disappear from the documentation or from the program?
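If I remember correctly, what I used looked something like this (I may be
misremembering the exact name, and the URL below is just a placeholder):

    # in ~/.wgetrc: tell wget to ignore robots.txt
    robots = off

or the equivalent with -e on the command line:

    wget -e robots=off -r http://example.com/dir/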
Please reply to me directly, as I'm not subscribed to this list.
I'm trying to download a directory recursively with:
wget -rL http://site/dir/script.php
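(Spelled out with the switches separated, as I understand them:)

    # -r : download recursively through linked pages
    # -L : follow relative links only
    wget -r -L http://site/dir/script.php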
wget retrieves all the pages that look like
http://site/dir/script.php?param1=value1
but not the following:
http://site/dir/script.php?param1=value1&page=pageno
What's wrong?
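As a side note, when I fetch one of the missing pages by hand I have to quote
the URL, since the shell otherwise treats & as a control operator:

    # quote the URL so the shell doesn't interpret '&'
    wget 'http://site/dir/script.php?param1=value1&page=pageno'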
Please reply to me directly, as I'm not subscribed to this list.