On Sun, Feb 05, 2012 at 01:49:00PM -0800, Rich Shepard wrote:
> When I navigate to that page I can see all the files and select each one
> to download individually from within the browser. I thought there was a
> command line tool that would download all (or all specified) files from a
> supplied URL.

wget can do that for you.  My preferred wget command line:

  wget -nc -k -r -np YOUR_URL_HERE

With that command line, wget will download the document at the
specified URL, plus (potentially) any documents linked from it.

The options in that command line tell wget to:

  "-nc": Not overwrite files that have already been downloaded. This
is handy if you have to halt the download and start over again for
some reason.

  "-k": Convert links in the documents downloaded so that they point
to each other, and not, potentially, to the server they were
downloaded from. This is handy if the documents contain
fully-qualified URLs (e.g.,
http://appl-ecosys.com/photos/my_dog_spot.jpg) as opposed to relative
URLs (e.g., ../photos/my_dog_spot.jpg).

  "-r": Download recursively (i.e., follow links to documents in
sub-directories).

  "-np": Don't follow any links to directories above the one
specified. E.g., if the URL provided on the command line is
http://appl-ecosys.com/good_times/sunday.html, and that document
contains a link to http://appl-ecosys.com/photos/my_dog_spot.jpg, the
photo of your dog Spot will not be downloaded. This option is critical
in my experience. Without it, you could come back to find wget has
downloaded half the Internet while you weren't looking.
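
For example, assuming the listing you want lives at
http://appl-ecosys.com/good_times/ (that host and path are just the
hypothetical ones from the examples above), the full invocation would
look like:

  wget -nc -k -r -np http://appl-ecosys.com/good_times/

By default wget mirrors into a directory named after the host, so the
downloaded files should end up under ./appl-ecosys.com/good_times/
beneath whatever directory you run the command from.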


-- 
Paul