On Sun, 6 May 2007 21:44:16 -0500 (CDT)
Steven M. Schweda wrote:

> From: R Kimber
> 
> > If I have a series of files such as
> > 
> > http://www.stirling.gov.uk/elections07abcd.pdf
> > http://www.stirling.gov.uk/elections07efg.pdf
> > http://www.stirling.gov.uk/elections07gfead.pdf
> >  
> > etc
> > 
> > is there a single wget command that would download them all, or
> > would I need to do each one separately?
> 
>    It depends.  As usual, it might help to know your wget version and
> operating system, but in this case, a more immediate mystery would be
> what you mean by "them all", and how one would know which such files
> exist.

GNU Wget 1.10.2, Ubuntu 7.04

>    If there's a Web page which has links to all of them, then you
> could use a recursive download starting with that page.  Look through
> the output from "wget -h", paying particular attention to the sections
> "Recursive download" and "Recursive accept/reject".  If there's no
> such Web page, then how would wget be able to divine the existence of
> these files?

Yes, there's a web page.  I usually know what I want.

But won't a recursive get fetch more than just those files? Indeed, won't
it get everything at that level? The accept/reject options seem to
assume you know what's there and can list the things to exclude.  I only
know what I want, not necessarily what I don't want. I did look at the
man page, and came to the tentative conclusion that there wasn't a
way (or at least an efficient way) of doing it, which is why I asked
the question.
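
For what it's worth, the nearest thing I could piece together from the
man page was something along these lines (untested, and
PAGE-WITH-THE-LINKS is just a stand-in for whatever page actually
carries the links):

  wget -r -l1 -np -nd -A 'elections07*.pdf' \
       http://www.stirling.gov.uk/PAGE-WITH-THE-LINKS

(-r -l1 to recurse one level from that page, -np to stay below it, -nd
to avoid recreating the directory tree, -A to accept only names matching
the pattern.)  But I wasn't sure whether that would still pull down
every page at that level before filtering.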

- Richard
-- 
Richard Kimber
http://www.psr.keele.ac.uk/
