Hrvoje Niksic wrote:
> I think you have a point there -- -A shouldn't so blatantly invalidate
> -p. That would be IMHO the best fix to the problem you're
> encountering.
Frank mentioned that limitation in his first reply.
thomas <[EMAIL PROTECTED]> writes:
> i feel like the desired behavior is closer to -p than -r. it seems
> kind of unnatural to me that --accept totally overrides -p but on
> the other hand the current -A behavior is important in the context
> of -r.
I think you have a point there -- -A shouldn't so blatantly invalidate
-p. That would be IMHO the best fix to the problem you're encountering.
Well, that doesn't work in most real-case situations, since .html files
are a minority nowadays, so you'd get all the dynamic pages (.php, URLs
with no extension, etc.). I feel like the desired behavior is closer to
-p than -r. It seems kind of unnatural to me that --accept totally
overrides -p, but on the other hand the current -A behavior is important
in the context of -r.
thomas <[EMAIL PROTECTED]> writes:
> i tried adding '-r -l1 -A.pdf' but that removes the html page and all the
> '-p' files.
How about -r -l1 -R.html? That would download the HTML page and the
linked contents, but not other HTML files.
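To make the suggestion concrete, a sketch of that invocation might look like the following. The URL is a placeholder, and the command is only echoed rather than executed, so this sketch is not a tested recipe:

```shell
# Sketch of the suggested invocation: recurse one level but reject
# further HTML pages, keeping the start page's non-HTML links (e.g. PDFs).
# https://example.com/page.html is a placeholder URL; the command is
# echoed rather than run so the sketch needs no network access.
CMD="wget -r -l1 -R.html https://example.com/page.html"
echo "$CMD"
```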
I was having the exact same need with PDF links. I started with '-p -k',
which is great for most pages, but realized I usually also wanted linked
files such as .pdf (but also mp3, doc, xls, zip, etc.). I tried adding
'-r -l1 -A.pdf', but that removes the HTML page and all the '-p' files.
I tried doing it in two
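For reference, the two attempts described above might look like this sketch (placeholder URL; the commands are echoed rather than run):

```shell
# Attempt 1: page requisites plus link conversion for local viewing.
CMD1="wget -p -k https://example.com/page.html"
# Attempt 2: add one level of recursion accepting only .pdf; as reported
# above, -A then overrides -p, discarding the HTML page and its requisites.
CMD2="wget -p -k -r -l1 -A.pdf https://example.com/page.html"
printf '%s\n' "$CMD1" "$CMD2"
```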
Mauro Tortonesi wrote:
> although i really dislike the name "--no-follow-excluded-html", i
> certainly agree on the necessity to introduce such a feature into
> wget.
>
> can we come up with a better name (and reach consensus on that)
> before i include this feature in wget 1.11?
I agree "no" shou
Tobias Tiederle wrote:
>> I just set up my compile environment for WGet again.
>> When I did regex support, I had the same problem with exclusion, so I
>> introduced a new parameter "--follow-excluded-html".
>> (Which is of course the default) but you can turn it off with
>> --no-follow-excluded-html.
Tobias Tiederle wrote:
Hi,
Jean-Marc MOLINA wrote:
I just set up my compile environment for WGet again.
When I did regex support, I had the same problem with exclusion, so I
introduced a new parameter "--follow-excluded-html".
(Which is of course the default) but you can turn it off with
--no-follow-excluded-html.
Hi,
Jean-Marc MOLINA wrote:
> I have another opinion about that limitation. Could it be considered a
> bug? From the "Types of Files" section of the manual we can read: « Note
> that these two options do not affect the downloading of html files; Wget
> must load all the htmls to know where
Frank McCown wrote:
> I'm afraid wget won't do exactly what you want it to do. Future
> versions of wget may enable you to specify a wildcard to select which
> files you'd like to download, but I don't know when you can expect
> that behavior.
I have another opinion about that limitation. Could it be considered a
bug?
Frank McCown wrote:
> I'm afraid wget won't do exactly what you want it to do. Future
> versions of wget may enable you to specify a wildcard to select which
> files you'd like to download, but I don't know when you can expect
> that behavior.
The more I use wget, the more I like it, even if I us
Jean-Marc MOLINA wrote:
Hello,
I want to archive an HTML page and « all the files that are necessary to
properly display » it (Wget manual), plus all the linked images. I tried most
options and features: recursive archiving, including and excluding
directories and file types... But I can't make up the right options to
Hello,
I want to archive an HTML page and « all the files that are necessary to
properly display » it (Wget manual), plus all the linked images. I tried most
options and features: recursive archiving, including and excluding
directories and file types... But I can't make up the right options to