My question, using DOS wget, is:
If wget is stopped before it finishes, links are not converted to relative
links (i.e. links pointing to the hard disk via file:/);
as I couldn't find anything about this for wget on the web
(including on your mailing list), and every wget option I tried failed to
On Wed, 6 Apr 2005, andi kete wrote:
My question, using DOS wget, is:
If wget is stopped before it finishes, links are not converted to relative
links (i.e. links pointing to the hard disk via file:/);
...
I would like to know whether you provide any utility program for that
(in my situation, using
Hi.
I am still having trouble getting the links converted.
I have used the command:
wget -k -r ftp://user:[EMAIL PROTECTED]/
It brought all of the pages down, but when you go to
any link, it will go to the page on my old web server.
Is it possible that this has something to do with DNS?
When
Guillaume Morin [EMAIL PROTECTED] writes:
For example if a link to the URL /foo?bar is seen then the correct
file is downloaded and saved with the name foo?bar. When viewing
the pages with Netscape the '?' character is seen to separate the
URL and the arguments. This makes the link fail.
Hi,
I am forwarding you Debian bug 65971
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=65791&repeatmerged=yes
I can reproduce this problem with 1.8.1
With the '-k' option or the 'convert_links = on' option in .wgetrc the
links in
the downloaded HTML pages are modified to be relative
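For reference, the .wgetrc form mentioned above looks like this (a minimal fragment, showing only the option named in the message):

```
# .wgetrc equivalent of the -k command-line flag
convert_links = on
```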
Dan Harkless [EMAIL PROTECTED] writes:
--- src/recur.c Sun Dec 17 20:28:20 2000
+++ src/recur.c.new Sun Mar 25 20:25:12 2001
@@ -165,7 +165,18 @@
       first_time = 0;
     }
   else
+    {
+      u = newurl ();
+      err = parseurl (this_url, u, 0);
+      if (err == URLOK)
Hrvoje Niksic [EMAIL PROTECTED] writes:
To be sure that *all* HTML files are handled, I think the addition
needs to be triggered from within retrieve_url, say by calling a
"register_html_file_for_conversion()". I think I'll provide such a
fix tonight.
Sounds good. Wonder if it should be