When I recursively retrieve a directory from a site using the https protocol,
wget searches for http://sitename/robots.txt, but the site has only port
443 (https) open, so there is a Connection refused error. Wget thinks
the site is down and aborts the transfer.
Wget should search for https://sitename/robots.txt instead.
After the https/robots.txt bug, doing a recursive wget to an https-only server
gives me this error: it searches for http://servername/index.html, but there
is no server on port 80, so wget receives a Connection refused error and
quits. It should search for https://servername/index.html instead.
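Both reports come down to the same thing: when wget derives a follow-up URL
such as robots.txt or index.html from an https URL, it should keep the
original scheme rather than falling back to http. A minimal sketch of that
behaviour in C, using a hypothetical derived_url() helper rather than wget's
actual internals:

  #include <stdio.h>

  /* Hypothetical sketch, not wget's actual code: build a follow-up URL
   * from the scheme and host of the URL being retrieved, instead of
   * hardcoding "http". */
  static void derived_url(const char *scheme, const char *host,
                          const char *path, char *buf, size_t size)
  {
      /* Preserving the scheme means an https-only server is contacted
       * on port 443, not port 80. */
      snprintf(buf, size, "%s://%s/%s", scheme, host, path);
  }

  int main(void)
  {
      char url[256];
      derived_url("https", "servername", "robots.txt", url, sizeof url);
      puts(url);   /* prints https://servername/robots.txt */
      return 0;
  }

With that, a recursive retrieval of https://servername/ would request
https://servername/robots.txt and https://servername/index.html rather than
the http:// variants.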
On 1 Feb 2002 at 8:17, Daniel Stenberg wrote:
You may count this mail as advocating for HTTP 1.1 support, yes! ;-)
I did write down some minimal requirements for HTTP/1.1 support on
a scrap of paper recently. It's probably still buried under the
more recent strata of crap on my desk somewhere!
On Fri, 1 Feb 2002, Ian Abbott wrote:
The proper action (IMHO) would be to use a true HTTP/1.1 request and
thus most likely receive a chunked transfer-encoded data stream back.
Does PHP do that?
PHP does that. With the help of Apache of course.
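For the record, the chunked coding itself is simple to consume: each chunk
is a hexadecimal length line, CRLF, that many bytes of data, and another
CRLF, with a zero-length chunk marking the end of the body. A minimal
standalone decoder, as a sketch (this is not wget's or PHP's code):

  #include <stdio.h>
  #include <stdlib.h>

  /* Decode a chunked body from `in` to `out`; returns 0 on success,
   * -1 on a malformed stream.  Trailers after the last chunk are
   * ignored for simplicity. */
  static int decode_chunked(FILE *in, FILE *out)
  {
      char line[128];

      for (;;) {
          unsigned long len;

          /* Each chunk starts with its size in hex (possibly followed
           * by ";extensions"), terminated by CRLF. */
          if (!fgets(line, sizeof line, in))
              return -1;
          len = strtoul(line, NULL, 16);
          if (len == 0)
              break;            /* zero-length chunk ends the body */

          while (len > 0) {
              char data[4096];
              size_t want = len < sizeof data ? (size_t)len : sizeof data;
              size_t got = fread(data, 1, want, in);
              if (got == 0)
                  return -1;
              fwrite(data, 1, got, out);
              len -= got;
          }

          /* Consume the CRLF that terminates the chunk data. */
          if (!fgets(line, sizeof line, in))
              return -1;
      }
      return 0;
  }

  int main(void)
  {
      return decode_chunked(stdin, stdout) == 0 ? 0 : 1;
  }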
Surely it wouldn't make much difference, as
Today I updated the wget sources from CVS, tried to compile them, and
got the following:
gcc -I. -I. -DHAVE_CONFIG_H
-DSYSTEM_WGETRC=\"/usr/home/alexis/etc/wgetrc\"
-DLOCALEDIR=\"/usr/home/alexis/share/locale\" -O2 -Wall -Wno-implicit -c
ftp.c
In file included from ftp.h:26,
Hello,
I'd like to use 'wget' to mirror a remote ftp directory, but the server
requires a username and password. I don't see any mention of
command-line options for supplying this information for an FTP
server, only for an HTTP server. Is this a bug, or a feature, or am I
just missing something?
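For what it's worth, the credentials can be given in the URL itself, e.g.

  wget ftp://username:password@server/directory/

where username and password are placeholders for the real login. Wget can
also pick up FTP login data for a host from a .netrc file, the same way
ftp(1) does.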