Hello,
I use an extra file with a long list of http entries. I included this
file with the -i option.
After 154 downloads I got an error message: Segmentation fault.
With wget 1.7.1 everything works well.
Is there a new limit on the number of lines?
Regards,
Dieter Drossmann
There is no built-in line limit; what you're seeing is a bug.
I cannot see anything wrong inspecting the code, so you'll have to
help by providing a gdb backtrace. You can get it by doing this:
* Compile Wget with `-g' by running `make CFLAGS=-g' in its source
directory (after configure, of course.)
* Go
I can verify this in the cvs version.
it appears to be isolated to the recursive behavior.
/a
On Mon, 15 Sep 2003, Dawid Michalczyk wrote:
Hello,
I'm having problems getting the exit status code to work correctly in
the following scenario. The exit code should be 1 yet it is 0
[EMAIL PROTECTED]:~$ wget -d -t2 -r -l1 -T120 -nd -nH -R
gif,zip,txt,exe,wmv,htmll,*[1-99] www.cnn.com/foo.html
DEBUG output created by Wget 1.8.2 on
Hello,
I think I found a bug in wget.
My GNU wget version is 1.8.2.
My system is GNU/Debian unstable.
I use wget to replay our apache logfiles to a
test webserver to try different tuning parameters.
Wget fails to run through the logfile
and gives out the error message that the assertion `msecs >= 0' failed.
Boehn, Gunnar von [EMAIL PROTECTED] writes:
I think I found a bug in wget.
You did. But I believe your subject line is slightly incorrect. Wget
handles 0 length time intervals (see the assert message), but what it
doesn't handle are negative amounts. And indeed:
gettimeofday({1063461157
Dear Sir;
We are using wget-1.8.2 and it's very convenient for our routine
program. By the way, we now have trouble with the return code
from wget when trying to use it with the -r option. When wget with
the -r option fails in an ftp connection, wget returns code 0. If no -r
option, it
!
Regards
Klaus
--- Forwarded message follows ---
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Date sent: Thu, 4 Sep 2003 12:53:39 +0200
Subject:Hostname bug in wget ...
Priority: normal
... or a silly sleepless
[EMAIL PROTECTED] writes:
I found a workaround for the problem described below.
Using option -nh does the job for me.
As the subdomains mentioned below are on the same IP
as the main domain, wget seems to compare not their
names but only the IP.
I believe newer versions of Wget don't do
... or a silly sleepless webmaster !?
Hi,
Version
==
I use the GNU wget version 1.7 which is found on
OpenBSD Release 3.3 CD.
I use it on i386 architecture.
How to reproduce
==
wget -r coolibri.com
(adding the span-hosts option did not help)
Problem category
=
I recently upgraded to wget 1.8.2 from an unknown earlier version. In
doing recursive http retrievals, I have noticed inconsistent behavior.
If I specify a directory without the trailing slash in the URL, the
--no-parent option is ignored, but if the trailing slash is present,
it works as
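A plausible mechanism behind this (an assumption, not verified against the 1.8.2 source): --no-parent limits the crawl to the directory of the starting URL, and that directory is taken from everything up to the last '/'. With http://myserver.com/files the starting "directory" is therefore /, so nothing on the host counts as a parent and the option appears to be ignored; with http://myserver.com/files/ the directory is /files/ and only URLs below it are followed.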
Recksiegel
[mailto:[EMAIL PROTECTED]
Sent: Monday, August 25, 2003 6:49 PM
To: [EMAIL PROTECTED]
Subject: Bug in total byte count for large downloads
Hi,
this may be known, but
[EMAIL PROTECTED]:/scratch/suse82 wget --help
GNU Wget 1.5.3, a non-interactive network retriever.
gave me
Hi all :))
After asking in the wget list (with no success), and after having
a look at the sources (a *little* look), I think that this is a bug,
so I've decided to report it here.
Let's go to the matter: when I download, thru FTP, some
hierarchy, the spaces are translated as '%20
I use wget 1.8.2:
-r -nH -P /usr/file/somehost.com somehost.com http://somehost.com
Bug description:
If some script http://somehost.com/cgi-bin/rd.cgi returns an http header with
status 302 and redirects to http://anotherhost.com, then
the first page, http://anotherhost.com/index.html, is accepted
Hi everyone!
I'm using wget to check if some files are downloadable; I also use it to
determine the size of the file. Yesterday I noticed that wget
ignores the --spider option for ftp addresses.
It should have shown me the file size and other parameters, but it began to
download the file :( That's too bad. Can
Hi All,
I'm using Wget 1.8.2 on a Redhat 9.0 box equipped with
Athlon 550 MHz cpu, 128 MB Ram.
I've encountered a strange issue, which seems really to be a bug, using the
timestamping option.
I'm trying to retrieve the http://www.nic.it/index.html page.
The HEAD HTTP method returns that page is 2474
although this is a windows bug, it affects this program.
when leeching files with a reserved name like prn or com1, e.g. prn.html,
wget will freeze up because windows will not allow it to save a
file with that name.
a possible solution: saving the file as prn_.html.
just a suggestion.
-pionig
you're right, the include-directories option operates much the same way
(my guess in the interest of speed) as the rest of the accept/reject
options.
which (others have also noticed) is a little flakey.
/a
On Fri, 13 Jun 2003, wei ye wrote:
Did you test your patch? I patched it on my source
no, i think your original idea of getting rid of the code that removes the
trailing slash is a better idea. i think this would fix it but keep the
degenerate case of root directory (whatever that's about):
Index: src/init.c
===
RCS
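Since the actual patch is cut off above, here is a small standalone program (not wget source; the helper names are made up) showing why the stripped trailing slash matters: once '/r/' becomes '/r', a plain prefix comparison also admits '/reports/' and '/research/', while the unstripped '/r/' would not:

#include <stdio.h>
#include <string.h>

/* Mimics the stripping done in cmd_directory_vector(): drop a trailing
   '/' unless the entry is just the root "/". */
static void strip_trailing_slash (char *dir)
{
  size_t len = strlen (dir);
  if (len > 1 && dir[len - 1] == '/')
    dir[len - 1] = '\0';
}

/* Crude prefix test standing in for wget's directory matching. */
static int prefix_match (const char *dir, const char *path)
{
  return strncmp (path, dir, strlen (dir)) == 0;
}

int main (void)
{
  char stripped[] = "/r/";
  strip_trailing_slash (stripped);          /* becomes "/r" */

  const char *kept = "/r/";                 /* proposed: leave it alone */
  const char *victim = "/reports/research/x.html";

  printf ("stripped \"%s\" matches %s: %d\n", stripped, victim,
          prefix_match (stripped, victim)); /* 1 -- the unwanted match   */
  printf ("kept     \"%s\" matches %s: %d\n", kept, victim,
          prefix_match (kept, victim));     /* 0 -- only /r/ would match */
  return 0;
}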
Did you test your patch? I patched it on my source code and it doesn't work.
There are a lot of files under http://biz.yahoo.com/edu/, but
the patched code only downloaded the index.html.
[EMAIL PROTECTED] src]$ ./wget -r --domains=biz.yahoo.com -I /edu/
http://biz.yahoo.com/edu/
[EMAIL
'/' of include-directories '/r/'.
It's a minor bug, but I hope it could be fixed in the next version.
Thanks!
static int cmd_directory_vector(...) {
  ...
  if (len > 1)
    {
      if ((*t)[len - 1] == '/')
        (*t)[len - 1] = '\0';
    }
  ...
}
=
Wei
oh, i understand your problem. your request seems reasonable. i was
trying to see if anyone had an idea why it seemed to be more of a
feature than a bug.
On Thu, 12 Jun 2003, wei ye wrote:
Please take a look at this example:
$ \rm -rf biz.yahoo.com
$ ls biz.yahoo.com
$ wget -r --domains
/.
Is it an expected result or a bug?
Thanks a lot!
--- Aaron S. Hawley [EMAIL PROTECTED] wrote:
above the code segment you submitted (line 765 of init.c) the
comment:
/* Strip the trailing slashes from directories. */
here are the manual notes on this option:
(from Recursive Accept/Reject Options
of a
feature than a bug.
On Thu, 12 Jun 2003, wei ye wrote:
Please take a look at this example:
$ \rm -rf biz.yahoo.com
$ ls biz.yahoo.com
$ wget -r --domains=biz.yahoo.com -I /r/ 'http://biz.yahoo.com/r/'
$ ls biz.yahoo.com/
r/ reports/ research/
$
I want only '/r
Hi all :))
This is my first message on this list and as usual it is a call for
help ;) Well, the thing is that I don't know if this is a bug
(haven't looked at the sources yet) and I can't find anything in the
documentation.
So, prior to sending a bug report, I want to make sure
I'm trying to crawl a URL with the --include-directories='/r/'
parameter.
I expect to crawl '/r/*', but wget gives me '/r*'.
By reading the code, it turns out that cmd_directory_vector()
removes the trailing '/' of the include-directories entry '/r/'.
It's a minor bug, but I hope it could be fixed in the next
I appear to have found a bug in wget 1.8.2, and I couldn't find any references
to it via google. Is this a real bug? I have trouble believing it can't have
been hit before; but on the other hand, I can't figure out any reason why it
should be occurring to me.
If I use wget -r http://myhost.com
://original/
WGET still browses the redirect site
And by the way - multiple dependency files are downloaded from the redirect
site - so this is a major bug, I think
Hello
Sorry, but I didn't find other mails
I have the wget man page translated into Russian (I only have to do a spell check)
---
Nick Shafff
mailto:[EMAIL PROTECTED]
On Thu, 13 Mar 2003, Max Bowsher wrote:
David Balazic wrote:
So it is do-it-yourself, huh? :-)
More to the point, *no one* is available who has cvs write access.
what if for the time being the task of keeping track of submissions for
wget was done with its debian package?
As I got no response on [EMAIL PROTECTED], I am resending my report here.
--
Hi!
I noticed that wget ( 1.8.2 ) does not conform 100% to RFC 1738 when
handling FTP URLs :
wget ftp://user1:[EMAIL PROTECTED]/x/y/foo
does this :
USER user1
PASS secret1
SYST
PWD ( let's say this returns
David Balazic wrote:
As I got no response on [EMAIL PROTECTED], I am resending my report
here.
One forwards to the other. The problem is that the wget maintainer is
absent, and likely to continue to be so for several more months.
As a result, wget development is effectively stalled.
Max.
Hi, I have wget 1.8 and everything was OK, but today when I wanted to download a
file from an ftp server, wget showed an error. I did:
wget 'ftp://user:[EMAIL PROTECTED]:port/directory/file with space.extension'
port number was 1001
And wget displayed this:
Connecting to 68.65.247.59:1001... connected.
The bug appears if you use another output file and try to convert the URLs
at the same time.
If you try to execute the following:
wget -k -O myFile http://www.stud.ntnu.no/index.html
The file will not be converted, because wget does not locate the file index.html,
since the output file is not index.html.
Hello.
In wget version 1.8.1 I got a segfault after executing:
$wget -c -r -k http://www.repairfaq.orghttp://www.repairfaq.org
The bug is probably related to the two http URLs run together on the command
line. I've attached strace output, but there's rather nothing useful in it.
I have no source code of this version of wget, so i'm
Hello again.
This concerns wget version 1.8.1.
I downloaded the source code of wget 1.8.1, so I can tell you more
about this bug now :)
Here's more data:
(gdb) set args -c -r -k http://www.repairfaq.orghttp://www.repairfaq.org
(gdb) run
Starting program: /home/byrek/testy/wget-1.8.1/src/wget -c -r
1/ (serious)
#include <config.h> needs to be replaced by #include "config.h" in several source
files.
The same applies to strings.h.
2/
#ifdef WINDOWS should be replaced by #ifdef _WIN32.
With these two changes it is even possible to compile wget with MSVC[++] and Intel
C[++]. :-)
Jirka
Hi,
I'm trying to recursively retrieve the contents of a few subdirectories,
however I've discovered that
wget -r --directory-prefix=/files --no-host-directories --no-parent
http://myserver.com/files
works fine, but
wget -r --directory-prefix=/files --no-host-directories --no-parent
Mogliano V.to (TV) fax x39-041-5907472
-- ITALY
-Original Message-
From: Pete Stevens [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 18, 2003 2:53 PM
To: [EMAIL PROTECTED]
Subject: recursive retrieval bug/inconsistency
Hi,
I'm trying to recursively retrieve the contents
Hello Friends,
I use wget 1.8.2 and 1.5.3. When I use wget 1.8.2 I get this output
wget -v -c --no-host-directories ftp://user:[EMAIL PROTECTED]/OmniTracker/file.txt
--10:16:18--
ftp://user:*password*@194.153.x.x/OmniTracker/file.txt
= `file.txt'
Connecting to
error-description
wget aborts with a segmentation violation while I try to get some files
recursively.
wget -r -l1 http://somewhere/somewhat.htm
(gdb) where
#0 0x080532a2 in fnmatch ()
#1 0x08065788 in fnmatch ()
#2 0x0805e523 in fnmatch ()
#3 0x08060da7 in fnmatch ()
#4
Hello,
I have found the following bug with wget 1.8.1 (windows) :
I try to download a picture of an audio CD from this URL:
wget could get this picture from the web server, but can't write the
output file :
-
http://www.aligastore.com/query.dll/img?gcdFab=8811803124type=0
= `img
Hello,
I've downloaded wget-1.5.3 from http://ftp.gnu.org/gnu/wget onto our
BSDI version 3.1 OS and used following commands:
% gunzip wget-1.5.3.tar.gz
% tar -xvf wget-1.5.3.tar
% cd wget-1.5.3
% ./configure
% ./make -f Makefile
% ./make install
But the following error message was displayed:
Hi!
Wget 1.5.3 uses /robots.txt to skip some parts of a web-site. But it
doesn't use the <META NAME="ROBOTS" CONTENT="NOFOLLOW"> tag, which serves
the same purpose.
I believe that Wget must also parse and use <META NAME="ROBOTS" ...>
tags
WBR
Stas mailto:[EMAIL PROTECTED]
Gary Hargrave wrote:
--- Kalin KOZHUHAROV [EMAIL PROTECTED] wrote:
Well, I am sure it is a wrong URL, but it took some time until I pinpointed
it in RFC 1808. Otherwise it would be very difficult to code a URL
parser.
Ooops :-) It seems I was wrong...
BTW, did you try to click in your browser on that
Moin!
Problem: ssl-linked wget spans hosts even when it shouldn't when encountering
a https:// link:
-
Deciding whether to enqueue "http://www.egalwashierstehterversuchtesnichtzuladen.de/".
This is not the same hostname as
--- Kalin KOZHUHAROV [EMAIL PROTECTED] wrote:
Well, I am sure it is a wrong URL, but it took some time until I pinpointed it
in RFC 1808. Otherwise it would be very difficult to code a URL parser.
What you are actually trying to convince us of is that you can omit the
net-location (i.e. what usually comes in the
wget does not seem to handle relative links in web pages
of the form
http:page3.html
According to my understanding of rfc1808 this is a valid
URL. When recursively retrieving html pages wget ignores
these links with out displaying an error or warning.
Gary
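A worked example of the resolution being described, assuming the RFC 1808 same-scheme allowance applies: with a base page of http://example.com/docs/page1.html, the reference http:page3.html would resolve to http://example.com/docs/page3.html, because the scheme matches the base and the rest is then treated as an ordinary relative path. RFC 1808 permits this form mainly for backwards compatibility, which may be why browsers follow such links while wget silently drops them.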
I just realized, I didn't send this and some other post to the list, but
directly to the replier...
Hello!
While wget is used on a dual-CPU machine, the assert(msecs >= 0) from
calc_rate() broke program execution with this:
wget: retr.c:262: calc_rate: Warunek `msecs >= 0' nie został spełniony.
(Polish locale - sorry; in English: Assertion `msecs >= 0' failed.)
We think that bug is in wtimer_elapsed() function. Probably it's a
problem
- Original Message -
From: Ken Senior [EMAIL PROTECTED]
There does not seem to be support to change disks when accessing a VMS
server via wget. Is this a bug or just a limitation?
Wget does plain old HTTP and FTP. I know nothing about VMS. Does it have
some strange syntax for discs
I downloaded a file using
wget -O tmp.out http://host/input
If I now try to resume the download using
wget -c -O tmp.out http://host/input
I get an error message.
What should have happened:
wget should get the size of tmp.out, and then
retrieve the file input starting with
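As a sketch of the behaviour being asked for (standalone, not wget code; file and host names taken from the example above): find out how much of tmp.out already exists and request only the remainder with an HTTP Range header. The reported behaviour is an error message instead.

#include <stdio.h>
#include <sys/stat.h>

int main (void)
{
  const char *output = "tmp.out";           /* file given with -O          */
  struct stat st;
  long long restart_at = 0;

  if (stat (output, &st) == 0)
    restart_at = (long long) st.st_size;    /* bytes we already have       */

  char request[256];
  snprintf (request, sizeof request,
            "GET /input HTTP/1.1\r\n"
            "Host: host\r\n"
            "Range: bytes=%lld-\r\n"
            "\r\n", restart_at);

  /* A resumed download would send this request and append the body
     to tmp.out. */
  fputs (request, stdout);
  return 0;
}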
Hi,
It seems that wget is not aware of CSS called by @import.
Just an example:
wget --page-requisites --span-hosts --html-extension --convert-links --backup-converted
http://linuxfr.org/2002/12/09/10606.html
will lose all the page formatting.
Has it been fixed ?
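For reference, the construct being missed looks like this inside a stylesheet or a <style> block (the file name is made up):

@import url("extra.css");

wget's HTML parser only sees the <link> and <style> tags themselves, so a rule like this presumably pulls in a second stylesheet that is never fetched.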
Some display errors (see picture).
I also noticed a bug using 'c' flag (continue) in conjunction with 'O'
flag (output file) - it doesn't resume, it starts from the beginning.
Otherwise, great and needed tool. Thanx.
Razvan Petrescu
inline: wget-debian.jpg
On Wed, 2002-12-11 at 08:26, Daniel Stenberg wrote:
I find it mildly annoying that I have not seen this discussed or even
mentioned in here.
Or am I just ignorant?
No, you aren't.
See
http://archives.neohapsis.com/archives/vulnwatch/2002-q4/0102.html
...
wget (CVE: CAN-2002-1344)
Hi all!
I am not 100% sure why this is so, but it is reproducible on my several
linux systems. So:
1. Create a new directory and cd to it (mkdir /tmp/mydir; cd /tmp/mydir)
2. Run wget with an ftp site to get a dir (wget --recursive
ftp://ftp.gnu.org/pub/gnu/xinfo*) for example
3. See the time of
Hi there,
I am using wget 1.7 on Linux 2.4.x. Whenever I download a page
from a website I don't see the elapsed time for the download. Do I need to
set something for this? I had a previous Solaris binary of version 1.4.5
which by default showed the elapsed time.
rgds
Vikul
Hi
I don't know if this is a bug, but when I use wget with the -p option I do
not get the content of files (images ...) when the page is redirected.
Example: try
wget -p http://www.linuxtoday.com
and
wget -p http://linuxtoday.com
In the first case I do not get any of the images, in the second
Dear Sir:
I tried to use "wget" to download data from an ftp site but got the following
error message:
> wget ftp://ftp.ngdc.noaa.gov/pub/incoming/RGON/anc_1m.OCT
Screen shows:
Hello,
I am running into something in wget that, if not a bug, may possibly be
an oversight. If this is covered anywhere in the documentation I
apologise, but I have been through it and I can't find any mention of
this behaviour.
Trying to connect
version: 1.8.1
in file: html-url.c
in function:
tag_handle_meta()
{
... skipped ...
char *p, *refresh = find_attr (tag, "content", attrind);
int timeout = 0;
for (p = refresh; ISDIGIT (*p); p++)
... skipped ...
}
BUG description:
find_attr() MAY return NULL
(*p); p++)
... skipped ...
}
BUG description:
find_attr() MAY return NULL, but this is NOT checked in the code listed
above,
JUST USING POINTERS WITHOUT NULL CHECKING, do you understand me??? :)
For example:
Wget CRASHES when trying to grab a URL from this MALFORMED BUT POSSIBLE tag:
<meta http
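To make the failure mode concrete, here is a small standalone program (not wget source; find_attr_stub() and parse_refresh_timeout() are made-up stand-ins) showing the guard that tag_handle_meta() would need before the digit loop:

#include <ctype.h>
#include <stdio.h>

/* find_attr() can return NULL when the meta tag has no content="..."
   attribute, so the digit-scanning loop must not run in that case. */
static char *find_attr_stub (int has_content)
{
  return has_content ? "5; URL=http://example.com/" : NULL;
}

static int parse_refresh_timeout (int has_content)
{
  char *p, *refresh = find_attr_stub (has_content);
  int timeout = 0;

  if (!refresh)                 /* the guard missing in tag_handle_meta() */
    return -1;

  for (p = refresh; isdigit ((unsigned char) *p); p++)
    timeout = 10 * timeout + (*p - '0');
  return timeout;
}

int main (void)
{
  printf ("well-formed tag: timeout %d\n", parse_refresh_timeout (1));
  printf ("malformed tag:   timeout %d\n", parse_refresh_timeout (0));
  return 0;
}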
I have found that the -k option does not work on downloaded ftp files.
The key problem seems to be that register_download is never called on
downloaded ftp files, as local_file is never set for calls to ftp_loop
like it is for calls to http_loop.
So, I added local_file as a parameter to ftp_loop
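For illustration only (the names and behaviour below are made up, not wget's actual code), the idea behind passing local_file out of ftp_loop is that -k can only rewrite links for URL/file pairs that were recorded somewhere; a toy version of such a registry:

#include <stdio.h>
#include <string.h>

/* A tiny URL -> local file registry of the kind -k (convert links) needs;
   if the ftp download path never reports its local_file, nothing is
   registered and links to those files are never converted. */
struct mapping { const char *url; const char *file; };
static struct mapping registry[16];
static int registered;

static void register_download_demo (const char *url, const char *file)
{
  registry[registered].url = url;
  registry[registered].file = file;
  registered++;
}

static const char *local_name_demo (const char *url)
{
  for (int i = 0; i < registered; i++)
    if (strcmp (registry[i].url, url) == 0)
      return registry[i].file;
  return NULL;                  /* unknown URL: the link is left alone */
}

int main (void)
{
  /* The http path registers its result; the ftp path effectively skips
     this step, which is what the proposed local_file parameter fixes. */
  register_download_demo ("http://example.com/a.html", "example.com/a.html");

  const char *a = local_name_demo ("http://example.com/a.html");
  const char *b = local_name_demo ("ftp://example.com/b.txt");
  printf ("http URL -> %s\n", a ? a : "(not registered)");
  printf ("ftp URL  -> %s\n", b ? b : "(not registered)");
  return 0;
}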
I believe I have found a bug in wget. The file size seems to be limited to
MAXINT. Has this already been reported? Is this already being worked on?
Do you want me to look into supplying a fix?
Regards,
Andrew Marlow.
I've just had a recursive wget do something unexpected : it
spanned hosts even though I didn't give the -H option. The command
was :
wget -r -l20 http://www.modcan.com/page2.html
http://www.modcan.com/pg2_main.html contains a link to
www.paypal.com, and that link was followed.
That was Wget
Using wget 1.8.2:
$ wget --page-requisites http://news.com.com
...fails to retrieve most of the files that are required to properly
render the HTML document, because they are forbidden by
http://news.com.com/robots.txt .
I think that use of --page-requisites implies that wget is being used
Greetings:
There does not appear to be a way to refuse redirects in wget. This
is a problem because certain sites use local click-count CGIs which
return redirects to advertisers. A common form is
http://desired.web.site/clickcount.cgi?http://undesired.advertiser.site/,
which produces a
There is a bug in wget 1.8.2 when the username or password contains the symbol '@'.
I think you should change the code in file src/url.c from
int
url_skip_uname (const char *url)
{
  const char *p;
  /* Look for '@' that comes before '/' or '?'. */
  p = (const char *)strpbrk (url, "@/?");
  if (!p || *p
On Tue, 17 Sep 2002, Nikolay Kuzmin wrote:
There is a bug in wget 1.8.2 when the username or password contains the symbol '@'.
I think you should change the code in file src/url.c from
I disagree. The name and password fields must never contain a '@' character, as it
is a reserved character in URL strings. If your
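For what it's worth, the usual way around this (per RFC 1738, any ':', '@', or '/' inside the user and password fields must be encoded) is to write the '@' as %40; with made-up credentials:

wget 'ftp://user:secret%40word@ftp.example.com/file.txt'

wget should then decode %40 back to '@' before sending PASS, so the URL parser only ever sees the single separating '@'.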
I found this problem when fetching files recursively:
What if the filenames of linked files from a www-page contain the
[]-characters? They are treated as some kind of patterns, and not just the
way they are. Clearly not desirable! Since wget just fetches the filenames
from the www-page,
Mats Andrén wrote:
I found this problem when fetching files recursively:
What if the filenames of linked files from a www-page contain the
[]-characters? They are treated as some kind of patterns, and not just
the way they are. Clearly not desirable! Since wget just fetches the
I'm using GNU Wget 1.5.3 - there seems to be a bug when I use a path with
urlencoded parameters in it.
wget will expand \n and \r (%0A and %0D) from the urlencoded string and
send a wrong request to the server.
bye, adrian dabrowski
If Openssl is broken, e.g. no certs installed, this will cause wget
not to work.
I do not know what version, but my version worked without installed certs.
Also, before my patch there was not even any cert routine, only ssl encapsulation.
I know it's not perfect but I worked on request on an
IE had a bug reported:
http://online.securityfocus.com/archive/1/286895/2002-08-08/2002-08-14/1
http://www.theregister.co.uk/content/4/26620.html
The problem exists in wget.
Openssl doesn't install the certs in the proper directory by default.
Use openssl ca to find the directory - the path
it is
either a missing feature (shall I say, a bug as wget can't do the
mirror which it could've) or I was unable to find some switch which
makes it happen at once.
Hmm, now I see. The vast majority of websites are configured to deny directory
viewing. That is probably why wget doesn't bother to try
Hello,
I have a problem downloading this link.
http://linuxland.itam.nsc.ru/cgi-bin/download/c.cgi?eng/ps/RedHatBible.pdf.gz
But browser works well.
My wget version is 1.7
Regards,
Thushi.
I have run across this problem too. It is because with Linux 2.4.18 (and
other
versions??) in certain circumstances, gettimeofday() is broken and will
jump
backwards. See http://kt.zork.net/kernel-traffic/kt20020708_174.html#1.
Is there any particular reason for this assert? If there is,
It seems wget uses a 32 bit integer for the bytes downloaded:
[...]
FINISHED --17:11:26--
Downloaded: 1,047,520,341 bytes in 5830 files
cave /home/suse8.0# du -s
5230588 .
cave /home/suse8.0#
As it's a once per download variable I'd say it's not that performance
critical...
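A standalone illustration of the wrap-around (the file sizes are invented to roughly match the numbers above, not taken from the actual transfer): summing into a 32-bit counter silently loses one multiple of 4 GiB, while a 64-bit counter keeps the real total:

#include <stdio.h>

int main (void)
{
  unsigned int total32 = 0;          /* what a 32-bit counter would hold */
  unsigned long long total64 = 0;    /* the obvious fix                  */

  for (int i = 0; i < 5830; i++)     /* ~5830 files of ~920 KB each      */
    {
      unsigned long long size = 919000;
      total32 += (unsigned int) size;
      total64 += size;
    }

  printf ("32-bit total: %u bytes\n", total32);   /* ~1.06 GB, wrapped   */
  printf ("64-bit total: %llu bytes\n", total64); /* ~5.36 GB, correct   */
  return 0;
}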
Hartwig, Thomas wrote:
I got an assert exit of wget in retr.c in the function calc_rate
because msecs is 0 or less than 0 (in rare cases).
I don't know why, perhaps because I have a big line to the server or
the wrong OS. To work around this I patched retr.c, setting
msecs = 1 if equal
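Since the actual retr.c patch is cut off above, here is a standalone sketch of the same workaround (not wget code): compute the elapsed milliseconds from two gettimeofday() samples and clamp anything non-positive to 1, so a clock that steps backwards can no longer trip the `msecs >= 0' assertion or cause a division by zero.

#include <stdio.h>
#include <sys/time.h>

/* Elapsed wall-clock time in milliseconds, clamped to at least 1 ms.
   On kernels where gettimeofday() can step backwards (e.g. the SMP
   2.4.18 problem mentioned in this thread), the raw difference can be
   zero or negative. */
static long elapsed_msecs_clamped (const struct timeval *start,
                                   const struct timeval *end)
{
  long msecs = (end->tv_sec - start->tv_sec) * 1000L
               + (end->tv_usec - start->tv_usec) / 1000L;
  if (msecs <= 0)
    msecs = 1;
  return msecs;
}

int main (void)
{
  struct timeval start, end;
  gettimeofday (&start, NULL);
  gettimeofday (&end, NULL);
  /* Even if the clock stood still (or stepped back), we report >= 1 ms. */
  printf ("elapsed: %ld ms\n", elapsed_msecs_clamped (&start, &end));
  return 0;
}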
Hello,
I am sure I have found a bug in wget 1.8.2 and earlier.
Symptom: exit status is 0 when recursive ftp fails due to
failed login.
An ftp server may refuse login for temporary reasons, such as
maximum logins, too busy, etc.. I don't mean that wget will
detect such reasons, only
Hi, I have a problem and would
really like you to help me. I'm using wget for downloading a list of file
URLs via an http proxy. When the proxy server goes
offline, wget doesn't retry downloading the files. Can you fix that, or can you
tell me how I can fix that?
:15003/Dragon
=
`dragon.004
Connecting to 195.108.41.140:3128... failed: Connection
refused.
FINISHED
--01:19:23--
Downloaded: 150,000,000 bytes in 10 files
- Original Message -
From:
Kempston
To: [EMAIL PROTECTED]
Sent: Monday, July 08, 2002 12:50
AM
Subject: WGET BUG
or use version
1.8.2
I see. As I said, I couldn't get it to work on that day and the NEWS file
doesn't list this bug. I was able to test this now with 1.8.2 and see that
it works. However, shouldn't it grab this header and change the file name,
anyway?
Content-Disposition: inline; filename=147945
Your message of Thu, 20 Jun 2002 15:49:52 +0200:
I supposed people would read the index.html. Since this is becoming
something of a faq, I've now put a 00Readme.txt on the ftp server and a
Readme.txt in the binary archives, we'll see if that helps.
It should :-)
Kai
--
Kai Schätzl,
-5907073
-- I-31021 Mogliano V.to (TV) fax x39-041-5907472
-- ITALY
-Original Message-
From: Cédric Rosa [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 21, 2002 4:37 PM
To: [EMAIL PROTECTED]
Subject: Bug with wget ? I need help.
Hello,
First, excuse my English, but I'm French.
this problem ?
Date: Fri, 21 Jun 2002 16:37:02 +0200
To: [EMAIL PROTECTED]
From: Cédric Rosa [EMAIL PROTECTED]
Subject: Bug with wget ? I need help.
Hello,
First, excuse my English, but I'm French.
When I try with wget (v 1.8.1) to download a URL which is behind a router,
the software waits forever even
Cédric Rosa wrote:
Hello,
First, excuse my English, but I'm French.
When I try with wget (v 1.8.1) to download a URL which is behind a router,
the software waits forever even if I've specified a timeout.
With ethereal, I've seen that there is no response from the server (ACK
never
thanks for your help :)
I'm installing version 1.9 to check. I think this update may solve my
problem.
Cedric Rosa.
- Original Message -
From: Hack Kampbjørn [EMAIL PROTECTED]
To: Cédric Rosa [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, June 21, 2002 7:27 PM
Subject: Re: Bug
Hello,
I got this feature request:
http://bugs.debian.org/149075
- Forwarded message from Erno Kuusela [EMAIL PROTECTED] -
hello,
it would be really useful to be able to set the tcp window size
for wget, since the default window size can be much too small
for long latency links. also
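For context, the receive window a TCP download can use is bounded by the socket receive buffer, so such an option would presumably come down to a setsockopt() call made before connecting (a generic sockets sketch, not wget code; the buffer size is an example value):

#include <stdio.h>
#include <sys/socket.h>

int main (void)
{
  int fd = socket (AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    {
      perror ("socket");
      return 1;
    }

  /* A long-latency, high-bandwidth link needs roughly
     bandwidth * round-trip-time bytes of buffer. */
  int rcvbuf = 512 * 1024;      /* would come from the proposed option */
  if (setsockopt (fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) < 0)
    perror ("setsockopt");

  int actual = 0;
  socklen_t len = sizeof actual;
  getsockopt (fd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
  printf ("requested %d bytes, kernel granted %d\n", rcvbuf, actual);
  return 0;
}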
I was using wget to suck a website, and found an interesting problem:
some of the URLs it found contained a question mark, after which it
responded with cannot write to '... insert file/URL here?more
text ...' (invalid argument).
And - it didn't save any of those URLs to files (on
6),
in addition, to do wget-1.8.1-sol26-sparc-local.gz.
Every wget version dumped a core file before connecting!
Is the environment related to gcc etc. on my Solaris 2.6 system
so wrong, or what? Is this wget's bug?
Please let me know when you get time.
I would greatly appreciate any help
yo
I don't know why Wget dumps core on startup. Perhaps a gettext
problem? I have seen reports of failure on startup on Solaris, and it
strikes me that Wget could have picked up wrong or inconsistent
gettext.
Try unsetting the locale-related environment variables and seeing if
Wget works then.
Henrik van Ginhoven [EMAIL PROTECTED] writes:
problem, I agree. On large networks some evil-minded person could
write a tiny cron-script that ran once every 5 minutes or so to
parse ps-output looking for nothing but passwords,
Note that the standard workaround for this problem, which is now even
documented in the manual, is to use the `-i -' option. For example:
wget -i -
http://user:[EMAIL PROTECTED]/directory/file
^D
But I agree that's just a workaround. I'm now more open to the idea
of introducing a prompting