Hi,
I was trying to test the error messages of my server
(Apache: ErrorDocument 404)
but unfortunately I could not download the error messages generated by
my server with wget, since the server sends a 404 code if a page is
missing (that's exactly what I wanted to test) and wget does not save the
Hrvoje Niksic wrote:
> "Hopkins, Scott" <[EMAIL PROTECTED]> writes:
>
>> Interesting. Compiled that code and I get the following when running
>> the resulting binary.
>>
>> /var/opt/prj/wget$ strdup_test
>> 20001448
>
> As I suspected. S
"Hopkins, Scott" <[EMAIL PROTECTED]> writes:
> Interesting. Compiled that code and I get the following when running
> the resulting binary.
>
> /var/opt/prj/wget$ strdup_test
> 20001448
As I suspected. Such an obvious strdup bug would likely have been
detected sooner.
> I appear t
Interesting. Compiled that code and I get the following when running
the resulting binary.
/var/opt/prj/wget$ strdup_test
20001448
I appear to have a functioning wget binary with the strdup change to
config.h, but I'm curious what you think the other causes of this
problem could
"Hopkins, Scott" <[EMAIL PROTECTED]> writes:
> Worked perfectly. Thanks for the help.
Actually, I find it surprising that AIX's strdup would have such a
bug, and that it would go undetected. It is possible that the problem
lies elsewhere and that the change is just masking the real bug.
str
-
From: Micah Cowan [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 16, 2008 3:53 PM
To: Hopkins, Scott
Cc: wget@sunsite.dk
Subject: Re: Error with wget on AIX5.3
Hopkins, Scott wrote:
> All,
>
> I recently compiled wget 1
> ...make connections
> out to remote sites to collect data. However, when it attempts to write
> to the response to disk, it generates the following error:
>
>
>
> Length: wget: strdup: Failed to allocate 1 bytes; memory
> exhausted.
>
>
>
>
response to disk, it generates the following error:
Length: wget: strdup: Failed to allocate 1 bytes; memory
exhausted.
Any thoughts?
Scott Hopkins
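The test program referred to later in this thread ("Compiled that code...")
is not shown in these previews. A minimal sketch of an equivalent check,
a hypothetical reconstruction rather than Hrvoje's actual code, prints the
pointer strdup returns for the empty string; a strdup that refuses a
1-byte allocation would print 0:

$ cat > strdup_test.c <<'EOF'
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* A correct strdup allocates strlen(s) + 1 bytes; for "" that is
       exactly 1 byte, the size named in wget's error message. */
    char *copy = strdup("");
    printf("%lx\n", (unsigned long) copy);
    return 0;
}
EOF
$ cc -o strdup_test strdup_test.c
$ ./strdup_test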
downloading recursively, wget will time-out as expected, but
>> without reporting any kind of an error. No non-zero exit status,
>> at least.
>
> This appears to still be the case. I consider this an important bug.
> ...but probably not a show-stopper. It _should_ be easy e
expected, but
> without reporting any kind of an error. No non-zero exit status,
> at least.
>
> So when I put this in a script;
>
> while :
> do
> wget -c -r -T 33 -np \
> 'http://www.tux.org/pub/people/kent-robotti/looplinux/rip/docs/' &&
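The script is cut off in this preview; a complete loop of that shape,
assuming the intent was to retry until wget succeeds (the sleep is my
addition), would be:

while :
do
    wget -c -r -T 33 -np \
        'http://www.tux.org/pub/people/kent-robotti/looplinux/rip/docs/' && break
    # brief pause before retrying, to avoid hammering the server
    sleep 10
done

The point made below is that the && break only fires if wget actually
exits non-zero when the network drops mid-download.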
GNU Wget 1.10.2, and it's more an inconsistency than a bug.
Basically, if something goes wrong with the network connection
while downloading recursively, wget will time-out as expected, but
without reporting any kind of an error. No non-zero exit status,
at least.
So when I put this in a s
On 13/09/2007, Micah Cowan <[EMAIL PROTECTED]> wrote:
>
> Alex Owen wrote:
> >
> > I think it would be nice if the exit code of wget could be inspected
> > to determine if wget failed because of a 404 error or some other
> > reason.
>
> Hi Alex,
>
>
Alex Owen wrote:
> Hello,
>
> If I run:
> wget http://server.domain/file
>
> How can I differentiate between a network problem that made wget fail
> and the server sending back an HTTP 404 error?
>
> ( I have a use c
Hello,
If I run:
wget http://server.domain/file
How can I differentiate between a network problem that made wget fail
and the server sending back an HTTP 404 error?
( I have a use case described in debian bug http://bugs.debian.org/422088 )
I think it would be nice if the exit code of wget
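At the time of this thread wget distinguished little more than success (0)
from failure (1). Later releases (1.12 and newer, as far as I know)
document distinct exit codes, which makes this case scriptable, e.g.:

wget http://server.domain/file
case $? in
    0) echo "downloaded OK" ;;
    4) echo "network failure" ;;
    8) echo "server issued an error response (e.g. 404)" ;;
    *) echo "failed for another reason" ;;
esac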
Micah Cowan wrote:
> Done. Lemme know if that works for you.
Looks good
Micah Cowan wrote:
> Tony Lewis wrote:
>> Micah Cowan wrote:
>
>>> Actually, the wget directory is the trunk in that example, since it was
>>> checked out with
>>>
>>> $ svn co svn://addictivecode.org/wget/trunk wget
>> Checking out the code using "
Tony Lewis wrote:
> Micah Cowan wrote:
>
>> Actually, the wget directory is the trunk in that example, since it was
>> checked out with
>>
>> $ svn co svn://addictivecode.org/wget/trunk wget
>
> Checking out the code using "trunk" is only one of th
Micah Cowan wrote:
> Actually, the wget directory is the trunk in that example, since it was
> checked out with
>
> $ svn co svn://addictivecode.org/wget/trunk wget
Checking out the code using "trunk" is only one of three examples. I used
the third example, checking out the entire source code re
Tony Lewis wrote:
> On http://www.gnu.org/software/wget/wgetdev.html, step 1 of the summary is:
>
>1. Change to the topmost GNU Wget directory:
> % cd wget
>
> But you need to cd to either wget/trunk or the appropriate version
> subdirec
On http://www.gnu.org/software/wget/wgetdev.html, step 1 of the summary is:
1. Change to the topmost GNU Wget directory:
% cd wget
But you need to cd to either wget/trunk or the appropriate version
subdirectory of wget/branches.
Mishari Al-Mishari wrote:
> Hi,
> when i run this command
> wget -p wwwladultfriendfinder.co
>
> I received the following error messages, even though I was able to
> successfully download the page using the browser:
> Resolving wwwl
Hi,
when i run this command
wget -p wwwladultfriendfinder.co
I received the following error messages, even though I was able to
successfully download the page using the browser:
Resolving wwwladultfriendfinder.com... 8.15.231.13
Connecting to wwwladultfriendfinder.com|8.15.231.13|:80... connected
HTTP request sent, awaiting response... 500 Internal Server Error
14:32:02 ERROR 500: Internal Server Error.
##
I sent the same data string that I captured using Wireshark:
id=1198\&diesel=1%2C099\&diesel2=0%2C000\&normal=1%2C329\&super=1%2C349\&super2=1%2C409\&
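If the captured request was a POST, replaying it with wget would look
roughly like this; the URL is a placeholder, and how much of the
backslash-escaping of & is needed depends on the shell quoting:

wget --post-data 'id=1198&diesel=1%2C099&diesel2=0%2C000&normal=1%2C329&super=1%2C349&super2=1%2C409' \
     'http://example.invalid/target-form'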
Hi all, where can I get a list of the different return codes and what
they mean? For example:
WGET: http://example.domain.tld Can't get file: content.piz and save
locally as: /var/www/html/content.piz Return code: wget exited with
value 2
What is return value 2? What does it indicate? T
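In current wget documentation, exit status 2 means a parse error, e.g. of
the command line or of an initialization file such as .wgetrc; old 1.x
releases mostly returned just 0 or 1, so the meaning depends on the
version. The raw status is available in the shell as $?:

wget http://example.domain.tld/content.piz -O /var/www/html/content.piz
echo "wget exited with value $?"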
From: Daniele Annesi
> I think it is a bug:
> using wget for multiple files, e.g.:
> wget ftp://user:[EMAIL PROTECTED]/*.zip
> the seconds in each file's timestamp are set to 00
That's not an error report. An error report would tell the reader
which version of wget you
I think it is a bug:
using wget for multiple files, e.g.:
wget ftp://user:[EMAIL PROTECTED]/*.zip
in the timestamp of each file the seconds are set to 00
best regards
daniele annesi
In the man page and info page for the wget program, it talks about the
option
--no-verbose
But the wget program actually wants (and specifies in the output from
--help)
--non-verbose
Alternatively, I guess you could say the code is broken.
Topher Eliot
[EMAIL PROTECTED]
Sent: Monday, December 11, 2006 12:24 PM
To: [EMAIL PROTECTED]
Subject: ERROR 500 problem
Hi, I hope someone can shed light on this problem.
I am trying to crawl a particular site, and am getting strange results.
I had crawled it successfully in the past, but lately I am only able to
crawl about 14
):
HTTP request sent, awaiting response... 500 Internal Server Error
11:04:18 ERROR 500: Internal Server Error.
This error happens intermittently on some pages within the first 1400 pages, e.g.
http://www.ryland.com/find-your-new-home/29-northern-kentucky/1115-claiborne/11777-shenandoah.html
But then
From: Ian
As usual, it might help to know which wget version you're using
("wget -V") and on which system type you're using it.
> The documentation section 7.2 states:
_Which_ documentation section 7.2?
> wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
I don't norm
Hello, I am not sure this is a serious bug, but for some reason I cannot
get this example to work.
The documentation section 7.2 states:
-
You want to download all the GIFs from a directory on an HTTP server.
You tried wget http://www.server.com/dir/*.gif,
One discovers that wget silently (this is not documented) throws away the
content of the response if there was an error (404, 503, etc.).
So there needs to be a --save-even-if-error switch.
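Later releases did grow such a switch, under a different name:
--content-on-error (added around wget 1.14, if I remember right) keeps the
body the server sent along with the error status; the URL here is a
placeholder:

wget --content-on-error 'http://server.domain/missing-page'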
1. I could not ping any host, even google.com, but I can browse the internet
without any problem. I don't know much about networking, so I don't know what's
wrong with my DNS.
It looks like you are getting to the internet through a proxy then. Can
you send the exact output of the ping google.com command?
On 11/2/06, Sathyadevi Udayakumar <[EMAIL PROTECTED]> wrote:
Could you tell me what the issue could be here, thanks
It has something to do with your name resolution. Have you tried
- Pinging the host you are trying to download from
- Accessing it by IP address
- Just connecting to it on port 80 t
Hi,
I have a problem while running wget on my Windows 2003 server machine. It
gave me an 'unknown host' error, so I set the variable http_proxy on my
cmd line and it worked for a simple wget www.google.com command. But when I
tried another website where I gave my username and p
On Mon, 18 Sep 2006, Mauro Tortonesi wrote:
sorry, it's my fault. i was supposed to automate the testing process by
writing a "glue" script which runs all the tests in sequence, but i never
did it. i never got any feedback on the testing suite, so i thought nobody
except me was using it, and i f
Ryan Barrett wrote:
hi all. is anyone successfully running the perl unit tests? i have perl
5.8.0
and libwww-perl 5.65 happily installed, but i'm getting this error:
heaven:~/wget/tests> ./Test1.px
Can't locate object method "new" via package "HTTPTest" a
hi all. is anyone successfully running the perl unit tests? i have perl 5.8.0
and libwww-perl 5.65 happily installed, but i'm getting this error:
heaven:~/wget/tests> ./Test1.px
Can't locate object method "new" via package "HTTPTest" at ./Test1.px line 38.
th
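That failure mode usually means the HTTPTest package was never loaded, so
Perl treats HTTPTest as an unknown class when Test1.px calls
HTTPTest->new. One guess at a workaround, assuming HTTPTest.pm sits in the
tests directory itself, is to put that directory on @INC:

cd ~/wget/tests
perl -I. ./Test1.px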
Steven M. Schweda wrote:
Are you certain that the FTP _server_ can handle file offsets greater
than 4GB in the REST command?
i agree with steven here. it's very likely to be a server-side problem.
--
Aequam memento rebus in arduis servare mentem...
Mauro Tortonesi
Seems this wasn't sent to the list ;-)
- Forwarded message from Petr Kras <[EMAIL PROTECTED]> -
Date: Wed, 6 Sep 2006 14:57:32 +0200
From: Petr Kras <[EMAIL PROTECTED]>
Reply-To: Petr Kras <[EMAIL PROTECTED]>
Subject: Re: REST - error for files b
Jochen Roderburg <[EMAIL PROTECTED]> writes:
> Petr Kras wrote:
>> When a transfer is broken and a restart is required,
>> it doesn't work for files greater than 4GB (not checked for 2GB)
>> when the break point is beyond the 4GB (2GB) limit.
>
>> --13:58:54-- ftp://streamlib.pan.eu/Streams/TVDC_SS_01100.ts
>>
Petr Kras wrote:
When a transfer is broken and a restart is required,
it doesn't work for files greater than 4GB (not checked for 2GB)
when the break point is beyond the 4GB (2GB) limit.
--13:58:54-- ftp://streamlib.pan.eu/Streams/TVDC_SS_01100.ts
=> `/opt/streams/Stream1/TVDC_SS_01100.ts'
==> CW
From: Petr Kras
[...]
> ==> PORT ... done.==> REST 4998699942 ...
> REST failed, starting from scratch.
[...]
Something more might be learned from adding "-d" to your Wget command
line.
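For example (the -d output is verbose, so sending the log to a file with
-o helps):

wget -d -o wget-debug.log -c 'ftp://streamlib.pan.eu/Streams/TVDC_SS_01100.ts'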
I don't use this continuation feature, but a quick look at the code
suggests that Wget 1.10.2 is us
When a transfer is broken and a restart is required,
it doesn't work for files greater than 4GB (not checked for 2GB)
when the break point is beyond the 4GB (2GB) limit.
Downloading will start from zero.
version: GNU Wget 1.10.2
Petr Kras
-
Downloaded: 735,142 bytes in 24 files
looks great. But if
"09:49:46 ERROR 404: WWWOFFLE Host Not Got."
flew off the screen, one will never know.
That's why you should say
Downloaded: 735,142 bytes in 24 files. 3 files not downloaded.
At 10:38 14/07/2006, I wrote:
Sometimes when using wget, I get an error like:
Connecting to www.somewebsite.com[xxx.xxx.xxx.xxx]:80... failed: No
such file or directory.
Giving up.
But if I increase the number of tries, wget will eventually download the page.
Please can you tell me why
Hi,
Sometimes when using wget, I get an error like:
Connecting to www.somewebsite.com[xxx.xxx.xxx.xxx]:80... failed: No
such file or directory.
Giving up.
But if I increase the number of tries, wget will eventually download the page.
Please can you tell me why this is, and also if there is
On Jul 11, 2006, at 2:13 PM, Hrvoje Niksic wrote:
It could be that your system expects UTF-8 in file names and rejects
what it figures are invalid UTF-8 sequences.
I think that's true: MacOS / HFS+ seems to expect file names to be
"decomposed unicode" in UTF-8. (I gather that means that ac
Jamie Zawinski <[EMAIL PROTECTED]> writes:
> If I specify -O, it is able to download the data; but if wget is
> picking the file name itself, it is unable to write the file
> ("invalid argument"). Neither --restrict-file-names=unix nor --
> restrict-file-names=windows affects it.
It could be tha
wget 1.10.2
MacOS 10.4.7 Intel
I'm trying to download a file whose URL contains Japanese characters.
If I specify -O, it is able to download the data; but if wget is
picking the file name itself, it is unable to write the file
("invalid argument"). Neither --restrict-file-names=unix nor --
Subject: wget 403 forbidden error when no index.html.
I am trying to download the contents of a specific directory of a site and I keep
getting a 403 Forbidden when I run wget. The directory does not have an
index.html and of course any references to that path result in a 403 page
displayed in my browser. Is t
.
Tony
-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Aditya Joshi
Sent: Friday, July 07, 2006 9:15 AM
To: wget@sunsite.dk
Subject: wget 403 forbidden error when no index.html.
I am trying to download the contents of a specific directory of a site and I keep
getting a 403
I am trying to download the contents of a specific directory of a site and I keep
getting a 403 Forbidden when I run wget. The directory does not have an
index.html and of course any references to that path result in a 403 page
displayed in my browser. Is this why wget is not working? If so, how to download c
- Original Message -
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: Out of Memory Error
> Date: Thu, 25 May 2006 16:05:15 -0500 (CDT)
>
>
> [EMAIL PROTECTED]:
>
> > [...] For very big runs, I wouldn't want to convert large amounts
>
r/wget/bin'
infodir='/home/oscar/wget/info' mandir='/home/oscar/wget/man' manext='1'
make[1]: Entering directory `/usr/home/oscar/wget/wget-1.10.2/src'
..
make[1]: Entering directory `/usr/home/oscar/wget/wget-1.10.2/po'
file=./`echo bg | sed 's,.
[EMAIL PROTECTED] wrote:
I ran wget (1.9.1) on Debian GNU/Linux to find out how many links my site had,
and after "Queue count 66246, maxcount 66247" links, the wget process ran out of
memory. Is there a way to set the persistent state to disk instead of memory so
that all the system memory an
From: oscaruser
> [...] wget (1.9.1) [...]
Wget version 1.10.2 is the current release.
> [...] Is there a way to set the persistent state to disk instead of
> memory [...]
I believe that there's a new computing concept called "virtual
memory" which would handle this sort of thing automati
Folks,
I ran wget (1.9.1) on Debian GNU/Linux to find out how many links my site had,
and after "Queue count 66246, maxcount 66247" links, the wget process ran out
of memory. Is there a way to set the persistent state to disk instead of memory
so that all the system memory and cache is not slow
;width);*/
-in wget-1.10.2/src/hash.c: /*assert (ht->resize_threshold >= items);*/
Let me know if you need more info.
Regards,
Maxim
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Friday, April 14, 2006 1:20 AM
To: Maxim Brandwajn
Cc: [EMAIL PROTECTED]
Subje
"Maxim Brandwajn" <[EMAIL PROTECTED]> writes:
> Hi guys, I keep getting this error at random files/times:
[...]
What version of Wget are you using?
Hi guys, I keep getting this error at random files/times:
wget: retr.c:537: calc_rate: Assertion `msecs >= 0' failed.
I think it is in retr.c; there is a line such as:
"...calc_rate (long bytes, long msecs, int *units) { double dlrate; assert (msecs >= 0); assert
nicolas figaro wrote:
Hi,
there is a mistake in the French translation of wget --help (on Linux
Red Hat).
In English:
wget --help | grep spider
--spider don't download anything
was translated into French this way:
wget --help | grep spider
--spider
Hi,
there is a mistake in the French translation of wget --help (on Linux
Red Hat).
In English:
wget --help | grep spider
--spider don't download anything
was translated into French this way:
wget --help | grep spider
--spider ne pas télécharger n'
> WHY ARE the -t 2, -T 900 and --cache=off options being ignored when I run
> this in a cron job?
If you're troubled by the message "Caching oiswww.eumetsat.int =>
81.2.148.34", then the explanation may be that "--cache=off" differs
from "--dns-cache=off". As "wget -h" says:
--no-cache
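The two are separate options; combined in the cron job from the message
below (a sketch using the image URL quoted there), that would be:

wget -t 2 -T 900 --no-cache --no-dns-cache \
    'http://oiswww.eumetsat.int/%7Eidds/images/out/SDDI-20051110-1100-RGB-08-DUST-00-800.jpg'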
Title: Wget error when using cron
Part of my script is available below.
………..
# create filename of http://oiswww.eumetsat.int/%7Eidds/images/out/SDDI-20051110-1100-RGB-08-DUST-00-800.jpg
#set -A eumetdate "${mydate[0]}${mydate[1]}${mydate[2]}-${mydate[3]}00"
if [[ ${my
Dear Sir,
I am a big fan of your wget program. I use it to download some big gz files,
and tried to use -N to skip the same version. Although the local file is
exactly the same as the one on the FTP site, wget keeps downloading it,
saying that the file size is different. An example is as follows:
w
checking for C compiler default output file name... configure: error: C
compiler cannot create executables
See `config.log' for more details.
I have attached the config.log file.
Let me know if you need further info from my config.
My best regards,
Antonino
Hello,
When we get an index file for an ftp URL (wget ftp://.) there is a
generation error when the time of the file is 00:00: the time column is
empty.
In the same manner, when the file is old (more than 6 months), the year
normally replaces the time, but it is also generated empty
It works fine from here (209.98.249.184, Wget 1.10.2a1, VMS Alpha
V7.3-2). If it hangs for you, it could be that firewall. It's easy
enough to block port 80 and pass "ping". Does any browser work? I
suspect not.
St
[EMAIL PROTECTED] wrote:
> Thanks for your reply. Only ping works for bbc.com and not wget.
When I issue the command "wget www.bbc.com", it successfully downloads the
following file:
http://www.bbc.co.uk/?ok";>
British Broadcasting Corporation
You might want to try "wget http://www.bb
& Regards,
Mabu Shaik
-Original Message-
From: Noèl Köthe [mailto:[EMAIL PROTECTED]
Sent: 11 November 2005 09:18
To: Shaik,M,Mabu,JRM3X C
Cc: wget@sunsite.dk
Subject: Re: Error connecting to target server
On Wednesday, 09.11.2005, at 17:38, [EMAIL PROTECTED] wrote:
> I hav
On Wednesday, 09.11.2005, at 17:38, [EMAIL PROTECTED] wrote:
> I have installed the latest wget available and I tried to use the wget
> utility. But I can't establish a connection to the target servers from my
> Solaris box. My server is behind the firewall, but when I do a ping
> bbc.com, it is fine.
Hi,
I have installed the latest wget available and I tried to use the
wget utility. But I can't establish a connection to the target servers from my Solaris
box. My server is behind the firewall, but when I do a ping bbc.com, it is
fine.
But when I ping other URLs, there is a problem.
Whi
Thanks for the information.
Could you tell me a site where I can find a statically
compiled version of wget, perhaps an RPM? I'm
not able to compile it myself.
--- Noèl Köthe <[EMAIL PROTECTED]> wrote:
> On Friday, 28.10.2005, at 10:19 -0700,
> Viktor Várallyay wrote:
>
> > Version number: GNU W
On Friday, 28.10.2005, at 10:19 -0700, Viktor Várallyay wrote:
> Version number: GNU Wget 1.9.1
...
> Length: 1,651,513,344 [-318,930,944 to go]
> (unauthoritative)
>
> 100%[==>]
> 2,147,468,288 126.05K/s ETA 00:00
> wget: progress.c:
Hi,
I'm Viktor Varallyay.
Could you tell me what was wrong? Or what did I do wrong?
I wanted only continue the downloading.
Version number: GNU Wget 1.9.1
[EMAIL PROTECTED]:~/Documents/Install/SuSE10> ./download
--18:13:05--
ftp://ftp.nl.uu.net/pub/linux/suse/i386/10.0/iso/SUSE-10.0-EvalDVD
-i386
On Tuesday, 18.10.2005, at 15:36 +0200, Bernd Eggink wrote:
> I'm using wget 1.10.2. When I try to load this page:
>
> wget http://freshmeat.net/projects/pinfo/
>
> I get an "ERROR 403: Forbidden". Any ideas why? Firefox and
> lynx display that page wit
I'm using wget 1.10.2. When I try to load this page:
wget http://freshmeat.net/projects/pinfo/
I get an "ERROR 403: Forbidden". Any ideas why? Firefox and
lynx display that page without any problems.
Regards,
Bernd
--
Bernd Eggink
Regionales Rechenzentrum der Uni Hamburg
[
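No answer is shown in this preview, but a common cause of this pattern
(browser works, wget gets 403) is the server filtering on the User-Agent
header. If that is what this site does, which is a guess, sending a
browser-like identity would confirm it:

wget -U 'Mozilla/5.0 (compatible)' 'http://freshmeat.net/projects/pinfo/'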
In the past, I have been confused as to whether the file which was
generating the error was on the server, or on my local system. If there
is a way to distinguish between the two, and be more explicit, that
would be a little more helpful.
I don't see any way wget could/should do anything e
...index.html: Permission denied
> Cannot write to `index.html' (Permission denied).
But what is Wget to do in such a case except report an error?
Hrvoje Niksic wrote:
Kentaro Ozawa <[EMAIL PROTECTED]> writes:
When the local file does not have write permission, wget displays a
permission error.
wget version is 1.10 and 1.10.1.
wget 1.9.1 does not have this problem.
I'm not sure I understand you correctly, but I can't
writes:
When the local file does not have write permission, wget displays a
permission error.
wget version is 1.10 and 1.10.1.
wget 1.9.1 does not have this problem.
I'm not sure I understand you correctly, but I can't repeat what you
seem to describe. For example:
$ touch index.html
$ chm
Kentaro Ozawa <[EMAIL PROTECTED]> writes:
> When the local file does not have write permission, wget displays a
> permission error.
> wget version is 1.10 and 1.10.1.
> wget 1.9.1 does not have this problem.
I'm not sure I understand you correctly, but I can't repe
Hi
When the local file does not have write permission, wget displays a
permission error.
wget version is 1.10 and 1.10.1.
wget 1.9.1 does not have this problem.
I would like to use wget 1.10.1. wget 1.10.1 supports files larger than
2GB.
If this is a bug, I hope that this problem is
Linda Walsh <[EMAIL PROTECTED]> writes:
[...]
To answer the question raised in the subject: obviously, respecting
the "robots" file does not imply (even jokingly) that Wget's operator
is a robot, but that the program is an automated agent, aka crawler,
which, once set up, analyzes HTML and download
That work[ed/s]...Thanks! I actually had started looking at robots.txt
as a possible problem and saw that the site blocked "robots" at the root.
I was looking at the manpage to try to find a simple switch to turn it
off, but didn't see one. Started thinking about "workarounds" like
having squid
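For the record, the switch exists but is spelled as a wgetrc command
rather than a dedicated option, which makes it easy to miss in the
manpage: -e executes a wgetrc command, and robots is one of them (the URL
is a placeholder):

wget -e robots=off -r 'http://site.example/path/'

A permanent equivalent is a robots = off line in ~/.wgetrc.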
Hi,
I'm mirroring a web site via http and when I use the -c option, I get a
206 "Partial content"
error for every file I download when I run the script a second time.
This is the command:
wget --no-cache -r -l 0 -L -np -k --tries=3
http://mpfwww.jpl.nasa.gov/MPF/index.html
Massimo Cora' wrote:
while trying to download the latest Suse 9.3 dvd iso [about 4.2 Gb] I
got the following error:
Length: 4,488,353,792 (4.2G), 193,386,497 (184M) remaining
95% [+ ] 4,294,967,295 --.--K/s
File size limit exceeded
I
Hi,
while trying to download the latest Suse 9.3 dvd iso [about 4.2 Gb] I
got the following error:
[EMAIL PROTECTED]:/mnt/huge_dwl/isos$ wget -c
ftp://ftp.solnet.ch/mirror/SuSE/i386/9.3/iso/SUSE-9.3-Eval-DVD.iso
- --16:27:36--
ftp://ftp.solnet.ch
> >>
> >> > some time appear this error
> >> > assertion "ptr != NULL" failed: file "xmalloc.c", line 190
> >>
> >> What were you doing when the error appeared? Do you have
> the rest of
> >> Wget's outpu
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>
>> 1. Does wget -4 http://... work?
>
> Yes
Then, as a workaround you can put inet4_only=yes into your ~/.wgetrc.
>> What OS are you running this on?
>
> Red Hat Linux release 6.2 (Zoot)
We should probably find a way to disable IPv6
Hrvoje Niksic wrote:
> 1. Does wget -4 http://... work?
Yes
> 2. Does Wget work when you specify --disable-ipv6 to configure?
Yes
> What OS are you running this on?
Red Hat Linux release 6.2 (Zoot)
Tony
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> I got a "Name or service not known" error from wget 1.10 running on
> Linux. When I installed an earlier version of wget, it worked just
> fine. It also works just fine on version 1.10 running on Windows.
>
I got a "Name or
service not known" error from wget 1.10 running on Linux. When I installed an
earlier version of wget, it worked just fine. It also works just fine on
version 1.10 running on Windows. Any ideas?
Here's the output on
Linux:
wget --version
GNU Wget 1.9-be
Василевский Сергей <[EMAIL PROTECTED]> writes:
> sometimes this error appears:
> assertion "ptr != NULL" failed: file "xmalloc.c", line 190
What were you doing when the error appeared? Do you have the rest of
Wget's output?
sometimes this error appears:
assertion "ptr != NULL" failed: file "xmalloc.c", line 190
This worked perfectly, THANKS!
- Original Message -
From: "Hrvoje Niksic" <[EMAIL PROTECTED]>
To: "Tanton Gibbs" <[EMAIL PROTECTED]>
Cc:
Sent: Friday, April 22, 2005 8:50 AM
Subject: Re: 404 error & redirect
"Tanton Gibbs" <[EMAIL
. This
> works fine if I'm using internet explorer, but wget gives me a 404
> error :-( For some reason, it is not following the internal
> redirect.
Unlike IE, Wget doesn't show error responses to the user, so it can't
follow the redirect embedded in HTML. (And Wget doesn
...mirror them. Therefore, I have set up an ErrorDocument in
Apache that on 404 errors redirects to another page. The second page then
determines the referring URI and serves up the correct rpm. This works
fine if I'm using internet explorer, but wget gives me a 404 error :-( For
some re
On Tuesday 12 April 2005 06:17 pm, Jeanne McIlvain wrote:
> Hi!
> I attempted to download wget onto my mac. I was disappointed to find
> that it would not work. I thought that I read it was applicable to
> macs, but am I wrong? Please let me know, Thank you so much.
> - please respond to [EMA