Wget 1.9.1 doesn't work with large files. The soon-to-be-released
1.10 does, though.
ftp://ftp.deepspace6.net/pub/ds6/sources/wget/wget-1.10-alpha2.tar.bz2
Sami Krank [EMAIL PROTECTED] writes:
Last two days I have been learning and debugging NTLM on the latest
version of wget from cvs (1.10-alpha2+cvs-dev). Mainly I have
debugged on Linux, but noticed the same problems occur on win32
too. During testing I found four bugs that need to be fixed.
Daniel Stenberg [EMAIL PROTECTED] writes:
On Thu, 21 Apr 2005, Konrad Chan wrote:
After browsing the openssl newsgroup per Hrvoje's suggestion, I came
to a similar conclusion as well (cipher problem). However, I
couldn't find instructions on how to change the cipher for wget, I
tried all
Doug Kaufman [EMAIL PROTECTED] writes:
On Wed, 20 Apr 2005, Hrvoje Niksic wrote:
Herold Heiko [EMAIL PROTECTED] writes:
I am greatly surprised. Do you really believe that Windows users
outside an academic environment are proficient in using the compiler?
I have never seen a home Windows
I remembered why I never documented the SSL options. Because they are
badly named, accept weird values, and I wanted to fix them. I felt
(and still feel) that documenting them would make them official and
force us to keep supporting them forever.
Here is the list, extracted from `wget --help':
Mauro Tortonesi [EMAIL PROTECTED] writes:
there is another possible solution. reordering the addresses returned by
getaddrinfo so that IPv4 addresses are at the beginning of the list.
Will that cause problems in some setups? I thought there was an RFC
that mandated that the order of records
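A minimal sketch of the reordering idea -- a stable partition of the
list returned by getaddrinfo() so IPv4 entries come first (hypothetical
helper, not code from Wget):

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stddef.h>

/* Detach each node and append it to a v4 or v6 sublist, then
   concatenate the sublists, IPv4 first.  Relative order is kept. */
static struct addrinfo *
ipv4_first (struct addrinfo *list)
{
  struct addrinfo *v4 = NULL, *v4_tail = NULL;
  struct addrinfo *v6 = NULL, *v6_tail = NULL;
  while (list)
    {
      struct addrinfo *next = list->ai_next;
      list->ai_next = NULL;
      if (list->ai_family == AF_INET)
        {
          if (v4_tail) v4_tail->ai_next = list; else v4 = list;
          v4_tail = list;
        }
      else
        {
          if (v6_tail) v6_tail->ai_next = list; else v6 = list;
          v6_tail = list;
        }
      list = next;
    }
  if (!v4_tail)
    return v6;
  v4_tail->ai_next = v6;
  return v4;
}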
Leonid [EMAIL PROTECTED] writes:
Yes, wget 1.9.1 considers failure to connect a fatal error and
abandons retry attempts. I have submitted a patch several times for
fixing this and similar problems. Presumably, it will be
included in the future wget 1.11. If you need the fix now, you
Mauro Tortonesi [EMAIL PROTECTED] writes:
i totally agree with hrvoje here. in the worst case, we can add an
entry in the FAQ explaining how to compile wget with those buggy
versions of microsoft cc.
Umm. What FAQ? :-)
Mauro Tortonesi [EMAIL PROTECTED] writes:
well, to defend myself, i have to say that nc6 handles the -4 and -6
switches by simply setting the ai_family member of the hints struct
to be passed to getaddrinfo to PF_INET6 and PF_INET respectively,
instead of the PF_UNSPEC default. so, the
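For reference, a sketch of that approach (illustrative only; the
surrounding option handling is assumed):

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>

/* FAMILY is PF_INET for -4, PF_INET6 for -6, PF_UNSPEC by default. */
static struct addrinfo *
resolve_with_family (const char *host, int family)
{
  struct addrinfo hints, *res;
  memset (&hints, 0, sizeof hints);
  hints.ai_family = family;
  hints.ai_socktype = SOCK_STREAM;
  if (getaddrinfo (host, "http", &hints, &res) != 0)
    return NULL;
  return res;
}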
Herold Heiko [EMAIL PROTECTED] writes:
From my impressions of the Windows world, non-developers won't touch
source code anyway -- they will simply use the binary.
I feel I must dissent.
I am greatly surprised. Do you really believe that Windows users
outside an academic environment are
Mauro Tortonesi [EMAIL PROTECTED] writes:
On Wednesday 20 April 2005 04:58 am, Hrvoje Niksic wrote:
Mauro Tortonesi [EMAIL PROTECTED] writes:
i totally agree with hrvoje here. in the worst case, we can add an
entry in the FAQ explaining how to compile wget with those buggy
versions
Mauro Tortonesi [EMAIL PROTECTED] writes:
sorry, i forgot to tell you that when the -6 switch is used, nc6
also sets the IPV6_V6ONLY option for the PF_INET6 socket used in the
communication. that's why IPv4-compatible addresses are rejected.
Should Wget do the same? It seems to make sense to
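A sketch of the option nc6 reportedly sets (illustrative; error
handling omitted):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

static int
open_v6only_socket (void)
{
  int sock = socket (PF_INET6, SOCK_STREAM, 0);
  int on = 1;
  if (sock >= 0)
    /* Refuse IPv4(-compatible/-mapped) traffic on this socket. */
    setsockopt (sock, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof on);
  return sock;
}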
[ Cc'ing the Wget mailing list ]
Konrad Chan [EMAIL PROTECTED] writes:
Hi, I was wondering if you could provide some assistance on how to
resolve this problem.
wget using SSL works except for this site. Any reason why and how to
resolve?
It seems this site is sending something that the
Mauro Tortonesi [EMAIL PROTECTED] writes:
Note, however, that `host www.deepspace6.net' returns only the IPv4
address.
not for me:
[...]
What package does your `host' come from? Mine is from the
bind9-host package.
This discussion from 2003 seems to question the practical usefulness of
Is this the intended behavior of the -4/-6 switches:
$ wget -4 http://\[::127.0.0.1\]
--00:35:50--  http://[::127.0.0.1]/
           => `index.html'
Resolving ::127.0.0.1... failed: Name or service not known.
$ wget -6 http://\[::127.0.0.1\]
--00:35:54--  http://[::127.0.0.1]/
           => `index.html'
Jörn Nettingsmeier [EMAIL PROTECTED] writes:
[3]
wget does not parse css stylesheets and consequently does not
retrieve url() references, which leads to missing background
graphics on some sites.
this feature request has not been commented on yet. do you think it
might be useful?
I think
Quoting RFC 2045, section 6.8:
All line breaks or other characters not found in Table 1 [the
base64 alphabet] must be ignored by decoding software.
I would take that to mean that upon encountering, for example, the
character in the base64 stream, Wget should ignore it and proceed
to the
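A decoder with the lenient behavior the RFC describes might look like
this (an editor's sketch; the names are made up, not Wget's code):

#include <string.h>
#include <stddef.h>

static int
base64_value (int c)
{
  static const char tbl[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  const char *p = c ? strchr (tbl, c) : NULL;
  return p ? (int) (p - tbl) : -1;
}

/* Decode IN into OUT, silently skipping characters outside the
   base64 alphabet; returns the number of bytes produced. */
static size_t
base64_decode_lenient (const char *in, unsigned char *out)
{
  unsigned int accum = 0;
  int bits = 0;
  size_t n = 0;
  for (; *in && *in != '='; in++)
    {
      int v = base64_value ((unsigned char) *in);
      if (v < 0)
        continue;               /* ignore, rather than abort */
      accum = (accum << 6) | v;
      bits += 6;
      if (bits >= 8)
        {
          bits -= 8;
          out[n++] = (accum >> bits) & 0xff;
        }
    }
  return n;
}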
Jörn Nettingsmeier [EMAIL PROTECTED] writes:
wget does not parse css stylesheets and consequently does not
retrieve url() references, which leads to missing background
graphics on some sites.
this feature request has not been commented on yet. do you think it
might be useful?
I think it's very
Jörn Nettingsmeier [EMAIL PROTECTED] writes:
the same parser code might also work for urls in javascript. as it
is now, mouse-over effects with overlay images don't work, because
the second file is not retrieved. if we can come up with a good
heuristic to guess urls, it should work in both
Alan Thomas [EMAIL PROTECTED] writes:
I use Internet Explorer. I disabled Active Scripting and Scripting
of Java Applets, but I can still access this page normally (even
after a restart).
Then the problem is probably not JavaScript-related after all. A
debug log might help see where the
I've noticed this behavior with IPv6-enabled Wget 1.10:
--19:10:30-- http://www.deepspace6.net/
           => `index.html'
Resolving www.deepspace6.net... 2001:1418:13:3::1, 192.167.219.83
Connecting to www.deepspace6.net|2001:1418:13:3::1|:80... failed: No route to
host.
Connecting to
Alan Thomas [EMAIL PROTECTED] writes:
The log file looks like:
17:54:41 URL:https://123.456.89.01/blabla.nsf/HOBART?opeNFRAMESET
[565/565] -> "123.456.89.01/blabla.nsf/HOBART?opeNFRAMESET.html" [1]
FINISHED --17:54:41--
Downloaded: 565 bytes in 1 files
That's not a debug log. You get a
Alan Thomas [EMAIL PROTECTED] writes:
That's probably it. Is there anything I can do to automatically get
the files with wget?
I don't think so. Wget knows nothing about JavaScript.
The best way to verify this is to turn off JavaScript in your browser
and see if the site still works.
Herold Heiko [EMAIL PROTECTED] writes:
However there are still lots of people using Windows NT 4 or even
win95/win98, with old compilers, where the compilation won't work
without the patch. Even if we place a comment in the source file or
the windows/Readme many of those will be discouraged,
[EMAIL PROTECTED] (Steven M. Schweda) writes:
# if defined(_POSIX_TIMERS) && _POSIX_TIMERS > 0
That's fine, unless _POSIX_TIMERS is defined but empty, in which case
you get:
ptimer.c:95:46: operator '&&' has no right operand
I suppose we should then use:
#ifdef _POSIX_TIMERS
# if _POSIX_TIMERS > 0
... use POSIX timers ...
This doc makes
[EMAIL PROTECTED] (Larry Jones) writes:
Hrvoje Niksic writes:
I suppose we should then use:
#ifdef _POSIX_TIMERS
# if _POSIX_TIMERS > 0
The usual solution to this problem is:
#if _POSIX_TIMERS - 0 > 0
Neat trick, thanks.
That gets the right answer regardless of whether
[EMAIL PROTECTED] (Steven M. Schweda) writes:
Mr. Jones is probably close to the right answer with:
#if _POSIX_TIMERS - 0 > 0
I was looking for a way to make null look like positive, but a
little more reading
(http://www.opengroup.org/onlinepubs/009695399/basedefs/unistd.h.html)
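Putting the thread's conclusion in one place, the combined test works
whether _POSIX_TIMERS is undefined, defined as empty, or defined as a
number:

#include <unistd.h>

#if defined _POSIX_TIMERS && _POSIX_TIMERS - 0 > 0
/* ... use POSIX timers (clock_gettime etc.) ... */
#else
/* ... fall back to gettimeofday-based timing ... */
#endif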
Karsten Hopp [EMAIL PROTECTED] writes:
Does anybody know if the security vulnerabilities CAN-2004-1487 and
CAN-2004-1488 will be fixed in the new version?
Yes on both counts.
There seems to be at least some truth in the reports (ignore the
insulting tone of the reports).
Alan Thomas [EMAIL PROTECTED] writes:
A website uses frames, and when I view it in Explorer, it has the URL
https://123.456.89.01/blabla.nsf/HOBART?opeNFRAMESET and a bunch of PDF
files in two of the frames.
When I try to recursively download this web site, I don't get the
files.
[...]
Hrvoje Niksic [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] writes:
If possible, it seems preferable to me to use the platform's C
library regex support rather than make wget dependent on another
library...
Note that some platforms don't have library support for regexps, so
we'd have
Alan Thomas [EMAIL PROTECTED] writes:
I am having trouble getting the files I want using a wildcard
specifier (-A option = accept list). The following command works fine to
get an individual file:
wget
Tony Lewis [EMAIL PROTECTED] writes:
PS) Jens was mistaken when he said that https requires you to log
into the server. Some servers may require authentication before
returning information over a secure (https) channel, but that is not
a given.
That is true. HTTPS provides encrypted
It occurred to me that the 1.10 NEWS file declares IPv6 to be
supported. However, as far as I know, IPv6 doesn't work under
Windows.
Though it seems that Winsock 2 (which mswindows.h is apparently trying
to support) implements IPv6, I have a nagging suspicion that just
including winsock2.h and
Andrzej [EMAIL PROTECTED] writes:
So, please, answer at least to that question now: will you
enhance/modify Wget somehow, so that in the next release it could do
it?
The next release is in the feature freeze, so it will almost certainly
not support this feature.
However, IMHO it makes a lot
Andrzej [EMAIL PROTECTED] writes:
The next release is in the feature freeze, so it will almost certainly
not support this feature.
However, IMHO it makes a lot of sense to augment -I/-D with paths.
I've never been really satisfied with the interaction of -D and -np
anyway.
Can you
[EMAIL PROTECTED] (Steven M. Schweda) writes:
#define VERSION_STRING "1.10-alpha1_sms1"
Was there any reason to do this with a source module instead of a
simple macro in a simple header file?
At some point that approach made it easy to read or change the
version, as the script dist-wget
gu gu [EMAIL PROTECTED] writes:
On 4/13/05, Hrvoje Niksic [EMAIL PROTECTED] wrote:
That's strange. I've never seen a proxy that doesn't support the
former. Has this use of CONNECT become standard while I wasn't
looking? How does it allow you to establish FTP data connections?
Here
[EMAIL PROTECTED] writes:
If possible, it seems preferable to me to use the platform's C
library regex support rather than make wget dependent on another
library...
Note that some platforms don't have library support for regexps, so
we'd have to bundle anyway.
[EMAIL PROTECTED] (Steven M. Schweda) writes:
Also, am I missing something obvious, or should the configure script
(as in, "To configure Wget, run the configure script provided with
the distribution.") be somewhere in the CVS source?
The configure script is auto-generated and is therefore not
gu gu [EMAIL PROTECTED] writes:
I have an HTTP proxy; its address is http://10.0.0.172, port 80. I
think it is an HTTP/1.1 proxy, because it supports the CONNECT method.
My problem is
GET ftp://ftp.gnu.org/pub/gnu/wget/wget-1.9.tar.gz HTTP/1.0
Sanjay Madhavan [EMAIL PROTECTED] writes:
wget 1.9.1 fails when trying to download a very large file.
The download stopped in between, and attempting to resume shows a
negative amount remaining to be downloaded.
e.g. ftp://ftp.solnet.ch/mirror/SuSE/i386/9.2/iso/SUSE-Linux-9.2-FTP-DVD.iso
martin grönemeyer [EMAIL PROTECTED] writes:
I found a problem while downloading a large file via http. If I disable
verbose output, it works fine.
Versions of Wget released so far don't support large files. Even
without verbose output, writing the file would probably throw an error
after the
Bryan [EMAIL PROTECTED] writes:
I may run into this in the future. What is the threshold for large
files failing on the -current version of wget???
The threshold is 2G (2147483648 bytes).
I'm not expecting to d/l anything over 200MB, but is that even too
large for it?
That's not too
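The 2G figure is simply the range of a 32-bit signed file offset; a
minimal illustration of the wrap-around (an editor's sketch, assuming
a build where offsets are 32-bit, as in released 1.9.x binaries):

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  int32_t off = INT32_MAX;      /* 2147483647 bytes = 2G - 1 */
  printf ("largest offset: %d\n", off);
  /* One byte more and a 32-bit offset goes negative, which is also
     why interrupted resumes can report a negative remaining size. */
  printf ("wrapped:        %d\n", (int32_t) ((uint32_t) off + 1));
  return 0;
}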
Tobias Tiederle [EMAIL PROTECTED] writes:
let's say you have the following structure:
index.html
|-cool.html
| |-page1.html
| |-page2.html
| |- ...
|
|-crap.html
|-page1.html
|-page2.html
now you want to download the whole structure, but you want to
exclude the crap (with
not be supported (you can
always specify --without-ssl to avoid it), but this is easy enough to
fix, so here goes:
2005-04-08 Hrvoje Niksic [EMAIL PROTECTED]
* configure.in: When checking for OpenSSL headers, check for all
the ones that Wget is using.
Index: configure.in
Keith Moore [EMAIL PROTECTED] writes:
I downloaded the CVS version of wget today and tried to build it
under the latest (1.15-14) Cygwin.
Thanks for the report. Please note that ptimer.c has undergone
additional changes today, so you might want to update your source.
1. The first problem is
Keith Moore [EMAIL PROTECTED] writes:
FWIW - POSIX timers appear to be partially
supported. clock_gettime() is present, but there is no librt.a, so
it's in a nonstandard place (unless I am totally missing something).
Wget doesn't require clock_gettime to be exactly in librt.(so|a), but
it has
I've now fixed this by simply having Cygwin use the Windows high-res
timers, which are very precise.
When Cygwin is fixed, we can revert it to use POSIX timers, like god
intended.
It's a Makefile problem; just remove string_t.o from OBJS and it
should work.
Herold Heiko [EMAIL PROTECTED] writes:
In order to compile current cvs with MSVC, 3 patches are needed
(enclosed):
Thanks for testing!
1)
mswindows.c(118) : warning C4005: 'OVERFLOW' : macro redefinition
C:\PROGRA~1\MICROS~2\VC98\INCLUDE\math.h(415) : see previous
definition of
Tobias Tiederle [EMAIL PROTECTED] writes:
the only noteable output while compiling (your other two patches
applied) is:
\Vc7\PlatformSDK\Include\WinSock.h(689) : warning C4005: 'NO_ADDRESS' :
macro redefinition
host.c(59) : see previous definition of 'NO_ADDRESS'
I've now fixed
Jens Rösner [EMAIL PROTECTED] writes:
AFAIK, RegExp for (HTML?) file rejection was requested a few times,
but is not implemented at the moment.
But the shell-style globbing (which includes [Nn]ame) should still
work, even without regexps.
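That globbing is fnmatch(3)-style; a quick way to test a pattern such
as [Nn]ame against candidate file names (a sketch, not Wget's actual
accept/reject code):

#include <fnmatch.h>
#include <stdio.h>

int
main (void)
{
  const char *pat = "[Nn]ame*.html";
  /* fnmatch returns 0 on a match. */
  printf ("%d\n", fnmatch (pat, "name42.html", 0) == 0);  /* 1 */
  printf ("%d\n", fnmatch (pat, "Other.html", 0) == 0);   /* 0 */
  return 0;
}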
Is there a test server where one can try out NTLM authentication? I'm
working on adapting Daniel's code to Wget, and having a test server
would be of great help.
Behdad Esfahbod [EMAIL PROTECTED] writes:
If I use the 1.8.2 version, I get 100 different log files, but
only 14 index.html files.
And this was a bug, because those HTML files are likely to be both
overwritten and concurrently written to by, on average, 7.14 Wget
processes per
Behdad Esfahbod [EMAIL PROTECTED] writes:
Thanks. I tried the CVS version and the 1.8.2 version, on NFS,
using a loop like yours, couldn't reproduce the problem.
I am told that O_EXCL has worked just fine on NFS for many years now.
The open(2) man page on Linux is either outdated or assumes
Mister Jack [EMAIL PROTECTED] writes:
I've been suggested to use wget to retrieve a file by ftp like :
wget ftp://$USER:[EMAIL PROTECTED]/$URI -O $URI-$DATE
which I find nice, but my problem is that my login contains a @
([EMAIL PROTECTED] is my login). Hostname is different from the ftp
Behdad Esfahbod [EMAIL PROTECTED] writes:
I am told that O_EXCL has worked just fine on NFS for many years
now. The open(2) man page on Linux is either outdated or assumes
ancient or broken NFS implementations.
Well, the network I'm on is the Computer Science department's
graduate students
Wget shouldn't alter the page contents, except for converted links.
Is the funny character in places which Wget should know about
(e.g. URLs in links) or in the page text? Could you paste a minimal
excerpt from the page, before and after the garbling done by Wget?
Alternately, could you post a URL
I'm not sure what causes this problem, but I suspect it does not come
from Wget doing something wrong. That Notepad opens the file
correctly is indicative enough.
Maybe those browsers don't understand UTF-8 (or other) encoding of
Unicode when the file is opened on-disk?
Behdad Esfahbod [EMAIL PROTECTED] writes:
Well, sorry if it's all nonsense now: Last year I sent the
following mail, and got a reply confirming this bug and that it
may be changed to use pid instead of a serial in log filename.
Recently I was doing a project and had the same problem, I found
JASON JESSO [EMAIL PROTECTED] writes:
I renamed GETALL to GETALLJJ so as to avoid the
conflict. Now I get linker errors:
gcc -O2 -Wall -Wno-implicit -o wget cmpt.o connect.o
convert.o cookies.o ftp.o ftp-basic.o ftp-ls.o
ftp-opie.o hash.o headers.o host.o html-parse.o
html-url.o http.o
JASON JESSO [EMAIL PROTECTED] writes:
[...]
I found that GETALL conflicts with other headers on
the system.
We can easily rename GETALL to GLOB_GETALL or something like that and
will do so for the next release. Thanks for the report.
I now see the cause of the linking problem: Apache's fnmatch.h is
shadowing the system one. Either remove Apache's bogus fnmatch.h or
remove the code that defines SYSTEM_FNMATCH in sysdep.h.
I wonder if Apache does the fnmatch clobbering by default, or if the
system integrators botch things up.
Stephen Leaf [EMAIL PROTECTED] writes:
parameter option --stdout
this option would print the file being downloaded directly to stdout,
which would also mean that _only_ the file's content is printed: no
errors, no verbosity.
usefulness?
wget --stdout http://server.com/file.bz2 | bzcat
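For what it's worth, something close to this already seems possible
with existing options, since -O - writes the document to standard
output and -q silences Wget's own messages:

wget -q -O - http://server.com/file.bz2 | bzcat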
Jens Rösner [EMAIL PROTECTED] writes:
C:\wget>wget --proxy=on -x -r -l 2 -k -x -limit-rate=50k --tries=45
--directory-prefix=AsptDD
As Jens said, Wget 1.5.3 did not yet support bandwidth throttling.
Also please note that the option is named --limit-rate, not
-limit-rate.
As currently written, Wget really prefers to determine the file name
based on the URL, before the download starts (redirections are sort of
an exception here). It would be easy to add an option to change
"index.html" to "index.xml" or whatever you desire, but it would be
much harder to determine the
Martin Trautmann [EMAIL PROTECTED] writes:
is there a fix when file names are too long?
I'm afraid not. The question here would be, how should Wget know the
maximum size of file name the file system supports? I don't think
there's a portable way to determine that.
Maybe there should be a way
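On POSIX systems there is at least a partial answer: pathconf()
reports the per-directory file name limit, though that says nothing
about other platforms. A sketch:

#include <unistd.h>

/* Returns the maximum file name length for files created in DIR,
   or -1 if the limit is indeterminate. */
static long
max_name_len (const char *dir)
{
  return pathconf (dir, _PC_NAME_MAX);
}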
Martin Trautmann [EMAIL PROTECTED] writes:
On 2005-03-21 17:13, Hrvoje Niksic wrote:
Martin Trautmann [EMAIL PROTECTED] writes:
is there a fix when file names are too long?
I'm afraid not. The question here would be, how should Wget know the
maximum size of file name the file system
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
I don't see how and why a web site would generate headers (not
bodies, to be sure) larger than 64k.
To be honest, I'm less concerned about the 64K header limit than I
am about limiting a header line to 4096 bytes.
The 4k limit does
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
This patch imposes IMHO reasonable, yet safe, limits for reading server
responses into memory.
Your choice of default limits looks reasonable to me, but shouldn't
wget provide the user a way to override these limits
Roman Shiryaev [EMAIL PROTECTED] writes:
I usually download files using wget from one of ISP filesevers via
8Mbps ADSL under linux. And wget always shows me that download speed
is no more than ~830 kbytes/sec. Now I guess that this is a transfer
speed of really useful data only (i.e. wget
[EMAIL PROTECTED] writes:
I found some problem to download page from https://... URL, when I
have connection only through proxy. For NON http protocols I use
CONNECT method, but wget seems to not use it and access directly
https URLS. For http:// URL wget downloads fine.
Can you tell me, is
John Andrea [EMAIL PROTECTED] writes:
I'm trying to connect to a virtual host even though the DNS does not
point to that host. I believe this should work if I specify the ip
address of the host and then use the Host: header within the
request. A test with telnet tells me that this works.
Dan Jacobson [EMAIL PROTECTED] writes:
Is it still useful to mail to [EMAIL PROTECTED]? I don't think
anybody's home. Shall the address be closed?
If you're referring to Mauro being busy, I don't see it as a reason to
close the bug reporting address.
Gabor Istvan [EMAIL PROTECTED] writes:
I would like to know if it is possible to mirror or recursively download
web sites that have links like ' .php?dir=./ ' within. If yes what are the
options to apply?
I don't see why that wouldn't work. Something like `wget -r URL'
should apply.
Martin Trautmann [EMAIL PROTECTED] writes:
I'm afraid that reading the URLs from an input file can't be passed
through a -D filter? What's a reasonable behavior of combining -i
and -D?
-D filters the URLs encountered with -r. Specifying an input file is
the same as specifying those URLs on
Thanks for the pointer. Note that a `--active-ftp' is not necessary
in the CVS version because every --option has the equivalent
--no-option. This means that people who don't want passive FTP can
specify `--no-passive-ftp', or `--passive-ftp=no'.
Brad Andersen [EMAIL PROTECTED] writes:
This option appears to be missing from wget --help, however,
it is in the documentation. It is not working in 1.9 or
1.9.1.
That option will first appear in Wget 1.10 and is currently available
in CVS. Where did you find
With today's prevalence of NAT, I believe that passive FTP should be
made default.
On the systems without NAT, both types should work, and on systems
that use NAT only passive FTP will work. This makes it the obvious
choice to be the default. I believe web browsers have been doing the
same for
Noèl Köthe [EMAIL PROTECTED] writes:
On Wednesday, 23.02.2005, at 23:13 +0100, Hrvoje Niksic wrote:
The most requested feature of the last several years finally arrives
-- large file support. With this patch Wget should be able to
download files larger than 2GB on systems that support them
Belov, Charles [EMAIL PROTECTED] writes:
I would like to use wget 1.9.1 instead of the wget 1.8.x which is
installed on our server. I downloaded 1.9.1 from the Gnu ftp site,
and issued the command:
make -f Makefile.in wget191
You're not supposed to use Makefile.in directly. Run
Gisle Vanem [EMAIL PROTECTED] writes:
It doesn't seem the patches to support 2GB files work on
Windows. Wget hangs indefinitely at the end of transfer. E.g.
[...]
I seem to be unable to repeat this.
Does this happen only with large files, or with all files on the
large-file-enabled version
Steve Thompson [EMAIL PROTECTED] writes:
I have found in another context that the Windows C run-time library
can't handle files larger than 2GB in any context, when using fopen,
etc. The size of off_t is 4 bytes on IA32.
I know that, but stdio is not necessarily tied to off_t anyway --
except
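For instance, newer Microsoft C run-times expose 64-bit stdio
positioning regardless of the 4-byte off_t; a sketch, assuming a CRT
that provides _fseeki64 (older ones only have the low-level
_lseeki64):

#include <stdio.h>

static int
seek_to (FILE *fp, __int64 offset)
{
  /* 64-bit counterpart of fseek (fp, offset, SEEK_SET). */
  return _fseeki64 (fp, offset, SEEK_SET);
}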
Gisle Vanem [EMAIL PROTECTED] writes:
There is strace for Win-NT too. But I dare not install it to find
out.
Hmm, OK.
PS. it is quite annoying to get 2 copies of every message.
I'll try to remember to edit the headers to leave your private address
out.
Also, there should be a Reply-to:
Please note that Wget 1.9.x doesn't support downloading of 2G+ files.
To download large files, get the CVS version of Wget (see
http://wget.sunsite.dk for instructions.)
Gisle Vanem [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
In other words, large files now work on Windows? I must admit, that
was almost too easy. :-)
Don't open the champagne bottle just yet :)
Too late, the bottle is already empty. :-)
Now could someone try this with Borland
Gisle Vanem [EMAIL PROTECTED] writes:
Another option was to simply set the (system) errno after the Winsock
operations, and have our own strerror that recognizes them. (That
assumes that Winsock errno values don't conflict with the system ones,
which I believe is the case.)
That assumption
Herold Heiko [EMAIL PROTECTED] writes:
That said, simplifying the int rdsize line in retr.c did not solve it,
but I tried the following; we have:
#ifndef MIN
# define MIN(i, j) ((i) <= (j) ? (i) : (j))
#endif
int rdsize = exact ? MIN (toread - sum_read, dlbufsize) : dlbufsize;
Herold Heiko [EMAIL PROTECTED] writes:
That does solve it; in fact I found some MS articles suggesting the
same thing. The attached patch works around the problem by disabling
optimization selectively.
I was able to retrieve a 2.5GB file with ftp.
In other words, large files now work on Windows? I
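The kind of selective workaround in question, sketched with MSVC's
optimize pragma (the actual patch may differ):

#ifdef _MSC_VER
#pragma optimize ("", off)      /* optimizer miscompiles this function */
#endif

/* ... the affected function ... */

#ifdef _MSC_VER
#pragma optimize ("", on)
#endif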
Is there a way to get the functionality of open(..., O_CREAT|O_EXCL)
under Windows? For those who don't know, O_EXCL opens the file
exclusively, guaranteeing that the file we're opening will not be
overwritten. (Note that it's not enough to check that the file
doesn't exist before opening it; it
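On Win32, the usual answer is CreateFile with CREATE_NEW, which fails
if the file already exists; a sketch (not necessarily what Wget ended
up doing):

#include <windows.h>

static HANDLE
create_exclusive (const char *path)
{
  /* CREATE_NEW: fail with ERROR_FILE_EXISTS if PATH is already there,
     the moral equivalent of open (path, O_CREAT | O_EXCL). */
  return CreateFileA (path, GENERIC_WRITE, 0, NULL,
                      CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
}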
Herold Heiko [EMAIL PROTECTED] writes:
I tried a test compile just now, with Visual C++ 6 I get different
errors:
Thanks for checking it.
string_t.[ch] - iswblank doesn't seem to be available,
For now, just remove string_t from the Makefile. It's not used
anywhere yet.
Also, the large
Herold Heiko [EMAIL PROTECTED] writes:
http.c(503) : warning C4090: 'function' : different 'const'
qualifiers
[...]
I don't quite understand these warnings. Did they occur before?
Definitely; I tried with a rev from March 2004, same warnings.
Then we can ignore them for now.
I
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Doesn't GCC work for this target?
It does, in the form of Cygwin and MingW. But Heiko was using MS
VC before, and we have catered to broken compilers before, so it
doesn't hurt to try.
Simone Piunno [EMAIL PROTECTED] writes:
On Tuesday 22 February 2005 00:10, Hrvoje Niksic wrote:
If wide chars were in that message, you could no longer print it with
printf, which means that a majority of gettext-using programs would be
utterly broken, Wget included. I imagine I would have
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Is it possible to portably use open() and retain large file support?
Try the AC_SYS_LARGEFILE autoconf macro.
That's what I thought I was using. I was just afraid that open()
wasn't correctly encompassed by the large file API's, a fear that
proved
Maciej W. Rozycki [EMAIL PROTECTED] writes:
I wonder what the difference is between AC_FUNC_FSEEKO and
AC_CHECK_FUNCS(fseeko). The manual doesn't seem to explain.
Well, that's what I have on my local system:
- Macro: AC_FUNC_FSEEKO
If the `fseeko' function is available, define
When opening files, Wget takes care (by default) to not overwrite an
existing file, and to tell the user where the file is to be saved.
However, the defense against overwriting may fail because Wget
determines the file name before attempting the download, but only
opens the file when the data
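A race-free version of that defense could let O_EXCL arbitrate at open
time and retry with numbered suffixes; a simplified sketch of the idea
(hypothetical helper, not Wget's unique-name code verbatim):

#include <fcntl.h>
#include <errno.h>
#include <stdio.h>

static int
open_unique (const char *base, char *chosen, size_t size)
{
  int i;
  for (i = 0; i < 1000; i++)
    {
      int fd;
      if (i == 0)
        snprintf (chosen, size, "%s", base);
      else
        snprintf (chosen, size, "%s.%d", base, i);
      fd = open (chosen, O_WRONLY | O_CREAT | O_EXCL, 0666);
      if (fd >= 0 || errno != EEXIST)
        return fd;              /* opened, or a real error */
    }
  return -1;                    /* too many existing variants */
}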
[EMAIL PROTECTED] (Steven M. Schweda) writes:
SunOS 5.9 /usr/include/fcntl.h:
[...]
/* large file compilation environment setup */
#if !defined(_LP64) && _FILE_OFFSET_BITS == 64
#ifdef __PRAGMA_REDEFINE_EXTNAME
#pragma redefine_extname open open64
Mauro Tortonesi [EMAIL PROTECTED] writes:
On Sunday 20 February 2005 06:31 pm, Hrvoje Niksic wrote:
string_t.c uses the function iswblank, which doesn't seem to exist
on Solaris 8 I tried to compile it on. (Compilation is likely
broken on other non-Linux platforms as well for the same reason
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Actually if (sizeof(number) == 8) is much more readable than any
preprocessor clutter and yields exactly the same.
Agreed, in some cases. In others it yields pretty annoying
compiler warnings.