Mauro Tortonesi <[EMAIL PROTECTED]> writes:
> i totally agree with hrvoje here. in the worst case, we can add an
> entry in the FAQ explaining how to compile wget with those buggy
> versions of microsoft cc.
Umm. What FAQ? :-)
Leonid <[EMAIL PROTECTED]> writes:
> Yes, wget 1.9.1 considers failure to connect a fatal error and
> abandons retry attempts. I have submitted a patch for fixing this
> and similar problems several times. Presumably, it will be
> included in the future wget 1.11. If you need the fix now
Mauro Tortonesi <[EMAIL PROTECTED]> writes:
> well, this is up to us and the semantics we choose for the -4 and -6
> switches. theoretically, since an IPv6 connection to
> ::127.0.0.1 is semantically equivalent to an IPv4 connection to
> 127.0.0.1, it makes sense to reject IPv4-compatible ad
Mauro Tortonesi <[EMAIL PROTECTED]> writes:
> there is another possible solution. reordering the addresses returned by
> getaddrinfo so that IPv4 addresses are at the beginning of the list.
Will that cause problems in some setups? I thought there was an RFC
that mandated that the order of recor
Is this the intended behavior of the -4/-6 switches:
$ wget -4 http://\[::127.0.0.1\]
--00:35:50-- http://[::127.0.0.1]/
=> `index.html'
failed: Name or service not known.
$ wget -6 http://\[::127.0.0.1\]
--00:35:54-- http://[::127.0.0.1]/
=> `index.html
Mauro Tortonesi <[EMAIL PROTECTED]> writes:
>> Note, however, that `host www.deepspace6.net' returns only the IPv4
>> address.
>
> not for me:
[...]
What package does your `host' come from? Mine is from the
"bind9-host" package.
>> This discussion from 2003 seems to question the practical usefu
Alan Thomas <[EMAIL PROTECTED]> writes:
> It doesn't seem to download the file when I use the debug option.
> It just quickly says "finished."
Does the debug log look like it's trying to parse links present in the
file? Is the file content type "text/html"?
Jörn Nettingsmeier <[EMAIL PROTECTED]> writes:
> the same parser code might also work for urls in javascript. as it
> is now, mouse-over effects with overlay images don't work, because
> the second file is not retrieved. if we can come up with a good
> heuristics to guess urls, it should work in b
Jörn Nettingsmeier <[EMAIL PROTECTED]> writes:
>>> wget does not parse css stylesheets and consequently does not
>>> retrieve url() references, which leads to missing background
>>> graphics on some sites.
>
> this feature request has not been commented on yet. do you think it
> might be useful
Quoting RFC 2045, section 6.8:
All line breaks or other characters not found in Table 1 [the
base64 alphabet] must be ignored by decoding software.
I would take that to mean that upon encountering, for example, the
character "<" in the base64 stream, Wget should ignore it and proceed
to the
Jörn Nettingsmeier <[EMAIL PROTECTED]> writes:
>>> [3]
>>>
>>> wget does not parse css stylesheets and consequently does not
>>> retrieve url() references, which leads to missing background
>>> graphics on some sites.
>
> this feature request has not been commented on yet. do you think it
> might be u
Alan Thomas <[EMAIL PROTECTED]> writes:
> The log file looks like:
>
> 17:54:41 URL:https://123.456.89.01/blabla.nsf/HOBART?opeNFRAMESET
> [565/565] -> "123.456.89.01/blabla.nsf/HOBART?opeNFRAMESET.html" [1]
>
> FINISHED --17:54:41--
> Downloaded: 565 bytes in 1 files
That's not a debug log.
I've noticed this behavior with IPv6-enabled Wget 1.10:
--19:10:30-- http://www.deepspace6.net/
=> `index.html'
Resolving www.deepspace6.net... 2001:1418:13:3::1, 192.167.219.83
Connecting to www.deepspace6.net|2001:1418:13:3::1|:80... failed: No route to
host.
Connecting to www.deeps
Alan Thomas <[EMAIL PROTECTED]> writes:
> I use Internet Explorer. I disabled Active Scripting and Scripting
> of Java Applets, but I can still access this page normally (even
> after a restart).
Then the problem is probably not JavaScript-related after all. A
debug log might help see where the
Alan Thomas <[EMAIL PROTECTED]> writes:
> That's probably it. Is there anything I can do to automatically get
> the files with wget?
I don't think so. Wget knows nothing about JavaScript.
The best way to verify this is to turn off JavaScript in your browser
and see if the site still works.
"Alan Thomas" <[EMAIL PROTECTED]> writes:
> A website uses frames, and when I view it in Explorer, it has the URL
> https://123.456.89.01/blabla.nsf/HOBART?opeNFRAMESET and a bunch of PDF
> files in two of the frames.
>
> When I try to recursively download this web site, I don't get the
> files
"Karsten Hopp" <[EMAIL PROTECTED]> writes:
> Does anybody know if the security vulnerabilities CAN-2004-1487 and
> CAN-2004-1488 will be fixed in the new version ?
Yes on both counts.
> There seems to be at least some truth in the reports (ignore the
> insulting tone of the reports).
>
> http://
[EMAIL PROTECTED] (Steven M. Schweda) writes:
>Mr. Jones is probably close to the right answer with:
>
>> #if _POSIX_TIMERS - 0 > 0
>
> I was looking for a way to make null look like positive, but a
> little more reading
> ("http://www.opengroup.org/onlinepubs/009695399/basedefs/unistd.h.html"
[EMAIL PROTECTED] (Larry Jones) writes:
> Hrvoje Niksic writes:
>>
>> I suppose we should then use:
>>
>> #ifdef _POSIX_TIMERS
>> # if _POSIX_TIMERS > 0
>
> The usual solution to this problem is:
>
> #if _POSIX_TIMERS - 0 > 0
Neat tric
[EMAIL PROTECTED] (Steven M. Schweda) writes:
>> # if defined(_POSIX_TIMERS) && _POSIX_TIMERS > 0
>
>That's fine, if you prefer:
>
> ptimer.c:95:46: operator '&&' has no right operand
I suppose we should then use:
#ifdef _POSIX_TIMERS
# if _POSIX_TIMERS > 0
... use POSIX timers ...
>Thi
Herold Heiko <[EMAIL PROTECTED]> writes:
> However there are still lots of people using Windows NT 4 or even
> win95/win98, with old compilers, where the compilation won't work
> without the patch. Even if we place a comment in the source file or
> the windows/Readme many of those will be discour
Chris McKenzie <[EMAIL PROTECTED]> writes:
> for wget -c, a range is specified, i.e.
>
> GET /dubai.jpg HTTP/1.0
> User-Agent: Wget/1.9.1
> Host: localhost:4400
> Accept: */*
> Connection: Keep-Alive
> Range: bytes=40-
>
> However, HTTP/1.0 (RFC 1945) does not specify a range although it is
> i
[EMAIL PROTECTED] (Steven M. Schweda) writes:
> gcc -I. -I. -I/opt/include -DHAVE_CONFIG_H
> -DSYSTEM_WGETRC=\"/usr/local/etc/wg
> etrc\" -DLOCALEDIR=\"/usr/local/share/locale\" -O2 -Wall -Wno-implicit -c
> ptimer
> .c
> ptimer.c:95:20: operator '>' has no left operand
> [...]
Thanks for repo
"Andrzej" <[EMAIL PROTECTED]> writes:
>> The next release is in the feature freeze, so it will almost certainly
>> not support this feature.
>>
>> However, IMHO it makes a lot of sense to augment -I/-D with paths.
>> I've never been really satisfied with the interaction of -D and -np
>> anyway.
>
"Andrzej " <[EMAIL PROTECTED]> writes:
> So, please, answer at least to that question now: will you
> enhance/modify Wget somehow, so that in the next release it could do
> it?
The next release is in the feature freeze, so it will almost certainly
not support this feature.
However, IMHO it makes
It occurred to me that the 1.10 NEWS file declares IPv6 to be
supported. However, as far as I know, IPv6 doesn't work under
Windows.
Though it seems that Winsock 2 (which mswindows.h is apparently trying
to support) implements IPv6, I have a nagging suspicion that just
including and defining HAV
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> PS) Jens was mistaken when he said that https requires you to log
> into the server. Some servers may require authentication before
> returning information over a secure (https) channel, but that is not
> a given.
That is true. HTTPS provides encrypted
"Alan Thomas" <[EMAIL PROTECTED]> writes:
> I am having trouble getting the files I want using a wildcard
> specifier (-A option = accept list). The following command works fine to
> get an individual file:
>
> wget
> https://164.224.25.30/FY06.nsf/($reload)/85256F8A00606A1585256F900040A32F/$
Hrvoje Niksic <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] writes:
>
>> If possible, it seems preferable to me to use the platform's C
>> library regex support rather than make wget dependent on another
>> library...
>
> Note that some platforms d
[EMAIL PROTECTED] writes:
> If possible, it seems preferable to me to use the platform's C
> library regex support rather than make wget dependent on another
> library...
Note that some platforms don't have library support for regexps, so
we'd have to bundle anyway.
gu gu <[EMAIL PROTECTED]> writes:
> On 4/13/05, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
>> That's strange. I've never seen a proxy that doesn't support the
>> former. Has this use of CONNECT become standard while I wasn't
>> looking? Ho
[EMAIL PROTECTED] (Steven M. Schweda) writes:
> #define VERSION_STRING "1.10-alpha1_sms1"
>
> Was there any reason to do this with a source module instead of a
> simple macro in a simple header file?
At some point that approach made it easy to read or change the
version, as the script "dist
gu gu <[EMAIL PROTECTED]> writes:
> I have an http proxy; its address is http://10.0.0.172:80. I think it
> is an HTTP/1.1 proxy, because I can use the CONNECT method.
>
> My problem is
> GET ftp://ftp.gnu.org/pub/gnu/wget/wget-1.9.tar.gz HTTP/1.0
>
[EMAIL PROTECTED] (Steven M. Schweda) writes:
> Also, am I missing something obvious, or should the configure script
> (as in, "To configure Wget, run the configure script provided with
> the distribution.") be somewhere in the CVS source?
The configure script is auto-generated and is therefore n
Bryan <[EMAIL PROTECTED]> writes:
> I may run into this in the future. What is the "threshold" for large
> files failing on the -current version of wget???
The threshold is 2G (2147483648 bytes).
> I'm not expecting to d/l anything over 200MB, but is that even too
> large for it?
That's not to
martin grönemeyer <[EMAIL PROTECTED]> writes:
> I found a problem while downloading a large file via http. If I disable
> verbose output, it works fine.
Versions of Wget released so far don't support large files. Even
without verbose output, writing the file would probably throw an error
after t
"Sanjay Madhavan" <[EMAIL PROTECTED]> writes:
> wget 1.9.1 fails when trying to download a very large file.
>
> The download stopped in between and attempting to resume shows a negative
> sized balance to be downloaded.
>
> e.g.ftp://ftp.solnet.ch/mirror/SuSE/i386/9.2/iso/SUSE-Linux-9.2-FTP-DV
Tobias Tiederle <[EMAIL PROTECTED]> writes:
> let's say you have the following structure:
>
> index.html
> |-cool.html
> | |-page1.html
> | |-page2.html
> | |- ...
> |
> |-crap.html
> | |-page1.html
> | |-page2.html
>
> now you want to download the whole structure, but you want to
> exclude
I've now fixed this by simply having Cygwin use the Windows high-res
timers, which are very precise.
When Cygwin is fixed, we can revert it to use POSIX timers, like god
intended.
Keith Moore <[EMAIL PROTECTED]> writes:
>> No, there isn't. Sorry.
[...]
> So, there's the answer. Cygwin's POSIX timer support is incomplete.
Then why on earth do they #define _POSIX_TIMERS?
It's easy enough to add a configure test for clock_getres, but it's
damn annoying to have to do so in t
Keith Moore <[EMAIL PROTECTED]> writes:
> FWIW - POSIX timers appear to be partially
> supported. clock_gettime() is present, but there is no librt.a, so
> it's in a nonstandard place (unless I am totally missing something).
Wget doesn't require clock_gettime to be exactly in librt.(so|a), but
it
Keith Moore <[EMAIL PROTECTED]> writes:
> I downloaded the CVS version of wget today and tried to build it
> under the latest (1.15-14) Cygwin.
Thanks for the report. Please note that ptimer.c has undergone
additional changes today, so you might want to update your source.
> 1. The first proble
h, err.h, pem.h, rand.h, des.h, md4.h, and md5.h.
0.9.5a is five years old and would normally not be supported (you can
always specify --without-ssl to avoid it), but this is easy enough to
fix, so here goes:
2005-04-08 Hrvoje Niksic <[EMAIL PROTECTED]>
* configure.in: When checki
Tobias Tiederle <[EMAIL PROTECTED]> writes:
> the only noteable output while compiling (your other two patches
> applied) is:
> \Vc7\PlatformSDK\Include\WinSock.h(689) : warning C4005: 'NO_ADDRESS' :
> macro redefinition
> host.c(59) : see previous definition of 'NO_ADDRESS'
I've now fixe
Herold Heiko <[EMAIL PROTECTED]> writes:
> In order to compile current cvs with msvc 3 patches are needed
> (enclosed):
Thanks for testing!
> 1)
> mswindows.c(118) : warning C4005: 'OVERFLOW' : macro redefinition
> C:\PROGRA~1\MICROS~2\VC98\INCLUDE\math.h(415) : see previous
> definition
It's a Makefile problem; just remove string_t.o from OBJS and it
should work.
As of two days ago, the NTLM authentication support is in CVS,
although it has undergone very little testing. (To quote Linus, "If
it compiles, it's perfect.")
If someone has access to an NTLM-authorizing web server, please try it
out with Wget and let us know how it goes. Unfortunately, this
ve
"Jens Rösner" <[EMAIL PROTECTED]> writes:
> AFAIK, RegExp for (HTML?) file rejection was requested a few times,
> but is not implemented at the moment.
But the shell-style globbing (which includes [Nn]ame) should still
work, even without regexps.
Daniel Stenberg <[EMAIL PROTECTED]> writes:
> I had friends providing the test servers for both host and proxy
> authentication when I've worked on NTLM code.
It's a shame that those test servers are no longer available. I don't
think it will be possible to finish the NTLM code without some sort
Is there a test server where one can try out NTLM authentication? I'm
working on adapting Daniel's code to Wget, and having a test server
would be of great help.
"Nijs, J. de" <[EMAIL PROTECTED]> writes:
> #
> C:\Grabtest\wget.exe -r --tries=3 http://www.xs4all.nl/~npo/ -o
> C:/Grabtest/Results/log
> #
> --16:23:02-- http://www.x
Behdad Esfahbod <[EMAIL PROTECTED]> writes:
>> I am told that O_EXCL has worked just fine on NFS for many years
>> now. The open(2) man page on Linux is either outdated or assumes
>> ancient or broken NFS implementations.
>
> Well, the network I'm on is the Computer Science department's
> graduat
Mister Jack <[EMAIL PROTECTED]> writes:
> I've been suggested to use wget to retrieve a file by ftp like :
> wget ftp://$USER:[EMAIL PROTECTED]/$URI -O $URI-$DATE
> which I find nice, but my probleme is that my login contains a @ (
> [EMAIL PROTECTED] is my login. Hostname is different from the ft
Behdad Esfahbod <[EMAIL PROTECTED]> writes:
> Thanks. I tried the CVS version and the 1.8.2 version, on NFS,
> using a loop like yours, couldn't reproduce the problem.
I am told that O_EXCL has worked just fine on NFS for many years now.
The open(2) man page on Linux is either outdated or assumes ancient
or broken NFS implementations.
Behdad Esfahbod <[EMAIL PROTECTED]> writes:
> If I use the 1.8.2 version, although I get 100 different log files,
> but get only 14 index.html files.
And this was a bug, because those HTML files are likely to be both
overwritten and concurrently written to by, on average, 7.14 Wget
processes per
I'm not sure what causes this problem, but I suspect it does not come
from Wget doing something wrong. That Notepad opens the file
correctly is indicative enough.
Maybe those browsers don't understand UTF-8 (or other) encoding of
Unicode when the file is opened on-disk?
Wget shouldn't alter the page contents, except for converted links.
Is the funny character in places which Wget should know about
(e.g. URLs in links) or in the page text? Could you page a minimal
excerpt from the page, before and after garbling done by Wget?
Alternately, could you post a URL wher
Behdad Esfahbod <[EMAIL PROTECTED]> writes:
> Well, sorry if it's all nonsense now: Last year I sent the
> following mail, and got a reply confirming this bug and that it
> may be changed to use pid instead of a serial in log filename.
> Recently I was doing a project and had the same problem, I
I now see the cause of the linking problem: Apache's fnmatch.h is
shadowing the system one. Either remove Apache's bogus fnmatch.h or
remove the code that defines SYSTEM_FNMATCH in sysdep.h.
I wonder if Apache does the fnmatch clobbering by default, or if the
system integrators botch things up. If it is the forme
JASON JESSO <[EMAIL PROTECTED]> writes:
[...]
> I found that GETALL conflicts with other headers on
> the system.
We can easily rename GETALL to GLOB_GETALL or something like that and
will do so for the next release. Thanks for the report.
JASON JESSO <[EMAIL PROTECTED]> writes:
> I rename the GETALL to GETALLJJ as to avoid the
> conflict. Now I get linker errors:
>
> gcc -O2 -Wall -Wno-implicit -o wget cmpt.o connect.o
> convert.o cookies.o ftp.o ftp-basic.o ftp-ls.o
> ftp-opie.o hash.o headers.o host.o html-parse.o
> html-url.o h
"Jens Rösner" <[EMAIL PROTECTED]> writes:
>> C:\wget>wget --proxy=on -x -r -l 2 -k -x -limit-rate=50k --tries=45
>> --directory-prefix=AsptDD
As Jens said, Wget 1.5.3 did not yet support bandwidth throttling.
Also please note that the option is named "--limit-rate", not
"-limit-rate".
Stephen Leaf <[EMAIL PROTECTED]> writes:
> parameter option --stdout
> this option would print the file being downloaded directly to stdout. which
> would also mean that _only_ the file's content is printed. no errors,
> verbosity.
>
> usefulness?
> wget --stdout http://server.com/file.bz2 | bzc
As currently written, Wget really prefers to determine the file name
based on the URL, before the download starts (redirections are sort of
an exception here). It would be easy to add an option to change
"index.html" to "index.xml" or whatever you desire, but it would be
much harder to determine t
Martin Trautmann <[EMAIL PROTECTED]> writes:
> On 2005-03-21 17:13, Hrvoje Niksic wrote:
>> Martin Trautmann <[EMAIL PROTECTED]> writes:
>>
>> > is there a fix when file names are too long?
>>
>> I'm afraid not. The question here would be,
Martin Trautmann <[EMAIL PROTECTED]> writes:
> is there a fix when file names are too long?
I'm afraid not. The question here would be, how should Wget know the
maximum size of file name the file system supports? I don't think
there's a portable way to determine that.
Maybe there should be a w
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>
>> I don't see how and why a web site would generate headers (not
>> bodies, to be sure) larger than 64k.
>
> To be honest, I'm less concerned about the 64K header limit than I
>
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> As I said, I think the proposed limits are reasonable, but what if
> they are not for a given user mirroring some website?
I don't see how and why a web site would generate headers (not bodies,
to be sure) larger than 64k. It is highly doubtful that suc
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>
>> This patch imposes IMHO reasonable, yet safe, limits for reading server
>> responses into memory.
>
> Your choice of default limits looks reasonable to me, but shouldn't
> wget
Dan Jacobson <[EMAIL PROTECTED]> writes:
> Is it still useful to mail to [EMAIL PROTECTED]? I don't think
> anybody's home. Shall the address be closed?
If you're referring to Mauro being busy, I don't see it as a reason to
close the bug reporting address.
John Andrea <[EMAIL PROTECTED]> writes:
> I'm trying to connect to a virtual host even though the DNS does not
> point to that host. I believe this should work if I specify the ip
> address of the host and then use the Host: header within the
> request. A test with telnet tells me that this works
[EMAIL PROTECTED] writes:
> I found some problem to download page from https://... URL, when I
> have connection only through proxy. For NON http protocols I use
> CONNECT method, but wget seems to not use it and access directly
> https URLS. For http:// URL wget downloads fine.
>
> Can you tell m
Roman Shiryaev <[EMAIL PROTECTED]> writes:
> I usually download files using wget from one of ISP filesevers via
> 8Mbps ADSL under linux. And wget always shows me that download speed
> is no more than ~830 kbytes/sec. Now I guess that this is a transfer
> speed of really useful data only (i.e. wge
Martin Trautmann <[EMAIL PROTECTED]> writes:
> I'm afraid that reading the URLs from an input file can't be passed
> through a -D filter? What's a reasonable behavior of combining -i
> and -D?
-D filters the URLs encountered with -r. Specifying an input file is
the same as specifying those URLs
Gabor Istvan <[EMAIL PROTECTED]> writes:
> I would like to know if it is possible to mirror or recursively download
> web sites that have links like ' .php?dir=./ ' within. If yes what are the
> options to apply?
I don't see why that wouldn't work. Something like `wget -r URL'
should apply.
Thanks for the pointer. Note that a `--active-ftp' is not necessary
in the CVS version because every --option has the equivalent
--no-option. This means that people who don't want passive FTP can
specify `--no-passive-ftp', or `--passive-ftp=no'.
With today's prevalence of NAT, I believe that passive FTP should be
made default.
On the systems without NAT, both types should work, and on systems
that use NAT only passive FTP will work. This makes it the obvious
choice to be the default. I believe web browsers have been doing the
same for a
Brad Andersen <[EMAIL PROTECTED]> writes:
> This option appears to be missing from "wget --help", however,
> it is in the documentation. It is not working in 1.9 or
> 1.9.1.
That option will first appear in Wget 1.10 and is currently available
in CVS. Where did you find docume
"Belov, Charles" <[EMAIL PROTECTED]> writes:
> I would like to use wget 1.9.1 instead of the wget 1.8.x which is
> installed on our server. I downloaded 1.9.1 from the Gnu ftp site,
> and issued the command:
>
> make -f Makefile.in wget191
You're not supposed to use Makefile.in directly. Run `./
Noèl Köthe <[EMAIL PROTECTED]> writes:
> Am Mittwoch, den 23.02.2005, 23:13 +0100 schrieb Hrvoje Niksic:
>
>> The most requested feature of the last several years finally arrives
>> -- large file support. With this patch Wget should be able to
>> download files l
Gisle Vanem <[EMAIL PROTECTED]> writes:
> It doesn't seem the patches to support >2GB files works on
> Windows. Wget hangs indefinitely at the end of transfer. E.g.
[...]
I seem to be unable to repeat this.
Does this happen with only with large files, or with all files on
large-file-enabled ver
Gisle Vanem <[EMAIL PROTECTED]> writes:
> There is strace for Win-NT too. But I dare not install it to find
> out.
Hmm, OK.
> PS. it is quite annoying to get 2 copies of every message.
I'll try to remember to edit the headers to leave your private address
out.
> Also, there should be a "Reply-
Steve Thompson <[EMAIL PROTECTED]> writes:
> I have found in another context that the Windows C run-time library
> can't handle files larger than 2GB in any context, when using fopen,
> etc. The size of off_t is 4 bytes on IA32.
I know that, but stdio is not necessarily tied to off_t anyway --
ex
Gisle Vanem <[EMAIL PROTECTED]> writes:
> It doesn't seem the patches to support >2GB files works on
> Windows. Wget hangs indefinitely at the end of transfer.
Is there a way to trace what syscall Wget is stuck at? Under Cygwin I
can try to use strace, but I'm not sure if I'll be able to repeat
Gisle Vanem <[EMAIL PROTECTED]> writes:
>> Another option was to simply set the (system) errno after the Winsock
>> operations, and have our own strerror that recognizes them. (That
>> assumes that Winsock errno values don't conflict with the system ones,
>> which I believe is the case.)
>
> That
Gisle Vanem <[EMAIL PROTECTED]> writes:
> "Hrvoje Niksic" wrote:
>
>> In other words, large files now work on Windows? I must admit, that
>> was almost too easy. :-)
>
> Don't open the champagne bottle just yet :)
Too late, the bottle is already em
Please note that Wget 1.9.x doesn't support downloading of 2G+ files.
To download large files, get the CVS version of Wget (see
http://wget.sunsite.dk for instructions.)
Is there a way to get the functionality of open(..., O_CREAT|O_EXCL)
under Windows? For those who don't know, O_EXCL opens the file
"exclusively", guaranteeing that the file we're opening will not be
overwritten. (Note that it's not enough to check that the file
doesn't exist before opening it; i
Herold Heiko <[EMAIL PROTECTED]> writes:
> Does solve, in fact I found some MS articles suggesting the same thing.
> Attached patch does work around the problem by disabling optimization
> selectively.
> I was able to retrieve a 2.5GB file with ftp.
In other words, large files now work on Window
Herold Heiko <[EMAIL PROTECTED]> writes:
> Said that, in retr.c simplifying the int rdsize line did not solve, but I
> tried the following, we have:
> #ifndef MIN
> # define MIN(i, j) ((i) <= (j) ? (i) : (j))
> #endif
>
> int rdsize = exact ? MIN (toread - sum_read, dlbufsize) : dlbufsize
"Maciej W. Rozycki" <[EMAIL PROTECTED]> writes:
> Doesn't GCC work for this target?
It does, in the form of "Cygwin" and "MingW". But Heiko was using MS
VC before, and we have catered to broken compilers before, so it
doesn't hurt to try.
Herold Heiko <[EMAIL PROTECTED]> writes:
>> > http.c(503) : warning C4090: 'function' : different 'const'
>> > qualifiers
>> [...]
>>
>> I don't quite understand these warnings. Did they occur before?
>
> Definitely, I tried with a rev from March 2004, same warnings.
Then we can ignore them fo
Herold Heiko <[EMAIL PROTECTED]> writes:
> I tried a test compile just now, with Visual C++ 6 I get different
> errors:
Thanks for checking it.
> string_t.[ch] - iswblank doesn't seem to be available,
For now, just remove string_t from the Makefile. It's not used
anywhere yet.
> Also, the lar
"Maciej W. Rozycki" <[EMAIL PROTECTED]> writes:
>> I wonder what is the difference between AC_FUNC_FSEEKO and
>> AC_CHECK_FUNCS(seeko). The manual doesn't seem to explain.
>
> Well, that's what I have on my local system:
>
> - Macro: AC_FUNC_FSEEKO
> If the `fseeko' function is available,
"Maciej W. Rozycki" <[EMAIL PROTECTED]> writes:
>> Is it possible to portably use open() and retain large file support?
>
> Try the AC_SYS_LARGEFILE autoconf macro.
That's what I thought I was using. I was just afraid that open()
wasn't correctly encompassed by the large file API's, a fear that
Simone Piunno <[EMAIL PROTECTED]> writes:
> On Tuesday 22 February 2005 00:10, Hrvoje Niksic wrote:
>
>> If wide chars were in that message, you could no longer print it with
>> printf, which means that a majority of gettext-using programs would be
>> utterly broke
[EMAIL PROTECTED] (Steven M. Schweda) writes:
>SunOS 5.9 /usr/include/fcntl.h:
>
> [...]
> /* large file compilation environment setup */
> #if !defined(_LP64) && _FILE_OFFSET_BITS == 64
> #ifdef __PRAGMA_REDEFINE_EXTNAME
> #pragma redefine_extname open
When opening files, Wget takes care (by default) to not overwrite an
existing file, and to tell the user where the file is to be saved.
However, the defense against overwriting may fail because Wget
determines the file name before attempting the download, but only
opens the file when the data star
Simone Piunno <[EMAIL PROTECTED]> writes:
> On Monday 21 February 2005 16:18, Hrvoje Niksic wrote:
>
>> Also, gettext doesn't change behavior of low-level routines in a
>> fundamental way -- it's just a way of getting different strings.
>> On the ot
Mauro Tortonesi <[EMAIL PROTECTED]> writes:
> i don't know what's the correct procedure to add a new translation
> to a GNU project (hrvoje, do you have any ideas?),
I used to add translations for Croatian, both for Wget and for other
programs, so I should know, but I must admit that the details