Re: More portability stuff [Re: gettext configuration]
Micah Cowan <[EMAIL PROTECTED]> writes:

> I can't even begin to fathom why some system would fail to compile
> in such an event: _XOPEN_SOURCE is a feature request, not a
> guarantee that you'll get some level of POSIX.

Yes, but sometimes the system headers are buggy.  Or sometimes they
work just fine with the system compiler, but not so well with GCC.  I
don't know which was the case at the time, but I remember that
compilation failed with _XOPEN_SOURCE and worked without it.

> Do you happen to remember the system?

If I remember correctly, the system was a (by current standards) old
version of Tru64.  The irony.  :-)

> I'd rather always define it, except for the systems where we know it
> fails, rather than just define it where it's safe.

I agree that that would be a better default now that many other
programs unconditionally define _XOPEN_SOURCE.  At the time I only
defined _XOPEN_SOURCE to get rid of compilation warnings under Linux
and Solaris.  After encountering the errors mentioned above, it seemed
safer to only define it where doing so was known not to cause
problems.
Re: More portability stuff [Re: gettext configuration]
Hrvoje Niksic wrote:
> Micah Cowan <[EMAIL PROTECTED]> writes:
>
>>> Or getting the definition requires defining a magic preprocessor
>>> symbol such as _XOPEN_SOURCE.  The man page I found claims that the
>>> function is defined by XPG4 and links to standards(5), which
>>> explicitly documents _XOPEN_SOURCE.
>>
>> Right.  But we set that unconditionally in config-post.h,
>
> Only if you made it so.  The config-post.h code only set it on systems
> where that's known to be safe, currently Linux and Solaris.  (The
> reason was that some systems, possibly even Tru64, failed to compile
> with _XOPEN_SOURCE set.)

You're right, of course.  It's not unconditional.

I can't even begin to fathom why some system would fail to compile in
such an event: _XOPEN_SOURCE is a feature request, not a guarantee
that you'll get some level of POSIX.  Do you happen to remember the
system?

I'd rather always define it, except for the systems where we know it
fails, rather than just define it where it's safe.

> Also note that Autoconf tests don't include sysdep.h, so the test
> could still be failing.  It would be worth investigating why curl's
> Autoconf test passes and ours (probably) doesn't.

I thought the Autoconf tests were testing merely for the function's
linkability... but yeah, maybe adding setjmp.h to the checked headers
will mean it's included for the function tests, or something.  I'll
play around.

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/
Re: More portability stuff [Re: gettext configuration]
Daniel Stenberg wrote:
> On Fri, 26 Oct 2007, Micah Cowan wrote:
>
>>> I very much doubt it does, since we check for it in the curl configure
>>> script, and I can see the output from it running on Tru64 clearly state:
>>>
>>>   checking for sigsetjmp... yes
>>
>> Note that curl provides the additional check for a macro version in
>> the configure script, rather than in the source; we should probably do
>> it that way as well.  I'm not sure how that helps for this, though: if
>> the above test is failing, then either it's a function (no macro) and
>> configure isn't picking it up; or else it's not defined in <setjmp.h>.
>
> Yes right, I had forgotten about that!  But in the Tru64 case the extra
> macro check isn't necessary.  I don't remember exactly what system
> needs it, but I believe it was some older Linux or BSD.

My Ubuntu 7.04 system appears to need it.
Re: Thoughts on Wget 1.x, 2.0 (*LONG!*)
On Fri, 26 Oct 2007, Micah Cowan wrote:

> The obvious solution to that is to use c-ares, which does exactly
> that: handle DNS queries asynchronously.  Actually, I didn't know
> this until just now, but c-ares was split off from ares to meet the
> needs of the curl developers.  :)

We needed an asynch name resolver for libcurl so c-ares started out
that way, but perhaps mostly because the original author didn't care
much for our improvements and bug fixes.

ADNS is a known alternative, but we couldn't use that due to license
restrictions.  You (wget) don't have that same problem with it.  I'm
not able to compare them though, as I never used ADNS...

> Of course, if we're doing asynchronous net I/O stuff, rather than
> reinvent the wheel and try to maintain portability for new stuff,
> we're better off using a prepackaged deal, if one exists.  Luckily,
> one does; a friend of mine (William Ahern) wrote a package called
> libevnet that handles all of that;

When I made libcurl grok a vast number of simultaneous connections, I
went straight with libevent for my test and example code.  It's solid
and fairly easy to use...  Perhaps libevnet makes it even easier, I
don't know.

> Plus, there is the following thought.  While I've talked about not
> reinventing the wheel, using existing packages to save us the trouble
> of having to maintain portable async code, higher-level buffered-IO
> and network comm code, etc, I've been neglecting one more package
> choice.  There is, after all, already a Free Software package that
> goes beyond handling asynchronous network operations, to specifically
> handle asynchronous _web_ operations; I'm speaking, of course, of
> libcurl.

I guess I'm not the man to ask nor comment on this a lot, but look
what I found:

  http://www.mail-archive.com/wget@sunsite.dk/msg01129.html

I've always thought, and I still believe, that wget's power and most
appreciated abilities are in the features it adds on top of the
transfer, like HTML parsing, ftp list parsing and the other things you
mentioned.
Of course, going with one single unified transfer library is perhaps
not the best thing from a software eco-system perspective, as
competition tends to drive innovation and development, but the more
users of a free software/open source project we get, the better it
will become.
Re: More portability stuff [Re: gettext configuration]
On Fri, 26 Oct 2007, Micah Cowan wrote:

>> I very much doubt it does, since we check for it in the curl configure
>> script, and I can see the output from it running on Tru64 clearly state:
>>
>>   checking for sigsetjmp... yes
>
> Note that curl provides the additional check for a macro version in
> the configure script, rather than in the source; we should probably do
> it that way as well.  I'm not sure how that helps for this, though: if
> the above test is failing, then either it's a function (no macro) and
> configure isn't picking it up; or else it's not defined in <setjmp.h>.

Yes right, I had forgotten about that!  But in the Tru64 case the
extra macro check isn't necessary.  I don't remember exactly what
system needs it, but I believe it was some older Linux or BSD.
Re: More portability stuff [Re: gettext configuration]
Micah Cowan <[EMAIL PROTECTED]> writes:

>> Or getting the definition requires defining a magic preprocessor
>> symbol such as _XOPEN_SOURCE.  The man page I found claims that the
>> function is defined by XPG4 and links to standards(5), which
>> explicitly documents _XOPEN_SOURCE.
>
> Right.  But we set that unconditionally in config-post.h,

Only if you made it so.  The config-post.h code only set it on systems
where that's known to be safe, currently Linux and Solaris.  (The
reason was that some systems, possibly even Tru64, failed to compile
with _XOPEN_SOURCE set.)

Also note that Autoconf tests don't include sysdep.h, so the test
could still be failing.  It would be worth investigating why curl's
Autoconf test passes and ours (probably) doesn't.