Re: svn commit: r1918258 - /apr/apr/trunk/file_io/unix/pipe.c
On Thu, Jun 13, 2024 at 08:35:37AM +0200, Ruediger Pluem wrote: > > > On 6/11/24 6:05 PM, jor...@apache.org wrote: > > Author: jorton > > Date: Tue Jun 11 16:05:22 2024 > > New Revision: 1918258 > > > > URL: http://svn.apache.org/viewvc?rev=1918258&view=rev > > Log: > > * file_io/unix/pipe.c (file_pipe_create): Use pipe2(,O_NONBLOCK) by > > default unless APR_FULL_BLOCK was used; unconditionally set the > > blocking state later. Saves two syscalls per invocation for both > > APR_READ_BLOCK and APR_WRITE_BLOCK, no [intended] functional change. > > I guess it is me being blind, but how do we save two syscalls for the APR_READ_BLOCK and APR_WRITE_BLOCK case each? With the code before the patch we started with a blocking pipe and needed to set one end of the pipe to non blocking. > Now we start with a non blocking pipe and need to set the other end to blocking. > Where did I get lost and miss the point? No, you got it correct, I obviously got confused counting syscalls at some point there, sorry! APR_FULL_BLOCK is the only case where using pipe2() saves syscalls, and for every other case it costs three syscalls regardless of whether pipe() or pipe2() is used. So r1918258 is a noop. I'll revert this, thanks for the review. Regards, Joe
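The syscall arithmetic being conceded above can be sketched outside APR like this. This is a hypothetical helper (the name `pipe_half_nonblocking` is mine, not APR's), assuming Linux's pipe2(); it shows why the half-blocking cases cost three syscalls with either starting point:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper (not APR code): create a pipe where only
 * fds[nb_end] is non-blocking (0 = read end, 1 = write end). */
static int pipe_half_nonblocking(int fds[2], int nb_end)
{
    if (pipe2(fds, O_NONBLOCK) == -1)       /* syscall 1 */
        return -1;
    int other = fds[1 - nb_end];
    int flags = fcntl(other, F_GETFL);      /* syscall 2 */
    if (flags == -1)
        return -1;
    /* Switch the other end back to blocking: syscall 3.  The same
     * total as pipe() plus an fcntl() get/set pair, hence no saving
     * for the APR_READ_BLOCK/APR_WRITE_BLOCK cases -- only
     * APR_FULL_BLOCK/APR_FULL_NONBLOCK need no fixup at all. */
    return fcntl(other, F_SETFL, flags & ~O_NONBLOCK);
}
```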
default to inheriting O_NONBLOCK?
At the moment APR penalizes Linux[1] compared to FreeBSD by never passing SOCK_NONBLOCK in apr_socket_accept()'s accept4() calls. This is annoying for httpd, which always sets O_NONBLOCK for the accepted socket anyway - at the cost of two extra system calls. Thanks to r1551659 [2] if you build APR *pretending* that O_NONBLOCK is inherited, it actually *will* be inherited by apr_socket_accept(), by enabling use of SOCK_NONBLOCK:

  ./configure ... ac_cv_o_nonblock_inherited=yes

Does anybody see any reason why we couldn't build APR like this by default (...for Linux)? httpd never checks APR_O_NONBLOCK_INHERITED at build time. I did a test httpd CI run, changing the APR builds like this where possible, and it didn't make any difference. [3] An alternative would be to add an apr_socket_accept2()/_ex() which allows passing flags, which httpd could use to force use of SOCK_NONBLOCK where available. Regards, Joe

[1] Strictly: on Unix platforms which don't inherit O_NONBLOCK across accept() but do have the accept4() system call
[2] https://github.com/apache/apr/commit/100101fd3f1e105144fec2cd7e81b8793cca2693
[3] https://github.com/apache/httpd/pull/445 - note that not all the jobs build APR from source, rather using the Ubuntu system version
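The two code paths being compared can be sketched like this (an illustration, not APR's actual apr_socket_accept(); the helper name `accept_nonblocking` is mine). accept4(..., SOCK_NONBLOCK) returns an already non-blocking socket in one syscall, where the portable path needs accept() plus an fcntl() pair:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Hypothetical helper: accept a connection with O_NONBLOCK set. */
static int accept_nonblocking(int listen_fd)
{
#ifdef SOCK_NONBLOCK
    /* One syscall: the kernel sets O_NONBLOCK atomically. */
    return accept4(listen_fd, NULL, NULL, SOCK_NONBLOCK);
#else
    /* Portable path: three syscalls. */
    int fd = accept(listen_fd, NULL, NULL);
    if (fd != -1) {
        int flags = fcntl(fd, F_GETFL);
        if (flags == -1 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1) {
            close(fd);
            return -1;
        }
    }
    return fd;
#endif
}
```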
Re: RFC: APR 2.0 modularization
On Tue, Apr 23, 2024 at 10:43:15AM +0200, Ivan Zhakov wrote: > On Mon, 22 Apr 2024 at 17:01, Joe Orton wrote: > > Open > > questions on where to draw the line, as to what goes in a separate > > library, and much more I'm sure. > > > Well, it's a good question. I was thinking about something like that: > "everything that can be used in libapr-*-2 should be in libapr-[core]-2". > I mean in **general** we should avoid dependency like app -> libapr-ABC -> > libapr-XYZ -> libapr. Another way to draw the line is to move *everything* > that incurs external dependency to a separate library. Avoiding multiple levels of dependencies within APR makes sense to me, though it will mean the "core" libapr will end up quite a bit bigger than libapr in 1.x. I doubt there are more than a couple of binaries in our main downstream consumers which use only libapr without libaprutil already today, so it's not a big deal. Regards, Joe
Re: RFC: APR 2.0 modularization
On Sat, Apr 20, 2024 at 12:38:20PM +0200, Ivan Zhakov wrote: > I would suggest separating APR in different libraries, while keeping them > in one project/source tree. > > Something like that: > - apr-2 > - apr-dbd-2 > - apr-dbm-2 > - apr-xlate-2 (?) > - apr-crypto-2 > - apr-xml-2 > - apr-ldap-2 > - apr-memcache-2 > - apr-redis-2 > > I want to clarify that I don't propose going back to apr-util time when APR > was split into separate projects: this proposal is only about libraries on > the disk. Yes, exactly right. IIRC someone (Justin?) wrote a proposal on doing it like this which had rough consensus around the time we did the big merge combining APR+APR-util into a single tree. Anything "substantial" should be in a separate library, so library dependency chains like [app -> libapr-xml-X.so.X -> libexpat.so] are maintained, but isolated so that apps don't pull in libexpat. Open questions on where to draw the line, as to what goes in a separate library, and much more I'm sure. Regards, Joe
Re: svn commit: r1916332 - /apr/apr/trunk/test/testescape.c
On Fri, Mar 15, 2024 at 10:26:27AM -0000, yla...@apache.org wrote: > Author: ylavic > Date: Fri Mar 15 10:26:27 2024 > New Revision: 1916332 > > URL: http://svn.apache.org/viewvc?rev=1916332&view=rev > Log: > testescape: missing ';' in entity escaping. The test now fails... did you intend to commit the fix too, something like this?

Index: encoding/apr_escape.c
===================================================================
--- encoding/apr_escape.c (revision 1916388)
+++ encoding/apr_escape.c (working copy)
@@ -571,7 +571,7 @@
                 found = 1;
             }
             else if (toasc && !apr_isascii(c)) {
-                int offset = apr_snprintf((char *) d, 6,
Re: svn commit: r1915658 - /apr/apr/trunk/test/testbuckets.c
On Thu, Feb 08, 2024 at 04:56:47PM +0100, Ruediger Pluem wrote: > On 2/8/24 4:49 PM, jor...@apache.org wrote: > > Author: jorton > > Date: Thu Feb 8 15:49:14 2024 > > New Revision: 1915658 ... > Did you miss to commit the changes to buckets/apr_brigade.c? Ooops! Yup, fixed in r1915661 - thanks a lot. At least this proves the test case works :) Regards, Joe
Re: APR_POOL_DEBUG in apr-util (was: APR_POOL_DEBUG usage question)
On Thu, Feb 01, 2024 at 04:50:31PM +0100, Yann Ylavic wrote: > dev@ team, > > I'm wondering if we could/should remove APR_POOL_DEBUG specifics in > apr-util 1.6.x and 1.7.x, otherwise it's not possible to run > non-pool-debug apr-util with pool-debug apr. > > Fortunately the only dependency on APR_POOL_DEBUG for apr-util seems > to be the apr_bucket code below. +1 makes sense to me. Regards, Joe
Re: [PATCH] Remove links to mail-archives.apache.org from the website
On Tue, Dec 26, 2023 at 07:11:00PM +0100, Daniel Sahlberg wrote: > Hi, > > The web page Mailing Lists [1] has links to mail-archives.apache.org. That > site was decommissioned by ASF Infra a while back. The addresses still work > but redirect to the new Ponymail archive (same as the first link). > > Please consider removing these links, suggested patch below Thanks Daniel - applied in r1915055. Regards, Joe
Re: Get a user's all groups
On Tue, Jan 02, 2024 at 02:00:00PM +0100, Daniel Sahlberg wrote: > My idea was to incrementally improve the code but maybe a better way > is to switch to access() completely. access() seems to be widely > available but I will have to read up on the setuid properties to make > sure we don't change how things have worked in the past. > > Background: Subversion has the ability to say a file "needs locking", if a > particular user/working copy doesn't hold the "lock" the file should be > read-only (and inversely if the user holds the lock the file should be > writeable). We check for W access using the code above and then update the > permissions accordingly. Makes sense. Yeah, I would recommend switching to using access(,W_OK) or access(,X_OK) on Unix, I'm not aware of any portability concerns with that. Regards, Joe
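The access(2)-based check recommended above can be sketched as follows (a minimal illustration; the wrapper name `is_writable` is hypothetical). Note that access() evaluates permissions against the real, not effective, UID/GID, which is the setuid property mentioned in the earlier mail, and it does consult supplementary groups, so the secondary-group case is handled by the kernel:

```c
#include <assert.h>
#include <unistd.h>

/* Hypothetical wrapper: 1 if the real user can write to path.
 * The kernel checks owner, group *and* supplementary groups,
 * plus anything layered on top (ACLs, MAC). */
static int is_writable(const char *path)
{
    return access(path, W_OK) == 0;
}
```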
Re: Get a user's all groups
On Tue, Dec 26, 2023 at 07:17:51PM +0100, Daniel Sahlberg wrote: > Hi, > > apr_uid_current() can retur the user id and primary group id of a user. Is > there a way to find out if a user also has secondary groups (something > similar to getgrouplist(3)? > > The Subversion project has some bug reports where a user has R/W access to > a certain file via a secondary group, but APR doesn't pick up the secondary > groups and thus we don't think the user has R/W access. I'd like to improve > this by also considering all secondary groups. How are you testing for readability/writability here exactly? On Unix the right way is using access() but there isn't an APR wrapper for it. (Trying to manually check against user/groups is not a reliable way to test, not just because of groups but also because of things like setuid processes and RBAC systems which may exist on top of the user/groups.) Regards, Joe
Re: [PATCH] Avoid memcpy-from-NULL in apr_brigade_flatten
On Mon, Dec 18, 2023 at 04:09:36PM +0100, Ruediger Pluem wrote: > > > On 12/18/23 12:09 PM, Joe Orton wrote: > > On Thu, Dec 14, 2023 at 04:44:47PM -0500, Ben Kallus wrote: > >> memcpy to or from NULL is undefined behavior, even for copies of > >> length 0. See this godbolt link for an example of how this can cause > >> problems: https://gcc.godbolt.org/z/zfvnMMsds > >> > >> This patch avoids calling memcpy for 0-length buckets, so that buckets > >> with NULL data and 0 length don't cause UB when flattened. > >> > >> Addresses this bugzilla report from httpd: > >> https://bz.apache.org/bugzilla/show_bug.cgi?id=68278 > > > > Thanks for the patch, this is an interesting find. Can you say what > > bucket type was hit here - e.g. print b->type->name in gdb from inside > > apr_brigade_flatten()? > > > > It looks like metadata bucket types tend to give NULL on read() so I'm > > guessing that's the case here. It would trip up other apr_brigade_* > > functions in similar ways, e.g. split_line looks wrong too. You can make > > a case that ignoring FLUSH is "safe" but it's probably undesirable, and > > trying to flatten across an EOS is almost certainly wrong. > > > > So I'm not sure, maybe a better/safer response is to catch metadata > > buckets and treat them as end-of-brigade or an error rather than > > zero-length data buckets. apr_brigade_split_boundary has an API > > condition for that to explicitly return APR_INCOMPLETE. > > Also for the sake of compatibility I would only fail if we encounter > at non metadata bucket after a possible EOS bucket. > Any metadata before EOS I would just ignore when flattening. I looked at this but it seems like the lowest risk will be to ignore the metadata buckets as in Ben's patch. There is no other place in APR which has special handling for EOS, and looking through httpd apr_brigade_flatten() is widely used with the input filters. So: no behaviour change, just fix possible crashes? 
I filed a PR - https://github.com/apache/apr/pull/52 Regards, Joe
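The shape of the guard discussed in this thread can be shown standalone (illustrative only; the helper name `safe_append` is mine, not the apr_brigade code). memcpy with a null source is undefined behaviour even for length zero, and compilers will use the call to infer the pointer is non-null, so the call is skipped entirely for empty reads such as metadata buckets:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper mirroring the fix: only call memcpy when
 * there is actually data, so a (NULL, 0) read is a no-op rather
 * than undefined behaviour. */
static void safe_append(char *dst, const char *src, size_t len)
{
    if (len > 0)
        memcpy(dst, src, len);
}
```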
Re: [PATCH] Avoid memcpy-from-NULL in apr_brigade_flatten
On Thu, Dec 14, 2023 at 04:44:47PM -0500, Ben Kallus wrote: > memcpy to or from NULL is undefined behavior, even for copies of > length 0. See this godbolt link for an example of how this can cause > problems: https://gcc.godbolt.org/z/zfvnMMsds > > This patch avoids calling memcpy for 0-length buckets, so that buckets > with NULL data and 0 length don't cause UB when flattened. > > Addresses this bugzilla report from httpd: > https://bz.apache.org/bugzilla/show_bug.cgi?id=68278 Thanks for the patch, this is an interesting find. Can you say what bucket type was hit here - e.g. print b->type->name in gdb from inside apr_brigade_flatten()? It looks like metadata bucket types tend to give NULL on read() so I'm guessing that's the case here. It would trip up other apr_brigade_* functions in similar ways, e.g. split_line looks wrong too. You can make a case that ignoring FLUSH is "safe" but it's probably undesirable, and trying to flatten across an EOS is almost certainly wrong. So I'm not sure, maybe a better/safer response is to catch metadata buckets and treat them as end-of-brigade or an error rather than zero-length data buckets. apr_brigade_split_boundary has an API condition for that to explicitly return APR_INCOMPLETE. Regards, Joe > > --- apr_brigade-old.c 2023-12-14 21:12:48.616409321 + +++ > apr_brigade.c 2023-12-14 21:10:20.477289754 + @@ -278,7 +278,9 @@ > * > * No, we only copy the data up to their requested size. -- jre > */ > -memcpy(c, str, str_len); > + if (str_len > 0) { > +memcpy(c, str, str_len); > + } > > c += str_len; > actual += str_len; >
Re: mod_dav_fs locking / Re: apr_dbm and concurrency
On Thu, Nov 23, 2023 at 05:42:10PM +0000, Emmanuel Dreyfus wrote: > On Thu, Nov 23, 2023 at 05:36:06PM +0000, Joe Orton wrote: > > 3) in the mean time I worked up a PR for mod_dav_fs which adds a global > > mutex around the dbm lockdb use. This passes my stress tests for the > > first time. > > How concurrent is the stress test? I've been testing with 20 concurrent clients on an 8 core machine. > In the past, I have been badly hurt by a few WebDAV clients proactively > exploring the filesystem using lockdiscovery. That completely killed the > service. I introduced the DavLockDiscovery directive to work it around. This is a good point. Was the load in that case PROPFIND/depth:infinity do you know or "just" depth:1? A global lock like in my PR would make the PROPFIND load even worse, since the lock is held for the duration of the response and there are no concurrent read-only locks. It might be necessary to disable lock discovery by default then, I don't know if any clients rely on or expose that but it's only a "MAY" that lock discovery is possible in RFC4918. I suspect the lock recovery mechanism for most users & clients is to delete the lockdb. Regards, Joe
mod_dav_fs locking / Re: apr_dbm and concurrency
Adding dev@httpd to a dev@apr thread about apr_dbm locking being broken. On Sun, Nov 12, 2023 at 07:43:13AM -0600, Greg Stein wrote: > Or, apps can stick to an older APR. ... we don't have to carry this forward > into future versions of APR (IMO). > > To your point, it is probably a single page with code samples to show how > to do K/V with sqlite, to replace the apr_dbm calls. That's gonna be way > easier than locking code integrated into apr_dbm. This seems reasonable to me for the mod_dav use case where the database is an implementation detail which users don't care about. For other use cases in httpd I'm not so sure, but saying "dbm is dead, sqlite is the way" is probably possible for all of them. It does mean someone writing a lot of new modules to replace mod_auth*dbm etc tho ;) There are a few alternative approaches I looked at:

1) we could hack fcntl-based locks to work on Linux by using F_OFD_SETLK etc, which has a sane locking model rather than the weird/dumb F_SETLK which has the traditional/POSIX non-thread-safe model. Linux-specific, so...

2) try to shoehorn "proper" locking into apr_dbm is hard because there isn't a suitable locking primitive that can be used at this level. Maybe the only approach that might work would be filesystem-based locking based off open/O_EXCL but this is not exactly reliable.

3) in the meantime I worked up a PR for mod_dav_fs which adds a global mutex around the dbm lockdb use. This passes my stress tests for the first time. Kind of ugly but this seems like the least ugly option at this point other than the "start again with sqlite": https://github.com/apache/httpd/pull/395

Any comments/review/flames from either dev@ welcome. Regards, Joe
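Approach (1) above can be sketched as follows (illustrative only, not the mod_dav_fs patch; `ofd_lock` is a hypothetical name). F_OFD_SETLK locks are owned by the open file description rather than the process, so two threads that each open the lockdb themselves hold independent, properly conflicting locks, unlike classic F_SETLK where a close() by any thread drops the process's lock:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: non-blocking whole-file OFD lock/unlock.
 * type is F_WRLCK or F_UNLCK.  l_pid must be 0 for F_OFD_SETLK,
 * which the zero-initialized designated initializer guarantees. */
static int ofd_lock(int fd, short type)
{
    struct flock fl = { .l_type = type, .l_whence = SEEK_SET };
    return fcntl(fd, F_OFD_SETLK, &fl);
}
```

Two opens of the same file, even within one process, then conflict as expected, which is the semantics the per-thread dbm locking needs.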
Re: apr_dbm and concurrency
On Mon, Sep 25, 2023 at 03:58:18PM +0100, Joe Orton wrote: > It is unspecified whether the apr_dbm.h interface is safe to use for > multiple processes/threads having r/w access to a single database. > Results appear to be: > > - sdbm, gdbm are safe > - bdb, ndbm are not safe I developed a better test case (event MPM + mod_dav + apr_dbm with parallel clients doing cycles of LOCK/UNLOCK), and sdbm, gdbm and bdb all failed. gdbm and sdbm both use either fcntl- or flock-based locking, which doesn't work between threads. (apr_file_lock for sdbm) Unfortunately lmdb is also unsafe at least because *opening* database files has races. This is documented under the Caveats section of http://www.lmdb.tech/doc/ (which also says it relies on flock-based locking, though I only hit issues with it failing during open in testing) The only ways forward I see are either to give up (declare that apr_dbm is not safe to use from threads) which passes the buck to httpd, or implement a locking layer inside apr_dbm() and don't assume the underlying drivers do thread-safe locking at all.
Re: apr_dbm and concurrency
On Mon, Sep 25, 2023 at 05:37:00PM +0200, Branko Čibej wrote: > IIRC, Berkeley DB multi-process concurrency is managed through an on-disk > "register" file external to the actual key/value store. The key/value store > format is not affected by the presence of this file. The DB_REGISTER > mechanism was introduced in BDB 4.4 (now long defunct) and can be used for > both concurrency control and automatic database recovery. The client-side > code for this can be lifted from Subversion. > > (I was involved in designing this mechanism for BDB and implementing its use > in Subversion, but that was ages ago -- back in 2005. There may be better > ways to do this in newer versions of Berkeley DB). > > TL;DR: all upstream supported versions of BDB should have this mechanism > available and APR can detect if it's being used without changing the API, > and even "upgrade" existing databases with the register file on the fly > without affecting the actual database. Thanks a lot for the response Branko, this is helpful. I spent a lot more time playing with this and I can get apr_dbm to use a DB_ENV in the way you suggest with locking. However, the path handling becomes very complicated/impossible - with a DB_ENV the database is assumed to be inside the "db_home" directory but we want to load/save a database by absolute path name. Using BDB in this way also appears incompatible with apr_dbm since the _usednames API assumes that at most two state files are used and with a full BDB environment the state is a whole directory. (mod_dav relies on this feature working correctly) So I don't want to appear like I'm shrugging this off but *at best* it might be possible to fix BDB for some versions (possibly, in theory, with compat concerns), and that leaves older BDB releases plus NDBM still broken. As an httpd developer I really don't care about database internals, I just want this to work.
I also know that Berkeley DB is deprecated and will be removed from my Linux distribution of choice, so I don't want to invest too much time in it. :( Regards, Joe
apr_dbm and concurrency
It is unspecified whether the apr_dbm.h interface is safe to use for multiple processes/threads having r/w access to a single database. Results appear to be:

- sdbm, gdbm are safe
- bdb, ndbm are not safe

(Berkeley DB obviously can be used in a way which is safe for multiple r/w users but it appears to require using one of the more complicated modes of operation via a DB_ENV, and changing to that would not be backwards compatible with the current db format. Corrections welcome, not a database expert) This seems pretty bad, httpd's use of this interface depends on the DBM API being safe for concurrent use, but I'm not sure there is any good way forward. Options I can see:

1. Implement APR-specific locking inside apr_dbm for unsafe db types, e.g. by creating a lockfile ".lock" and using apr_file_lock

2. Drop concurrency-unsafe db methods... but, APR 2.x only?

3. No code change. Describe the state of concurrency-safety in the API for each db type. httpd and other users would be forced to select a DB type appropriate to the use case.

Any other suggestions, and any preferences among the above? I'm not sure if 3 isn't the least bad choice unfortunately. Regards, Joe
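Option 1 might look like the following sketch (a hypothetical shape, not a proposed patch; APR proper would use apr_file_lock(), and the helper names `dbm_guard_acquire`/`dbm_guard_release` and the lockfile path are invented here). Plain flock() is used so the example is self-contained; note each thread or process must open the lockfile itself, since flock locks taken on a shared descriptor do not conflict with each other:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Hypothetical: take an exclusive lock on a sidecar lockfile before
 * any r/w operation on an unsafe dbm; returns the fd to hold for the
 * duration of the dbm access, or -1 on error. */
static int dbm_guard_acquire(const char *lockfile)
{
    int fd = open(lockfile, O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return -1;
    if (flock(fd, LOCK_EX) == -1) {  /* blocks until exclusive */
        close(fd);
        return -1;
    }
    return fd;
}

static void dbm_guard_release(int fd)
{
    flock(fd, LOCK_UN);
    close(fd);
}
```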
Re: [VOTE] apr 1.7.2 and apr-util 1.6.3
On Tue, Jan 31, 2023 at 04:22:56PM -0500, Eric Covener wrote: > I hosed 1.7.1/1.6.2 and the archives have -rcX in them at the top > level. I would like to call for an expedited vote to replace them > with version bumps. I will proceed once we get 3 binding +1. > > I have re-tagged because I think the consensus will be that updating > the tarballs and signatures is too confusing. > > candidates are here: > > https://apr.apache.org/dev/dist/ > > For the release of apr-1.7.2 AND apr-util-1.6.3 > [X] +1 looks great > [ ] -1 something is broken Huge thanks to you Eric for getting through this, sorry I didn't get to testing properly before. +1 from me for both, httpd 2.4.55 builds and tests fine against these on Fedora. Slightly surprising that CHANGES doesn't mention the new versions but definitely not a showstopper. Regards, Joe
is SQL_INTEGER long or int?
The DBD code assumes long == int in SQL_INTEGER parameter handling. Does anybody know what is right, or if it matters? I can't work it out from 10 minutes of googling. GCC 12.1 is warning for the current code which is clearly heap corruption for 64-bit builds:

dbd/apr_dbd_odbc.c: In function 'odbc_bind_param':
dbd/apr_dbd_odbc.c:572:17: warning: array subscript 'long int[0]' is partly outside array bounds of 'unsigned char[4]' [-Warray-bounds]
  572 |                 *(long *)ptr =
      |                 ^~~~

e.g. this would fix it:

Index: dbd/apr_dbd_odbc.c
===================================================================
--- dbd/apr_dbd_odbc.c (revision 1903480)
+++ dbd/apr_dbd_odbc.c (working copy)
@@ -569,8 +569,8 @@
     case SQL_INTEGER:
         ptr = apr_palloc(pool, sizeof(int));
         len = sizeof(int);
-        *(long *)ptr =
-            (textmode ? atol(args[*argp]) : *(long *)args[*argp]);
+        *(int *)ptr =
+            (textmode ? atoi(args[*argp]) : *(int *)args[*argp]);
         break;
     case SQL_FLOAT:
         ptr = apr_palloc(pool, sizeof(float));
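The shape of the bug can be shown standalone (a hypothetical demo, not the DBD driver itself; `bind_sql_integer` is an invented name). On LP64 platforms sizeof(long) is 8 but the buffer is only sizeof(int) == 4 bytes, so storing through `*(long *)ptr` writes past the allocation; keeping the store width equal to the allocation width avoids the overflow:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fixed binding: parse into an int and store exactly
 * sizeof(int) bytes into the caller's int-sized buffer.
 * (The buggy version did: *(long *)ptr = atol(text); -- an 8-byte
 * store into a 4-byte allocation on LP64.) */
static void bind_sql_integer(void *ptr, const char *text)
{
    int v = atoi(text);
    memcpy(ptr, &v, sizeof v);
}
```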
Re: Possible apr 1.7.1 showstopper from httpd test framework
On Fri, Jan 14, 2022 at 11:37:35AM +0100, Ruediger Pluem wrote: > > > On 1/14/22 6:47 AM, William A Rowe Jr wrote: > > In addition to a broken release of brotli (where enc/dec don't specify > > -lbrotlicommon, > > even on trunk, for openssl and other consumers to ferret out that binding), > > and > > lots of fun changes to build flags in curl 7.81 minor release (who does > > that?) > > there appears to be one test failure with date formatting in httpd 2.4.x > > branch > > (including release 2.4.52) and apr 1.7.x branch (including release 1.7.0); > > > > t/modules/include.t . 56/98 # Failed test 64 in > > t/modules/include.t at line 373 > > > > Have not had time to investigate whether this is a change in perl behavior, > > or > > possibly a regression caused by apr datetime handling in 1.7.x itself., but > > any > > release apr-side should hold off just a bit to resolve this question. > > I cannot reproduce this with APR 1.7.x on RedHat 8 and our Travis builds at > least for some builds > on Ubuntu use APR 1.7 as well and do not fail. > Is this probably a Windows specific issue? Can anyone reproduce on Windows? IIRC there is some test case which can be sensitive to filesystems, and e.g. sometimes fails if NFS mounted? I may be mixing it up with another test. The output of "./t/TEST -v t/modules/include.t" should help diagnose. That phrase "including release 1.7.0" implies it's not a 1.7.x regression if it also failed with 1.7.0, Bill? i.e. no reason to hold up a release? Regards, Joe
Re: svn commit: r1893204 - in /apr/apr/trunk/file_io: os2/filedup.c unix/filedup.c win32/filedup.c
On Fri, Sep 10, 2021 at 12:51:53AM -0000, yla...@apache.org wrote: > Author: ylavic > Date: Fri Sep 10 00:51:53 2021 > New Revision: 1893204 > > URL: http://svn.apache.org/viewvc?rev=1893204&view=rev > Log: > apr_file_setaside: don't blindly kill the old cleanup and file descriptor. > > There is no cleanup with APR_FOPEN_NOCLEANUP, so apr_pool_cleanup_kill() can > go in the !(old_file->flags & APR_FOPEN_NOCLEANUP) block. > > The file descriptor can't be invalidated either, the file may be split in > multiple buckets and setting aside one shouldn't invalidate the others. Interesting. So is the API claim that:

 * @remark After calling this function, old_file may not be used

not really correct, or needs to be weakened? old_file is also modified/invalidated in the ->buffered case where old_file->thlock is removed. Regards, Joe
CVE-2021-35940: Apache Portable Runtime (APR): Regression of CVE-2017-12613
Description:

An out-of-bounds array read in the apr_time_exp*() functions was fixed in the Apache Portable Runtime 1.6.3 release (CVE-2017-12613). The fix for this issue was not carried forward to the APR 1.7.x branch, and hence version 1.7.0 regressed compared to 1.6.3 and is vulnerable to the same issue. The patch below addresses the issue: https://downloads.apache.org/apr/patches/apr-1.7.0-CVE-2021-35940.patch

Credit:

The Apache Portable Runtime project would like to thank Iveta Cesalova (Red Hat) for reporting this issue.

References:

http://svn.apache.org/viewvc?view=revision&revision=1891198
http://mail-archives.apache.org/mod_mbox/www-announce/201710.mbox/%3ccacsi251b8ualvm-rrh9fv57-zwi0zhyf3275_jpg1a9vevv...@mail.gmail.com%3E
https://downloads.apache.org/apr/patches/apr-1.7.0-CVE-2021-35940.patch
CVE-2021-35940: Apache Portable Runtime (APR): Regression of CVE-2017-12613
Description:

An out-of-bounds array read in the apr_time_exp*() functions was fixed in the Apache Portable Runtime 1.6.3 release (CVE-2017-12613). The fix for this issue was not carried forward to the APR 1.7.x branch, and hence version 1.7.0 regressed compared to 1.6.3 and is vulnerable to the same issue.

Credit:

The Apache Portable Runtime project would like to thank Iveta Cesalova (Red Hat) for reporting this issue.

References:

http://svn.apache.org/viewvc?view=revision&revision=1891198
http://mail-archives.apache.org/mod_mbox/www-announce/201710.mbox/%3ccacsi251b8ualvm-rrh9fv57-zwi0zhyf3275_jpg1a9vevv...@mail.gmail.com%3E
https://dist.apache.org/repos/dist/release/apr/patches/apr-1.7.0-CVE-2021-35940.patch
Re: thread mutex double-locking w/o _timedlock
On Fri, Jul 02, 2021 at 04:10:08PM +0200, Yann Ylavic wrote: > On Fri, Jul 2, 2021 at 3:51 PM Yann Ylavic wrote: > > > > On Fri, Jul 2, 2021 at 3:19 PM Joe Orton wrote: > > > > > > If pthread_mutex_timedlock() is not supported, apr_thread_mutex_unlock() > > > will take a locked mutex and immediately lock it again: > > > > > > https://github.com/apache/apr/blob/trunk/locks/unix/thread_mutex.c#L297 > > > > > > APR_DECLARE(apr_status_t) apr_thread_mutex_unlock(apr_thread_mutex_t > > > *mutex) > > > { > > > apr_status_t status; > > > > > > if (mutex->cond) { > > > status = pthread_mutex_lock(&mutex->mutex); > > > > > > This is undefined behaviour unless APR ensures that mutex is recursive, > > > which it doesn't AFAICT. > > > > When mutex->cond != NULL (ie !HAVE_PTHREAD_MUTEX_TIMEDLOCK), > > apr_thread_mutex_lock() and co don't leave mutex->mutex locked (the > > actual locking happens thanks to the ->cond wait on the ->locked > > flag). > > > > So apr_thread_mutex_unlock() can (and actually must) lock ->mutex to > > signal the ->cond and clear the ->locked. > > What am I missing? Ah, that makes sense. Sorry, I missed that, you're not missing anything, tracing through pages of Coverity output dulls my remaining brain capacity... > Coverity likely can't figure out without the #ifdefs, so both your > patch and the code look good to me :) Great, thanks Yann. I removed the other fields too and dropped the bogus comment, -> r1891204. Regards, Joe
thread mutex double-locking w/o _timedlock
If pthread_mutex_timedlock() is not supported, apr_thread_mutex_unlock() will take a locked mutex and immediately lock it again: https://github.com/apache/apr/blob/trunk/locks/unix/thread_mutex.c#L297

APR_DECLARE(apr_status_t) apr_thread_mutex_unlock(apr_thread_mutex_t *mutex)
{
    apr_status_t status;

    if (mutex->cond) {
        status = pthread_mutex_lock(&mutex->mutex);

This is undefined behaviour unless APR ensures that mutex is recursive, which it doesn't AFAICT. Current versions of Coverity are smart enough to detect double-locking and hence this code triggers a scanner warning for every single call to apr_thread_mutex_unlock() within APR. I'm not sure how to fix the problem, maybe force mutexes to be recursive? But isn't it also true that all this code can be #if'ed out for platforms which do support pthread_mutex_timedlock()? e.g. as attached. Regards, Joe

Index: locks/unix/thread_mutex.c
===================================================================
--- locks/unix/thread_mutex.c (revision 1891201)
+++ locks/unix/thread_mutex.c (working copy)
@@ -121,6 +121,7 @@
 {
     apr_status_t rv;
 
+#ifndef HAVE_PTHREAD_MUTEX_TIMEDLOCK
     if (mutex->cond) {
         apr_status_t rv2;
 
@@ -152,7 +153,8 @@
 
         return rv;
     }
-
+#endif
+
     rv = pthread_mutex_lock(&mutex->mutex);
 #ifdef HAVE_ZOS_PTHREADS
     if (rv) {
@@ -167,6 +169,7 @@
 {
     apr_status_t rv;
 
+#ifndef HAVE_PTHREAD_MUTEX_TIMEDLOCK
     if (mutex->cond) {
         apr_status_t rv2;
 
@@ -196,7 +199,8 @@
 
         return rv;
     }
-
+#endif
+
     rv = pthread_mutex_trylock(&mutex->mutex);
     if (rv) {
 #ifdef HAVE_ZOS_PTHREADS
@@ -298,7 +302,11 @@
 {
     apr_status_t status;
 
+#ifndef HAVE_PTHREAD_MUTEX_TIMEDLOCK
     if (mutex->cond) {
+        /* ### If the mutex is locked, and there is no guarantee this
+         * is a recursive mutex, double-locking here looks like
+         * undefined behaviour. */
         status = pthread_mutex_lock(&mutex->mutex);
         if (status) {
 #ifdef HAVE_ZOS_PTHREADS
@@ -320,6 +328,7 @@
         mutex->locked = 0;
     }
 
+#endif
     status = pthread_mutex_unlock(&mutex->mutex);
 #ifdef HAVE_ZOS_PTHREADS
@@ -335,9 +344,12 @@
 {
     apr_status_t rv, rv2 = APR_SUCCESS;
 
+#ifndef HAVE_PTHREAD_MUTEX_TIMEDLOCK
     if (mutex->cond) {
         rv2 = apr_thread_cond_destroy(mutex->cond);
     }
+#endif
+
     rv = apr_pool_cleanup_run(mutex->pool, mutex, thread_mutex_cleanup);
     if (rv == APR_SUCCESS) {
         rv = rv2;

Index: include/arch/unix/apr_arch_thread_mutex.h
===================================================================
--- include/arch/unix/apr_arch_thread_mutex.h (revision 1891201)
+++ include/arch/unix/apr_arch_thread_mutex.h (working copy)
@@ -33,7 +33,9 @@
 struct apr_thread_mutex_t {
     apr_pool_t *pool;
     pthread_mutex_t mutex;
+#ifndef HAVE_PTHREAD_MUTEX_TIMEDLOCK
     apr_thread_cond_t *cond;
+#endif
     int locked, num_waiters;
 };
 #endif
Re: svn commit: r1887279 - /apr/apr/trunk/build/apr_common.m4
On Wed, Mar 24, 2021 at 04:07:32PM +0100, Yann Ylavic wrote: > On Wed, Mar 24, 2021 at 3:40 PM Yann Ylavic wrote: > > > > On Wed, Mar 24, 2021 at 3:24 PM Joe Orton wrote: > > > > > > This now fails for me in exactly the same way when using ./configure > > > --enable-maintainer-mode on autoconf 2.69, both in Fedora 33 and in > > > Ubuntu: https://travis-ci.com/github/apache/apr/jobs/492824241 > > > > How about the attached patch? > > This almost restores the previous APR_TRY_COMPILE_NO_WARNING but > > doesn't #include "confdefs.h" if AC_LANG_SOURCE did it already, > > assuming PACKAGE_NAME is defined by "confdefs.h".. > > Applied in r1888017 and backported to 1.7.x (r1888018). Works for me, thanks a lot for the fast response. Regards, Joe
Re: svn commit: r1887279 - /apr/apr/trunk/build/apr_common.m4
On Sat, Mar 06, 2021 at 10:20:59PM -0000, yla...@apache.org wrote:
> Author: ylavic
> Date: Sat Mar 6 22:20:59 2021
> New Revision: 1887279
>
> URL: http://svn.apache.org/viewvc?rev=1887279&view=rev
> Log:
> build/apr_common.m4: avoid explicit inclusion of "confdefs.h"
>
> The failure is observed on `autoconf-2.69d` (soon to be released
> as `autoconf-2.70`). There `int64_t` detection fails as:
>
> $ autoreconf && ./configure
> checking whether int64_t and int use fmt %d... no
> checking whether int64_t and long use fmt %ld... no
> checking whether int64_t and long long use fmt %lld... no
> configure: error: could not determine the string function for int64_t

This now fails for me in exactly the same way when using ./configure --enable-maintainer-mode on autoconf 2.69, both in Fedora 33 and in Ubuntu: https://travis-ci.com/github/apache/apr/jobs/492824241

config.log says the program it tries is:

| int
| main ()
| {
| ...
|   return 0;
| }

which always fails because of the maintainer-mode args:

configure:25300: gcc -c -g -O2 -Wall -Wmissing-prototypes -Wstrict-prototypes -Wmissing-declarations -pthread -Werror -DLINUX -D_REENTRANT -D_GNU_SOURCE -DAPR_POOL_DEBUG=1 -DAPR_THREAD_DEBUG=1 conftest.c >&5
conftest.c:166:1: error: function declaration isn't a prototype [-Werror=strict-prototypes]
  166 | main ()
      | ^~~~

I can fix that by adjusting NOTEST_CFLAGS, as in: https://github.com/apache/apr/commit/74ce8717319e28a9f5d52602b75258bdebf91708.patch which seems probably right, except it maybe will affect what is exported in the apr-N-config script (etc)?
Regards, Joe

> This happens because `./configure` always stumbles on warning:
>
> configure:3350: gcc -c -g -O2 -Werror conftest.c >&5
> In file included from conftest.c:31:
> confdefs.h:22: error: "__STDC_WANT_IEC_60559_ATTRIBS_EXT__" redefined [-Werror]
>    22 | #define __STDC_WANT_IEC_60559_ATTRIBS_EXT__ 1
>       |
>
> It's triggered by double inclusion of `"confdefs.h"` contents: explicitly in `APR_TRY_COMPILE_NO_WARNING` macro and implicitly via `AC_LANG_SOURCE` use.
>
> To fix it and avoid having to define `main()` declaration the change uses `AC_LANG_PROGRAM` instead.
>
> Tested on both `autoconf-2.69` and `autoconf-2.69d`.
>
> Github: closes #25
> Submitted by: Sergei Trofimovich
> Reviewed by: ylavic
>
> Modified:
>     apr/apr/trunk/build/apr_common.m4
>
> Modified: apr/apr/trunk/build/apr_common.m4
> URL: http://svn.apache.org/viewvc/apr/apr/trunk/build/apr_common.m4?rev=1887279&r1=1887278&r2=1887279&view=diff
> ==============================================================================
> --- apr/apr/trunk/build/apr_common.m4 (original)
> +++ apr/apr/trunk/build/apr_common.m4 Sat Mar 6 22:20:59 2021
> @@ -467,13 +467,9 @@ AC_DEFUN([APR_TRY_COMPILE_NO_WARNING],
>      CFLAGS="$CFLAGS -Werror"
>    fi
>    AC_COMPILE_IFELSE(
> -  [AC_LANG_SOURCE(
> -   [#include "confdefs.h"
> -   ]
> -   [[$1]]
> -   [int main(int argc, const char *const *argv) {]
> +  [AC_LANG_PROGRAM(
> +   [[$1]],
>      [[$2]]
> -   [ return 0; }]
>    )], [CFLAGS=$apr_save_CFLAGS
>    $3], [CFLAGS=$apr_save_CFLAGS
>    $4])
apr_jose_decode.c gcc warnings
I assume all these functions should be declared static but haven't looked at the code. branches/1.7.x as at r1875766 - jose/apr_jose_decode.c:21:14: warning: no previous prototype for ‘apr_jose_flatten’ [-Wmissing-prototypes] 21 | apr_status_t apr_jose_flatten(apr_bucket_brigade *bb, apr_jose_text_t *in, | ^~~~ jose/apr_jose_decode.c:40:14: warning: no previous prototype for ‘apr_jose_decode_jwk’ [-Wmissing-prototypes] 40 | apr_status_t apr_jose_decode_jwk(apr_jose_t **jose, | ^~~ jose/apr_jose_decode.c:75:14: warning: no previous prototype for ‘apr_jose_decode_jwks’ [-Wmissing-prototypes] 75 | apr_status_t apr_jose_decode_jwks(apr_jose_t **jose, | ^~~~ jose/apr_jose_decode.c:116:14: warning: no previous prototype for ‘apr_jose_decode_jwt’ [-Wmissing-prototypes] 116 | apr_status_t apr_jose_decode_jwt(apr_jose_t **jose, | ^~~ jose/apr_jose_decode.c:151:14: warning: no previous prototype for ‘apr_jose_decode_data’ [-Wmissing-prototypes] 151 | apr_status_t apr_jose_decode_data(apr_jose_t **jose, const char *typ, | ^~~~ jose/apr_jose_decode.c:172:14: warning: no previous prototype for ‘apr_jose_decode_jws_signature’ [-Wmissing-prototypes] 172 | apr_status_t apr_jose_decode_jws_signature(apr_jose_t **jose, | ^ jose/apr_jose_decode.c:272:14: warning: no previous prototype for ‘apr_jose_decode_jwe_recipient’ [-Wmissing-prototypes] 272 | apr_status_t apr_jose_decode_jwe_recipient(apr_jose_t **jose, | ^ jose/apr_jose_decode.c:389:14: warning: no previous prototype for ‘apr_jose_decode_compact_jws’ [-Wmissing-prototypes] 389 | apr_status_t apr_jose_decode_compact_jws(apr_jose_t **jose, | ^~~ jose/apr_jose_decode.c:489:14: warning: no previous prototype for ‘apr_jose_decode_compact_jwe’ [-Wmissing-prototypes] 489 | apr_status_t apr_jose_decode_compact_jwe(apr_jose_t **jose, const char *left, | ^~~ jose/apr_jose_decode.c:636:14: warning: no previous prototype for ‘apr_jose_decode_compact’ [-Wmissing-prototypes] 636 | apr_status_t apr_jose_decode_compact(apr_jose_t **jose, 
const char *typ, | ^~~ jose/apr_jose_decode.c:817:14: warning: no previous prototype for ‘apr_jose_decode_json_jws’ [-Wmissing-prototypes] 817 | apr_status_t apr_jose_decode_json_jws(apr_jose_t **jose, apr_json_value_t *val, | ^~~~ jose/apr_jose_decode.c:1174:14: warning: no previous prototype for ‘apr_jose_decode_json_jwe’ [-Wmissing-prototypes] 1174 | apr_status_t apr_jose_decode_json_jwe(apr_jose_t **jose, apr_json_value_t *val, | ^~~~ jose/apr_jose_decode.c:1578:14: warning: no previous prototype for ‘apr_jose_decode_json’ [-Wmissing-prototypes] 1578 | apr_status_t apr_jose_decode_json(apr_jose_t **jose, const char *typ, | ^~~~
Re: svn commit: r1862071 - in /apr/apr/trunk: file_io/os2/dir.c file_io/unix/dir.c file_io/win32/dir.c include/apr_file_info.h test/testdir.c
On Sat, Aug 24, 2019 at 05:39:17PM +0300, Ivan Zhakov wrote: > For what it's worth, I'm -1 on the changes done in r1862071 and > r1862435 for the reasons listed above. > > PS: No idea if I can actually veto changes, because while I'm a > committer on the APR project, I am not in its PMC. But "-1" properly > expresses my technical opinion on this topic. OK fair enough, Ivan - reverted in r1866019 and please go ahead with the iterpool-based proposal - thanks a lot for reviewing this. Regards, Joe
Re: svn commit: r1862071 - in /apr/apr/trunk: file_io/os2/dir.c file_io/unix/dir.c file_io/win32/dir.c include/apr_file_info.h test/testdir.c
On Wed, Jul 03, 2019 at 02:43:02PM +0300, Ivan Zhakov wrote: > They also make the existing old API unusable for many cases thus > making a switch to the new API mandatory, even though it doesn't have > to be so (see below). > > APR 1.x did not specify that the result of apr_dir_read() has to be allocated > in dir->pool: > > > https://apr.apache.org/docs/apr/1.7/group__apr__dir.html#ga3e4ee253e0c712160bee10bfb9c8e4a8 > > This function does not accept a pool argument, and I think that it's natural > to assume limited lifetime in such case. This is what the current major APR > users (httpd, svn) do, and other users should probably be ready for that too, > since on Win32 the subsequent calls to apr_dir_read() already invalidate the > previous apr_finfo_t. Are there many examples in the APR API where returned strings are not either pool-allocated or constant (process lifetime)? It seems unusual and very surprising to me. A hypothetical API consumer can have made either assumption: a) of constant memory (true with limits on Win32, breaks for Unix), or b) of pool-allocated string lifetime (works for Unix, breaks for Win32); neither is documented and both are broken by current implementations. It is impossible to satisfy both, so anybody can argue that fixing either implementation to match the other is a regression. Doubling down on (a) over (b) seems strongly inferior to me: the API cannot support this without introducing runtime overhead for all callers (a new iterpool in the dir object), so I'd rather fix this properly with a new API. I'd be fine with leaving Win32 apr_dir_read() behaviour as-is for 1.x at the same time as introducing _pread() if desired - making the strdup only happen for _pread would only be a minor tweak, not a whole new subpool. Regards, Joe
Re: buildbot exception in on apr-trunk
On Tue, Jul 02, 2019 at 02:47:48PM +, build...@apache.org wrote: > The Buildbot has detected a build exception on builder apr-trunk while > building . Full details are available at: > https://ci.apache.org/builders/apr-trunk/builds/434 > > Buildbot URL: https://ci.apache.org/ Does anybody understand how the CI works and can kick it into life? Log is something weird which I assume is nothing to do with the commit: https://ci.apache.org/builders/apr-trunk/builds/436/steps/svn/logs/err.text Also can we use Travis?
Re: svn commit: r1862071 - in /apr/apr/trunk: file_io/os2/dir.c file_io/unix/dir.c file_io/win32/dir.c include/apr_file_info.h test/testdir.c
On Tue, Jul 02, 2019 at 08:22:47AM -0500, Greg Stein wrote: > On Tue, Jul 2, 2019 at 8:04 AM Branko Čibej wrote: > > Perhaps this is the right time to note (again) that over in Subversion > > land, we found that functions should take _two_ pool parameters: one for > > allocating the returned values and one for temporary allocations. This > > is better than functions creating temporary subpools for temporaries, > > because it allows the API consumer better control over allocations. > > https://subversion.apache.org/docs/community-guide/conventions.html#apr-pools No argument from me: applying this guidance consistently to the APR API would be a massive improvement - and also a massive BC break! APR is particularly bad about embedding pool pointers in public structures - sockaddr is a big PITA here.
Re: svn commit: r1862071 - in /apr/apr/trunk: file_io/os2/dir.c file_io/unix/dir.c file_io/win32/dir.c include/apr_file_info.h test/testdir.c
On Tue, Jul 02, 2019 at 11:18:31AM +0100, Joe Orton wrote: > Reading the win32 code a bit more, it is not constant-memory even with > the buffer in apr_dir_t, because it can allocate from dir->pool (and > even create cleanups!) in the more_finfo() call, so I think the current > behaviour is not justifiable. Also, since the Unix implementation calls apr_stat(), which takes a pool, I actually can't imagine there is another way around the need for a pool passed to apr_dir_read() call to guarantee constant memory.
Re: svn commit: r1862071 - in /apr/apr/trunk: file_io/os2/dir.c file_io/unix/dir.c file_io/win32/dir.c include/apr_file_info.h test/testdir.c
On Tue, Jul 02, 2019 at 09:59:20AM +0200, Branko Čibej wrote: > On 02.07.2019 08:49, Joe Orton wrote: > > On Thu, Jun 27, 2019 at 05:01:56PM +0300, Ivan Zhakov wrote: > >> On Tue, 25 Jun 2019 at 17:21, wrote: > >> > >>> Author: jorton > >>> Date: Tue Jun 25 14:21:56 2019 > >>> New Revision: 1862071 > >>> > >>> URL: http://svn.apache.org/viewvc?rev=1862071=rev > >>> Log: > >>> Add apr_dir_pread(), a variant of apr_dir_read() which allows callers > >>> to read a directory with constant memory consumption: > > ... > >> I'm not sure it's best fix. Better solution would be allocate buffer for > >> dirname + MAX_FILE_NAME in apr_dir_t and reuse it on every iteration. Win32 > >> already has such code. > > > I didn't look at win32 too closely, so on win32 currently doing: > > > > apr_dir_open(, ...); > > apr_dir_read(, ..., x); > > apr_dir_read(, ..., x); > > > > invalidates f1? That's pretty ugly, there is no indication in the API > > that apr_dir_read() has side-effects like that. > > The 2.0 docstring says explicitly that "memory is allocated in the pool > passed to apr_dir_open()". While this is really bad API design (hence > apr_dir_pread() ...), it pretty much forbids the implementation from > reusing an internal name buffer. Looks like the Windows implementation > needs fixing? Reading the win32 code a bit more, it is not constant-memory even with the buffer in apr_dir_t, because it can allocate from dir->pool (and even create cleanups!) in the more_finfo() call, so I think the current behaviour is not justifiable. Ivan, do you have a counter-proposal to apr_dir_pread() to allow constant memory while reading a directory? If we could have a hard constraint on apr_dir_read() *never* allocating any memory I guess that would work but appears to require substantial changes to the win32 side which I'm not qualified for. IMO not pstrdup'ing the filenames on Win32 is an API violation. 
You have to be very clear in documenting things like that if someone is passing in a struct *. Regards, Joe
Re: svn commit: r1862071 - in /apr/apr/trunk: file_io/os2/dir.c file_io/unix/dir.c file_io/win32/dir.c include/apr_file_info.h test/testdir.c
On Thu, Jun 27, 2019 at 05:01:56PM +0300, Ivan Zhakov wrote: > On Tue, 25 Jun 2019 at 17:21, wrote: > > > Author: jorton > > Date: Tue Jun 25 14:21:56 2019 > > New Revision: 1862071 > > > > URL: http://svn.apache.org/viewvc?rev=1862071=rev > > Log: > > Add apr_dir_pread(), a variant of apr_dir_read() which allows callers > > to read a directory with constant memory consumption: ... > I'm not sure it's best fix. Better solution would be allocate buffer for > dirname + MAX_FILE_NAME in apr_dir_t and reuse it on every iteration. Win32 > already has such code. I didn't look at win32 too closely, so on win32 currently doing: apr_dir_open(, ...); apr_dir_read(, ..., x); apr_dir_read(, ..., x); invalidates f1? That's pretty ugly, there is no indication in the API that apr_dir_read() has side-effects like that.
Re: svn commit: r1816608 - in /apr/apr/branches/1.7.x: ./ CHANGES configure.in include/apr_network_io.h include/arch/win32/apr_private.h network_io/unix/sockaddr.c test/testsock.c
> > Modified: apr/apr/branches/1.7.x/CHANGES > > URL: > > http://svn.apache.org/viewvc/apr/apr/branches/1.7.x/CHANGES?rev=1816608=1816607=1816608=diff > > > > ========== > > --- apr/apr/branches/1.7.x/CHANGES [utf-8] (original) > > +++ apr/apr/branches/1.7.x/CHANGES [utf-8] Wed Nov 29 08:53:28 2017 > > @@ -1,6 +1,15 @@ > > -*- coding: utf-8 -*- > > Changes for APR 1.7.0 > > > > + *) Add apr_sockaddr_zone_set, apr_sockaddr_zone_get to set and retrieve > > + the zone for link-local IPv6 addresses. [Joe Orton] > > + > > + *) apr_sockaddr_equal: Compare link-local IPv6 addresses with different > > + zones as not equal. [Joe Orton] > > + > > + *) apr_sockaddr_ip_getbuf, apr_sockaddr_ip_get: Append "%zone" for > > + IPv6 link-local addresses. [Joe Orton] > > + > >*) Locks: add a --disable-timedlocks config option in case users > > encounter m
Re: Allow SIGUSR2 with apr_signal_thread()
On Thu, Feb 21, 2019 at 07:11:35PM +0100, Yann Ylavic wrote: > Hi, > > can we stop preventing APR users (e.g. httpd) from using SIGUSR2 > because of the way a tool worked 16 years ago (and probably doesn't > anymore)? No sarcasm here, just a question... > > IOW, may I: +1 definitely, it's a bit bizarre that's done by default TBH. Regards, Joe > Index: srclib/apr/threadproc/unix/signals.c > === > --- srclib/apr/threadproc/unix/signals.c(revision 1834551) > +++ srclib/apr/threadproc/unix/signals.c(working copy) > @@ -307,13 +307,7 @@ static void remove_sync_sigs(sigset_t *sig_mask) > /* the rest of the signals removed from the mask in this function > * absolutely must be removed; you cannot block synchronous signals > * (requirement of pthreads API) > - * > - * SIGUSR2 is being removed from the mask for the convenience of > - * Purify users (Solaris, HP-UX, SGI) since Purify uses SIGUSR2 > */ > -#ifdef SIGUSR2 > -sigdelset(sig_mask, SIGUSR2); > -#endif > } > > APR_DECLARE(apr_status_t) apr_signal_thread(int(*signal_handler)(int signum)) > ? > > Regards, > Yann.
Re: svn commit: r1833359 - in /apr/apr/trunk: CHANGES CMakeLists.txt build.conf build/crypto.m4 crypto/apr_crypto.c crypto/apr_crypto_prng.c include/apr.h.in include/apr_crypto.h test/testcrypto.c thr
On Tue, Jul 24, 2018 at 02:25:57PM +0200, Yann Ylavic wrote: > On Tue, Jul 24, 2018 at 11:43 AM, Graham Leggett wrote: > > Throughout APR we have a long established convention where > > “expensive” libraries are bound to APR through DSOs. All the SQL > > database drivers are like this, LDAP is like this, and so is crypto. > I think this is a mistake (for crypto at least) and that it solves nothing. > As I already said, DSOs don't work better if the external lib is not > installed on the system in the first place (same dependency), and so > will libapr-util. It's much worse than simply being pointless: pointless DSO abstractions make dependency management both more complex and more fragile for distributors. It is a total misfeature in both the crypto and LDAP cases. When I have a direct linkage (i.e. ELF DT_NEEDED tags): mod_ldap -> libaprldap.so -> libldap.so mod_whatever -> libapr.so -> libssl.so my (RPM/.deb/whatever) package manager can detect library dependencies and reflect that in package dependencies to ensure system consistency. When you stick a DSO abstraction in the middle, that goes away, and now I have to track the dependencies manually (i.e. it's now fragile), even though I know that 100% of users of mod_ldap CAN ONLY and MUST use exactly the same libldap.so as if they had directly linked against -lldap. Rinse and repeat for libcrypto. This is obviously completely different to db-style APIs where users *actually care* because their runtime environments are completely different (pgsql/mysql/...), hence run-time selection (and configuration) is well matched by a DSO-based abstraction.
Re: Brigade memory lifetime & leaks
On Tue, Jun 05, 2018 at 12:07:31PM +, Plüm, Rüdiger, Vodafone Group wrote: > The usual approach in many locations is to store the created bucket brigade > and reuse it. How about the following (untested): Yes, that works too, thanks! Passes test suite and fixes the leak, though I think we need to change the _destroy() to be a _cleanup() for safety, and add another, as in attached. I think my long standing hatred for this particular function is because every single C API ever created in history with a foo_create() foo_destroy() pair has memory allocated by the creator and deallocated by the destructor. But this one is special! We should remove apr_brigade_destroy() in APR 2, IMO, or switch to bucket allocated brigades then. Index: modules/http/http_request.c === --- modules/http/http_request.c (revision 1832932) +++ modules/http/http_request.c (working copy) @@ -345,6 +345,16 @@ return rv; } +#define RETRIEVE_BRIGADE_FROM_POOL(bb, key, pool, allocator) do { \ +apr_pool_userdata_get((void **), key, pool); \ +if (bb == NULL) { \ +bb = apr_brigade_create(pool, allocator); \ +apr_pool_userdata_setn((const void *)bb, key, NULL, pool); \ +} \ +else { \ +apr_brigade_cleanup(bb);\ +} \ +} while(0) AP_DECLARE(void) ap_process_request_after_handler(request_rec *r) { @@ -357,7 +367,8 @@ * this bucket is destroyed, the request will be logged and * its pool will be freed */ -bb = apr_brigade_create(c->pool, c->bucket_alloc); +RETRIEVE_BRIGADE_FROM_POOL(bb, "ap_process_request_after_handler_brigade", + c->pool, c->bucket_alloc); b = ap_bucket_eor_create(c->bucket_alloc, r); APR_BRIGADE_INSERT_HEAD(bb, b); @@ -383,7 +394,7 @@ */ rv = ap_check_pipeline(c, bb, DEFAULT_LIMIT_BLANK_LINES); c->data_in_input_filters = (rv == APR_SUCCESS); -apr_brigade_destroy(bb); +apr_brigade_cleanup(bb); if (c->cs) c->cs->state = (c->aborted) ? 
CONN_STATE_LINGER @@ -477,7 +488,8 @@ ap_process_async_request(r); if (!c->data_in_input_filters) { -bb = apr_brigade_create(c->pool, c->bucket_alloc); +RETRIEVE_BRIGADE_FROM_POOL(bb, "ap_process_request_brigade", + c->pool, c->bucket_alloc); b = apr_bucket_flush_create(c->bucket_alloc); APR_BRIGADE_INSERT_HEAD(bb, b); rv = ap_pass_brigade(c->output_filters, bb); @@ -490,6 +502,7 @@ ap_log_cerror(APLOG_MARK, APLOG_INFO, rv, c, APLOGNO(01581) "flushing data to the client"); } +apr_brigade_cleanup(bb); } if (ap_extended_status) { ap_time_process_request(c->sbh, STOP_PREQUEST);
Brigade memory lifetime & leaks
In 2.4's http_request.c there are two places doing: bb = apr_brigade_create(c->pool, c->bucket_alloc); to handle sending a FLUSH between requests and an EOR bucket which both can't be done off r->pool. Because the brigade structure itself is allocated out of c->pool this means we leak (~64 bytes) per request forever or up to request/conn limits. We can thread a temp pool through to those functions and fix this, though because these core internal functions are in the public API (ha ha) this is the usual compat PITA, and I haven't worked out if that's more complex for the async request processing call chain. As a PoC for non-async MPMs: http://people.apache.org/~jorton/2.4-use-temp-pool.patch Another choice is to allocate the brigade structure using the bucket allocator and actually free it on _destroy(). Anybody around who can remember why we used a pool allocation for that structure from the beginning? http://people.apache.org/~jorton/apr-util-brigade-bucket-alloc.patch plus http://people.apache.org/~jorton/2.4-missing-brigade-destroy.patch This seems like a simple change but it could have some nasty regressions if there are places where a brigade is reused after calling apr_brigade_destroy(). Currently that itself can create leaks because there is no pool cleanup but will otherwise work. I'm also not sure if there are performance implications. Also I'd previously written that it was a bad idea to ever call apr_brigade_destroy() so this is kind of reversing that advice... https://httpd.apache.org/docs/trunk/developer/output-filters.html#brigade Whether or not this is a good idea in apr-util 1.x I'm not at all sure. Any thoughts? Regards, Joe
Re: -q64 flag - gets in the way
On Wed, Dec 06, 2017 at 10:06:22AM -0600, William A Rowe Jr wrote: > They trust the compiler to do the right things, so there is no special > sauce added to apr-config. Respective build flags land in > /usr/lib/apr-1/build/ > /usr/lib64/apr-1/build/ We do have to hack /usr/bin/apr-1-config to do: CPPFLAGS=`pkg-config --variable=CPPFLAGS apr-1` because that file has to be identical across both 32- and 64-bit builds. Otherwise we get RPM file conflicts when trying to install both i686 and x86_64 apr-devel at the same time. Because there is no way to actually thread the choice of arch through pkg-config and the myriad /usr/bin/*-config files, this stuff is a gross hack, isn't really widely used in practice, and I don't see much point in trying to solve it "properly". It's quite likely 32-bit arches will all be dead by the time we manage it :) Regards, Joe
Re: svn commit: r1816527 - in /apr/apr/trunk: CHANGES configure.in include/apr_network_io.h include/arch/win32/apr_private.h network_io/unix/sockaddr.c test/testsock.c
On Fri, Dec 08, 2017 at 07:49:29AM +0100, Ruediger Pluem wrote: > Why do we do the above test twice? Over-zealous copy and pasting I think! Removed now, thanks for spotting it! r1817485 Regards, Joe
Re: [RFC] apr_sockaddr_zone_set/get() etc v2
On Tue, Nov 28, 2017 at 12:22:27AM +0100, Yann Ylavic wrote: > On Mon, Nov 27, 2017 at 5:54 PM, Joe Orton <jor...@redhat.com> wrote: > > Version #2. > > > > Adds both apr_sockaddr_zone_set() and apr_sockaddr_zone_get() to set and > > retrieve the zone id/name. > > Looks good to me. Thanks Yann for the review! Committed: http://svn.apache.org/viewvc?view=revision=1816527 I hope the Win32 bit doesn't break. > > Also now changes apr_sockaddr_equal() to compare different scopes as not > > equal, which I think is the only sensible way to do it, although that > > function is a bit odd in ignoring port differences already. > > Yeah, I was beaten by this already, we should probably provide > apr_sockaddr_port_equal() or something like this. Bikeshedding, maybe apr_sockaddr_cmp() which compares everything properly? Then again, even comparing the port, ->hostname and ->servname are still ignored! Regards, Joe
Re: [RFC] apr_sockaddr_zone_set()
On Wed, Nov 22, 2017 at 09:01:55AM -0500, Philip M. Gollucci wrote: > Does it make sense to pull magic number 10 out into a #define ? Sorry, I missed this. Could do, but not sure why really; it's pretty common to call strtol-like functions with a hard-coded base? Regards, Joe
Re: [RFC] apr_sockaddr_zone_set/get() etc v2
Version #2. Adds both apr_sockaddr_zone_set() and apr_sockaddr_zone_get() to set and retrieve the zone id/name. Also now changes apr_sockaddr_equal() to compare different scopes as not equal, which I think is the only sensible way to do it, although that function is a bit odd in ignoring port differences already. IMO this is a bug fix, and is not a problem in a minor or patch version bump; the way to uniquely identify a link-local address is the (ipv6 addr, scope) pair so these should be compared appropriately. Also now enhances apr_sockaddr_ip_get*() to append the scope ID when converting an IP address to text format. Again I think this makes sense, e.g. if you are going to log a link-local address it only makes sense to log the addr with the scope. Regards, Joe Index: configure.in === --- configure.in(revision 1816144) +++ configure.in(working copy) @@ -1069,7 +1069,9 @@ #endif";; esac -AC_CHECK_HEADERS([sys/types.h sys/mman.h sys/ipc.h sys/mutex.h sys/shm.h sys/file.h kernel/OS.h os2.h windows.h]) +AC_CHECK_HEADERS([sys/types.h sys/mman.h sys/ipc.h sys/mutex.h \ + sys/shm.h sys/file.h kernel/OS.h os2.h windows.h \ + net/if.h]) AC_CHECK_FUNCS([mmap munmap shm_open shm_unlink shmget shmat shmdt shmctl \ create_area mprotect]) @@ -2755,7 +2757,7 @@ AC_SEARCH_LIBS(getaddrinfo, socket inet6) AC_SEARCH_LIBS(gai_strerror, socket inet6) AC_SEARCH_LIBS(getnameinfo, socket inet6) -AC_CHECK_FUNCS(gai_strerror) +AC_CHECK_FUNCS(gai_strerror if_nametoindex if_indextoname) APR_CHECK_WORKING_GETADDRINFO APR_CHECK_NEGATIVE_EAI APR_CHECK_WORKING_GETNAMEINFO Index: include/apr_network_io.h === --- include/apr_network_io.h(revision 1816144) +++ include/apr_network_io.h(working copy) @@ -441,6 +441,29 @@ const apr_sockaddr_t *src, apr_pool_t *p); +/* Set the zone of an IPv6 link-local address object. + * @param sa Socket address object + * @param zone_id Zone ID (textual "eth0" or numeric "3"). 
+ */ +APR_DECLARE(apr_status_t) apr_sockaddr_zone_set(apr_sockaddr_t *sa, +const char *zone_id); + + +/* Retrieve the zone of an IPv6 link-local address object. + * @param sa Socket address object + * @param name If non-NULL, set to the textual representation of the zone id + * @param id If non-NULL, set to the integer zone id + * @param p Pool from which *name is allocated if used. + * @return Returns APR_EBADIP for non-IPv6 socket or socket without any zone id + * set, or other error if the interface could not be mapped to a name. + * @remark Both name and id may be NULL, neither are modified if + * non-NULL in error cases. + */ +APR_DECLARE(apr_status_t) apr_sockaddr_zone_get(const apr_sockaddr_t *sa, +const char **name, +apr_uint32_t *id, +apr_pool_t *p); + /** * Look up the host name from an apr_sockaddr_t. * @param hostname The hostname. Index: network_io/unix/sockaddr.c === --- network_io/unix/sockaddr.c (revision 1816144) +++ network_io/unix/sockaddr.c (working copy) @@ -25,6 +25,10 @@ #include #endif +#ifdef HAVE_NET_IF_H +#include +#endif + #define APR_WANT_STRFUNC #include "apr_want.h" @@ -125,9 +129,29 @@ memmove(buf, buf + strlen(":::"), strlen(buf + strlen(":::"))+1); } -#endif + /* ensure NUL termination if the buffer is too short */ buf[buflen-1] = '\0'; + +#ifdef HAVE_IF_INDEXTONAME +if (sockaddr->family == AF_INET6 +&& IN6_IS_ADDR_LINKLOCAL((struct in6_addr *)sockaddr->ipaddr_ptr)) { +char scbuf[IF_NAMESIZE], *p = buf + strlen(buf); + +if (if_indextoname(sockaddr->sa.sin6.sin6_scope_id, scbuf) == scbuf) { +/* Space check, need room for buf + '%' + scope + '\0'. 
+ * Assert: buflen >= strlen(buf) + strlen(scbuf) + 2 + * Equiv: buflen >= strlen(scbuf) + (p-buf) + 2 + * Thus, fail in inverse condition: */ +if (buflen < strlen(scbuf) + (p - buf) + 2) { +return APR_ENOSPC; +} +*p++ = '%'; +memcpy(p, scbuf, strlen(scbuf) + 1); +} +} +#endif /* HAVE_IF_INDEXTONAME */ +#endif /* APR_HAVE_IPV6 */ return APR_SUCCESS; } @@ -899,11 +923,19 @@ &((struct in6_addr *)(b)->ipaddr_ptr)->s6_addr[12], \ (a)->ipaddr_len)) +#if APR_HAVE_IPV6 +#define SCOPE_OR_ZERO(sa_) ((sa_)->family != AF_INET6 ? 0 : \ +
[RFC] apr_sockaddr_zone_set()
This adds support for mapping scope/zone names to interface IDs for IPv6 link-local addresses. Not sure if scope or zone is the better term to use here, any opinions? Could s/_zone_/_scope_/g easily enough. Google suggests Windows supports if_nametoindex too, so the API should be portable, though I don't know about other platforms. Regards, Joe Index: include/apr_network_io.h === --- include/apr_network_io.h(revision 1815933) +++ include/apr_network_io.h(working copy) @@ -441,6 +441,13 @@ const apr_sockaddr_t *src, apr_pool_t *p); +/* Set the zone of an IPv6 link-local address object. + * @param sa Socket address object + * @param zone_id Zone ID (textual "eth0" or numeric "3"). + */ +APR_DECLARE(apr_status_t) apr_sockaddr_zone_set(apr_sockaddr_t *sa, +const char *zone_id); + /** * Look up the host name from an apr_sockaddr_t. * @param hostname The hostname. Index: configure.in === --- configure.in(revision 1815933) +++ configure.in(working copy) @@ -1069,7 +1069,9 @@ #endif";; esac -AC_CHECK_HEADERS([sys/types.h sys/mman.h sys/ipc.h sys/mutex.h sys/shm.h sys/file.h kernel/OS.h os2.h windows.h]) +AC_CHECK_HEADERS([sys/types.h sys/mman.h sys/ipc.h sys/mutex.h \ + sys/shm.h sys/file.h kernel/OS.h os2.h windows.h \ + net/if.h]) AC_CHECK_FUNCS([mmap munmap shm_open shm_unlink shmget shmat shmdt shmctl \ create_area mprotect]) @@ -2755,7 +2757,7 @@ AC_SEARCH_LIBS(getaddrinfo, socket inet6) AC_SEARCH_LIBS(gai_strerror, socket inet6) AC_SEARCH_LIBS(getnameinfo, socket inet6) -AC_CHECK_FUNCS(gai_strerror) +AC_CHECK_FUNCS(gai_strerror if_nametoindex) APR_CHECK_WORKING_GETADDRINFO APR_CHECK_NEGATIVE_EAI APR_CHECK_WORKING_GETNAMEINFO Index: network_io/unix/sockaddr.c === --- network_io/unix/sockaddr.c (revision 1815933) +++ network_io/unix/sockaddr.c (working copy) @@ -25,6 +25,10 @@ #include #endif +#ifdef HAVE_NET_IF_H +#include +#endif + #define APR_WANT_STRFUNC #include "apr_want.h" @@ -1182,3 +1186,38 @@ #endif /* APR_HAVE_IPV6 */ return 0; /* no match */ } + 
+APR_DECLARE(apr_status_t) apr_sockaddr_zone_set(apr_sockaddr_t *sa, +const char *zone_id) +{ +#if !APR_HAVE_IPV6 || !defined(HAVE_IF_NAMETOINDEX) +return APR_ENOTIMPL; +#else +unsigned int idx; + +if (sa->family != APR_INET6) { +return APR_EBADIP; +} + +idx = if_nametoindex(zone_id); +if (idx) { +sa->sa.sin6.sin6_scope_id = idx; +return APR_SUCCESS; +} + +if (errno != ENODEV) { +return errno; +} +else { +char *endptr; +apr_int64_t i = apr_strtoi64(zone_id, , 10); + +if (*endptr != '\0' || errno || i < 1 || i > APR_INT16_MAX) { +return APR_EGENERAL; +} + +sa->sa.sin6.sin6_scope_id = i; +return APR_SUCCESS; +} +#endif +}
Re: svn commit: r1683521 - /apr/apr/trunk/network_io/unix/sockaddr.c
On Wed, Oct 25, 2017 at 10:22:46AM +0200, Rainer Jung wrote: > Before the change, copying starts at "scope_delim + 1", after the change at > "scope_delim". The log says "Simplify to use apr_pstrmemdup, no functional > change.", so it seems that was not an intentional change. Shouldn't we > correct it with the following change: How annoying, sorry :( Yes you're obviously right. http://svn.apache.org/viewvc?view=revision=1813286
Re: AC_CHECK_LIB issues under maintainer mode (Was: Re: Tagging 2.4.29 / 2.5.0-{alpha/beta?} today)
On Fri, Oct 13, 2017 at 11:51:54AM -0400, Jim Jagielski wrote: > The long and short is that under maintainer mode, we cannot > expect AC_CHECK_LIB to being correct any longer, because > the combination of -Werror and -Wstrict-prototypes means > that any and all functions looked for/checked for using > AC_CHECK_LIB will NOT be found, due to warnings which are > now fatal errors during configure time, even if those > functions DO exist. IMO the correct fix is to add all -W... flags to NOTEST_CFLAGS not CFLAGS so they don't take effect during the configure run at all. At least I can't think of a good motivation for having compiler warnings enabled when running autoconf tests in general.
Re: [PATCH] Add support for IP_FREEBIND sockopt (PR58725)
On Thu, Mar 03, 2016 at 09:41:17AM -0600, William A Rowe Jr wrote: From http://apr.apache.org/versioning.html... > > "Versions are denoted using a standard triplet of integers: > *MAJOR.MINOR.PATCH*. The basic intent is that *MAJOR* versions are > incompatible, large-scale upgrades of the API. *MINOR* versions retain > source and binary compatibility with older minor versions, and changes in > the *PATCH* level are perfectly compatible, forwards and backwards." Ah, right. Thanks Bill! Regards, Joe
Re: svn commit: r1733451 - in /apr/apr/trunk: include/apr_network_io.h network_io/unix/sockopt.c test/testsock.c
On Thu, Mar 03, 2016 at 04:24:07PM +0100, Yann Ylavic wrote: > Don't we want to return APR_ENOTIMPL here until the FreeBSD implementation? Good point. Done in r1733594 & r1733595
Re: [PATCH] Add support for IP_FREEBIND sockopt (PR58725)
On Thu, Feb 25, 2016 at 01:12:52PM +0100, Jan Kaluza wrote: > > This is useful, for example, for daemons integration with systemd [1]. When > daemon wants to bind to particular IP address during system start, there is > no way how to determine that the network interface is really configured and > the IP address is available. > > Patch in the PR58725 adds support for IP_FREEBIND to APR. There is at least a FreeBSD variant of the same, so it seems worth putting in APR. Thanks & done in trunk & 1.6.x, though I tweaked the name to APR_SO_FREEBIND. q for the list: Does APR normally avoid adding new API-ish things in stable branches or can I add this to 1.5.x too? Regards, Joe
Re: Fwd: Vulnerability in APR-UTIL, perhaps APR
On Fri, Jul 31, 2015 at 03:50:08PM -0500, William Rowe wrote: Thanks Daniel, sharing this with the dev@ list, as the problem and the fix are both public. Folks, what are your thoughts? Our expat is already quite old, and the current release was 2.1.0, while we were still shipping 1.95.7, before this issue popped up. Bumping major versions in a subversion release seems out of place. Perhaps though we can ship this in a 1.6 if we are going to proceed. Would we want to ship the patch, or would we want to ship expat project's own patches once they update? Having taken a brief look, I'm not sure if CVE-2015-2716 would be properly considered an expat bug, or a bug in some use of the expat API which Mozilla chose to fix by patching input parameter validation into expat. https://hg.mozilla.org/releases/mozilla-esr31/rev/2f3e78643f5c That said, there is also CVE-2012-0876 and CVE-2012-1148 which look unfixed in the apr-util bundled expat. I have backports of those fixes for expat 1.95.8 which don't apply to the 1.95.7 in apr-util. Dunno. Don't start from here looks like a pretty good option. Regards, Joe
Re: Possible memory leak involved in ap_rflush and friends?
On Thu, Jul 17, 2014 at 09:51:55PM +0200, Ruediger Pluem wrote: ap_rflush creates a new brigade each time from the r->pool. So if you call it often this can sum up. In this case you better do the code of ap_rflush in your code and reuse the brigade instead of creating it again each time. That is fixed in 2.4 though, the non-constant-memory behaviour of ap_rflush (and many other places) is 2.2 specific.
Re: svn commit: r1535157 - /apr/apr/trunk/configure.in
On Thu, Oct 24, 2013 at 01:39:09AM +0200, Rainer Jung wrote: On 23.10.2013 22:22, jor...@apache.org wrote: --- apr/apr/trunk/configure.in (original) +++ apr/apr/trunk/configure.in Wed Oct 23 20:22:57 2013 @@ -681,7 +681,7 @@ case $host in ;; *linux*) os_major=[`uname -r | sed -e 's/\([1-9][0-9]*\)\..*/\1/'`] -os_minor=[`uname -r | sed -e 's/[1-9][0-9]*\.\([1-9][0-9]*\)\..*/\1/'`] +os_minor=[`uname -r | sed -e 's/[1-9][0-9]*\.\([0-9]+\)\..*/\1/'`] Solaris sed only supports basic regexp, which knows about * but not +. So instead of [0-9]+ it could be [0-9][0-9]* or [0-9]\{1,\}, whichever you find less annoying. Surely we don't have to care about what works with Solaris sed on a *linux* host? Regards, Joe
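The portability point above — basic regular expressions (BRE) lack `+`, so `[0-9][0-9]*` or `[0-9]\{1,\}` is needed instead — can be checked with POSIX regcomp(), which compiles BRE by default and ERE with REG_EXTENDED. A small illustrative sketch (the `matches` helper is hypothetical, not from the thread):

```c
#include <assert.h>
#include <regex.h>
#include <stddef.h>

/* Return 1 if the pattern, compiled with the given cflags (0 for BRE,
 * REG_EXTENDED for ERE), matches str somewhere; 0 otherwise. */
static int matches(const char *pattern, int cflags, const char *str)
{
    regex_t re;
    int rc;

    if (regcomp(&re, pattern, cflags) != 0)
        return 0;
    rc = regexec(&re, str, 0, NULL, 0);
    regfree(&re);
    return rc == 0;
}
```

In BRE, `+` is an ordinary character, which is exactly why the committed `[0-9]+` would misbehave under a BRE-only sed like Solaris's.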
Re: [VOTE] Release APR 1.4.8
On Mon, Jun 17, 2013 at 10:57:15AM -0400, Jeff Trawick wrote: Tarballs/zipfiles are at http://apr.apache.org/dev/dist/ Sigs good, tests fine on F18, +1 for release and thanks for RMing. [+1] Release APR 1.4.8 as GA Regards, Joe
Re: [VOTE] Release APR 1.4.7
On Sun, May 05, 2013 at 03:12:20PM -0400, Eric Covener wrote: On Sun, May 5, 2013 at 11:51 AM, Eric Covener cove...@gmail.com wrote: I've got an issue AIX where the test added prior to http://svn.apache.org/viewvc?view=revision&revision=1309386 Works before r1309386 but not after (EINVAL). Will look further this evening It seems like AIX, Solaris, HP, and Apple are in the byte/char camp Linux says integer. (This is strictly for the IPv4 flavors) Mladen -- do you recall if the 4 byte change for ipv4 was needed and where? I'm pretty sure this change was specifically to fix the IPv6 case and the effect on the v4 code was unintended. I checked the Linux kernel, it accepts a char or an int for the v4 options: http://lxr.linux.no/#linux+v3.9.4/net/ipv4/ip_sockglue.c#L454 The glibc header actually says char: #define IP_MULTICAST_TTL 33 /* u_char; set/get IP multicast ttl */ #define IP_MULTICAST_LOOP 34 /* i_char; set/get IP multicast loopback */ Stevens UNPv1 says using an int here is a common programming error, and the transition to int with IPv6 options will be less error prone... maybe differently error prone? :) Does r1487796 (et al) work OK for you guys? Regards, Joe
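The char-vs-int question can be probed directly with setsockopt(). A hedged sketch (the `try_mcast_ttl` helper is mine): per the ip_sockglue.c logic cited above, Linux accepts either argument size for the IPv4 option, whereas the byte-only platforms in the thread would reject the int-sized call with EINVAL.

```c
#include <assert.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Set IP_MULTICAST_TTL on a throwaway UDP socket with an argument of
 * the given size; returns setsockopt()'s result (0 on success). */
static int try_mcast_ttl(const void *val, socklen_t len)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int rc;

    if (fd < 0)
        return -1;
    rc = setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, val, len);
    close(fd);
    return rc;
}
```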
Re: [PATCH] Fix strict-aliasing gcc warning, Remove unused sha2.c/sha2.h code
On Tue, Dec 11, 2012 at 11:35:10AM +0100, Jan Kaluza wrote: the first attached patch fixes following gcc warning: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing] Second patch removes unused SHA384 and SHA512 code from sha2.c/sha2.h. If you think the code should still be there, then it needs the same fix as in first patch. Committed to trunk/1.5.x/1.4.x - thanks a lot Jan! Regards, Joe
Re: Hash collision vectors in APR?
On Thu, Jan 05, 2012 at 01:35:50PM -0500, Jeff Trawick wrote: We don't want to say go fish to APR applications if our hash plus their vector results in an exploit. I'll bite. 1) Chained hash tables have quadratic worst case insertion time, O(n**2). The APR hash table (apr_hash_*) implementation shares that characteristic. 2) APR tables (apr_table_*) provide a data structure with constant time worst case insertion, O(1). 3) If applications are using apr_hash_* where they should be doing apr_table_*, the apps should be fixed to use the correct API. 4) Mixing some per-hash randomness into the hash is mostly futile. The attack is effective if you can generate n keys which hit the same hash bucket. The hash bucket chosen is (hash(key) % ht->max). The proposal is to change the hash lookup function to, say: (hash(key) * ht->pure_random_number) % ht->max where ht->max is 15 by default. So you merely have to increase the size of the input by 15 to produce at least the same overhead; the attacker must generate 15n keys to ensure they hit all the buckets. The attack is still quadratic time. Reducing the impact by a constant factor might make a difference to some app, but it's likely that app needs to switch to use apr_table_* or otherwise limit untrusted input more tightly anyway. OK, now flame me to death ;) Joe
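A toy illustration of point 4 (using % as in the mail, ignoring 32-bit wraparound, and with a hypothetical `bucket` function rather than APR's real indexing): any two hashes that are congruent modulo the bucket count land in the same bucket no matter what per-table multiplier is mixed in, so a secret multiplier only rescales the attack rather than defeating it.

```c
#include <assert.h>

/* Bucket index with a hypothetical per-table random multiplier, as in
 * the proposal discussed above. */
static unsigned int bucket(unsigned int hash, unsigned int rnd,
                           unsigned int nbuckets)
{
    return (hash * rnd) % nbuckets;
}
```

Since (h1 - h2) is a multiple of nbuckets, (h1*rnd - h2*rnd) = (h1 - h2)*rnd is too, so the two keys collide for every choice of rnd (absent overflow).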
Re: Fwd: Re: IPv6 broken with apr 1.4.4 / freebsd
On Fri, Jun 17, 2011 at 08:29:38PM +, Philip M. Gollucci wrote: ... # apachectl -t [Wed May 25 17:18:18 2011] [crit] (OS 1)Unknown host: alloc_listener: failed to set up sockaddr for [2001:x:x:x:x::1] Syntax error on line 41 of /usr/local/etc/apache22/httpd.conf: Listen setup failed This error is from an apr_sockaddr_info_get() failure, not apr_socket_create() as suggested. The reporting of errno 1 with that error is odd though. I'd first check whether there is an issue with the resolver configuration, anyway. The system call trace might be useful. If you build http://people.apache.org/~jorton/gai.c, does it fail with either: ./gai 2001:...:1 ./gai -a 2001:...:1 Regards, Joe
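The gai.c test program linked above is not reproduced here; a roughly equivalent minimal check, using AI_NUMERICHOST so the resolver configuration is taken out of the picture entirely, might look like this (the helper name is mine, and this is a sketch, not Joe's actual gai.c):

```c
#include <assert.h>
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>

/* Convert an IPv6 address literal to a sockaddr without consulting the
 * resolver; returns the getaddrinfo() error code (0 on success). */
static int numeric_v6_lookup(const char *host)
{
    struct addrinfo hints, *res = NULL;
    int rc;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET6;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;   /* literal only, never touch DNS */

    rc = getaddrinfo(host, NULL, &hints, &res);
    if (rc == 0)
        freeaddrinfo(res);
    return rc;
}
```

If the literal parses here but apr_sockaddr_info_get() still fails, the problem is more likely in the resolver configuration path than in address parsing.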
Re: Proposal: splitting APR and former apr-util libraries
On Wed, Jun 01, 2011 at 07:27:20PM -0400, Greg Stein wrote: You're just talking about bringing back apr-util. The motivation for combining the apr and apr-util projects (separate tarball distributions) was that there was never enough churn in either to justify the additional overhead of a separate RM cycle for each. So I agree we don't need to go back to even higher RM overhead with lots of projects. But I agree totally with Graham that splitting out the sub-components into separate *libraries* is good and useful, so as to isolate deps on third-party libraries; that's always been the plan AFAIK. I did at least start the groundwork on the buildsystem support for that (r758206). Regards, Joe
Re: Fwd: DO NOT REPLY [Bug 51219] New: apr_fnmatch infinite loop on pattern /*/WEB-INF/
On Thu, May 19, 2011 at 12:56:23PM -0500, William Rowe wrote: [moving from embargoed to open discussion] On 5/19/2011 9:53 AM, Joe Orton wrote: On Wed, May 18, 2011 at 04:11:19PM -0500, William Rowe wrote: On 5/18/2011 3:55 PM, Joe Orton wrote: b) glibc does not match the pattern \/ against / with FNM_PATHNAME set, but APR does: 52585e22 apr_fnmatch(.*\//, .//, 2) = 0, glibc=1 Do we want to change this behavior? We must still be careful of the special meaning of '/' but we can force a mismatch by dropping the logic at line 218-219, which quietly ignores the leading backslash. Other tests for \/ must remain. If the spec doesn't disallow the current behaviour I don't see any harm in keeping it, the pattern is unambiguous. ... I'm omitting your list for another pass through your tests, now that we have a fix on 1.4 branch (and trunk) to the issue you caught. I screwed up the long run I set up last night so it did not complete :( I've done one short run this morning with the 1.4.x tip. In the diff vs glibc, I filtered out patterns with ']', '[' and '/' within [], and also patterns containing \/. Only two patterns remained with different behaviour: 9199a75c apr_fnmatch(*?[.]., /.., 4) = 0, glibc=1 a504a704 apr_fnmatch(*?*[^aa], /., 4) = 0, glibc=1 the flag used is APR_FNM_PERIOD. APR looks correct for the first, I am not sure why the second is matching. Regards, Joe
Re: svn commit: r1101905 - /apr/apr/trunk/test/testfnmatch.c
On Wed, May 11, 2011 at 10:46:54AM -0500, William Rowe wrote: On 5/11/2011 9:51 AM, jor...@apache.org wrote: Author: jorton Date: Wed May 11 14:51:33 2011 New Revision: 1101905 URL: http://svn.apache.org/viewvc?rev=1101905&view=rev Log: * test/testfnmatch.c: Add a few more apr_fnmatch() tests to improve (already good!) coverage a little. (test_fnmatch_test): New test. Thanks for your work on this, Joe! 'tas but a trifle - thanks to *you* for slogging through this one, it looked like a tough nut to crack. Note that I've been careful in prior tests to strike both match and notmatch at the start, middle, end and beyond end of the string tested (Jeff had tripped over one such edge case), and it's probably worth repeating that exercise with these regex patterns. Unfortunately I've been exhausted of cycles to take this any further, so it's great to see others add on to the cases. OK, I will revisit that if I get a chance. I just wanted to make sure some of the more obscure branches were getting tested; gcov says this with the current code which is pretty good: File 'strings/apr_fnmatch.c' Lines executed:91.48% of 176 Branches executed:95.40% of 261 Taken at least once:81.61% of 261 Calls executed:100.00% of 17 I had one question actually, if you have a spare moment ever, there was a branch I could not work out how to trigger in fnmatch_ch(): if (**pattern == '[') { ++*pattern; ... while (**pattern) { /* ']' is an ordinary character at the start of the range pattern */ if ((**pattern == ']') && (*pattern > mismatch)) { That last expression (*pattern > mismatch) looks like it should always evaluate to true; I could not work out why that test should be necessary. When evaluating that line: a) mismatch always points to the first char in the *pattern on entry to the fn b) *pattern must have been incremented at least once since entry c) therefore, *pattern > mismatch must always be true Regards, Joe
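For reference, the behaviour that branch implements — ']' treated as an ordinary character when it is the first character of a bracket expression — is required by POSIX pattern matching and can be observed with the system fnmatch(3) (the `fn` wrapper below is mine, and this exercises the platform's fnmatch, not APR's apr_fnmatch):

```c
#include <assert.h>
#include <fnmatch.h>

/* Convenience wrapper: 1 if the pattern matches the whole string. */
static int fn(const char *pattern, const char *string)
{
    return fnmatch(pattern, string, 0) == 0;
}
```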
Re: apr_portable.h and const struct sockaddr * - getting bonjour to work
On Wed, Mar 02, 2011 at 01:27:47AM +0200, Graham Leggett wrote: Hi all, I am currently struggling to get DNSServiceGetAddrInfo() from zeroconf to work nicely with apr_portable.h, and need to clarify whether I am missing anything. DNSServiceGetAddrInfo() returns the DNS lookup in a structure struct sockaddr * which I need to convert into an apr_sockaddr_t *, but I am struggling to find the corresponding function to do it. Is this an oversight in apr or have I missed something? Hiya - there is nothing to stop you from populating the apr_sockaddr_t by hand since the structure is not opaque; you have to duplicate the functionality (or a subset) of apr_sockaddr_vars_set() to set up the pointers appropriately. I wouldn't see a problem with adding a new function which took (pool, struct sockaddr *) and did this internally. Regards, Joe
Re: apr-util 1.5.x -> trunk
On Tue, Oct 05, 2010 at 01:17:19PM -0500, William Rowe wrote: On 10/5/2010 2:40 AM, Joe Orton wrote: Any objection to renaming the apr-util 1.5.x branch to trunk? It is the trunk for that tree now. Let us know if Nick's suggested change satisfies, I've drafted a trunk README which explains things... I don't really need the status quo explained to me in a README file. I proposed to fix it, because it is (to me, obviously) broken. 1) The tip of development for the apr-util tree is what is currently branches/1.5.x. Yes, most of that code also exists in the apr tree. apr-util releases and branches do not come from the apr tree, they come from the apr-util tree. 2) I have hard-coded into my brain the convention that the trunk is the trunk, not a branch named by its current version. I also have scripts which make this assumption. 3) The trunk called branches/1.5.x will have to be renamed to a trunk called branches/1.6.x if 1.5.x gets cut. Which is dumb. 4) Yes, people might get confused if they try to use apr-util's trunk with the APR 2.x, but, meh. We are the people who use the VCS and it should be arranged for our convenience. Regards, Joe
apr-util 1.5.x -> trunk
Any objection to renaming the apr-util 1.5.x branch to trunk? It is the trunk for that tree now. Regards, Joe
Re: [vote] Release apr-util 1.3.10
On Fri, Oct 01, 2010 at 09:22:29AM -0400, Jeff Trawick wrote: +/-1 [+1] Release apr-util 1.3.10 as GA * signature is fine * CHANGES is good * builds, make check pass Fedora/x86_64 * installs, can build 2.2.x against w/apr-1.5.x - config.guess/sub seem to have been replaced by 2006 variants (not sure why that would happen; not sure it matters too much) Thanks a lot for RMing! Regards, Joe
Re: svn commit: r1002584 [1/5] - in /apr/apr-util/branches/1.5.x/xml/expat: ./ conftools/ lib/ win32/
On Wed, Sep 29, 2010 at 06:59:58PM +0200, Rainer Jung wrote: On 29.09.2010 18:32, Rainer Jung wrote: On 29.09.2010 13:52, jor...@apache.org wrote: --- apr/apr-util/branches/1.5.x/xml/expat/buildconf.sh (original) +++ apr/apr-util/branches/1.5.x/xml/expat/buildconf.sh Wed Sep 29 11:52:55 2010 @@ -25,7 +25,7 @@ echo Copying libtool helper files ... (cd conftools ; rm -f ltconfig ltmain.sh) rm -rf aclocal.m4 libtool.m4 ltsugar.m4 autom4te*.cache -$libtoolize --copy --automake +$libtoolize --copy --automake --force Do we actually want that? Now buildconf.sh overrides our svn provided and maintained config.guess and config.sub with the ones installed with libtool, which are often pretty outdated. That might influence negatively the ability to build expat on modern systems. Hmmm, true. The --force should not otherwise make a difference to the RM process so, that argument seems reasonable. Second question: where do we want the libtool m4 files and aclocal.m4 during expat buildconf to be located: in the expat top directory or in conftools? I had the impression you want them now in conftools, but then there are some more fixes in the expat buildconf necessary, at least for libtool 1. I have them ready but want to make sure I understand your intention. It looks like acconfig.m4 itself has to stay top level, only the included files go to the conftools. Fixed some missing conftools pathes in the cited revs and included aclocal.m4 in the final cleanup. (Presume you mean aclocal.m4 there - yes, that's picked up by autoconf automatically when it is in the same directory as configure.in) Yes, that looks good, I was just synching up with expat upstream here, which set the MACRO dir to conftools. Thanks very much for fixing all this up! Regards, Joe
Re: intent to TR apr-util 1.3.10 sometime Thursday
On Tue, Sep 28, 2010 at 07:05:09AM -0400, Jeff Trawick wrote: any concerns about the timing? any additional fixes people would like to see in that release? 1.3.x branch looks good to me; I've had a trawl through bugzilla and can't find any low-hanging fruit there either. Regards, Joe
Re: intent to TR apr-util 1.3.10 sometime Thursday
On Tue, Sep 28, 2010 at 12:04:59PM -0500, William A. Rowe Jr. wrote: On 9/28/2010 10:22 AM, Joe Orton wrote: On Tue, Sep 28, 2010 at 07:05:09AM -0400, Jeff Trawick wrote: any concerns about the timing? any additional fixes people would like to see in that release? I have been trying to backport security fixes for CVE-2009-3720 and CVE-2009-3560 to the bundled copy of expat but am getting nowhere. The patches available work for 1.95.8 and later, but apr-util bundles 1.95.2 which is significantly different :( These are both issues which can segfault the XML parser when parsing particular (invalid) documents; a pertinent issue e.g. for those running public DAV servers. I'm not sure what to recommend here; we could either ship with known vulnerabilities, attempt to upgrade the bundled expat to a more recent version, or drop the bundled expat altogether for new releases. None of these seem attractive. (The latest upstream is expat 2.0.1, which doesn't have the security fixes applied and 2.x breaks ABI with 1.95.x to boot) What about bumping to 1.95.final+patches on APR-util 1.3, and moving to expat 2.0.1 for APR 2? Bumping to a later 1.95.x+patches seems feasible actually, yeah, nice idea. I'm going to try to get this done today; I'll probably break the Win32 build (at least!) so help might be required! My preference would be to unbundle in APR 2 anyways, and not get tied up in 3rd party security quirks, but it seems people still like to solve foreign project build issues at apr, httpd etc. It's gone from the apr trunk already, yup :) Regards, Joe
Re: intent to TR apr-util 1.3.10 sometime Thursday
On Wed, Sep 29, 2010 at 05:10:07AM -0500, William Rowe wrote: On 9/29/2010 3:11 AM, Joe Orton wrote: Bumping to a later 1.95.x+patches seems feasible actually, yeah, nice idea. I'm going to try to get this done today; I'll probably break the Win32 build (at least!) so help might be required! And I'll unbreak Win32 this evening - thanks much, here's to 2.0 :) OK. I have updated expat to 1.95.7 and merged the security fixes. I expect this has broken Win32 and possibly also the build with libtool 1.x, if we care about that still. This is committed to the 1.5.x branch and the 1.3.x branch - testing needed! Regards, Joe
Re: intent to TR apr-util 1.3.10 sometime Thursday
On Tue, Sep 28, 2010 at 07:05:09AM -0400, Jeff Trawick wrote: any concerns about the timing? any additional fixes people would like to see in that release? I have been trying to backport security fixes for CVE-2009-3720 and CVE-2009-3560 to the bundled copy of expat but am getting nowhere. The patches available work for 1.95.8 and later, but apr-util bundles 1.95.2 which is significantly different :( These are both issues which can segfault the XML parser when parsing particular (invalid) documents; a pertinent issue e.g. for those running public DAV servers. I'm not sure what to recommend here; we could either ship with known vulnerabilities, attempt to upgrade the bundled expat to a more recent version, or drop the bundled expat altogether for new releases. None of these seem attractive. (The latest upstream is expat 2.0.1, which doesn't have the security fixes applied and 2.x breaks ABI with 1.95.x to boot) Regards, Joe
Re: svn commit: r949211 - /apr/apr/trunk/configure.in
On Fri, May 28, 2010 at 03:54:40PM -, William Rowe wrote: Author: wrowe Date: Fri May 28 15:54:40 2010 New Revision: 949211 URL: http://svn.apache.org/viewvc?rev=949211&view=rev Log: Solve ELF-32 builds on ia64-hpux Did you mean to include the configure changes in that commit? --- apr/apr/trunk/configure.in (original) +++ apr/apr/trunk/configure.in Fri May 28 15:54:40 2010 @@ -1260,6 +1260,7 @@ APR_FLAG_HEADERS( kernel/OS.h \ net/errno.h \ netinet/in.h \ +netinet/ip.h \ netinet/sctp.h \ netinet/sctp_uio.h \ sys/file.h \ @@ -1300,6 +1301,12 @@ else netinet_tcph=0 fi +AC_CHECK_DECL(SOL_IP, , , [ ...
Re: svn commit: r942897 - in /httpd/httpd/trunk: CHANGES server/mpm/prefork/prefork.c
On Wed, May 12, 2010 at 03:30:29PM -0400, Jeff Trawick wrote: The multiple-calls-to-pool_clear solution is definitely safer. OK, so now I reviewed that too ;) The only difference between: apr_pool_clear(ptrans); apr_pool_destroy(pchild); and simply: apr_pool_destroy(pchild); (given that ptrans is a child of pchild and _destroy runs cleanups in child pools) is that it hides problems in cleanup ordering/pool nesting. I think PR 43857 is about the reslist cleanup issue in APR-util, r677505 and all that. Not sure what the status of that is. Regards, Joe
Re: File descriptor leak with mpm-event / apr file bucket cleanup
On Tue, May 18, 2010 at 09:18:23AM +0200, Stefan Fritsch wrote: On Tue, 18 May 2010, Ruediger Pluem wrote: So if you want to close this fd you IMHO would need to do some refcounting and only close it if no other filebucket still references it. The filebuckets already do refcounting. apr_bucket_shared_destroy(f) in the patch above is only true if the refcount has reached zero. They refcount the number of times the FILE bucket has been split/copied, they don't refcount the number of times the underlying apr_file_t object is used. APR does not seem to be consistent about whether ownership of the object is passed on when you create a bucket wrapper for that object type; there are no API guarantees or constraints here. From a quick review, PIPE, MMAP, HEAP (kind of) do take ownership, FILE and SOCKET do not. Have you run the perl-framework test suite to see whether this breaks anything? This change does look like it'll break the APR test suite but only because of the way I happened to write one of the buckets tests. Regards, Joe
Re: [vote] unplugging apr/ldap (from apr 2.0)
On Sun, Apr 25, 2010 at 10:56:04AM -0500, William Rowe wrote: Going over the blockers to 2.0, here seem to be our choices since nobody appears to have the time or interest in ensuring apr_ldap becomes fully modular; [X] Abandon apr_ldap_* API's to httpd 2.3 ldap, including required autoconf [ ] Keep apr_ldap_* within apr project, as an independent library
Re: svn commit: r920017 - in /apr/apr/branches/1.4.x: ./ file_io/unix/open.c include/apr_file_io.h
On Tue, Mar 09, 2010 at 12:26:08PM +0200, Graham Leggett wrote: On 08 Mar 2010, at 11:26 PM, Joe Orton wrote: APR_FOPEN_LARGEFILE is intended to be (and is documented as) advisory rather than mandatory. Exposing a NONBLOCK flag without attaching *any* semantics to it w.r.t. subsequent read/write calls seems completely wrong. If you want platform-specific behaviour and don't care how badly it interacts with other apr_file_* interfaces then use native open() and apr_os_file_put() to get the APR wrapper. This makes no sense at all. If I open a file using native open() setting flags appropriately, and then use apr_os_file_put(), followed by apr_file_read and apr_file_write, how is that in any way different to using apr_file_open(), setting flags appropriately, and then follow with apr_file_read and apr_file_write? Well, right, this is the point I was making - it should be no different at run-time; i.e. you can achieve exactly what you want already without adding a new interface which uses up one of the remaining bits in the APR_FOPEN_* bitmask, must be maintained in perpetuity, won't work on platform X and hence confuses users, etc etc. But specifically, if I'm writing an app which depends on being able to do non-blocking IO to a fifo, I *need* the API guarantee that O_NONBLOCK works. You can get that with open()+apr_os_file_put(), but you can't get that with the API added to APR, which has no such guarantee. So from that perspective, adding _NONBLOCK to APR is little use. To be clear; I don't see a problem with adding new flags so long as they come with documented API constraints/guarantees. It's the idea of adding a seemingly deliberately undocumented platform-specific flag which seems wrong. Does that make more sense? Are there specific semantics you are referring to, or are you just assuming they are missing? In the case of read and write, APR_STATUS_IS_EAGAIN already gives you the semantics you want. Or are you referring to something else? 
I'd expect to see some description of exactly what the new APR_FOPEN_* flag changes w.r.t. method calls; does it affect just read/write, what about flush, sync, etc? Can I presume POSIX semantics w.r.t. O_NONBLOCK in open()? The questions are kind of endless if the documentation isn't there. Regards, Joe
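For comparison, the POSIX O_NONBLOCK semantics being asked about are easy to state for pipes and fifos: a read from an empty non-blocking pipe fails immediately with EAGAIN (the APR_STATUS_IS_EAGAIN case mentioned above) instead of blocking. A minimal sketch, with a helper name of my own:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Read from an empty pipe whose read end has O_NONBLOCK set; returns
 * errno from the failed read (expected: EAGAIN), 0 if the read oddly
 * succeeded, or -1 on setup failure. */
static int nonblock_read_errno(void)
{
    int fds[2];
    char buf[1];
    ssize_t n;
    int err;

    if (pipe(fds) != 0)
        return -1;
    /* Set O_NONBLOCK on the read end after the fact, preserving flags. */
    fcntl(fds[0], F_SETFL, fcntl(fds[0], F_GETFL) | O_NONBLOCK);

    n = read(fds[0], buf, sizeof buf);   /* empty pipe: fails immediately */
    err = (n < 0) ? errno : 0;

    close(fds[0]);
    close(fds[1]);
    return err;
}
```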
Re: svn commit: r920017 - in /apr/apr/branches/1.4.x: ./ file_io/unix/open.c include/apr_file_io.h
On Mon, Mar 08, 2010 at 03:53:42PM -0500, Jeff Trawick wrote: On Mon, Mar 8, 2010 at 9:59 AM, Graham Leggett minf...@sharp.fm wrote: ... #if APR_HAS_LARGE_FILES defined(_LARGEFILE64_SOURCE) oflags |= O_LARGEFILE; #elif defined(O_LARGEFILE) if (flag APR_FOPEN_LARGEFILE) { oflags |= O_LARGEFILE; } #endif This might be more interesting., though perhaps when the configure-time logic to turn on large file support is out of sync with this code? I'm not sure... APR_FOPEN_LARGEFILE is intended to be (and is documented as) advisory rather than mandatory. Exposing a NONBLOCK flag without attaching *any* semantics to it w.r.t. subsequent read/write calls seems completely wrong. If you want platform-specific behaviour and don't care how badly it interacts with other apr_file_* interfaces then use native open() and apr_os_file_put() to get the APR wrapper. Regards, Joe
Re: svn commit: r905970 - in /apr/apr/trunk: file_io/unix/filedup.c file_io/unix/open.c file_io/unix/readwrite.c include/apr_file_io.h include/arch/unix/apr_arch_file_io.h
On Wed, Feb 03, 2010 at 10:18:04AM -, Paul Querna wrote: Author: pquerna Date: Wed Feb 3 10:17:57 2010 New Revision: 905970 URL: http://svn.apache.org/viewvc?rev=905970&view=rev Log: Add two new features to APR Files: - When opened with normal rotating flag, every 60 seconds the file will check if the file it is writing to has changed inode (ie, been replaced/moved). - When opened with the manual rotating flag, the consumer must call the check, but can provide the current timestamp, to avoid an apr_time call. This is based off of the patch from Brian, but I've modified it for style, and adding the manual rotation flag after discussion with brian at the httpd hackathon. Why do this in APR? This seems like a fairly obscure feature to try to shoehorn into apr_file_*. Combinatorial explosion of API features is a dangerous thing: how does this interact with the _XTHREAD, _DELONCLOSE flags? What about if _EXCL is set in flags? Lack of API docs for the semantics of APR_FOPEN_ROTATING and APR_FOPEN_MANUAL_ROTATE make it hard to understand what this is doing without reading the svn message, likewise for: APR_DECLARE(apr_status_t) apr_file_rotating_check(apr_file_t *thefile); APR_DECLARE(apr_status_t) apr_file_rotating_manual_check(apr_file_t *thefile, apr_time_t time); Is this an advisory flag which will be ignored on platforms which don't support it, or mandatory where I can expect ENOTIMPL if it's not supported? More inline: --- apr/apr/trunk/file_io/unix/readwrite.c (original) +++ apr/apr/trunk/file_io/unix/readwrite.c Wed Feb 3 10:17:57 2010 ... 
+static apr_status_t do_rotating_check(apr_file_t *thefile, apr_time_t now)
+{
+    apr_size_t rv = APR_SUCCESS;
+
+    if ((now - thefile->rotating->lastcheck) >= thefile->rotating->timeout) {
+        apr_finfo_t new_finfo;
+        apr_pool_t *tmp_pool;
+
+        apr_pool_create(&tmp_pool, thefile->pool);
+
+        rv = apr_stat(&new_finfo, thefile->fname,
+                      APR_FINFO_DEV|APR_FINFO_INODE, tmp_pool);
+
+        if (rv != APR_SUCCESS ||
+            new_finfo.inode != thefile->rotating->finfo.inode ||
+            new_finfo.device != thefile->rotating->finfo.device) {
+
+            if (thefile->buffered) {
+                apr_file_flush(thefile);
+            }
+
+            close(thefile->filedes);

Dropping errors from _flush or close() here means potential silent data loss on failure.

+            thefile->filedes = -1;
+
+            if (thefile->rotating->perm == APR_OS_DEFAULT) {
+                thefile->filedes = open(thefile->fname,
+                                        thefile->rotating->oflags,
+                                        0666);
+            }
+            else {
+                thefile->filedes = open(thefile->fname,
+                                        thefile->rotating->oflags,
+                                        apr_unix_perms2mode(thefile->rotating->perm));
+            }

FD_CLOEXEC needs to be reinstated here if necessary. Re-opening the file invalidates a bunch of the state in the file object; as per apr_os_file_put() this will all need to be reset (eof_hit, ungetchar, buffering state etc). Regards, Joe
Re: svn commit: r905040 - /apr/apr/trunk/file_io/unix/open.c
On Sun, Jan 31, 2010 at 10:21:30AM -0500, Jeff Trawick wrote: On Sun, Jan 31, 2010 at 8:46 AM, jor...@apache.org wrote: Author: jorton Date: Sun Jan 31 13:46:16 2010 New Revision: 905040 URL: http://svn.apache.org/viewvc?rev=905040&view=rev Log: * file_io/unix/open.c (apr_file_open): Don't set FD_CLOEXEC if it was already set via O_CLOEXEC. PR: 46297 This looks like a useful+safe backport to get into 1.3.10 which I will tag Real Soon Now. Any concerns out there? Yeah, I think it should be fine. I've merged it to 1.3.x and 1.4.x now. Regards, Joe
[VOTE] APR versioning rules w.r.t. released snapshots
Snapshots of the APR and APR-util 1.4.x trees have been distributed as part of the httpd 2.3.4 alpha release. Should the APR project treat those snapshots as releases for versioning purposes? In other words, should we ensure future APR/APR-util releases maintain source and binary backwards compatibility with those snapshots as required by the versioning guidelines, http://apr.apache.org/versioning.html? Please vote: [ ] Yes [ ] No
Re: Quick things, apr/apr-util releases
On Tue, Dec 15, 2009 at 01:48:33AM -0500, William Rowe wrote: Paul Querna wrote: its an alpha release with a -dev copy of APR and APR-Util. Frankly, Screw the user, because there aren't any. If someone bitches about an alpha release breaking thier shit, tough shit, its alpha. That's entirely sensible, Paul, if you were talking about an httpd alpha breaking their httpd release. An httpd release shouldn't be breaking the rest of their shit, no? Clearly we aren't going to get agreement by repeating the same arguments back at each other, so let's vote on this and move on to doing something useful. Regards, Joe
Re: [VOTE] APR versioning rules w.r.t. released snapshots
On Tue, Dec 15, 2009 at 03:05:29PM +, Joe Orton wrote: [ ] Yes [X] No I vote no: what other ASF projects ship has no bearing on API commitments made by the APR project. Regards, Joe
Re: [RESULTS] [VOTE] Release httpd 2.3.4-alpha
On Fri, Dec 04, 2009 at 07:48:02PM -0600, William Rowe wrote: Joe Orton wrote: 1) the httpd project cannot force the APR project to commit to API stability by distributing a snapshot of the APR 1.4 branch. Why on earth would that be the case? The only time the APR project commits to API stability is by making a new .0 release itself. What other projects do is irrelevant to APR.

Joe, as I pointed out in another thread, httpd *does* commit apr the moment the mislabeled artifact hits www.apache.org/dist/ - where does that package suggest it is nothing but a snapshot? The answer is, it doesn't. When we did this previously in httpd 2.0 (never 2.1 to the best of my recollection) we actually did commit to that particular apr API. The rules in 0.9 just weren't so draconian. If you don't like any of these rules;

* what is on /dist/ is a release
* what is released apr >= 1.0 follows absolute versioning rules
* what is a snapshot doesn't appear on /dist/

then take those issues to an infra/board level. We've had these discussions a thousand times on incubator lists, and this is a clear case of what is good for the goose...

This is all fine and good but I don't see any implication above that the APR project must enforce its versioning rules on anything other than releases *it voted on* - i.e. releases of APR, rather than releases of httpd. What and how the APR project chooses to apply API stability rules is something decided by the APR project, not the board or infra. Do you disagree with that?

If you think we need to amend apr/README to describe how the APR versioning rules apply to the APR project, so be it, but I think you are making an Everest-sized mountain out of a molehill here.

Regards, Joe
Re: Prepared to tag apr 1.4.0 [NOT -util!]
On Tue, Dec 01, 2009 at 03:01:34AM -0600, William Rowe wrote: Please review the attached delta of apr 1.3 to 1.4 changes and chime in if you believe we are ready to tag 1.4. I suspect we are. Looks good here! Regards, Joe
Re: Crash in apr_file_close() (Linux)
On Fri, Oct 16, 2009 at 03:26:31PM +0700, Yuri V. Vishnevskiy wrote: Dear developers, For some reason in my program I need two file handlers of stdout stream. When I close these handlers I obtain an error on the second close: Bad file descriptor It seems like reasonable behaviour that the second apr_file_close() call should fail, though it certainly isn't obvious from reading the docs. How else do you expect it to behave? You cannot close an fd twice. If you want to be able to close the fds without closing stdout, then use apr_file_dup() to create the second handle to stdout. Regards, Joe
Re: Crash in apr_file_close() (Linux)
On Fri, Oct 16, 2009 at 10:54:13AM +0200, Mladen Turk wrote: On 16/10/09 10:39, Joe Orton wrote: It seems like reasonable behaviour that the second apr_file_close() call should fail, though it certainly isn't obvious from reading the docs. How else do you expect it to behave? You cannot close an fd twice. I suppose it shouldn't crash the apr. IMO either second call to open should fail or the call to close should fail, but without actually calling the close(). I think the reporter meant fail when they said crash. I don't see any reason why it would crash on Unix: the first call to apr_file_close() will set fd->filedes to -1 after calling close(), the second one will call close(-1) and fail with EBADF. Regards, Joe
Re: Crash in apr_file_close() (Linux)
On Fri, Oct 16, 2009 at 11:17:17AM +0200, Mladen Turk wrote: each invocation to apr_file_open_stdout creates a new apr_file_t so each fd->filedes has a value of STDOUT_FILENO. This is the same as calling: close(1); close(1); Oh, good point. This is rather horrible :(. I'm not sure what can be done about it, we can't change apr_file_open_stdout() to do a dup() internally, since that would break existing apps. Regards, Joe
CVE-2009-2699 - Solaris port fix
Since there is no specific reference to the fix for CVE-2009-2699 in the APR change history or elsewhere, can someone (hello Jeff) confirm that the patch referenced here: https://issues.apache.org/bugzilla/show_bug.cgi?id=47645#c13 is a sufficient fix for the vulnerability? Regards, Joe
apr_crypto API review
Sorry I haven't sent out this review sooner.

Broader issues:

** Having both apr_crypto_init() and apr_crypto_get_driver() seems to be redundant (unnecessary complexity in API and code). The lowest common denominator in initialization is NSS, which requires process-global configuration when initialized. You cannot obtain a driver of either crypto toolkit which is configured differently to how it was initialized by apr_crypto_init().

** Is the caller of this code expected to be crypto-toolkit agnostic or not? I am struggling to imagine in Fedora, why we'd want to build APR(-util) with support for *both* crypto toolkits at run-time. Why not just pick one at build time, like every other project in the entire world does?

** Generally the naming of the object types exposed seems to be highly non-intuitive. A driver creates a factory -- so APR has a genuine factory-building-factory! Joel Spolsky would be proud. A factory creates a block context. A block context can encrypt/decrypt data. So a block context is really a cipher context? Anybody with a passing knowledge of crypto would understand the latter but not the former, and still be bewildered by the use of driver and factory.

Specific issues:

** initialization via params array headers everywhere is completely unclear. there's no indication of what those params arrays should contain for either init/get_driver or apr_crypto_factory.

** the structure used to represent the factory object doesn't use factory in its name, which is obscure. also, why is it not opaque? Likewise the naming of apr_crypto_cleanup without reference to the factory is non-intuitive.

** given that a factory has a one-to-one relationship with a driver (right?) why must the poor caller pass both into functions?

** similarly for the apr_crypto_block_t * object; a one-to-one relationship between the block context and the driver, so why pass both every time?
** the description of the factory mentions keys, certs, and algorithms, but none of the backends appear to do anything at all with keys, certs, and algorithms in the factory init function.

** enum constants in apr_crypto_block_key_type_e and apr_crypto_block_key_mode_e are not namespace-safe (lack the APR_ prefix)

** various code style issues:

a) the brace following the function definition should come on a new line. line-wrapped function definitions are not indented correctly - see the example at http://httpd.apache.org/dev/styleguide.html

wrong:

static apr_status_t crypto_init(apr_pool_t *pool,
    const apr_array_header_t *params, int *rc) {

correct:

static apr_status_t crypto_init(apr_pool_t *pool,
                                const apr_array_header_t *params,
                                int *rc)
{

b) lots of unnecessary casts from void *. don't do that.

c) lots of apr_*alloc return value checks. don't do that.

d) also some chunks of commented-out code

** there should be #defines for the driver names, so the consumer of the API doesn't have to pluck them out of the air, if the consumer is expected to use those directly.

** the apu_err_t structure seems to be unnecessary, and pretending this is more general by putting it in apu_errno.h is wrong. It would be equivalent, and better API (no structures exposed, so easier to extend in the future) to have:

void apr_crypto_error(const apr_crypto_t *f, const char **reason,
                      const char **message, int *rc);

to return whatever is stored in f->result. note also that the OpenSSL backend initializes this structure but never uses it AFAICT. Returning the int rc at all is wrong because it's exposing an error code specific to the underlying toolkit, breaking the abstraction, per previous discussion.

** there are a bunch of #defines which are unused: APR_CRYPTO_CERT_BASE64, APR_CRYPTO_CERT_PFX, ...
Re: Crash in apr_file_close() (Linux)
On Fri, Oct 16, 2009 at 09:56:50AM -0500, William Rowe wrote: Mladen Turk wrote: What 99% users would need is apr_file_attach_std* set of functions giving the apr_file_t capable API without destroying the system std streams. And we should probably protect the singleton apr_file_open_std* against multiple invocations, because it always acts upon a singleton std stream object. We already have it. Simply don't call close() :) Yeah, I think this is a reasonable API constraint. Just check if apr is registering closure of these streams and perhaps ensure they are not closed at all if they are not overridden, certainly by apr 2.0. APR already does not register a cleanup for the stdio streams opened via apr_file_open_std*. Regards, Joe
Re: svn commit: r824767 - in /apr/apr/trunk: build/apr_hints.m4 configure.in include/apr.h.in include/apr_os_override.h include/arch/darwin/ include/arch/darwin/apr_darwin_types.h
On Tue, Oct 13, 2009 at 07:24:55PM -0500, William Rowe wrote: Joe; what you fail to realize is that your definition of 'nuts' is precisely Apple's definition of an ABI (parallel, multiple platform architectures). Eh? The Reality Distortion Field does not extend to the definition of ABI. Apple support multiple platform ABIs and can build executables which contain multiple chunks of object code, each compiled for a different ABI/architecture. If the compiler is doing this by default, the results of the configure script *cannot* be relied upon to be accurate across all the ABIs the compiler is using. What do you disagree with in that statement? This starts at AC_CHECK_SIZEOF and goes all the way to the detection of e.g. an expat library. How do you know whether the detected expat is built for all the ABIs your compiler is building for? Answer: you don't, you're guessing, and a configure script for which all the results are only half true is worse than useless. Reducing half of our configure-time tests to build-time compiler macro tests for one platform is not going to be maintainable. Why is it useful for Joe Random APR User to be building for more than one ABI in the first place? Why the heck would he care? Just pick a default and run with it by setting $CC to select an ABI. Regards, Joe
Re: svn commit: r824767 - in /apr/apr/trunk: build/apr_hints.m4 configure.in include/apr.h.in include/apr_os_override.h include/arch/darwin/ include/arch/darwin/apr_darwin_types.h
On Wed, Oct 14, 2009 at 08:00:20AM -0400, Jim Jagielski wrote: On Oct 14, 2009, at 7:00 AM, Joe Orton wrote: Reducing half of our configure-time tests to build-time compiler macro tests for one platform is not going to be maintainable. Why is it useful for Joe Random APR User to be building for more than one ABI in the first place? Why the heck would he care? Just pick a default and run with it by setting $CC to select an ABI. If APR is built with i386, and installed, then any apps built for 64 bit will core dump on it. The reverse is true as well. If APR changes $CC to force a platform ABI choice as I suggested, then any apps built against APR will be (must be) built using the same $CC, inherited via `apr-N-config --cc`. Otherwise all bets are off, as has always been true. The default is to build for *both*. And when this is done, then the bit sizes locked in apr.h may or may not be correct depending on how you build the app that will be *using* it. The __LP64__ tests in the apr.h (etal) files is to ensure that depending on how people are building *against an installed APR* will have the right values. Otherwise, anyone shipping an APR lib must built it for just 1 architecture and ensure that any consumers build against just that architecture. Packagers/redistributors can build APR twice (or N times) and merge the results to get a fat binary - this is a problem for Darwin packagers, not for upstream. From a 10 second google, this seems to be how Apple recommend you do it: http://developer.apple.com/opensource/buildingopensourceuniversal.html I would not be averse to including (configure --enable-flag conditional) a solution like the apr-wrapper.h we use in Fedora to allow distribution of a single arch-independent apr.h which will DTRT on multiple platforms, if there's wider support for that. But that wouldn't distract from the need to build the thing twice and merge the results. 
At the very least we should create an APR define that specifies what bitsize was used - APR_SIZEOF_VOIDP? Regards, Joe
Re: svn commit: r822263 - /apr/apr-util/branches/1.4.x/CHANGES
On Tue, Oct 13, 2009 at 09:14:54AM -0400, Jeff Trawick wrote: Including these details in CHANGES is unsustainable. CHANGES would be completely unusable if these iterative fixes for new features were included. The few people who need to know this kind of information will have to get it from Subversion history. I agree with everything Jeff is saying here, especially this paragraph. Regards, Joe
Re: svn commit: r821524 - in /apr/apr/trunk: CHANGES include/apr_network_io.h libapr.dsp network_io/beos/socketcommon.c network_io/os2/socket_util.c network_io/unix/socket_util.c test/sockchild.c test
On Fri, Oct 09, 2009 at 10:16:45PM +0200, Ruediger Pluem wrote: Thanks. General looks fine to me. Just one comment in the code. Do you commit? Thanks for catching the TIMEUP thing - fixed that and committed: http://svn.apache.org/viewvc?rev=824500&view=rev with otherwise only minor wording changes to the header. Regards, Joe
Re: svn commit: r821524 - in /apr/apr/trunk: CHANGES include/apr_network_io.h libapr.dsp network_io/beos/socketcommon.c network_io/os2/socket_util.c network_io/unix/socket_util.c test/sockchild.c test
On Thu, Oct 08, 2009 at 09:10:12PM +0200, Ruediger Pluem wrote: Sorry for being impatient here, but Joe any comment on the above? I would like to fix this.

Sorry for the delay. The closest parallel to this function in APR is apr_socket_atmark(). So I suggest this function should mirror that one as closely as possible. How about this instead?

Index: include/apr_network_io.h
===================================================================
--- include/apr_network_io.h	(revision 823151)
+++ include/apr_network_io.h	(working copy)
@@ -375,17 +375,19 @@
                                          apr_sockaddr_t *sa);
 
 /**
- * Check whether the send part of the socket is still open on the
- * peer or that there is still data in the socket's read buffer.
- * If this is false the next read on the socket will return APR_EOF.
+ * Determine whether the receive part of the socket has been closed by
+ * the peer (such that the subsequent apr_socket_read call would
+ * return APR_EOF), if the socket's read buffer is empty.  This
+ * function does not block waiting for I/O.
+ *
  * @param socket The socket to check
- * @remark
- * <PRE>
- * This function does not block on the socket but returns immediately in
- * any case.
- * </PRE>
+ * @param atreadeof If APR_SUCCESS is returned, *atreadeof is set to
+ *                  non-zero if a subsequent read will return APR_EOF
+ * @return an error is returned if it was not possible to determine the
+ *         status, in which case *atreadeof is not changed.
  */
-APR_DECLARE(int) apr_socket_is_connected(apr_socket_t *socket);
+APR_DECLARE(apr_status_t) apr_socket_atreadeof(apr_socket_t *sock,
+                                               int *atreadeof);
 
 /**
  * Create apr_sockaddr_t from hostname, address family, and port.
Index: network_io/unix/socket_util.c
===================================================================
--- network_io/unix/socket_util.c	(revision 823151)
+++ network_io/unix/socket_util.c	(working copy)
@@ -17,41 +17,58 @@
 #include "apr_network_io.h"
 #include "apr_poll.h"
 
-int apr_socket_is_connected(apr_socket_t *socket)
+apr_status_t apr_socket_atreadeof(apr_socket_t *sock, int *atreadeof)
 {
     apr_pollfd_t pfds[1];
-    apr_status_t status;
+    apr_status_t rv;
     apr_int32_t nfds;
 
+    /* The purpose here is to return APR_SUCCESS only in cases in
+     * which it can be unambiguously determined whether or not the
+     * socket will return EOF on next read.  In case of an unexpected
+     * error, return that. */
+
     pfds[0].reqevents = APR_POLLIN;
     pfds[0].desc_type = APR_POLL_SOCKET;
-    pfds[0].desc.s = socket;
+    pfds[0].desc.s = sock;
 
     do {
-        status = apr_poll(&pfds[0], 1, &nfds, 0);
-    } while (APR_STATUS_IS_EINTR(status));
+        rv = apr_poll(&pfds[0], 1, &nfds, 0);
+    } while (APR_STATUS_IS_EINTR(rv));
 
-    if (status == APR_SUCCESS && nfds == 1 &&
-        pfds[0].rtnevents == APR_POLLIN) {
+    if (rv == APR_TIMEUP) {
+        /* Read buffer empty -> subsequent reads would block, so,
+         * definitely not at EOF. */
+        *atreadeof = 0;
+        return APR_SUCCESS;
+    }
+    else if (rv) {
+        /* Some other error -> unexpected error. */
+        return rv;
+    }
+    else if (nfds == 1 && pfds[0].rtnevents == APR_POLLIN) {
        apr_sockaddr_t unused;
        apr_size_t len = 1;
-        char buf[1];
-
-        /* The socket might be closed in which case
-         * the poll will return POLLIN.
-         * If there is no data available the socket
-         * is closed.
-         */
-        status = apr_socket_recvfrom(&unused, socket, MSG_PEEK,
-                                     &buf[0], &len);
-        if (status == APR_SUCCESS && len)
-            return 1;
-        else
-            return 0;
+        char buf;
+
+        /* The socket is readable - peek to see whether it returns EOF
+         * without consuming bytes from the socket buffer. */
+        rv = apr_socket_recvfrom(&unused, sock, MSG_PEEK, &buf, &len);
+        if (rv == APR_EOF) {
+            *atreadeof = 1;
+            return APR_SUCCESS;
+        }
+        else if (rv) {
+            /* Read error - unexpected error. */
+            return rv;
+        }
+        else {
+            *atreadeof = 0;
+            return APR_SUCCESS;
+        }
     }
-    else if (APR_STATUS_IS_EAGAIN(status) || APR_STATUS_IS_TIMEUP(status)) {
-        return 1;
-    }
 
-    return 0;
+    /* Should not fall through here. */
+    return APR_EGENERAL;
 }
Index: test/testsock.c
===================================================================
--- test/testsock.c	(revision 823151)
+++ test/testsock.c	(working copy)
@@ -212,6 +212,7 @@
     apr_proc_t proc;
     apr_size_t length = STRLEN;
     char datastr[STRLEN];
+    int atreadeof = -1;
 
     sock = setup_socket(tc);
     if (!sock) return;
@@ -222,7 +223,9 @@
     APR_ASSERT_SUCCESS(tc, "Problem with receiving connection", rv);
 
     /* Check
Re: [PATCH] remove some transient details from APR-util 1.4.x CHANGES
On Fri, Oct 02, 2009 at 11:51:10AM -0400, Jeff Trawick wrote: IMO the pre-release commentary for a new feature isn't needed and detracts from a user's ability to see what is fixed or added with 1.4. +1
Re: svn commit: r821524 - in /apr/apr/trunk: CHANGES include/apr_network_io.h libapr.dsp network_io/beos/socketcommon.c network_io/os2/socket_util.c network_io/unix/socket_util.c test/sockchild.c test
On Sun, Oct 04, 2009 at 12:09:58PM -, rpl...@apache.org wrote: Author: rpluem Date: Sun Oct 4 12:09:57 2009 New Revision: 821524 URL: http://svn.apache.org/viewvc?rev=821524&view=rev Log: * Add apr_socket_is_connected to detect whether the remote side of a socket is still open. The origin of apr_socket_is_connected is r473278 from http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/proxy_util.c in httpd. The naming is not good here. A TCP socket is connected after the connection is successfully established via connect(), up until it is destroyed. From the name, what I would expect this function to do is indicate whether a non-blocking connect() has completed. This function will reliably tell you that the receive part of the socket has been shut down by the peer, in the case that the socket's read buffer is empty. Notably the socket may still be connected in that state, albeit in half-duplex mode. I'd suggest a name like: apr_socket_atreadeof apr_socket_at_read_eof or something similar? The API docs should reflect that the return value is 1-on-success/zero-on-failure (unusual for APR), and that the function does not block. Regards, Joe
Re: [VOTE] Release APR 1.3.7
On Mon, Jul 20, 2009 at 05:21:32PM -0400, Jim Jagielski wrote: Release candidate tarballs in usual location (http://apr.apache.org/dev/dist/) [not for distribution]. Vote starts as we speak and will run for 48 hours. +1, thanks for RMing. Tests and general sanity check pass on Fedora 11/x86_64. Regards, Joe
Re: svn commit: r794485 - /apr/apr/trunk/configure.in
On Thu, Jul 16, 2009 at 11:42:52AM +1000, Bojan Smojver wrote: On Thu, 2009-07-16 at 01:39 +, bo...@apache.org wrote: Use more elaborate checks for epoll_create1, dup3 and accept4. This requires review folks. There is code in the tests that can potentially hang the configure process. It does work on my Fedora 11 box (which has the functions) and on RHEL4 (which doesn't have them), but it would be really great to test this on other platforms, especially other Linux flavours.

The accept4() configure test looks a bit overcomplicated; it would be possible to simplify a bit to avoid the fork etc, by creating a socket with SOCK_NONBLOCK set, and simply calling accept4() and presuming the syscall is wired up if it returns EAGAIN.

But, on the issue of runtime/build-time feature detection: In the past I have argued we should never do runtime platform feature detection, i.e., if accept4() works at build-time, we can presume it works at run-time. I think I need to soften that position now; use of a modern userspace on an older kernel seems to be very widespread in Xen hosting sites. I shipped socket(,SOCK_CLOEXEC) support in neon and got lots of complaints, from users across the spectrum of distributions. So I fear we should probably do the same thing in APR. Any other opinions? (Debianites, feel free to send me the hate mail now ;)

It would simplify the configure script, e.g. for accept4 support we could use simply AC_CHECK_FUNCS and change apr_socket_accept something like this (completely untested): Thoughts?

Index: network_io/unix/sockets.c
===================================================================
--- network_io/unix/sockets.c	(revision 790537)
+++ network_io/unix/sockets.c	(working copy)
@@ -230,14 +230,21 @@
 apr_status_t apr_socket_accept(apr_socket_t **new, apr_socket_t *sock,
                                apr_pool_t *connection_context)
 {
-    int s;
+    int s, flags;
     apr_sockaddr_t sa;
 
     sa.salen = sizeof(sa.sa);
 
 #ifdef HAVE_ACCEPT4
+    /* Attempt to use accept4() but fall back on accept(). */
+    flags = FD_CLOEXEC;
     s = accept4(sock->socketdes, (struct sockaddr *)&sa.sa, &sa.salen,
                 SOCK_CLOEXEC);
+    if (s < 0 && errno == ENOSYS) {
+        flags = 0;
+        s = accept(sock->socketdes, (struct sockaddr *)&sa.sa, &sa.salen);
+    }
 #else
+    flags = 0;
     s = accept(sock->socketdes, (struct sockaddr *)&sa.sa, &sa.salen);
 #endif
@@ -321,10 +328,10 @@
         (*new)->local_interface_unknown = 1;
     }
 
-#ifndef HAVE_ACCEPT4
-    {
-        int flags;
-
+    /* Toggle the CLOEXEC bit on.  Note that this will be done either
+     * if no accept4() support was present, or, if it was present but
+     * failed with ENOSYS. */
+    if (flags == 0) {
         if ((flags = fcntl((*new)->socketdes, F_GETFD)) == -1)
             return errno;
 
@@ -332,7 +339,6 @@
         if (fcntl((*new)->socketdes, F_SETFD, flags) == -1)
             return errno;
     }
-#endif
 
     (*new)->inherit = 0;
     apr_pool_cleanup_register((*new)->pool, (void *)(*new), socket_cleanup,
Re: svn commit: r794485 - /apr/apr/trunk/configure.in
On Thu, Jul 16, 2009 at 07:05:46PM +1000, Bojan Smojver wrote: On Thu, 2009-07-16 at 09:27 +0100, Joe Orton wrote: In the past I have argued we should never do runtime platform feature detection, i.e., if accept4() works at build-time, we can presume it works at run-time. I think I need to soften that position now; use of a modern userspace on an older kernel seems to be very widespread in Xen hosting sites. I shipped socket(,SOCK_CLOEXEC) support in neon and got lots of complaints, from users across the spectrum of distributions. Shouldn't these people build their runtime using the older kernel then? The problem is that these Xen hosting sites use a specific guest kernel which they have tuned/customised/whatever. Then they let people run an otherwise vanilla Linux distribution on top of that. It is bad enough that we need to do "will it really work" gymnastics at build time. If we go down the road of runtime checks, we'll soon be testing for all kinds of stuff, wasting lots of CPU cycles. I know, I totally agree. The only mitigation is that those who waste CPU cycles are those who run with mismatched kernel/userspace. It's the added complexity which scares me the most. Maybe let's just do a release with this code as-is, and see how many people complain, and then revisit the decision if necessary. PS. This is obviously some kind of Xen bug (i.e. inability to run the latest kernel). Shouldn't they be fixing it? I think this situation arose because of the historical lack of a stable hypervisor/guest interface for running paravirt. guests, which did get fixed more recently (VMI?), but, I'm not really an expert here, to say the least. Regards, Joe