On 17.08.2012 19:34, Jim Jagielski wrote:
The pre-release test tarballs for Apache httpd 2.4.3 can be found
at the usual place:

        http://httpd.apache.org/dev/dist/

I'm calling a VOTE on releasing these as Apache httpd 2.4.3 GA.
NOTE: The -deps tarballs are included here *only* to make life
easier for the tester. They will not be, and are not, part
of the official release.

[X] +1: Good to go
[ ] +0: meh
[ ] -1: Danger Will Robinson. And why.

+1

Interesting detail: I noticed that when testing with LogLevel trace7, the error logs were much bigger when building against OpenSSL 1.0.0g than when using OpenSSL 1.0.1c plus patch.

It seems that in some situations with 1.0.0 about 64 KB of data is exchanged and dumped to the error log, whereas with 1.0.1 it is less than 100 bytes. I checked only one occurrence, and there it seemed to be related to renegotiation. The protocols spoken were TLSv1 and TLSv1.2, respectively. I found no indication in the OpenSSL change log of why this should happen.
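As a side note, httpd 2.4 supports per-module log levels, which keeps such trace comparisons from blowing up the whole error log; a minimal config sketch (paths are hypothetical):

```apache
# Raise only mod_ssl to trace7; all other modules stay at info.
LogLevel info ssl:trace7
ErrorLog logs/error_log
```

With that, diffing the error log sizes of the two OpenSSL builds mostly reflects mod_ssl's own dump output.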

Test Details:

- Sigs and hashes OK
- contents of tarballs identical
- contents of tag and tarballs identical
  except for expected deltas
  (we could clean up some m4 files in apr-util/xml/expat/conftools
   at the end of buildconf, not a regression)
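The tag-vs-tarball comparison above boils down to a recursive diff with a filter for the deltas buildconf is expected to create. A toy sketch (all file and directory names here are invented, standing in for an svn export of the tag and the unpacked tarball):

```shell
# Simulate a tag checkout and a tarball unpack with a tiny tree each.
mkdir -p tag/srclib tarball/srclib
echo "source" > tag/server.c
echo "source" > tarball/server.c
# buildconf leaves generated m4 files behind in the tarball only.
echo "m4 junk" > tarball/srclib/aclocal.m4
# List the deltas, filtering the ones buildconf is expected to create.
diff -rq tag tarball | grep -v 'aclocal.m4' || echo "only expected deltas"
```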

Built on

- Solaris 8+10 SPARC as 32-bit binaries
- SLES 10 (32/64 bit)
- SLES 11 (64 bit)
- RHEL 5 and 6 (64 bit)

- with default (shared) and static modules
- with module sets none, few, most, all, reallyall and default
  (always mod_privileges disabled)
- using --enable-load-all-modules
- against "included" APR/APU from deps tarball and
  external APR/APU 1.4.6/1.4.1

- using external libraries
  - expat 2.1.0
  - pcre 8.30
  - openssl 1.0.1c (when using bundled APR)
    and openssl 1.0.0g (when using external APR)
  - lua 5.2.1
  - distcache 1.5.1
  - libxml2 2.8.0

- Tool chain:
    - platform gcc except for Solaris
      (gcc 4.1.2 for Solaris 8 and 4.7.1 for Solaris 10)
    - CFLAGS: -O2 -g -Wall -fno-strict-aliasing
              (and -mcpu=v9 on Solaris)
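As an illustration, one of the build variants above could be driven roughly like this (the prefix paths are invented; the flag names are standard httpd 2.4 configure options):

```shell
# Shared "reallyall" modules minus mod_privileges, external APR/APR-util,
# with the CFLAGS listed above. Install prefix is hypothetical.
CFLAGS="-O2 -g -Wall -fno-strict-aliasing" \
./configure \
    --prefix=/opt/httpd-2.4.3 \
    --enable-mods-shared=reallyall \
    --disable-privileges \
    --with-apr=/opt/apr-1.4.6 \
    --with-apr-util=/opt/apu-1.4.1
make && make install
```

The bundled-deps variants instead unpack the -deps tarball into srclib/ and drop the two --with-apr* options.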

All builds succeeded except for

- SLES 10 (32 and 64 bit): many static builds stopped with an error
  or crashed while linking httpd,
  IMHO because of too many command-line parameters.
  Not a regression.
  - only with "reallyall", "all" or "most" modules
- SLES 11: one build stopped in libtool with a "Memory error"
  when linking mod_proxy_html. It proceeded correctly
  when calling make install afterwards, so it seems to
  be an OS / shell / resource problem.
  - only with shared "all" modules and bundled apr

I then updated the installed ksh from 93r to 93s, after which all builds completed.

None of the builds against the bundled APR detected crypto support,
so they were built without mod_session_crypto.
That's a known problem in apr-util configure and not a regression.
My build script workaround was broken this time.

Tested for

- Solaris 8+10 (32), SLES 10 (32/64), SLES 11 (64), RHEL 5+6 (64)
- MPMs prefork, worker, event (except for Solaris 8 - no event)
- default (shared) and static modules
- log levels info, debug and trace8
- module set reallyall (~117 modules)
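The test runs were presumably done with the Apache::Test perl framework from the httpd-test repository; a sketch of one pass (install paths invented):

```shell
# Check out the framework and point it at the freshly built httpd.
svn co https://svn.apache.org/repos/asf/httpd/test/framework/trunk perl-framework
cd perl-framework
perl Makefile.PL -apxs /opt/httpd-2.4.3/bin/apxs
t/TEST                       # run the whole suite
t/TEST -v t/ssl/pr12355.t    # re-run a single flaky test verbosely
```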

All tests passed with the following exceptions:

a Test 5 in t/modules/dav.t:
  8 out of 240 runs had the "created" time after
  the "modified" time.
  This seems to be a platform issue; all tests were done on NFS,
  many on virtualized guests.
  Not a regression.

b Test 8 in t/ssl/pr12355.t:
  Of the 240 runs there were two (on RHEL 5) that failed this test:
  60000 bytes were posted, but only 40934 bytes were received.
  Not reproducible, very rare.
  PR 12355 is: POST incompatible w/ renegotiate https: connection
  Not a regression.

c Test 162 in t/ssl/proxy.t:
  Of the 240 runs there was one (on RHEL 5) that failed this test:
  60000 bytes were posted, but only 40934 bytes were received.
  Not reproducible, very rare.
  PR 12355 is: POST incompatible w/ renegotiate https: connection

d On Solaris 8 one test run aborted because at some
  point the web server could no longer acquire locks. The
  children all died and finally the parent process exited.
  The testing file system was on NFS and multiple servers were
  tested in parallel, so lock trouble there is plausible. When
  rerunning the tests, they succeeded.

Regards,

Rainer
