Re: [RESULT] (Was: Re: [VOTE] Release Apache httpd 2.4.25 as GA)
On Mon, Dec 19, 2016 at 12:32 PM, Jim Jagielskiwrote: > With 72 hours up, and sufficient +1 (binding) votes, I call > this vote CLOSED with the result of PASS. > > I have gone ahead and moved the release artifacts over the > the required place for the mirrors to sync up. > Thanks for RM'ing! > I will be away (In Tyson's Corner, VA) later today and > all day tomorrow, and the 1st half of Weds), so if someone > wants to take on the announcing part, that would be great. > I'll have limited cycles and external net access. > I'm on it.
Re: T of 2.4.24
On Fri, Dec 9, 2016 at 1:44 AM, Yann Ylavic <ylavic@gmail.com> wrote: > On Thu, Dec 8, 2016 at 7:16 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > It does raise the question again of whether the httpd project can > distribute > > a source code package on www.apache.org/dist/httpd/ which is not voted > > on by the project, and whether it violates the spirit of the pmc > consensus > > to no longer be the distributor of dependencies which frequently fall > into > > a poorly maintained/updated state. > > Current httpd-2.4.23-deps.tar.*/srclib seem to contain APR(-util) > only, no expat or PCRE, wasn't this decision taken already? > The decision in Nov 2008 to drop pcre was followed, that was not in any -deps 'not-a-release' tarball. Expat was more deeply embedded; httpd-2.4.23/srclib/apr-util/xml/expat/ > httpd-2.2.32.tar.*/srclib contain PCRE 5.0 (according to Changelog), > no expat, but it looks off topic for this T > Yup, I'm working on language that would accompany httpd-2.2.32.tar.gz that the distribution includes ancient, bundled legacy binary-compatible pcre and expat, and that users are strongly cautioned to provision pcre, expat and the most current versions of apr and apr-util themselves from the respective projects or their OS distribution.
Re: Better ./configure --help wording for paths?
My first attempt at this was a major change to how --with-pcre= now works on trunk, it is now a path to .../pcre[2]-config, .../bin/pcre[2]-config, (first it tries the {target}-pcre[2]-config for cross-compilation, and then the undecorated flavor.) I did hack around providing either a full path including the script name, or only the config script name (e.g. --with-pcre=pcre-config to revert to a legacy pcre-8.x). That causes some extra hassle but seems like a good idea for these edge cases. In these cases, or where --with-pcre is simply not given, we actually walk the $PATH, and --with-pcre=PATH can actual be a traditional path list in place of default $PATH. So it seems --with-pcre=PATH is the right textual description for this specific case, and changing these without potentially providing a list of paths would be erroneous. I'd welcome any feedback and review of trunk's --with-pcre= detection. Bill On Wed, Nov 16, 2016 at 6:40 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > Can we agree on a keyword/wording convention here for httpd-2.5-dev? > > --with-apr=PATH prefix for installed APR or the full path to > --with-pcre=PATHUse external PCRE library > --with-valgrind[=DIR] Enable code to reduce valgrind false positives > (optionally: set path to valgrind headers) > --with-distcache=PATH Distcache installation directory > --with-z=PATH use a specific zlib library > --with-libxml2=DIR Specify location of libxml2 > --with-brotli=PATH Brotli installation directory > --with-lua=PATH Path to the Lua 5.2/5.1 prefix > --with-serf=PATHSerf client library > --with-ssl=PATH OpenSSL installation directory > --with-nghttp2=PATH nghttp2 installation directory > directory. > > Contrast to apr-2.0 and the autoconf keywords and text; > > Installation directories: > --prefix=PREFIX install architecture-independent files in PREFIX > [/usr/local/apr] > --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX > [PREFIX] > > By default, `make install' will install all the files in > `/usr/local/apr/bin', `/usr/local/apr/lib' etc. You can specify > an installation prefix other than `/usr/local/apr' using `--prefix', > for instance `--prefix=$HOME'. > > For better control, use the options below. > > Fine tuning of the installation directories: > --bindir=DIRuser executables [EPREFIX/bin] > --sbindir=DIR system admin executables [EPREFIX/sbin] > --libexecdir=DIRprogram executables [EPREFIX/libexec] > --sysconfdir=DIRread-only single-machine data [PREFIX/etc] > --sharedstatedir=DIRmodifiable architecture-independent data > [PREFIX/com] > --localstatedir=DIR modifiable single-machine data [PREFIX/var] > --libdir=DIRobject code libraries [EPREFIX/lib] > --includedir=DIRC header files [PREFIX/include] > --oldincludedir=DIR C header files for non-gcc [/usr/include] > --datarootdir=DIR read-only arch.-independent data root > [PREFIX/share] > --datadir=DIR read-only architecture-independent data > [DATAROOTDIR] > --infodir=DIR info documentation [DATAROOTDIR/info] > --localedir=DIR locale-dependent data [DATAROOTDIR/locale] > --mandir=DIRman documentation [DATAROOTDIR/man] > --docdir=DIRdocumentation root [DATAROOTDIR/doc/PACKAGE] > --htmldir=DIR html documentation [DOCDIR] > --dvidir=DIRdvi documentation [DOCDIR] > --pdfdir=DIRpdf documentation [DOCDIR] > --psdir=DIR ps documentation [DOCDIR] > --with-sysroot=DIR Search for dependent libraries within DIR > --with-installbuilddir=DIR location to store APR build files (defaults > to '${datadir}/build') > --with-efence[=DIR] path to Electric Fence installation > --with-valgrind[=DIR] Enable code to teach valgrind about apr pools > (optionally: set path to valgrind headers) > --with-egd[=DIR]use EGD-compatible socket > --with-openssl=DIR specify location of OpenSSL > --with-nss=DIR specify location of NSS > --with-commoncrypto=DIR specify location of CommonCrypto > --with-gdbm=DIR enable GDBM support > --with-ndbm=PATHFind the NDBM header and library in > `PATH/include' > and `PATH/lib'. If PATH is of the form > `HEADER:LIB', > library in LIB. If you omit the `=PATH' part > --with-berkeley-db=PATH Find the Berkeley DB header and library in > `PATH/include'
Re: svn commit: r1775186 - /httpd/test/framework/trunk/t/apache/http_strict.t
On Mon, Dec 19, 2016 at 4:20 PM,wrote: > Author: rjung > Date: Mon Dec 19 22:20:12 2016 > New Revision: 1775186 > > URL: http://svn.apache.org/viewvc?rev=1775186=rev > Log: > Skip tests that need a cgi module if cgi > is not available. > > Modified: > httpd/test/framework/trunk/t/apache/http_strict.t > > Modified: httpd/test/framework/trunk/t/apache/http_strict.t > URL: http://svn.apache.org/viewvc/httpd/test/framework/trunk/t/ > apache/http_strict.t?rev=1775186=1775185=1775186=diff > > == > --- httpd/test/framework/trunk/t/apache/http_strict.t (original) > +++ httpd/test/framework/trunk/t/apache/http_strict.t Mon Dec 19 22:20:12 > 2016 > @@ -122,6 +122,10 @@ foreach my $t (@test_cases) { > my $decoded; > > if ($req =~ s/^R//) { > +if (!have_cgi) { > +skip "Skipping test without CGI module"; > +next; > +} > Does this work in the presence of either cgi and/or cgid?
Re: [VOTE] Release Apache httpd 2.4.25 as GA
On Fri, Dec 16, 2016 at 12:29 PM, Jim Jagielskiwrote: > At long, long last, the pre-release test tarballs for Apache httpd > version 2.4.25 can be found at the usual place: > > http://httpd.apache.org/dev/dist/ > > I'm calling a VOTE on releasing these as Apache httpd 2.4.25 GA. > Regressions noted; Build fails at mod_socache_memcache.mak on Windows command line 'nmake' invocation, fixed in 2.4.x branch with r1774650. GUI build of Windows works; cmake-provisioned build works, in light of security fixes, decided this won't-fix defect, is not a showstopper. +1 for httpd-2.4.25.tar.gz release. httpd-2.4.25/srclib/apr-util/xml/expat contains Expat 1.95.7. For reference, 2.0 derived from 1.95.8 final, which was not a security release. 2.1.0, 2.1.1 and 2.2.0 have resolved a significant number of security defects, noted here; http://expat.sourceforge.net/ Note apr-util-1.6.x branch (and apr-trunk) no longer bundles any expat sources. -1 to distribute httpd-2.4.25-deps.tar.gz from httpd.
Re: [VOTE] Release Apache httpd 2.4.25 as GA
It isn't based on the modules/*.so contents, but based on the draft conf/httpd.conf indicated by apxs. I had the converse problem, with --enable-modules-all I accidentally loaded both cgi and cgid. On Dec 18, 2016 02:43, "Nick Kew"wrote: On Sat, 2016-12-17 at 20:32 -0500, Eric Covener wrote: > I think your t/conf/extra.conf is out of date and has the +ExecCGI > masked out by . So you are getting the perl script > instead of its output setting nasty headers. [pre-breakfast post: not yet at the desktop where I ran the tests] For what it's worth, this was a pristine trunk repo of the test suite as of date about 12 hours ago. As ever, built/run in a directory newly created by lndir so as never to pollute the repo. However, there's a message earlier in the tests: CGI tests skipped because neither mod_cgi nor mod_cgid were found (nonsense: mod_cgid.so is in its expected place in /modules/ ). Looks like some gremlin in the test suite, plus running those tests when CGI tests had been skipped. -- Nick Kew
Re: [VOTE] Release Apache httpd 2.4.25 as GA
t/TEST -clean might help On Dec 17, 2016 19:32, "Eric Covener" <cove...@gmail.com> wrote: > I think your t/conf/extra.conf is out of date and has the +ExecCGI > masked out by . So you are getting the perl script > instead of its output setting nasty headers. > > On Sat, Dec 17, 2016 at 8:27 PM, Nick Kew <n...@apache.org> wrote: > > On Sat, 17 Dec 2016 18:35:22 -0600 > > William A Rowe Jr <wr...@rowe-clan.net> wrote: > > > >> On Dec 17, 2016 17:22, "Nick Kew" <n...@apache.org> wrote: > >> > >> > >> Got some test errors with the new stuff. Investigating. > >> > >> Test Summary Report > >> --- > >> t/apache/http_strict.t(Wstat: 0 Tests: 78 Failed: 5) > >> Failed tests: 72-75, 77 > >> > >> > >> Really should not occur and I haven't seen this. Shoot me the -v > >> (erbose) output from your t/apache/http_strict.t please? > > > > Cut here. Bedtime now (coming up for 01:30), but if I > > find time tomorrow I may play with the testcases under gdb. > > > > They're all cases where the server returns 200 when the > > testcase expects an error. > > > > 72: > > # SENDING: > > # GET /apache/http_strict/send_hdr.pl?OiBiYXI= HTTP/1.0\r\n\r\n > > # DECODED: : bar > > # RESPONSE: > > # HTTP/1.1 200 OK\r\n > > # Date: Sun, 18 Dec 2016 01:17:34 GMT\r\n > > # Server: Apache/2.4.25 (Unix)\r\n > > # Last-Modified: Sun, 18 Dec 2016 00:13:28 GMT\r\n > > # ETag: "226-543e3ac24b8ae"\r\n > > # Accept-Ranges: bytes\r\n > > # Content-Length: 550\r\n > > # DMMATCH1: 1\r\n > > # Connection: close\r\n > > # \r\n > > # #!/usr/bin/perl\n > > # # WARNING: this file is generated, do not edit\n > > # # generated on Sun Dec 18 00:13:28 2016\n > > # # 01: Apache-Test/lib/Apache/TestConfig.pm:961\n > > # # 02: Apache-Test/lib/Apache/TestConfig.pm:1051\n > > # # 03: Apache-Test/lib/Apache/TestMM.pm:142\n > > # # 04: Makefile.PL:26\n > > # \n > > # BEGIN { eval { require blib && blib->import; } }\n > > # use MIME::Base64;\n > > # use strict;\n > > # use warnings;\n > > # \n > > # print "Content-type: text/plain\\r\\n > > # ";\n > > # print decode_base64($ENV{QUERY_STRING}), "\\r\\n > > # ";\n > > # print "\\r\\n > > # ";\n > > # print "Hi!\\n > > # ";\n > > # print "SERVERNAME=$ENV{SERVER_NAME}\\n > > # ";\n > > # print "HTTP_HOST=$ENV{HTTP_HOST}\\n > > # ";\n > > # > > # expecting 500, got 200 > > > > 73: > > # SENDING: > > # GET /apache/http_strict/send_hdr.pl?RgBvbzogYmFy HTTP/1.0\r\n\r\n > > # DECODED: F\x00oo: bar > > # RESPONSE: > > # HTTP/1.1 200 OK\r\n > > # Date: Sun, 18 Dec 2016 01:17:34 GMT\r\n > > # Server: Apache/2.4.25 (Unix)\r\n > > # Last-Modified: Sun, 18 Dec 2016 00:13:28 GMT\r\n > > # ETag: "226-543e3ac24b8ae"\r\n > > # Accept-Ranges: bytes\r\n > > # Content-Length: 550\r\n > > # DMMATCH1: 1\r\n > > # Connection: close\r\n > > # \r\n > > > > 74: > > # SENDING: > > # GET /apache/http_strict/send_hdr.pl?RgFvbzogYmFy HTTP/1.0\r\n\r\n > > # DECODED: F\x01oo: bar > > # RESPONSE: > > # HTTP/1.1 200 OK\r\n > > # Date: Sun, 18 Dec 2016 01:17:34 GMT\r\n > > # Server: Apache/2.4.25 (Unix)\r\n > > # Last-Modified: Sun, 18 Dec 2016 00:13:28 GMT\r\n > > # ETag: "226-543e3ac24b8ae"\r\n > > # Accept-Ranges: bytes\r\n > > # Content-Length: 550\r\n > > # DMMATCH1: 1\r\n > > # Connection: close\r\n > > # \r\n > > > > 75: > > # SENDING: > > # GET /apache/http_strict/send_hdr.pl?RgpvbzogYmFy HTTP/1.0\r\n\r\n > > # DECODED: F\noo: bar > > # RESPONSE: > > # HTTP/1.1 200 OK\r\n > > # Date: Sun, 18 Dec 2016 01:17:34 GMT\r\n > > # Server: Apache/2.4.25 (Unix)\r\n > > # Last-Modified: Sun, 18 Dec 2016 00:13:28 GMT\r\n > > # ETag: "226-543e3ac24b8ae"\r\n > > # Accept-Ranges: bytes\r\n > > # Content-Length: 550\r\n > > # DMMATCH1: 1\r\n > > # Connection: close\r\n > > # \r\n > > > > > > 77: > > # SENDING: > > # GET /apache/http_strict/send_hdr.pl?Rm9vOiBiAWFy HTTP/1.0\r\n\r\n > > # DECODED: Foo: b\x01ar > > # RESPONSE: > > # HTTP/1.1 200 OK\r\n > > # Date: Sun, 18 Dec 2016 01:12:21 GMT\r\n > > # Server: Apache/2.4.25 (Unix)\r\n > > # Last-Modified: Sun, 18 Dec 2016 00:13:28 GMT\r\n > > # ETag: "226-543e3ac24b8ae"\r\n > > # Accept-Ranges: bytes\r\n > > # Content-Length: 550\r\n > > # DMMATCH1: 1\r\n > > # Connection: close\r\n > > # \r\n > > > > -- > Eric Covener > cove...@gmail.com >
Re: [VOTE] Release Apache httpd 2.4.25 as GA
On Dec 17, 2016 17:22, "Nick Kew"wrote: Got some test errors with the new stuff. Investigating. Test Summary Report --- t/apache/http_strict.t(Wstat: 0 Tests: 78 Failed: 5) Failed tests: 72-75, 77 Really should not occur and I haven't seen this. Shoot me the -v (erbose) output from your t/apache/http_strict.t please?
Re: [VOTE] Release Apache httpd 2.4.25 as GA
Bonus reminder; if you are working from an existing checkout, top level Makefile.PL plus make must be rerun to provision all new tests correctly. On Dec 17, 2016 7:13 AM, "Jim Jagielski" <j...@jagunet.com> wrote: > As a reminder, when Joe added those tests a few days ago, > it was for: > > URL: http://svn.apache.org/viewvc?rev=1774010=rev > Log: > Add basic test case for mod_ext_filter & specific test for PR 60375. > > > > > On Dec 17, 2016, at 7:35 AM, Rainer Jung <rainer.j...@kippdata.de> > wrote: > > > > Am 17.12.2016 um 10:46 schrieb Marion & Christophe JAILLET: > >> Proposed fix in r1774728. > >> > >> A solution, stating that the tests have been skipped because of sed > >> location, would be better, though. > > > > I switched the test to using a simple perl script instead of sed, so > that we have no platform dependencies any more. It was only using the > search-and-replace feature of sed, which is trivial in perl too. > > > > I did not change the failing tests to TODO though, because I'm not sure > about the original intention. > > > >> Le 16/12/2016 à 21:46, Jacob Champion a écrit : > >>> On 12/16/2016 12:44 PM, William A Rowe Jr wrote: > >>>> I expect this is a bug in the 2.4.x branch. However, I found the same > >>>> exceptions in 2.4.23 tag, so this is not a regression, and Joe is > still > >>>> researching what is happening here, so it doesn't seem to be something > >>>> we would want to hold up a release for. > >>> > >>> Yeah, in my case I think it's just that the tests don't exit after > >>> skipping (i.e. 48 out of 24 tests are run...). Not a showstopper. > > > > Regards, > > > > Rainer > > > >
Re: [VOTE] Release Apache httpd 2.4.25 as GA
On Sat, Dec 17, 2016 at 12:51 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > Color me confused, is this w/w-o the hack for APR_HAVE_TIME_T? > Nevermind, that issues is entirely resolved. Thanks for confirmation! > On Sat, Dec 17, 2016 at 12:09 AM, Jan Ehrhardt <php...@ehrhardt.nl> wrote: > >> Jim Jagielski in gmane.comp.apache.devel (Fri, 16 Dec 2016 13:29:04 >> -0500): >> >[x] +1: Good to go >> >> Running without problems on my dev server now. >> Windows 2008 R2, VC9 x86 and VC11, x64 >> -- >> Jan >> >> >
Re: [VOTE] Release Apache httpd 2.4.25 as GA
Color me confused, is this w/w-o the hack for APR_HAVE_TIME_T? On Sat, Dec 17, 2016 at 12:09 AM, Jan Ehrhardtwrote: > Jim Jagielski in gmane.comp.apache.devel (Fri, 16 Dec 2016 13:29:04 > -0500): > >[x] +1: Good to go > > Running without problems on my dev server now. > Windows 2008 R2, VC9 x86 and VC11, x64 > -- > Jan > >
Re: svn commit: r1774650 - /httpd/httpd/branches/2.4.x/modules/cache/mod_socache_memcache.mak
I'm torn whether this is a showstopper or not. In all fairness, feature/enhancement crap modified 2.4.x branch as of; Modified *Tue Dec 13 13:57:02 2016 UTC* (3 days, 16 hours ago) by *jim* So the question is, is it fair to other platform maintainers to deal with enhancements requiring build structure refactoring less than 3 days prior to T? (And if you presume me a win32 platform maintainer, you would be mistaken. I resuscitated my win32 environments as of 4pm today across all my VMs, after some brutal force-update- reboot crap by M$ after I configured it never to do that again. and again. and...) I'll leave this question for the RM to decide. Right now I can offer no vote for or against. We have another thread on how to fix this long term. I haven't used the packaged .mak/.dep files in a decade, since I forked the APR core build from the httpd build, and that isn't reflected in our httpd distro. At this point, I'm not looking back, the CMake solution on trunk seems like the one and only one correct solution, and some proposal to scuttle .dsp from that tree will be arriving once I have a reasonable solution to CMake's 'f' you, it is a fixed root' nightmare. If this were an enhancement release, I would shrug my shoulders and say 'feh, small bug, patch this.' Since this release is overloaded with 10 months of security fixes, I'd suggest an entirely correct tarball is to everyone's' benefit. That's why I can't +1 this otherwise satisfactory tag. I won't vote against this release, the sources are as best as we can accomplish, correct, build issues notwithstanding. On Fri, Dec 16, 2016 at 12:48 PM,wrote: > Author: wrowe > Date: Fri Dec 16 18:48:49 2016 > New Revision: 1774650 > > URL: http://svn.apache.org/viewvc?rev=1774650=rev > Log: > Fix exported .mak output of mod_socache_memcache for '../generators' > dependency > > Modified: > httpd/httpd/branches/2.4.x/modules/cache/mod_socache_memcache.mak > > Modified: httpd/httpd/branches/2.4.x/modules/cache/mod_socache_ > memcache.mak > URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/ > modules/cache/mod_socache_memcache.mak?rev=1774650= > 1774649=1774650=diff > > == > --- httpd/httpd/branches/2.4.x/modules/cache/mod_socache_memcache.mak > (original) > +++ httpd/httpd/branches/2.4.x/modules/cache/mod_socache_memcache.mak Fri > Dec 16 18:48:49 2016 > @@ -62,7 +62,7 @@ CLEAN : > if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)" > > CPP=cl.exe > -CPP_PROJ=/nologo /MD /W3 /Zi /O2 /Oy- /I "../../srclib/apr-util/include" > /I "../../srclib/apr/include" /I "../../include" /D "NDEBUG" /D "WIN32" /D > "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_socache_memcache_src" /FD /c > +CPP_PROJ=/nologo /MD /W3 /Zi /O2 /Oy- /I "../../srclib/apr-util/include" > /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D > "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fo"$(INTDIR)\\" > /Fd"$(INTDIR)\mod_socache_memcache_src" /FD /c > > .c{$(INTDIR)}.obj:: > $(CPP) @<< > @@ -166,7 +166,7 @@ CLEAN : > if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)" > > CPP=cl.exe > -CPP_PROJ=/nologo /MDd /W3 /Zi /Od /I "../../srclib/apr-util/include" /I > "../../srclib/apr/include" /I "../../include" /D "_DEBUG" /D "WIN32" /D > "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_socache_memcache_src" /FD > /EHsc /c > +CPP_PROJ=/nologo /MDd /W3 /Zi /Od /I "../../srclib/apr-util/include" /I > "../../srclib/apr/include" /I "../../include" /I "../generators" /D > "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fo"$(INTDIR)\\" > /Fd"$(INTDIR)\mod_socache_memcache_src" /FD /EHsc /c > > .c{$(INTDIR)}.obj:: > $(CPP) @<< > > >
Re: [VOTE] Release Apache httpd 2.4.25 as GA
On Fri, Dec 16, 2016 at 3:01 PM, Rainer Jungwrote: > Am 16.12.2016 um 21:11 schrieb Jacob Champion: > >> On 12/16/2016 10:29 AM, Jim Jagielski wrote: >> >>> I'm calling a VOTE on releasing these as Apache httpd 2.4.25 GA. >>> >> >> mod_ext_filter tests are failing for me on Ubuntu 16.04 x64, but I >> *think* this is due to a bug in the tests as opposed to a regression, >> related to the conversation in [1]. Does anyone else see this? >> > > Tests 5-24 fail for me as well, status 200 instead of 413. But I thought > it might be due to > > https://bz.apache.org/bugzilla/show_bug.cgi?id=60375 > > not yet being fixed. The attached patch was not yet applied, not even for > trunk. So probably the tests should be marked as TODO? > +1 on TODO for tests 5-24 under 2.4.x. These tests already pass on trunk, which is odd if the proposed bugfix isn't there yet :)
Re: [VOTE] Release Apache httpd 2.4.25 as GA
On Fri, Dec 16, 2016 at 2:11 PM, Jacob Championwrote: > On 12/16/2016 10:29 AM, Jim Jagielski wrote: > >> I'm calling a VOTE on releasing these as Apache httpd 2.4.25 GA. >> > > mod_ext_filter tests are failing for me on Ubuntu 16.04 x64, but I *think* > this is due to a bug in the tests as opposed to a regression, related to > the conversation in [1]. Does anyone else see this? > > --Jacob > > [1] https://lists.apache.org/thread.html/a11542ab3c06511e87bde43 > 27dd1cfb72a027ac7bae895a18a772c5b@%3Ccvs.httpd.apache.org%3E > I expect this is a bug in the 2.4.x branch. However, I found the same exceptions in 2.4.23 tag, so this is not a regression, and Joe is still researching what is happening here, so it doesn't seem to be something we would want to hold up a release for.
Re: svn commit: r1774650 - /httpd/httpd/branches/2.4.x/modules/cache/mod_socache_memcache.mak
On Fri, Dec 16, 2016 at 1:41 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Fri, Dec 16, 2016 at 1:36 PM, Rainer Jung <rainer.j...@kippdata.de> > wrote: > >> Thanks, didn't notice the mak files, because trunk doesn't have them. >> >> I just noticed that the RSC_PROJ lines in the mak files also contain >> include directories but not the one for "generators", in nove of the cache >> module mak files. I have no idea what RSC_PROJ is used for but it looks >> suspicious to only contain parts of the include path. >> >> Example line: >> >> RSC_PROJ=/l 0x409 /fo"$(INTDIR)\mod_socache_memcache.res" /i >> "../../include" /i "../../srclib/apr/include" /d "NDEBUG" /d >> BIN_NAME="mod_socache_memcache.so" /d LONG_NAME="socache_memcache_module >> for Apache" >> > > Yup, not needed. This is only used to emit the .res -> .exe/.dll stubs > that describe the module, and only need to pick up the HTTPD version > tag, so these have nothing to do with > > Looks like we have an issue with mod_socache_redis as well, I'll wire > that appropriately in the next couple minutes. > > Arch-specific, no backport voting required. > Redis is not (yet) backported, nothing needed here for 2.4.x branch.
Re: svn commit: r1774650 - /httpd/httpd/branches/2.4.x/modules/cache/mod_socache_memcache.mak
On Fri, Dec 16, 2016 at 1:36 PM, Rainer Jungwrote: > Thanks, didn't notice the mak files, because trunk doesn't have them. > > I just noticed that the RSC_PROJ lines in the mak files also contain > include directories but not the one for "generators", in nove of the cache > module mak files. I have no idea what RSC_PROJ is used for but it looks > suspicious to only contain parts of the include path. > > Example line: > > RSC_PROJ=/l 0x409 /fo"$(INTDIR)\mod_socache_memcache.res" /i > "../../include" /i "../../srclib/apr/include" /d "NDEBUG" /d > BIN_NAME="mod_socache_memcache.so" /d LONG_NAME="socache_memcache_module > for Apache" > Yup, not needed. This is only used to emit the .res -> .exe/.dll stubs that describe the module, and only need to pick up the HTTPD version tag, so these have nothing to do with Looks like we have an issue with mod_socache_redis as well, I'll wire that appropriately in the next couple minutes. Arch-specific, no backport voting required. Bill
Re: Fixing module-specific, public include/*.h file inclusion on trunk
On Fri, Dec 16, 2016 at 12:57 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > So today's primary bogus result is courtesy of is due to leaving > public headers hiding in modules/class/*.h paths for our builds. > > I'd suggest one of two approaches, pick a favorite solution? > > [ ] Have the top level build copy all modules/*/mod_*.h > definitions to the build tree include/ path (conditionally, > based on datestamp)? > > [ ] Make individual modules/*/Makefile[*] responsible for > moving their headers into the build tree include/ path > (conditionally, based on datestamp)? > The challenge with option 2 is that we run the makefiles in sequence by directory name. It introduces the need to reorder the subdirectories by function, so that each makefile runs based on its dependencies (and reciprocal dependencies require the current /I "../foomodules/" hack.)
Fixing module-specific, public include/*.h file inclusion on trunk
So today's primary bogus result is courtesy of is due to leaving public headers hiding in modules/class/*.h paths for our builds. I'd suggest one of two approaches, pick a favorite solution? [ ] Have the top level build copy all modules/*/mod_*.h definitions to the build tree include/ path (conditionally, based on datestamp)? [ ] Make individual modules/*/Makefile[*] responsible for moving their headers into the build tree include/ path (conditionally, based on datestamp)? Advantage to option 2 is that we could add magic to exclude those headers for modules not built. Advantage to both options is that we eliminate all of the module-specific cross-module-class inclusions. Overall logic change should be close to nill, since the 'make install' path no longer needs to be smart enough to look throughout all of the modules/*/mod_*.h files for the target installation. What doesn't end up in include/ wasn't needed for the target install tree anyways.
Re: svn commit: r1774609 - /httpd/httpd/trunk/modules/cache/mod_socache_memcache.dsp
Erm... did you miss the same edit to mod_socache_memcache.mak? (Just catching up on the flurry of traffic, so this is speculation.) Further speculation, are we solid for NetWare now? Any word, Norm? [wrowe@hub cache]$ grep "/generators" * mod_cache_socache.dsp:# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /Fd"Release\mod_cache_socache_src" /FD /c mod_cache_socache.dsp:# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /Fd"Debug\mod_cache_socache_src" /FD /c mod_cache_socache.mak:CPP_PROJ=/nologo /MD /W3 /Zi /O2 /Oy- /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_cache_socache_src" /FD /c mod_cache_socache.mak:CPP_PROJ=/nologo /MDd /W3 /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_cache_socache_src" /FD /EHsc /c mod_socache_dbm.dsp:# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Release\mod_socache_dbm_src" /FD /c mod_socache_dbm.dsp:# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Debug\mod_socache_dbm_src" /FD /c mod_socache_dbm.mak:CPP_PROJ=/nologo /MD /W3 /Zi /O2 /Oy- /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_socache_dbm_src" /FD /c mod_socache_dbm.mak:CPP_PROJ=/nologo /MDd /W3 /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_socache_dbm_src" /FD /EHsc /c mod_socache_memcache.dsp:# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Release\mod_socache_memcache_src" /FD /c mod_socache_memcache.dsp:# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Debug\mod_socache_memcache_src" /FD /c mod_socache_shmcb.dsp:# ADD CPP /nologo /MD /W3 /O2 /Oy- /Zi /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Release\mod_socache_shmcb_src" /FD /c mod_socache_shmcb.dsp:# ADD CPP /nologo /MDd /W3 /EHsc /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fd"Debug\mod_socache_shmcb_src" /FD /c mod_socache_shmcb.mak:CPP_PROJ=/nologo /MD /W3 /Zi /O2 /Oy- /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_socache_shmcb_src" /FD /c mod_socache_shmcb.mak:CPP_PROJ=/nologo /MDd /W3 /Zi /Od /I "../../srclib/apr-util/include" /I "../../srclib/apr/include" /I "../../include" /I "../generators" /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /Fo"$(INTDIR)\\" /Fd"$(INTDIR)\mod_socache_shmcb_src" /FD /EHsc /c NWGNUcach_socache: $(STDMOD)/generators \ NWGNUsocachdbm: $(STDMOD)/generators \ NWGNUsocachmem: $(STDMOD)/generators \ NWGNUsocachshmcb: $(STDMOD)/generators \ On Fri, Dec 16, 2016 at 9:00 AM,wrote: > Author: rjung > Date: Fri Dec 16 15:00:06 2016 > New Revision: 1774609 > > URL: http://svn.apache.org/viewvc?rev=1774609=rev > Log: > Add ../generators to include path for Windows > build of mod_socache_memcache. It now needs > mod_status.h. > > Untested but exactly analogous to what works for > mod_socache_shmcb and others. > > Modified: > httpd/httpd/trunk/modules/cache/mod_socache_memcache.dsp > > Modified: httpd/httpd/trunk/modules/cache/mod_socache_memcache.dsp > URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cache > /mod_socache_memcache.dsp?rev=1774609=1774608=1774609=diff > > == > --- httpd/httpd/trunk/modules/cache/mod_socache_memcache.dsp (original) > +++ httpd/httpd/trunk/modules/cache/mod_socache_memcache.dsp Fri Dec 16 > 15:00:06 2016 > @@ -43,7 +43,7 @@ RSC=rc.exe > # PROP Ignore_Export_Lib 0 > # PROP Target_Dir "" > # ADD BASE CPP /nologo /MD /W3 /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" > /D
Re: svn commit: r1774010 - in /httpd/test/framework/trunk/t: conf/extra.conf.in modules/ext_filter.t
On Thu, Dec 15, 2016 at 5:21 AM, Joe Orton <jor...@redhat.com> wrote: > On Thu, Dec 15, 2016 at 02:05:32AM -0600, William A Rowe Jr wrote: > > Joe, did you forget an svn add? I see no ext_filter/ subdirectory on > trunk; > > > > https://svn.apache.org/repos/asf/httpd/test/framework/ > trunk/t/htdocs/modules/filter/ > > Oops :( Added now, sorry guys. > Confirmed test 2 now passes on 2.4.23, 2.4 branch, trunk. Users should be aware that t/TEST -clean will not pick up this change, 'perl Makefile.PL; make' is needed to provision it.
Re: svn commit: r1774010 - in /httpd/test/framework/trunk/t: conf/extra.conf.in modules/ext_filter.t
On Wed, Dec 14, 2016 at 11:30 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Wed, Dec 14, 2016 at 2:51 AM, Joe Orton <jor...@redhat.com> wrote: > >> >> I adjusted the test case to cope with a sed which adds a trailing >> newline now, does that pass with OS X sed? I'm yet to work out why the >> test cases passes in trunk and fails in 2.4, mod_ext_filter is the same >> in both branches. >> > > Seeing different issues on Fedora 24 under 2.x and 2.4 branches > On 2.4 branch, everything seems to go sideways; > > t/modules/ext_filter.t .. 1/24 > # Failed test 2 in t/modules/ext_filter.t at line 26 > # Failed test 5 in t/modules/ext_filter.t at line 36 > # Failed test 6 in t/modules/ext_filter.t at line 37 > # Failed test 7 in t/modules/ext_filter.t at line 36 fail #2 > # Failed test 8 in t/modules/ext_filter.t at line 37 fail #2 > # Failed test 9 in t/modules/ext_filter.t at line 36 fail #3 > # Failed test 10 in t/modules/ext_filter.t at line 37 fail #3 > # Failed test 11 in t/modules/ext_filter.t at line 36 fail #4 > # Failed test 12 in t/modules/ext_filter.t at line 37 fail #4 > # Failed test 13 in t/modules/ext_filter.t at line 36 fail #5 > # Failed test 14 in t/modules/ext_filter.t at line 37 fail #5 > # Failed test 15 in t/modules/ext_filter.t at line 36 fail #6 > # Failed test 16 in t/modules/ext_filter.t at line 37 fail #6 > # Failed test 17 in t/modules/ext_filter.t at line 36 fail #7 > # Failed test 18 in t/modules/ext_filter.t at line 37 fail #7 > # Failed test 19 in t/modules/ext_filter.t at line 36 fail #8 > # Failed test 20 in t/modules/ext_filter.t at line 37 fail #8 > # Failed test 21 in t/modules/ext_filter.t at line 36 fail #9 > # Failed test 22 in t/modules/ext_filter.t at line 37 fail #9 > # Failed test 23 in t/modules/ext_filter.t at line 36 fail #10 > # Failed test 24 in t/modules/ext_filter.t at line 37 fail #10 > t/modules/ext_filter.t .. Failed 21/24 subtests > > I don't see anything else wrong on 2.4 branch except for > > t/modules/rewrite.t . 1/34 # Failed test 34 in > t/modules/rewrite.t at line 120 > t/modules/rewrite.t . Failed 1/34 subtests > (1 TODO test unexpectedly succeeded) > > But I have not yet caught up with the latest commits to rewrite. > > Thinking that on 2.4 branch, a poorly performing retest against 2.4.23 > is all that is needed to proceed to release, and then return to address > later as not-a-regression. > On 2.3.23, indeed this doesn't look like a regression (test 2 again fails on the missing perl script); t/modules/ext_filter.t .. 1/24 # Failed test 2 in t/modules/ext_filter.t at line 26 # Failed test 5 in t/modules/ext_filter.t at line 36 # Failed test 6 in t/modules/ext_filter.t at line 37 # Failed test 7 in t/modules/ext_filter.t at line 36 fail #2 # Failed test 8 in t/modules/ext_filter.t at line 37 fail #2 # Failed test 9 in t/modules/ext_filter.t at line 36 fail #3 # Failed test 10 in t/modules/ext_filter.t at line 37 fail #3 # Failed test 11 in t/modules/ext_filter.t at line 36 fail #4 # Failed test 12 in t/modules/ext_filter.t at line 37 fail #4 # Failed test 13 in t/modules/ext_filter.t at line 36 fail #5 # Failed test 14 in t/modules/ext_filter.t at line 37 fail #5 # Failed test 15 in t/modules/ext_filter.t at line 36 fail #6 # Failed test 16 in t/modules/ext_filter.t at line 37 fail #6 # Failed test 17 in t/modules/ext_filter.t at line 36 fail #7 # Failed test 18 in t/modules/ext_filter.t at line 37 fail #7 # Failed test 19 in t/modules/ext_filter.t at line 36 fail #8 # Failed test 20 in t/modules/ext_filter.t at line 37 fail #8 # Failed test 21 in t/modules/ext_filter.t at line 36 fail #9 # Failed test 22 in t/modules/ext_filter.t at line 37 fail #9 # Failed test 23 in t/modules/ext_filter.t at line 36 fail #10 # Failed test 24 in t/modules/ext_filter.t at line 37 fail #10 t/modules/ext_filter.t .. Failed 21/24 subtests I'd think we are fine to proceed to 2.4.24 tag, this isn't a regression.
Re: svn commit: r1774010 - in /httpd/test/framework/trunk/t: conf/extra.conf.in modules/ext_filter.t
On Wed, Dec 14, 2016 at 11:30 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Wed, Dec 14, 2016 at 2:51 AM, Joe Orton <jor...@redhat.com> wrote: > >> >> I adjusted the test case to cope with a sed which adds a trailing >> newline now, does that pass with OS X sed? I'm yet to work out why the >> test cases passes in trunk and fails in 2.4, mod_ext_filter is the same >> in both branches. >> > > Seeing different issues on Fedora 24 under 2.x and 2.4 branches, neither > seems entirely good, 2.4 is especially strange. > > On 2.x branch, Test #2 fails with a failed to create process. > > [Thu Dec 15 05:07:21.578683 2016] [ext_filter:error] [pid 19326:tid > 139620838786816] (2)No such file or directory: [client 127.0.0.1:49186] > AH01458: couldn't create child process to run `/home/wrowe/dev/test/test2x- > apr20-ossl110/t/htdocs/modules/ext_filter/sleepycat.pl' > Joe, did you forget an svn add? I see no ext_filter/ subdirectory on trunk; https://svn.apache.org/repos/asf/httpd/test/framework/trunk/t/htdocs/modules/filter/
Re: svn commit: r1774010 - in /httpd/test/framework/trunk/t: conf/extra.conf.in modules/ext_filter.t
On Wed, Dec 14, 2016 at 2:51 AM, Joe Ortonwrote: > On Tue, Dec 13, 2016 at 10:14:21AM -0500, Jim Jagielski wrote: > > > > > On Dec 13, 2016, at 8:33 AM, Jim Jagielski wrote: > > > > > > This fails on all systems that have sed in /usr/bin/sed > > > or someplace other than /bin :( > > > > > > > Hmmm... even w/ the change to /usr/bin/sed I'm getting > > test #1 failing: > > Ah, sorry about that & thanks for adjusting the config. > > I adjusted the test case to cope with a sed which adds a trailing > newline now, does that pass with OS X sed? I'm yet to work out why the > test cases passes in trunk and fails in 2.4, mod_ext_filter is the same > in both branches. > Seeing different issues on Fedora 24 under 2.x and 2.4 branches, neither seems entirely good, 2.4 is especially strange. On 2.x branch, Test #2 fails with a failed to create process. # testing : slow filter process # expected: 'foobar' # received: ' # # 500 Internal Server Error # # Internal Server Error # The server encountered an internal error or # misconfiguration and was unable to complete # your request. # Please contact the server administrator at # y...@example.com to inform them of the time this error occurred, # and the actions you performed just before this error. # More information about this error may be available # in the server error log. # ' not ok 2 [Thu Dec 15 05:07:20.945238 2016] [example_hooks:notice] [pid 19326:tid 139620847179520] AH03297: x_create_connection() [Thu Dec 15 05:07:20.945327 2016] [example_hooks:notice] [pid 19326:tid 139620847179520] AH03297: x_create_request() [Thu Dec 15 05:07:20.945573 2016] [authz_core:debug] [pid 19326:tid 139620847179520] mod_authz_core.c(834): [client 127.0.0.1:49182] AH01628: authorization result: granted (no directives) [Thu Dec 15 05:07:20.945603 2016] [charset_lite:debug] [pid 19326:tid 139620847179520] mod_charset_lite.c(216): [client 127.0.0.1:49182] AH01448: incomplete configuration: src unspecified, dst unspecified [Thu Dec 15 05:07:20.945742 2016] [dialup:notice] [pid 19326:tid 139620847179520] (70023)This function has not been implemented on this platform: [client 127.0.0.1:49182] AH02637: dialup: MPM doesn't support suspending [Thu Dec 15 05:07:20.981454 2016] [optional_hook_import:error] [pid 19326:tid 139620847179520] AH01866: Optional hook test said: GET /index.html HTTP/1.1 [Thu Dec 15 05:07:20.981473 2016] [optional_fn_export:error] [pid 19326:tid 139620847179520] AH01871: Optional function test said: GET /index.html HTTP/1.1 [Thu Dec 15 05:07:21.560737 2016] [example_hooks:notice] [pid 19326:tid 139620838786816] AH03297: x_create_connection() [Thu Dec 15 05:07:21.560806 2016] [example_hooks:notice] [pid 19326:tid 139620838786816] AH03297: x_create_request() [Thu Dec 15 05:07:21.560950 2016] [authz_core:debug] [pid 19326:tid 139620838786816] mod_authz_core.c(834): [client 127.0.0.1:49186] AH01628: authorization result: granted (no directives) [Thu Dec 15 05:07:21.560967 2016] [charset_lite:debug] [pid 19326:tid 139620838786816] mod_charset_lite.c(216): [client 127.0.0.1:49186] AH01448: incomplete configuration: src unspecified, dst unspecified [Thu Dec 15 05:07:21.561006 2016] [dialup:notice] [pid 19326:tid 139620838786816] (70023)This function has not been implemented on this platform: [client 127.0.0.1:49186] AH02637: dialup: MPM doesn't support suspending [Thu Dec 15 05:07:21.572188 2016] [optional_hook_import:error] [pid 19326:tid 139620838786816] AH01866: Optional hook test said: GET /apache/extfilter/out-foo/foobar.html HTTP/1.1 [Thu Dec 15 05:07:21.572199 2016] [optional_fn_export:error] [pid 19326:tid 139620838786816] AH01871: Optional function test said: GET /apache/extfilter/out-foo/foobar.html HTTP/1.1 [Thu Dec 15 05:07:21.572237 2016] [ext_filter:error] [pid 19326:tid 139620838786816] (9)Bad file descriptor: [client 127.0.0.1:49186] AH01466: apr_file_read(child output), len 18446744073709551615 [Thu Dec 15 05:07:21.572251 2016] [ext_filter:error] [pid 19326:tid 139620838786816] (9)Bad file descriptor: [client 127.0.0.1:49186] AH01468: ef_unified_filter() failed [Thu Dec 15 05:07:21.572267 2016] [example_hooks:notice] [pid 19326:tid 139620838786816] AH03297: x_create_request() [Thu Dec 15 05:07:21.578555 2016] [authz_core:debug] [pid 19326:tid 139620838786816] mod_authz_core.c(834): [client 127.0.0.1:49186] AH01628: authorization result: granted (no directives) [Thu Dec 15 05:07:21.578578 2016] [charset_lite:debug] [pid 19326:tid 139620838786816] mod_charset_lite.c(216): [client 127.0.0.1:49186] AH01448: incomplete configuration: src unspecified, dst unspecified [Thu Dec 15 05:07:21.578616 2016] [dialup:notice] [pid 19326:tid 139620838786816] (70023)This function has not been implemented on this platform: [client 127.0.0.1:49186] AH02637: dialup: MPM doesn't support suspending [Thu Dec 15 05:07:21.578683 2016] [ext_filter:error] [pid 19326:tid 139620838786816] (2)No such file or directory: [client
Re: Absorb win32-apxs into httpd distro?
On Wed, Dec 14, 2016 at 2:20 AM, Steve Hay <steve.m@googlemail.com> wrote: > On 14 December 2016 at 08:13, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > Randy wrote http://www.apache.org/dist/perl/win32-bin/ - but I'm > wondering > > who else here at httpd is interested in helping maintain and get this > code > > into our own distribution? I've shipped this for a decade for my > commercial > > customers, for every wrinkle and wart, I think many of our win32 users > > would appreciate this. > > > > I trust the perl pmc and Randy would be good with this adoption? > > > > Randy sadly passed away some years ago, but I for one would welcome > the adoption. > I was aware. Seemed like one more small appropriate tribute. The latest version that I'm aware of lives in > https://svn.apache.org/repos/asf/perl/apxs/trunk > Thanks for the pointer, and support of bringing this into the httpd distro. Hoping you (and other perl-folk who know the code) might help review the patches and ensure it is working well, moving forwards.
Absorb win32-apxs into httpd distro?
Randy wrote http://www.apache.org/dist/perl/win32-bin/ - but I'm wondering who else here at httpd is interested in helping maintain and get this code into our own distribution? I've shipped this for a decade for my commercial customers, for every wrinkle and wart, I think many of our win32 users would appreciate this. I trust the perl pmc and Randy would be good with this adoption? Cheers, Bill
Re: svn commit: r1773865 - /httpd/httpd/trunk/modules/http/http_filters.c
On Tue, Dec 13, 2016 at 6:42 PM, Yann Ylavicwrote: > On Tue, Dec 13, 2016 at 8:37 PM, Ruediger Pluem wrote: > > > > This change is also unrelated to the bad header issue and I think > > if there is interest to address this it should be done in a separate > > patch set. > > It is actually, we could eventually avoid reading/consuming the body > of the original response (closing prematuraly any backend > connection...), but in any case we need to eat the ap_die()'s one if > we encounter a bad header on its generated response. > That's because we want to generate a minimal 500 response in this > case, with no body (even the ap_die(500)'s one not be related, > ErrorDocument can be any redirection, including proxying). > > Thus the eat-body code is needed anyway, let's also use it for the > original response then Indeed but we have to reiterate one point Fielding has raised in another forum that seems to be ignored in this conversation. In light of bad input, the http servers and user agents must present valid data to their upstream servers and downstream consumers. The spec does not define how this is accomplished, the entire response can be redacted, or the invalid headers can be redacted. No consumer can be presented with invalid request or response data. How we choose to accomplish this is entirely our choice, that was Roy's point. I took the following statement literally and accepted Yann's proposal; "This is still not great. We get the original body, a 500 status code and status line." Any 500 status with the original body would be completely nonsense, and I didn't even test to confirm that this earlier assertion was true, I just assumed the statement was accurate. But it could still be the original status with all garbage response headers redacted. It could be a proper 500 response and let httpd's mechanisms to kill the request, letting the handler still dump useless data at the filter stack does work (confirmed). Provided it presents the correct error body, and redacts bad headers at every phase, we deliver a comprehensible response in any case. Best yet, we inform the caller that the send brigade call failed, and let smart callers short-circuit streams that they are able to. As a practical matter, we can't do that in the proxy, since the entire response body must be eaten to ensure correct socket closure. But the direct response to the caller could and should be issued directly. Right now, we have Yann's proposal. Several of us would like to see patches attached to these concerns with a cleaner solution. I'm not clear that the filter stack was conceived as the origin of error responses, but this case, and many other conceivable examples proves that we should have considered that case, and made the programming pattern a little clearer.
Re: T of 2.4.24
On Dec 12, 2016 7:44 PM, "Daniel Ruggeri" <drugg...@primary.net> wrote: On 12/12/2016 12:26 AM, William A Rowe Jr wrote: > In spite of 34 registered project committee members, until other > contributors come forward to participate in the security patch review > process, we may simply have to declare all further efforts are currently > on pause. Does one have to be on PMC to review security patches? If not, can you give me a general idea on volume? This would be something I think $dayjob would be OK with me doing as part of keeping a shirt on my back and roof over the childrens' heads ;-) This is something our httpd security team has revisited a few times over the past few years. To be on *httpd* security list, we require a certain level of trust. In the past, this was based on PMC membership. We have since tweaked things to bring in proven committers who are not yet on the PMC. Also, all ASF Members have access to private archived lists; this includes any PMC private lists and security lists across the foundation. In terms of volume, there are only a handful of security issues per year, from none to a dozen, but many dozens of reports we have to evaluate and filter. It often takes probing questions of the reporter to distinguish their defect report from a vulnerablity, or to quantify and qualify the exposure and risk. The ASF-wide list is another beast, it is a massive spam trap, exceeding dozens of garbage messages per day, to capture about a dozen legitimate messages a day, and only a tiny handful of new inbound messages a day that are dispatched to the appropriate PMC's team. That list does require ASF Membership to volunteer because it has full visibility into most every defect.
Re: svn commit: r1773865 - /httpd/httpd/trunk/modules/http/http_filters.c
On Dec 12, 2016 6:07 PM, "Yann Ylavic" <ylavic@gmail.com> wrote: On Tue, Dec 13, 2016 at 12:17 AM, Jacob Champion <champio...@gmail.com> wrote: > On 12/12/2016 03:10 PM, William A Rowe Jr wrote: >> >> On Mon, Dec 12, 2016 at 5:00 PM, Jacob Champion <champio...@gmail.com >> <mailto:champio...@gmail.com>> wrote: >> >> On 12/12/2016 01:23 PM, Yann Ylavic wrote: >> >> On Mon, Dec 12, 2016 at 10:07 PM, Jacob Champion >> <champio...@gmail.com <mailto:champio...@gmail.com>> wrote: >> >> >> What's the case where this catches recursion that the >> previous logic in >> r1773861 did not handle? I'm trying to write a test that >> fails on r1773861 >> and succeeds on r1773865, but I haven't figured it out yet. >> >> >> I think it's more r1773862 that fixes your test case. >> >> >> To clarify: I can't reproduce any problems with r1773861 in the >> first place, even with ErrorDocument. I agree that r1773862 (and >> r1773865) work for me; I just don't know what makes them >> functionally different. In my attempted test cases, I can't find any >> case where the rr->pool used during the internal redirect differs >> from the original r->pool. >> >> Can you send me a config snippet that reproduces the loop with >> ErrorDocument? I'm not arguing against your followup patches; I just >> want to make sure a correct test case gets into the suite. :D >> >> >> Speaking of the test suite behavior, your mission had succeeded. My quad >> core machine was pegged, X-Windows/Gnome unresponsive. >> >> Do we want to put such tests in the framework? > > > I would say yes, definitely. Better for us to bring down a tester's machine > with a regression and fix the problem before it goes live, than to spare the > tester and end up shipping said regression. Not worried about my machine. Hanging a build machine with no feedback would be an issue >> If any of our perl gurus >> have a good suggestion to throttle the top limit of cpu/time consumed, >> that would be a good enhancement to the framework. > > > Agreed! BSD::Resource's setrlimit(RLIMIT_CPU, soft, hard)? If portable to Linux/Windows, then mission accomplished.
Re: svn commit: r1773865 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 5:00 PM, Jacob Championwrote: > On 12/12/2016 01:23 PM, Yann Ylavic wrote: > >> On Mon, Dec 12, 2016 at 10:07 PM, Jacob Champion >> wrote: >> >> >>> What's the case where this catches recursion that the previous logic in >>> r1773861 did not handle? I'm trying to write a test that fails on >>> r1773861 >>> and succeeds on r1773865, but I haven't figured it out yet. >>> >> >> I think it's more r1773862 that fixes your test case. >> > > To clarify: I can't reproduce any problems with r1773861 in the first > place, even with ErrorDocument. I agree that r1773862 (and r1773865) work > for me; I just don't know what makes them functionally different. In my > attempted test cases, I can't find any case where the rr->pool used during > the internal redirect differs from the original r->pool. > > Can you send me a config snippet that reproduces the loop with > ErrorDocument? I'm not arguing against your followup patches; I just want > to make sure a correct test case gets into the suite. :D Speaking of the test suite behavior, your mission had succeeded. My quad core machine was pegged, X-Windows/Gnome unresponsive. Do we want to put such tests in the framework? If any of our perl gurus have a good suggestion to throttle the top limit of cpu/time consumed, that would be a good enhancement to the framework.
Re: svn commit: r1773865 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 3:32 PM, Yann Ylavic <ylavic@gmail.com> wrote: > On Mon, Dec 12, 2016 at 10:16 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > On Mon, Dec 12, 2016 at 3:07 PM, Jacob Champion <champio...@gmail.com> > > wrote:+ > >> > >> > >> What's the case where this catches recursion that the previous logic in > >> r1773861 did not handle? I'm trying to write a test that fails on > r1773861 > >> and succeeds on r1773865, but I haven't figured it out yet. > > > > > > I'm confused by a different aspect. > > > > In trashing the body-in-flight, whose headers caused us to 500-reject > > the response, have we also trashed any and all correct error documents > > or built-in short 500 response explanation? > > No, I tried (quite hard, in a second time) to honor ErrorDocument by > calling ap_die() when check_headers() fails. > > That's only if/when that ErrorDocument is caught by check_headers that > we end up generating a minimal 500 response (with Server, Date, > Connection: close and empty body), to avoid infinite recursion. > I set up a test case of a constant text line and an html error doc, and can confirm that everything is working as expected/hoped for. Thank you for all the troubleshooting and coding on this effort!
Re: svn commit: r1773865 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 3:07 PM, Jacob Championwrote:+ > > > What's the case where this catches recursion that the previous logic in > r1773861 did not handle? I'm trying to write a test that fails on r1773861 > and succeeds on r1773865, but I haven't figured it out yet. I'm confused by a different aspect. In trashing the body-in-flight, whose headers caused us to 500-reject the response, have we also trashed any and all correct error documents or built-in short 500 response explanation? Unlike a 400-reject (which is always caused by a badly implemented client, and should never arrive from a conventional browser user-agent), the 500-reject speaks to some module or backend that isn't under the requestor's control. An error doc may be quite helpful in this situation. I've verified that the bad headers in the current patch are redacted looking at wireshark capture. Only one (first) bad header is logged, which is fine IMO, no need for excessive error.log noise.
Re: svn commit: r1773761 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 12:59 PM, Eric Covener <cove...@gmail.com> wrote: > On Mon, Dec 12, 2016 at 1:54 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > >> The problem seems to be that `Headers always set` negates the header > >> removal, and the anti-recursion check doesn't seem to be working as > >> intended. > > > > > > By removal, I'm suggesting this should happen in the http output filter > > just as we are about to transmit them. > > > > So the header will be set, then it would then be un-set, but my issue > > is that I can't find the programatic pattern for apr_table_do to > manipulate > > the elts, and even if it exists, apr_table_do will quit once the first > bad > > elt > > is found and the callback first returns 0, preventing us from reviewing > the > > remaining header lines. > > We can loop over either apr_table_do or check_headers while they're > failing, as long as you are removing 1 header each time to make > progress. > Dozens of good headers followed by dozens of bad headers sounds like a DOS vector. Probably easier if we just iterate the list ourselves and skip apr_table_do(), although this sounds like a good example for an APR 1.next enhancement later on.
Making test framework and trunk approachable (was: PCRE 10 and puzzling edge cases)
On Dec 12, 2016 3:52 AM, "Petr Pisar"wrote: I think the inability to build httpd against system APR and to run test against not yet installed httpd is quite surprising. Thanks for taking the additional time to document your experiences! System APR does require the -devel package. That RHEL package aught to install the .m4 logic required by httpd, but it is on us to find that in a conventional place. Httpd trunk is already hunting for yet unreleased APR 1.6 features, but we should better document that. Not yet installed httpd is a little more complex, maybe we can add an interim apxs in the build tree to facilitate this? Working from my hardcopy of your list once our current release effort is finished. Again, thanks.
Re: svn commit: r1773761 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 12:43 PM, Jacob Champion <champio...@gmail.com> wrote: > On 12/12/2016 10:40 AM, William A Rowe Jr wrote: > >> I expect we still must have a header unset in the 500 case for all invalid >> header names or values. >> > > The problem seems to be that `Headers always set` negates the header > removal, and the anti-recursion check doesn't seem to be working as > intended. By removal, I'm suggesting this should happen in the http output filter just as we are about to transmit them. So the header will be set, then it would then be un-set, but my issue is that I can't find the programatic pattern for apr_table_do to manipulate the elts, and even if it exists, apr_table_do will quit once the first bad elt is found and the callback first returns 0, preventing us from reviewing the remaining header lines.
Re: svn commit: r1773761 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 12:17 PM, Jacob Championwrote: > On 12/12/2016 08:18 AM, Yann Ylavic wrote: > >> On Mon, Dec 12, 2016 at 3:32 PM, Eric Covener wrote: >> >>> On Mon, Dec 12, 2016 at 6:25 AM, Yann Ylavic >>> wrote: >>> Tested with "Header set 'X-Bad' ''" with no redirect loop. >>> >>> w/ 'Header always set...' this still repeats >>> >> >> Right, with r1773812 it looks good. >> > > Even with r1773812, I'm still getting recursion with the `Header always > set` case. ap_is_initial_req() is returning true during the ap_die() > response. > > Can anyone else confirm? I expect we still must have a header unset in the 500 case for all invalid header names or values. A simple test in Strict mode is Foo?Bar: Bash A simple test in non-strict mode is FooBar: Bash\a
Re: svn commit: r1773761 - /httpd/httpd/trunk/modules/http/http_filters.c
On Mon, Dec 12, 2016 at 10:18 AM, Yann Ylavicwrote: > On Mon, Dec 12, 2016 at 3:32 PM, Eric Covener wrote: > > On Mon, Dec 12, 2016 at 6:25 AM, Yann Ylavic > wrote: > >> Tested with "Header set 'X-Bad' ''" with no redirect > loop. > > > > w/ 'Header always set...' this still repeats > > Right, with r1773812 it looks good. > We'd only recurse if not in internal redirect already, otherwise fall > through with: > HTTP/1.1 500 Internal Server Error > Date: Mon, 12 Dec 2016 16:11:59 GMT > Server: Apache/2.5.0-dev (Unix) OpenSSL/1.0.2g > Content-Length: 0 > Connection: close > > WDYT? > Looks like the right approach, tweaking the test framework to exercise this solution. Thx!
Re: Breakage on trunk head due to PCRE
On Dec 12, 2016 9:37 AM, "Jim Jagielski"wrote: > ./configure: line 6194: with_pcre: command not found > checking for -pcre2-config... no > checking for -pcre-config... no > checking for pcre2-config... no > checking for pcre-config... no > configure: error: pcre(2)-config for libpcre not found. PCRE is required > and available from http://pcre.org/ > make: *** No targets specified and no makefile found. Stop. > Shell var assignment syntax error. Please re-./buildconf on trunk once you svn up... sorry about that.
Re: T of 2.4.24
On Thu, Dec 8, 2016 at 8:55 AM, Jim Jagielskiwrote: > Things are looking good for a T of 2.4.24 sometime late > today. > > If you have any issues or concerns, let me know asap. > Hi Jim, we may have to concede, in light of many already partially disclosed CVE's, that it is impossible to proceed. At this moment, there are 5 committers who have invested time and energy at looking at the current open issues. Of the stale issues, 2 refuse to fix the reported issued directly, while 3 others have lingering patches that would fix the core defects. There is a straightforward solution to solving such issues, but the quick-fix has issues of its own. Only three votes are required to incorporate the fix, but in the face of an objection, four are required to overrule a hold-out (assuming it is even the right solution.) Five is simply too small a number to sustain a security team at any project of this complexity. That isn't pointing fingers at any person whatsoever, it's an assessment of the situation. In spite of 34 registered project committee members, until other contributors come forward to participate in the security patch review process, we may simply have to declare all further efforts are currently on pause. Sincerely, thanks for trying to push this release forward. I hope this is all resolved quickly.
Re: 2.6/3.0
On Dec 9, 2016 21:56, "Daniel Ruggeri"wrote: On 12/9/2016 8:32 AM, Jim Jagielski wrote: > Instead, maybe we could backport all that stuff to 2.4, in a backwards > compatible fashion. That is, basically backport trunk to 2.4. This > would give us more runway to work on httpd-nextgen. > > Thoughts? Considering a lot of the changes in trunk, I'm not entirely sure it can be backported without creating some code maintenance challenges... I agree about the concern w/ branching so I guess it's really a matter of weighing the benefits - are there *enough* features to be worthy of a release, have we reached API stability to make it worth it and do we lose anything in our "sandbox" by doing this? Pressed for a response, I'd lean towards probably not but it's not a terribly strong opinion. It's also not unprecedented to have three major versions in play, too. It's been announced that 2.2 is on the "EOL" track with most folks interested in maintaining that branch in the 12 to 18 month range (June - Dec 2017ish). So, that said, it wouldn't be a super long time we maintain three branches... a year-ish from now. If that. Midyear is the 2.2 EOL. Patches for key security defects may be offered for a while longer, but no release votes. 3.0 isn't 'ready' but won't become ready unless we make an investment in alpha / beta releases and start breaking some eggs to move in that direction. Will it take 2 mos or 6 mos? Time will tell, but I would take the odds that we bless the next major GA release pretty close to the end of 2.2's lifespan, if we start now.
Re: 2.6/3.0
On Fri, Dec 9, 2016 at 8:32 AM, Jim Jagielskiwrote: > It may be weird talking about httpd 2.6/3.0 when we are stuck > in a holding pattern for 2.4.24, but actually it's not a bad > idea. > > Right now, there are quite a few improvements in trunk that > should *really* be in a releasable version... Now we have some > options on how to proceed, but my modus operandi has been to > "push" as much trunk goodness back to 2.4 as possible, leaving > trunk still a nice sandbox for development on how we really want > to architecture the next gen (slave connections, etc...). So > as much as I think we should continue that, I also think it's > a disservice to keep the async, et.al. improvements in trunk, > well, in trunk. So we could fork 2.6 from trunk and work on > releasing 2.6. Assuming that we do so in a timely fashion, that > would mean we would have 3 versions out: 2.2, 2.4, and 2.6. > Considering that 2.4 is finally showing some traction, I wonder > if that's the right move. > Well, that isn't how we work, we don't fork 2.6 before 2.5.whatnot. We release 2.5.0-alpha (or -beta). Unless you want to redesign the process and open that hard-won consensus all over again. Then another -alpha/-beta, and so on, until we have consensus to tag trunk as the 2.6.0/3.0.0 release. Then we take a deep breath and fork 2.6.x/3.0.x branch from trunk. This keeps 2.5 as close to the bleeding edge as possible during the next release process, with major API changes and broken binary compatibility between the individual alpha/beta samples, so that others in our ecosystem may adapt their modules throughout the process, and are (one hopes) ready for the 2.6.0/3.0.0 release. You and I (and others, I suspect) agree there is a lot of trunk goodness to release soon. Other things that would likely happen during the 2.5.x phases are a reworking/restructuring of utility vs server facilities (something which is a complete mess at this moment), collapsing _ex() functions into their historic function names, etc. Restoring the ability to load modules out of order after the chaos of poorly thought out new 'hub-and-spoke' modules. And there are other discussions afoot around the PMC about major API changes that will break some developer assumptions, and I believe will cause us to take this to 3.0.0, not some minor revision bump. > Instead, maybe we could backport all that stuff to 2.4, in a backwards > compatible fashion. That is, basically backport trunk to 2.4. This > would give us more runway to work on httpd-nextgen. > That is very unrealistic, given that the trunk patches have broken ABI more than once a month for four years. Adapting such patches would lead to many more inane forks like the HttpProtocol Strict backport effort. The other 'instead' is that we could throw away trunk, and branch a new trunk from 2.4.x branch, if we are not comfortable releasing the contents of trunk as an alpha or beta release. I hope that's not the case.
Re: PCRE 10 and puzzling edge cases
On Fri, Dec 9, 2016 at 8:05 AM, Petr Pisar <ppi...@redhat.com> wrote: > On Thu, Dec 08, 2016 at 11:09:42AM -0600, William A Rowe Jr wrote: > > I've beaten my head against this wall for a bit longer, and came up with > > several places where pcre2 changed return types for void *what query > > interogations (especially from int to uint32, badness on x86_64-linux). > > > > The attached patch picks up these bad void * type assignments. Still > > no tremendous improvement, missing something blatantly obvious, > > I expect. > > > After few hours of getting httpd sources and tests and hacking them to > actually obtain a pass, I applied your patch and looked what's wrong with > your > PCRE2 code. > Thanks again, if there is anything that we didn't document well about getting the tests to run, we would like to fix that and make it easier for developers to cobble together their own test environment. We certainly don't want it to take hours for an experienced newcomer to get from point a to point b. Happy to improve this based on your observations. The t/apache/expr.t failed because the pcre2_match() alwayes returned -51 > that means PCRE2_ERROR_NULL. The reason is you used the old PCRE > optimization > path and called pcre2_match_data_create(nmatch, NULL) only if nmatch was > positive. As a result, pcre2_match_data_create() was never called, so you > passed NULL matchdata to pcre2_match(), hence the failure. > Yup, that was a bad assumption on my part, reading pcre2api in parallel with pcreapi. > The tests still do not pass, but that's probably another (PCRE2) problem. > I hope I helped you at lest somehow. > I do have it working, now committed. Failing tests are likely some missing cpan modules in your setup still, or perhaps some tests that are making bogus assumptions about the supported dependency libraries. Thanks for the pointer, we seem to have succeeded!
Re: PCRE 10 and puzzling edge cases
On Dec 9, 2016 8:06 AM, "Petr Pisar" <ppi...@redhat.com> wrote: On Thu, Dec 08, 2016 at 11:09:42AM -0600, William A Rowe Jr wrote: > I've beaten my head against this wall for a bit longer, and came up with > several places where pcre2 changed return types for void *what query > interogations (especially from int to uint32, badness on x86_64-linux). > > The attached patch picks up these bad void * type assignments. Still > no tremendous improvement, missing something blatantly obvious, > I expect. > After few hours of getting httpd sources and tests and hacking them to actually obtain a pass, I applied your patch and looked what's wrong with your PCRE2 code. The t/apache/expr.t failed because the pcre2_match() alwayes returned -51 that means PCRE2_ERROR_NULL. The reason is you used the old PCRE optimization path and called pcre2_match_data_create(nmatch, NULL) only if nmatch was positive. As a result, pcre2_match_data_create() was never called, so you passed NULL matchdata to pcre2_match(), hence the failure. See the attached fix. The tests still do not pass, but that's probably another (PCRE2) problem. I hope I helped you at lest somehow That's a huge help, thanks Petr. I'm also curious along that path if adding PCRE2 (or PCRE) study pattern would also be helpful. Something to experiment with once both code paths are working. Cheers, Bill
Re: T of 2.4.24
On Thu, Dec 8, 2016 at 12:16 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Thu, Dec 8, 2016 at 12:03 PM, Jim Jagielski <j...@jagunet.com> wrote: > >> AFAICT there is no consensus. But is this really a blocker? > > > I don't know, expat is at 2.2.0 and PCRE is at 8.39 with significant > vulnerability > fixes (everyone seems very enamored with fuzz generators this past few > years.) > > It doesn't block creation of httpd-2.4.24.tar.gz, obviously. > > It does raise the question again of whether the httpd project can > distribute > a source code package on www.apache.org/dist/httpd/ which is not voted > on by the project, and whether it violates the spirit of the pmc consensus > to no longer be the distributor of dependencies which frequently fall into > a poorly maintained/updated state. > @VP Legal, is this worth an escalation? You didn't see fit to respond today, but I think this falls under the purview of your committee, w.r.t. unapproved release artifacts living at www.apache.org/dist/. Did you have any thoughts or opinions one way or another?
Re: svn commit: r1773293 - /httpd/httpd/trunk/modules/http/http_filters.c
On Thu, Dec 8, 2016 at 1:57 PM,wrote: > Author: covener > Date: Thu Dec 8 19:57:57 2016 > New Revision: 1773293 > > URL: http://svn.apache.org/viewvc?rev=1773293=rev > Log: > change error handling for bad resp headers > > - avoid looping between ap_die and the http filter > - remove the header that failed the check > - keep calling apr_table_do until our fn stops matching > > > This is still not great. We get the original body, a 500 status > code and status line. > > (r1773285 + fix for first return from check_headers) > I'm not clear how the original body makes any sense to send. We should be eating the original body and sending a 500 ErrorDocument body. If it isn't possible to do that, I see two alternatives; 1. clobber the bad header, simply omit it from the headers which we transmit. 2. replace all bad characters in the header with some other value, such as space characters. In the case of bad field name characters, perhaps some permitted symbol such as '$' (no, '?' isn't in the allowed list, that would be ideal.) I don't see any obviously great solution.
Re: svn commit: r1773163 - in /httpd/httpd/branches/2.4.x-merge-http-strict: ./ modules/http/http_filters.c
It should be complete enough to apply complete to httpd-2.4.x. On Dec 8, 2016 1:32 PM, "Eric Covener"wrote: > On Wed, Dec 7, 2016 at 6:40 PM, wrote: > > Author: wrowe > > Date: Wed Dec 7 23:40:20 2016 > > New Revision: 1773163 > > > > URL: http://svn.apache.org/viewvc?rev=1773163=rev > > Log: > > After eliminating unusual whitespace in Unsafe mode (e.g. \f \v), we are > left > > with the same behavior in both of these cases. Simplify. Noted by rpluem. > > > > Backports: 1773162 > > > > Modified: > > httpd/httpd/branches/2.4.x-merge-http-strict/ (props changed) > > httpd/httpd/branches/2.4.x-merge-http-strict/modules/ > http/http_filters.c > > > > Will this branch outlive the backport? I think the backport occurred > a little before this rev. >
Re: 2.4 HEAD: some looping issue with check_headers()
On Thu, Dec 8, 2016 at 12:49 PM, Eric Covenerwrote: > On Thu, Dec 8, 2016 at 1:46 PM, Eric Covener wrote: > > This is even after trying to zap the offending header before ap_die. > > I think this is more about how I am injecting the bad characters -- It > is re-running after the ap_die. > Still, I think we want to add a guard to rip out the offending header and not ap_die() in handling a 500 error, that's quite the loop, and if you could create the problem, so can another unsuspecting admin.
Re: T of 2.4.24
On Thu, Dec 8, 2016 at 12:03 PM, Jim Jagielski <j...@jagunet.com> wrote: > AFAICT there is no consensus. But is this really a blocker? I don't know, expat is at 2.2.0 and PCRE is at 8.39 with significant vulnerability fixes (everyone seems very enamored with fuzz generators this past few years.) It doesn't block creation of httpd-2.4.24.tar.gz, obviously. It does raise the question again of whether the httpd project can distribute a source code package on www.apache.org/dist/httpd/ which is not voted on by the project, and whether it violates the spirit of the pmc consensus to no longer be the distributor of dependencies which frequently fall into a poorly maintained/updated state. So it's simply a question about the -deps package, and since that is never given a release vote, it really isn't holding up any tag & roll. > > On Dec 8, 2016, at 12:38 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > On Thu, Dec 8, 2016 at 8:55 AM, Jim Jagielski <j...@jagunet.com> wrote: > > Things are looking good for a T of 2.4.24 sometime late > > today. > > > > If you have any issues or concerns, let me know asap. > > > > Do we have any consensus on dropping the stale and vulnerable > > expat or pcre packages from the pretending-not-to-be-released > > -deps artifact in the www.a.o/dist/httpd/ releases tree? > > > > > > > >
Re: T of 2.4.24
On Thu, Dec 8, 2016 at 8:55 AM, Jim Jagielskiwrote: > Things are looking good for a T of 2.4.24 sometime late > today. > > If you have any issues or concerns, let me know asap. > Do we have any consensus on dropping the stale and vulnerable expat or pcre packages from the pretending-not-to-be-released -deps artifact in the www.a.o/dist/httpd/ releases tree?
Re: PCRE 10 and puzzling edge cases
I've beaten my head against this wall for a bit longer, and came up with several places where pcre2 changed return types for void *what query interogations (especially from int to uint32, badness on x86_64-linux). The attached patch picks up these bad void * type assignments. Still no tremendous improvement, missing something blatantly obvious, I expect. On Mon, Dec 5, 2016 at 10:59 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > I've written the following patch to trunk to allow us to configure, > compile and link against PCRE2 (10.x). The autoconf in particular is > streamlined for cross-compilation detection, while retaining the ability to > override the path to (and name of) pcre[2]-config. > > It isn't in a commit-ready state due to t/TEST t/apache/expr.t failures > (among others), and the defects appear to revolve around the way substring > patterns are recorded. > > Attached the test failure cases (many similar test patterns do succeed, > interestingly.) One test looks outright wrong. I'd rather not beat my head > against these if the answer is blatantly obvious. > > If anyone has patience for exploring this further, any help is welcomed. > Philip starts with this assertion; "The original, very widely deployed PCRE > library, originally released in 1997, is at version 8.39, and the API and > feature set are stable—future releases will be for bugfixes only. All new > future features will be to PCRE2, not the original PCRE 8.x series." But he > has gone on to state that many fuzzing error cases which are handled > correctly in PCRE2 cannot be realistically fixed in PCRE 8.x. I've placed > this up there with other parsing rewrites in httpd, that starting over is > simply the correct answer, and I'd like to see if we can have httpd 3.0 > choosing PCRE2 over PCRE in the near future (and perhaps backport this if > we determine behavior is consistent.) > > Cheers, > > Bill > > Index: configure.in === --- configure.in (revision 1773161) +++ configure.in (working copy) @@ -223,18 +223,18 @@ AC_ARG_WITH(pcre, APACHE_HELP_STRING(--with-pcre=PATH,Use external PCRE library)) -AC_PATH_PROG(PCRE_CONFIG, pcre-config, false) -if test -d "$with_pcre" && test -x "$with_pcre/bin/pcre-config"; then - PCRE_CONFIG=$with_pcre/bin/pcre-config -elif test -x "$with_pcre"; then - PCRE_CONFIG=$with_pcre -fi +AC_CHECK_TARGET_TOOLS(PCRE_CONFIG, [pcre2-config pcre-config], + [`which $with_pcre 2>/dev/null`], + [$with_pcre/bin:$with_pcre]) -if test "$PCRE_CONFIG" != "false"; then +if test "x$PCRE_CONFIG" != "x"; then if $PCRE_CONFIG --version >/dev/null 2>&1; then :; else -AC_MSG_ERROR([Did not find pcre-config script at $PCRE_CONFIG]) +AC_MSG_ERROR([Did not find working script at $PCRE_CONFIG]) fi case `$PCRE_CONFIG --version` in + [1[0-9].*]) +AC_DEFINE(HAVE_PCRE2, 1, [Detected PCRE2]) +;; [[1-5].*]) AC_MSG_ERROR([Need at least pcre version 6.7]) ;; @@ -244,10 +244,10 @@ esac AC_MSG_NOTICE([Using external PCRE library from $PCRE_CONFIG]) APR_ADDTO(PCRE_INCLUDES, [`$PCRE_CONFIG --cflags`]) - APR_ADDTO(PCRE_LIBS, [`$PCRE_CONFIG --libs`]) + APR_ADDTO(PCRE_LIBS, [`$PCRE_CONFIG --libs8 2>/dev/null || $PCRE_CONFIG --libs`]) APR_ADDTO(HTTPD_LIBS, [\$(PCRE_LIBS)]) else - AC_MSG_ERROR([pcre-config for libpcre not found. PCRE is required and available from http://pcre.org/]) + AC_MSG_ERROR([pcre(2)-config for libpcre not found. PCRE is required and available from http://pcre.org/]) fi APACHE_SUBST(PCRE_LIBS) Index: server/util_pcre.c === --- server/util_pcre.c (revision 1773161) +++ server/util_pcre.c (working copy) @@ -46,10 +46,18 @@ #include "httpd.h" #include "apr_strings.h" #include "apr_tables.h" + +#ifdef HAVE_PCRE2 +#define PCRE2_CODE_UNIT_WIDTH 8 +#include "pcre2.h" +#define PCREn(x) PCRE2_ ## x +#else #include "pcre.h" +#define PCREn(x) PCRE_ ## x +#endif /* PCRE_DUPNAMES is only present since version 6.7 of PCRE */ -#ifndef PCRE_DUPNAMES +#if !defined(PCRE_DUPNAMES) && !defined(HAVE_PCRE2) #error PCRE Version 6.7 or later required! #else @@ -74,11 +82,19 @@ AP_DECLARE(const char *) ap_pcre_version_string(int which) { +#ifdef HAVE_PCRE2 +static char buf[80]; +#endif switch (which) { case AP_REG_PCRE_COMPILED: -return APR_STRINGIFY(PCRE_MAJOR) "." APR_STRINGIFY(PCRE_MINOR) " " APR_STRINGIFY(PCRE_DATE); +return APR_STRINGIFY(PCREn(MAJOR)) "." APR_STRINGIFY(PCREn(MINOR)) " " APR_STRINGIFY(PCREn(DATE)); case AP_REG_PCRE_LOADED: +#ifdef HAVE_
Re: About Interim Response Headers (was: Content-Length header for 1xx status codes)
On Dec 7, 2016 6:23 PM, "Jacob Champion" <champio...@gmail.com> wrote: On 12/07/2016 04:00 PM, William A Rowe Jr wrote: > Consider for a moment the case of an HTTP/1.1 upgrade request > unrecognized by a proxy agent. > It was my understanding that this is an impossible situation for a conforming proxy, since Upgrade is hop-by-hop. What am I missing? The fact that there is no way for us to predict what new headers we are passed in the future are defined in the future to be hop-by-hop, which result in a 105 response code with a similarly imponderable conundrum. No way for RFC2068 servers to know 101 became a hop by hop unhandleable response.
Re: About Interim Response Headers (was: Content-Length header for 1xx status codes)
On Dec 7, 2016 5:04 PM, "Jacob Champion"wrote: [combining replies to your last two emails] > No, not reject -- conforming clients parse (and may discard) 1xx's they don't understand, treating them like 100 Continue, and proxies must forward as-is (unless the proxy was the agent requesting the 1xx response). Right? RFC 7231, sec. 6.2: A client MUST be able to parse one or more 1xx responses received prior to a final response, even if the client does not expect one. A user agent MAY ignore unexpected 1xx responses. A proxy MUST forward 1xx responses unless the proxy itself requested the generation of the 1xx response. For example, if a proxy adds an "Expect: 100-continue" field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s). Consider for a moment the case of an HTTP/1.1 upgrade request unrecognized by a proxy agent. The 101 switching protocols response is garbage to the proxy agent, but passed along. The proxy agent attempts to read the response and gets nothing meaningful, the originating user agent gets the 101 response and is busy in the new protocol and reads the 500 error as garbage input. The entire idea of forwarding unrecognized/in handled 1xx responses is madness.
Re: svn commit: r1772678 [2/2] - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http/ server/
Patch incoming. More below... On Wed, Dec 7, 2016 at 4:01 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Wed, Dec 7, 2016 at 2:11 PM, Ruediger Pluem <rpl...@apache.org> wrote: > >> Sorry for chiming in so late :-(. > > > Again, no worries... > > On 12/05/2016 03:34 PM, j...@apache.org wrote: >> > Modified: httpd/httpd/branches/2.4.x/server/protocol.c >> > URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/serv >> er/protocol.c?rev=1772678=1772677=1772678=diff >> > >> == >> > --- httpd/httpd/branches/2.4.x/server/protocol.c (original) >> > +++ httpd/httpd/branches/2.4.x/server/protocol.c Mon Dec 5 14:34:29 >> 2016 >> >> > +else if (r->protocol[0]) { >> > +r->assbackwards = 0; >> > +r->proto_num = HTTP_VERSION(1,0); >> > +/* Defer setting the r->protocol string till error msg is >> composed */ >> > +if (strict && deferred_error == rrl_none) >> > +deferred_error = rrl_badprotocol; >> > Rethinking this, anything other than /HTTP\/n.n/i is truly garbage. More likely than not, there was a raw SP in the URL. This must not be evaluated based on 'strict'ness. > > +else >> > +r->protocol = "HTTP/1.0"; >> >> Hm. r->protocol is not const char*. Shouldn't we use >> >> apr_pstrdup(r->pool, "HTTP/1.0"); >> > It was easier to simplify this down to an exception in rrl_failed by setting the protocol to 0.9 for the moment. > > +} >> > > +else { >> > +r->assbackwards = 1; >> > +r->protocol = "HTTP/0.9"; >> > +r->proto_num = HTTP_VERSION(0, 9); >> > +} >> > > Does it make more sense to to apply the apr_pstrdup() against > r->protocol just here, after all but one assignment is made? > > The same question occurs here with 0.9, and down at rrl_failed: > below. We can apr_pstrdup each of these three, letting the caller > manipulate the original we scanned in the successful case, or > make a copy in all cases right here, once again below for 1.0. > Also note a few cases of 0-length strings. Can someone alter our empty NULL terminator and expect any good to come of that? ... r->protocol = uri = ""; ... r->protocol = ""; But these will be wiped out shortly thereafter. > I had expected to see some warning... and mentioned it in the > initial commit log, but my maintainer-mode build did not trip on this. > Hmmm... using a constant is not unprecedented... For example... protocol.c (an unedited bit of it)... AP_DECLARE(void) ap_set_sub_req_protocol(request_rec *rnew, const request_rec *r) { rnew->the_request = r->the_request; /* Keep original request-line */ rnew->assbackwards= 1; /* Don't send headers from this. */ rnew->no_local_copy = 1; /* Don't try to send HTTP_NOT_MODIFIED for a * fragment. */ rnew->method = "GET"; rnew->method_number = M_GET; rnew->protocol= "INCLUDED"; mod_proxy_hcheck.c ... /* * Create a dummy request rec, simply so we can use ap_expr. * Use our short-lived poll for bucket_alloc */ static request_rec *create_request_rec(apr_pool_t *p1, conn_rec *conn, const char *method) { [...] r->protocol = "HTTP/1.0"; r->proto_num = HTTP_VERSION(1, 0);
Re: About Interim Response Headers (was: Content-Length header for 1xx status codes)
On Wed, Dec 7, 2016 at 4:19 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Wed, Dec 7, 2016 at 2:23 PM, Jacob Champion <champio...@gmail.com> > wrote: > >> On 12/07/2016 12:03 PM, William A Rowe Jr wrote: >> >>> On Wed, Dec 7, 2016 at 10:50 AM, Jacob Champion <champio...@gmail.com >>> <mailto:champio...@gmail.com>> wrote: >>> >>> What's your end goal? I don't think we can *prohibit* modules from >>> sending extra headers in 100 responses, but it does make sense to >>> limit what we send by default, especially if we're currently sending >>> Content-Length (which IIUC is meaningless in that context). >>> >>> When we send 100 CONTINUE from the http_filters.c core read >>> filter, we send no headers. Simple. >>> >> >> 1) Are 1xx responses handled by the http_filters code? I was under the >> impression that they were not. >> > > that code generates 100 CONTINUE responses, yes. Although not > for mod_proxy_http in RFC mode, that case is a pass through of > the various 1xx responses, except for 100 if it is not expected still. > > > >> 2) This is allowed by the spec, I guess, but it seems punitively strict >> to me. Some deployments might be using something interesting in the >> headers, perhaps a precursor to the upcoming 103? > > > From a mod_proxy perspective, other 1xx's could be sent with the > headers that accompanied them, but what headers of value can > httpd add? None, IMO. mod_proxy_http should read the headers > from 1XX interims into an empty headers array and send them on > using ap_send_interim_response without extra processing, until > we finally arrive at a final status which httpd should decorate using > whatever happened to the headers_out array during pre-processing, > including mod_headers activity. > > >> mod_proxy_http shoves all existing Set-Cookie headers into the >>> >> interim response (e.g. those added by other modules), and it doesn't >>> appear that these headers are cleared after a 100 continue response. >>> The Via response is added, hop by hop headers are cleared, and >>> a whole bunch of other inappropriate processing is going on there. >>> That module looks the most problematic, send_interim_response >>> was not very well thought out and includes much internal httpd >>> decoration that is only appropriate for the final response. >>> >> I mean the logic between reading interim headers and the call to send_interim_response was inappropriate... and note that we really can't relay very many 'unexpected' 1XX codes, in fact all 1xx codes must be negotiated (even 100 with the 100-continue expectation), something that we will be ill-posed to do as a proxy intermediary. For example, the protocol negotiation itself is hop-by-hop and the mod_proxy_http can only speak protocols it knows. We may want to use 100-continue expectations to the backend even if our client is HTTP/1.0 and we cannot forward them, provide them to our client when asked even if the backend won't support them, etc. This item in particular can't be resolved for 2.4.24 IMO, it is sufficiently borked as to merit no attention until those items with concensus are completed and 2.4.24 is tagged.
Re: About Interim Response Headers (was: Content-Length header for 1xx status codes)
On Wed, Dec 7, 2016 at 2:23 PM, Jacob Champion <champio...@gmail.com> wrote: > On 12/07/2016 12:03 PM, William A Rowe Jr wrote: > >> On Wed, Dec 7, 2016 at 10:50 AM, Jacob Champion <champio...@gmail.com >> <mailto:champio...@gmail.com>> wrote: >> >> What's your end goal? I don't think we can *prohibit* modules from >> sending extra headers in 100 responses, but it does make sense to >> limit what we send by default, especially if we're currently sending >> Content-Length (which IIUC is meaningless in that context). >> >> When we send 100 CONTINUE from the http_filters.c core read >> filter, we send no headers. Simple. >> > > 1) Are 1xx responses handled by the http_filters code? I was under the > impression that they were not. > that code generates 100 CONTINUE responses, yes. Although not for mod_proxy_http in RFC mode, that case is a pass through of the various 1xx responses, except for 100 if it is not expected still. > 2) This is allowed by the spec, I guess, but it seems punitively strict to > me. Some deployments might be using something interesting in the headers, > perhaps a precursor to the upcoming 103? >From a mod_proxy perspective, other 1xx's could be sent with the headers that accompanied them, but what headers of value can httpd add? None, IMO. mod_proxy_http should read the headers from 1XX interims into an empty headers array and send them on using ap_send_interim_response without extra processing, until we finally arrive at a final status which httpd should decorate using whatever happened to the headers_out array during pre-processing, including mod_headers activity. > I'm looking at ap_send_interim_response and am somewhat confused >> by core_upgrade handler, which I'll get to in a moment. >> >> h2_h2.c check_push() applies only to HTTP/2, and notably does >> not clear headers_out before adding the Link header and pushing >> out a 103 response, but I'm not going to dig much further at this >> exact moment. >> > > Well, in that case the entire point of 103 is to send headers. That is, if > I'm understanding the spec correctly; Stefan, can you confirm? 102 PROCESSING is defined here, per the registry, https://tools.ietf.org/html/rfc2518#page-59 It doesn't seem to call for additional headers (concieveable one would want to add X-Delay: sec or something). 103 is not defined here; http://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml In fact there doesn't seem to be any canonical definition. HTTP/1.1 should surely reject such, HTTP/2, ENOCLUE (but mod_proxy_http2 should not let it dribble back into HTTP/1.1 requests). > mod_proxy_http shoves all existing Set-Cookie headers into the >> interim response (e.g. those added by other modules), and it doesn't >> appear that these headers are cleared after a 100 continue response. >> The Via response is added, hop by hop headers are cleared, and >> a whole bunch of other inappropriate processing is going on there. >> That module looks the most problematic, send_interim_response >> was not very well thought out and includes much internal httpd >> decoration that is only appropriate for the final response. >> >> core_upgrade_handler (in server/core.c) *does* clear headers_out >> before adding the two necessary Upgrade and Connection headers, >> so in this context we are good. But it handles 101 UPGRADE >> interim responses without first testing that expecting-100 was >> satisfied, per spec. There appears to be no way to process more >> than one upgrade (and we know from spec, that multiple upgrades >> may be necessary; crypto vs framing, for example). >> > > Expect: 100-continue doesn't have anything to do with 101 Upgrade... and > more than one upgrade doesn't make sense IMO, as the HTTP protocol is > replaced wholesale. I think a couple of protocol pieces are being confused > here. Uhm, they can be combined and the spec is clear on how must happen.
Re: svn commit: r1772678 [2/2] - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http/ server/
On Wed, Dec 7, 2016 at 2:11 PM, Ruediger Pluemwrote: > Sorry for chiming in so late :-(. Again, no worries... On 12/05/2016 03:34 PM, j...@apache.org wrote: > > Modified: httpd/httpd/branches/2.4.x/server/protocol.c > > URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/ > server/protocol.c?rev=1772678=1772677=1772678=diff > > > == > > --- httpd/httpd/branches/2.4.x/server/protocol.c (original) > > +++ httpd/httpd/branches/2.4.x/server/protocol.c Mon Dec 5 14:34:29 > 2016 > > > @@ -617,58 +646,269 @@ static int read_request_line(request_rec > > [...] > > + > > +/* Advance protocol pointer over leading whitespace, NUL terminate > the uri > > + * If non-SP whitespace is encountered, mark as specific error > > + */ > > +for (r->protocol = ll; apr_isspace(*r->protocol); ++r->protocol) > > +if (*r->protocol != ' ' && deferred_error == rrl_none) > > +deferred_error = rrl_badwhitespace; > > +*ll = '\0'; > > + > > +/* Scan the protocol up to the next whitespace, validation comes > later */ > > +if (!(ll = (char*) ap_scan_vchar_obstext(r->protocol))) { > > +len = strlen(r->protocol); > > +goto rrl_done; > > +} > > +len = ll - r->protocol; > > + > > +/* Advance over trailing whitespace, if found mark in error, > > + * determine if trailing text is found, unconditionally mark in > error, > > + * finally NUL terminate the protocol string > > + */ > > +if (*ll && !apr_isspace(*ll)) { > > +deferred_error = rrl_badprotocol; > > +} > > +else if (strict && *ll) { > > +deferred_error = rrl_excesswhitespace; > > } > > +else { > > +for ( ; apr_isspace(*ll); ++ll) > > +if (*ll != ' ' && deferred_error == rrl_none) > > +deferred_error = rrl_badwhitespace; > > +if (*ll && deferred_error == rrl_none) > > +deferred_error = rrl_trailingtext; > > +} > > +*((char *)r->protocol + len) = '\0'; > > + > > +rrl_done: > > +/* For internal integrety and palloc efficiency, reconstruct > the_request > > + * in one palloc, using only single SP characters, per spec. > > + */ > > +r->the_request = apr_pstrcat(r->pool, r->method, *uri ? " " : NULL, > uri, > > + *r->protocol ? " " : NULL, > r->protocol, NULL); > > > > -/* Provide quick information about the request method as soon as > known */ > > +if (len == 8 > > +&& r->protocol[0] == 'H' && r->protocol[1] == 'T' > > +&& r->protocol[2] == 'T' && r->protocol[3] == 'P' > > +&& r->protocol[4] == '/' && apr_isdigit(r->protocol[5]) > > +&& r->protocol[6] == '.' && apr_isdigit(r->protocol[7]) > > +&& r->protocol[5] != '0') { > > +r->assbackwards = 0; > > +r->proto_num = HTTP_VERSION(r->protocol[5] - '0', > r->protocol[7] - '0'); > > +} > > +else if (len == 8 > > + && (r->protocol[0] == 'H' || r->protocol[0] == 'h') > > + && (r->protocol[1] == 'T' || r->protocol[1] == 't') > > + && (r->protocol[2] == 'T' || r->protocol[2] == 't') > > + && (r->protocol[3] == 'P' || r->protocol[3] == 'p') > > + && r->protocol[4] == '/' && apr_isdigit(r->protocol[5]) > > + && r->protocol[6] == '.' && apr_isdigit(r->protocol[7]) > > + && r->protocol[5] != '0') { > > +r->assbackwards = 0; > > +r->proto_num = HTTP_VERSION(r->protocol[5] - '0', > r->protocol[7] - '0'); > > +if (strict && deferred_error == rrl_none) > > +deferred_error = rrl_badprotocol; > > +else > > +memcpy((char*)r->protocol, "HTTP", 4); > > +} > > +else if (r->protocol[0]) { > > +r->assbackwards = 0; > > +r->proto_num = HTTP_VERSION(1,0); > > +/* Defer setting the r->protocol string till error msg is > composed */ > > +if (strict && deferred_error == rrl_none) > > +deferred_error = rrl_badprotocol; > > +else > > +r->protocol = "HTTP/1.0"; > > Hm. r->protocol is not const char*. Shouldn't we use > > apr_pstrdup(r->pool, "HTTP/1.0"); > > > +} > > +else { > > +r->assbackwards = 1; > > +r->protocol = "HTTP/0.9"; > > +r->proto_num = HTTP_VERSION(0, 9); > > +} > Does it make more sense to to apply the apr_pstrdup() against r->protocol just here, after all but one assignment is made? The same question occurs here with 0.9, and down at rrl_failed: below. We can apr_pstrdup each of these three, letting the caller manipulate the original we scanned in the successful case, or make a copy in all cases right here, once again below for 1.0. I had expected to see some caution and mentioned it in the initial commit log, but my maintainer-mode build did not trip
Re: svn commit: r1772678 [1/2] - in /httpd/httpd/branches/2.4.x: ./ docs/manual/mod/ include/ modules/http/ server/
On Wed, Dec 7, 2016 at 2:11 PM, Ruediger Pluemwrote: > Sorry for chiming in so late :-(. No worries, good eye... On 12/05/2016 03:34 PM, j...@apache.org wrote: > > Author: jim > > Date: Mon Dec 5 14:34:29 2016 > > New Revision: 1772678 > > > > URL: http://svn.apache.org/viewvc?rev=1772678=rev > > Log: > > > Modified: httpd/httpd/branches/2.4.x/modules/http/http_filters.c > > URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/ > modules/http/http_filters.c?rev=1772678=1772677=1772678=diff > > > == > > --- httpd/httpd/branches/2.4.x/modules/http/http_filters.c (original) > > +++ httpd/httpd/branches/2.4.x/modules/http/http_filters.c Mon Dec 5 > 14:34:29 2016 > > > @@ -668,14 +684,83 @@ apr_status_t ap_http_filter(ap_filter_t > > return APR_SUCCESS; > > } > > > > +struct check_header_ctx { > > +request_rec *r; > > +int strict; > > +}; > > + > > +/* check a single header, to be used with apr_table_do() */ > > +static int check_header(void *arg, const char *name, const char *val) > > +{ > > +struct check_header_ctx *ctx = arg; > > +const char *test; > > + > > +if (name[0] == '\0') { > > +ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, ctx->r, APLOGNO(02428) > > + "Empty response header name, aborting request"); > > +return 0; > > +} > > + > > +if (ctx->strict) { > > +test = ap_scan_http_token(name); > > +} > > +else { > > +test = ap_scan_vchar_obstext(name); > > +} > Here the spec is specific that a field name contains only 'token' characters. A much looser reading for badly written back-ends is that they must not contain CTRL characters, or spaces. > > +if (*test) { > > +ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, ctx->r, APLOGNO(02429) > > + "Response header name '%s' contains invalid " > > + "characters, aborting request", > > + name); > > +return 0; > > +} > > + > > +if (ctx->strict) { > > +test = ap_scan_http_field_content(val); > > What characters are not allowed here that are allowed below? > Here the spec is specific that a field values cannot contain ctrls, excepting the whitespace tab character. This is what that corresponds to; (!c || (apr_iscntrl(c) && c != '\t')) > > +} > > +else { > > +/* Simply terminate scanning on a CTL char, allowing whitespace > */ > > +test = val; > > +do { > > +while (*test == ' ' || *test == '\t') test++; > > +test = ap_scan_vchar_obstext(test); > > +} while (*test == ' ' || *test == '\t'); > The above code else case should simply be thrown away and the ctx->strict test eliminated. This is a legacy of our choices around disallowing the RFC7230 section 3.5 oddball whitespace, including tabs like \f or \v. When we moved to be more literal about avoiding these, this code did become the strict case shown above. vchar_obstext may still be worthwhile to save, it differs from http_field_content only in not accepting HT or SP. > > +} > > +if (*test) { > > +ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, ctx->r, APLOGNO(02430) > > + "Response header '%s' value of '%s' contains > invalid " > > + "characters, aborting request", > > + name, val); > > +return 0; > > +} > > +return 1; > > +} > > + > > Regards > > Rüdiger > >
About Interim Response Headers (was: Content-Length header for 1xx status codes)
On Wed, Dec 7, 2016 at 10:50 AM, Jacob Champion <champio...@gmail.com> wrote: > On 12/07/2016 08:31 AM, William A Rowe Jr wrote: > >> Does anyone have pointers to legitimizing any 100 response >> headers? >> > > Date is called out explicitly (RFC 7231, sec. 7.1.1.2): > >[...] An origin server MAY >send a Date header field if the response is in the 1xx >(Informational) or 5xx (Server Error) class of status codes. > > It was even more explicit in 2616 (sec. 14.18): > > 1. If the response status code is 100 (Continue) or 101 (Switching >Protocols), the response MAY include a Date header field, at >the server's option. > > What's your end goal? I don't think we can *prohibit* modules from sending > extra headers in 100 responses, but it does make sense to limit what we > send by default, especially if we're currently sending Content-Length > (which IIUC is meaningless in that context). > When we send 100 CONTINUE from the http_filters.c core read filter, we send no headers. Simple. I'm looking at ap_send_interim_response and am somewhat confused by core_upgrade handler, which I'll get to in a moment. h2_h2.c check_push() applies only to HTTP/2, and notably does not clear headers_out before adding the Link header and pushing out a 103 response, but I'm not going to dig much further at this exact moment. mod_proxy_http shoves all existing Set-Cookie headers into the interim response (e.g. those added by other modules), and it doesn't appear that these headers are cleared after a 100 continue response. The Via response is added, hop by hop headers are cleared, and a whole bunch of other inappropriate processing is going on there. That module looks the most problematic, send_interim_response was not very well thought out and includes much internal httpd decoration that is only appropriate for the final response. core_upgrade_handler (in server/core.c) *does* clear headers_out before adding the two necessary Upgrade and Connection headers, so in this context we are good. But it handles 101 UPGRADE interim responses without first testing that expecting-100 was satisfied, per spec. There appears to be no way to process more than one upgrade (and we know from spec, that multiple upgrades may be necessary; crypto vs framing, for example). Prior to core_upgrade_handler calling ap_send_interim_response to the upgrade, The 100 continue interim response is required, and in response to the client's reaction to the 100 continue, the request body must be read off the wire. Only then may the upgrade interim response, and ap_switch_protocol handling occur. We also seem to be failing to send Upgrade and Connection headers after the first response (c.f. the c->keepalives), in the core_upgrade_handler. Is this some "stateful" reading of the HTTP protocol? I had thought we discussed this on list once before. It appears to run in the map_to_storage hook for OPTIONS * requests, and is deferred to VERY_FIRST in the handler phase otherwise. This seems awfully late for the intermediate phases, e.g. auth requiring a TLS connection, but it doesn't seem possible to resolve this with respect to 100 continue, which the spec insists can be avoided with an error response, and they use an auth failure in that example :) > *Are* we currently sending Content-Length in 100 responses? Luca's bug > mentions 204 only. > I may have misread the top line complaint here.
Re: Content-Length header for HTTP 204 and 1xx status codes
On Wed, Dec 7, 2016 at 9:05 AM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Nov 30, 2016 11:46 AM, "Luca Toscano" <toscano.l...@gmail.com> wrote: > > Hi everybody, > > while working on https://bz.apache.org/bugzilla/show_bug.cgi?id=51350 a > user asked why httpd send the "Content-Length: 0" header for HTTP 204 > responses given the following statement in the RFC: > > https://tools.ietf.org/html/rfc7230#page-30 > "A server MUST NOT send a Content-Length header field in any response with > a status code of 1xx (Informational) or 204 (No Content)." > > > I was looking at the spec for 101 and 100 responses and think we are going > way overboard on replying with a 100 response. Looking at the 101 example, > we should send a reply of 0 or a few very explicit header fields and save > the balance of output headers for the final response code. Otherwise these > all seem to be wasted network bytes. > The 101 Upgrading response has a very short list of necessary headers, only Connection: and Upgrade: fields are informative; https://tools.ietf.org/html/rfc7230#section-6.7 I did not find anything in these sections on useful 100 Continue response headers, and believe there are none; https://tools.ietf.org/html/rfc7231#section-6.2.1 https://tools.ietf.org/html/rfc7231#section-5.1.1 Does anyone have pointers to legitimizing any 100 response headers?
Re: Content-Length header for HTTP 204 and 1xx status codes
On Nov 30, 2016 11:46 AM, "Luca Toscano"wrote: Hi everybody, while working on https://bz.apache.org/bugzilla/show_bug.cgi?id=51350 a user asked why httpd send the "Content-Length: 0" header for HTTP 204 responses given the following statement in the RFC: https://tools.ietf.org/html/rfc7230#page-30 "A server MUST NOT send a Content-Length header field in any response with a status code of 1xx (Informational) or 204 (No Content)." I was looking at the spec for 101 and 100 responses and think we are going way overboard on replying with a 100 response. Looking at the 101 example, we should send a reply of 0 or a few very explicit header fields and save the balance of output headers for the final response code. Otherwise these all seem to be wasted network bytes. WDYT?
Re: Is 2.2.x broken?
An associated patch was likely overlooked, please go ahead and revert. On Dec 6, 2016 12:07 PM, "Jacob Champion"wrote: > I'm trying to test a backport to the 2.2.x tip, but it looks like r1758671 > might have busted our request parsing. Every test we have fails; I think > ap_rgetline() is no longer dealing with the final CRLF line correctly. > Reverting that commit fixes things. > > Can anyone confirm that it's not just something on my end? > > --Jacob >
PCRE 10 and puzzling edge cases
I've written the following patch to trunk to allow us to configure, compile and link against PCRE2 (10.x). The autoconf in particular is streamlined for cross-compilation detection, while retaining the ability to override the path to (and name of) pcre[2]-config. It isn't in a commit-ready state due to t/TEST t/apache/expr.t failures (among others), and the defects appear to revolve around the way substring patterns are recorded. Attached the test failure cases (many similar test patterns do succeed, interestingly.) One test looks outright wrong. I'd rather not beat my head against these if the answer is blatantly obvious. If anyone has patience for exploring this further, any help is welcomed. Philip starts with this assertion; "The original, very widely deployed PCRE library, originally released in 1997, is at version 8.39, and the API and feature set are stable—future releases will be for bugfixes only. All new future features will be to PCRE2, not the original PCRE 8.x series." But he has gone on to state that many fuzzing error cases which are handled correctly in PCRE2 cannot be realistically fixed in PCRE 8.x. I've placed this up there with other parsing rewrites in httpd, that starting over is simply the correct answer, and I'd like to see if we can have httpd 3.0 choosing PCRE2 over PCRE in the near future (and perhaps backport this if we determine behavior is consistent.) Cheers, Bill Index: configure.in === --- configure.in (revision 1772810) +++ configure.in (working copy) @@ -223,18 +223,18 @@ AC_ARG_WITH(pcre, APACHE_HELP_STRING(--with-pcre=PATH,Use external PCRE library)) -AC_PATH_PROG(PCRE_CONFIG, pcre-config, false) -if test -d "$with_pcre" && test -x "$with_pcre/bin/pcre-config"; then - PCRE_CONFIG=$with_pcre/bin/pcre-config -elif test -x "$with_pcre"; then - PCRE_CONFIG=$with_pcre -fi +AC_CHECK_TARGET_TOOLS(PCRE_CONFIG, [pcre2-config pcre-config], + [`which $with_pcre 2>/dev/null`], + [$with_pcre/bin:$with_pcre]) -if test "$PCRE_CONFIG" != "false"; then +if test "x$PCRE_CONFIG" != "x"; then if $PCRE_CONFIG --version >/dev/null 2>&1; then :; else -AC_MSG_ERROR([Did not find pcre-config script at $PCRE_CONFIG]) +AC_MSG_ERROR([Did not find working script at $PCRE_CONFIG]) fi case `$PCRE_CONFIG --version` in + [1[0-9].*]) +AC_DEFINE(HAVE_PCRE2, 1, [Detected PCRE2]) +;; [[1-5].*]) AC_MSG_ERROR([Need at least pcre version 6.7]) ;; @@ -244,10 +244,10 @@ esac AC_MSG_NOTICE([Using external PCRE library from $PCRE_CONFIG]) APR_ADDTO(PCRE_INCLUDES, [`$PCRE_CONFIG --cflags`]) - APR_ADDTO(PCRE_LIBS, [`$PCRE_CONFIG --libs`]) + APR_ADDTO(PCRE_LIBS, [`$PCRE_CONFIG --libs8 2>/dev/null || $PCRE_CONFIG --libs`]) APR_ADDTO(HTTPD_LIBS, [\$(PCRE_LIBS)]) else - AC_MSG_ERROR([pcre-config for libpcre not found. PCRE is required and available from http://pcre.org/]) + AC_MSG_ERROR([pcre(2)-config for libpcre not found. PCRE is required and available from http://pcre.org/]) fi APACHE_SUBST(PCRE_LIBS) Index: server/util_pcre.c === --- server/util_pcre.c (revision 1772810) +++ server/util_pcre.c (working copy) @@ -46,10 +46,18 @@ #include "httpd.h" #include "apr_strings.h" #include "apr_tables.h" + +#ifdef HAVE_PCRE2 +#define PCRE2_CODE_UNIT_WIDTH 8 +#include "pcre2.h" +#define PCREn(x) PCRE2_ ## x +#else #include "pcre.h" +#define PCREn(x) PCRE_ ## x +#endif /* PCRE_DUPNAMES is only present since version 6.7 of PCRE */ -#ifndef PCRE_DUPNAMES +#if !defined(PCRE_DUPNAMES) && !defined(HAVE_PCRE2) #error PCRE Version 6.7 or later required! #else @@ -74,11 +82,19 @@ AP_DECLARE(const char *) ap_pcre_version_string(int which) { +#ifdef HAVE_PCRE2 +static char buf[80]; +#endif switch (which) { case AP_REG_PCRE_COMPILED: -return APR_STRINGIFY(PCRE_MAJOR) "." APR_STRINGIFY(PCRE_MINOR) " " APR_STRINGIFY(PCRE_DATE); +return APR_STRINGIFY(PCREn(MAJOR)) "." APR_STRINGIFY(PCREn(MINOR)) " " APR_STRINGIFY(PCREn(DATE)); case AP_REG_PCRE_LOADED: +#ifdef HAVE_PCRE2 +pcre2_config(PCRE2_CONFIG_VERSION, buf); +return buf; +#else return pcre_version(); +#endif default: return "Unknown"; } @@ -118,7 +134,11 @@ AP_DECLARE(void) ap_regfree(ap_regex_t *preg) { -(pcre_free)(preg->re_pcre); +#ifdef HAVE_PCRE2 +pcre2_code_free(preg->re_pcre); +#else +pcre_free(preg->re_pcre); +#endif } @@ -139,34 +159,46 @@ */ AP_DECLARE(int) ap_regcomp(ap_regex_t * preg, const char *pattern, int cflags) { +#ifdef HAVE_PCRE2 +size_t erroffset; +#else const char *errorptr; int erroffset; +#endif int errcode = 0; -int options = PCRE_DUPNAMES; +int options = PCREn(DUPNAMES); if ((cflags & AP_REG_ICASE) != 0) -options |= PCRE_CASELESS; +options |=
Re: svn commit: r1772418 - /httpd/httpd/trunk/modules/http/http_filters.c
You are correct, I read the commits out of sequence. Thx! On Dec 2, 2016 18:40, "Eric Covener" <cove...@gmail.com> wrote: > On Fri, Dec 2, 2016 at 7:28 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > On Fri, Dec 2, 2016 at 6:26 PM, William A Rowe Jr <wr...@rowe-clan.net> > > wrote: > >> > >> FWIW, did you actually fix this on truck and convey the backport > >> svn rev no? > > > > > > I see you sort-of have... please convey the svn commit r1772418 to your > > branch commit --revprop svn:log history, and I think we are all good. > > It's already in the log unless I'm misunderstanding. >
Re: svn commit: r1772418 - /httpd/httpd/trunk/modules/http/http_filters.c
On Fri, Dec 2, 2016 at 6:26 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > FWIW, did you actually fix this on truck and convey the backport > svn rev no? > I see you sort-of have... please convey the svn commit r1772418 to your branch commit --revprop svn:log history, and I think we are all good. > On Fri, Dec 2, 2016 at 6:25 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > >> That is code I'm less familiar with, but will give it careful scrutiny >> tomorrow. >> >> >> On Fri, Dec 2, 2016 at 6:12 PM, Eric Covener <cove...@gmail.com> wrote: >> >>> probably wiser ways to fix but I didn't want to sit on it. >>> >>> On Fri, Dec 2, 2016 at 7:10 PM, <cove...@apache.org> wrote: >>> > Author: covener >>> > Date: Sat Dec 3 00:10:31 2016 >>> > New Revision: 1772418 >>> > >>> > URL: http://svn.apache.org/viewvc?rev=1772418=rev >>> > Log: >>> > loop in checking response headers >>> > >>> > w/ HTTPProtocolOptions Unsafe >>> > >>> > Modified: >>> > httpd/httpd/trunk/modules/http/http_filters.c >>> > >>> > Modified: httpd/httpd/trunk/modules/http/http_filters.c >>> > URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http/ >>> http_filters.c?rev=1772418=1772417=1772418=diff >>> > >>> == >>> > --- httpd/httpd/trunk/modules/http/http_filters.c (original) >>> > +++ httpd/httpd/trunk/modules/http/http_filters.c Sat Dec 3 00:10:31 >>> 2016 >>> > @@ -667,6 +667,7 @@ static int check_header(void *arg, const >>> > /* Simply terminate scanning on a CTL char, allowing >>> whitespace */ >>> > test = val; >>> > do { >>> > +while (*test == ' ' || *test == '\t') test++; >>> > test = ap_scan_vchar_obstext(test); >>> > } while (*test == ' ' || *test == '\t'); >>> > } >>> > >>> > >>> >>> >>> >>> -- >>> Eric Covener >>> cove...@gmail.com >>> >> >> >
Re: svn commit: r1772418 - /httpd/httpd/trunk/modules/http/http_filters.c
FWIW, did you actually fix this on truck and convey the backport svn rev no? On Fri, Dec 2, 2016 at 6:25 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > That is code I'm less familiar with, but will give it careful scrutiny > tomorrow. > > > On Fri, Dec 2, 2016 at 6:12 PM, Eric Covener <cove...@gmail.com> wrote: > >> probably wiser ways to fix but I didn't want to sit on it. >> >> On Fri, Dec 2, 2016 at 7:10 PM, <cove...@apache.org> wrote: >> > Author: covener >> > Date: Sat Dec 3 00:10:31 2016 >> > New Revision: 1772418 >> > >> > URL: http://svn.apache.org/viewvc?rev=1772418=rev >> > Log: >> > loop in checking response headers >> > >> > w/ HTTPProtocolOptions Unsafe >> > >> > Modified: >> > httpd/httpd/trunk/modules/http/http_filters.c >> > >> > Modified: httpd/httpd/trunk/modules/http/http_filters.c >> > URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/http/ >> http_filters.c?rev=1772418=1772417=1772418=diff >> > >> == >> > --- httpd/httpd/trunk/modules/http/http_filters.c (original) >> > +++ httpd/httpd/trunk/modules/http/http_filters.c Sat Dec 3 00:10:31 >> 2016 >> > @@ -667,6 +667,7 @@ static int check_header(void *arg, const >> > /* Simply terminate scanning on a CTL char, allowing >> whitespace */ >> > test = val; >> > do { >> > +while (*test == ' ' || *test == '\t') test++; >> > test = ap_scan_vchar_obstext(test); >> > } while (*test == ' ' || *test == '\t'); >> > } >> > >> > >> >> >> >> -- >> Eric Covener >> cove...@gmail.com >> > >
Re: svn commit: r1772418 - /httpd/httpd/trunk/modules/http/http_filters.c
That is code I'm less familiar with, but will give it careful scrutiny tomorrow. On Fri, Dec 2, 2016 at 6:12 PM, Eric Covenerwrote: > probably wiser ways to fix but I didn't want to sit on it. > > On Fri, Dec 2, 2016 at 7:10 PM, wrote: > > Author: covener > > Date: Sat Dec 3 00:10:31 2016 > > New Revision: 1772418 > > > > URL: http://svn.apache.org/viewvc?rev=1772418=rev > > Log: > > loop in checking response headers > > > > w/ HTTPProtocolOptions Unsafe > > > > Modified: > > httpd/httpd/trunk/modules/http/http_filters.c > > > > Modified: httpd/httpd/trunk/modules/http/http_filters.c > > URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/ > http/http_filters.c?rev=1772418=1772417=1772418=diff > > > == > > --- httpd/httpd/trunk/modules/http/http_filters.c (original) > > +++ httpd/httpd/trunk/modules/http/http_filters.c Sat Dec 3 00:10:31 > 2016 > > @@ -667,6 +667,7 @@ static int check_header(void *arg, const > > /* Simply terminate scanning on a CTL char, allowing whitespace > */ > > test = val; > > do { > > +while (*test == ' ' || *test == '\t') test++; > > test = ap_scan_vchar_obstext(test); > > } while (*test == ' ' || *test == '\t'); > > } > > > > > > > > -- > Eric Covener > cove...@gmail.com >
Re: Random AH01842 errors in mod_session_crypto
On Fri, Dec 2, 2016 at 4:28 PM, Yann Ylavicwrote: > On Fri, Dec 2, 2016 at 10:06 PM, Yann Ylavic wrote: > > On Wed, Oct 5, 2016 at 12:23 PM, Yann Ylavic > wrote: > >> > >> Patch attached, WDYT? > > > > Ping, probably is worth considering for 1.6 (even 1.5) ? > > Committed to APR trunk (r1772414), 1.6.x (r1772415) and 1.5.x (r1772416). > Excellent! Docs edits also welcome for clarification.
Re: Time for 2.4.24!
On Thu, Dec 1, 2016 at 8:28 AM, Jim Jagielskiwrote: > hello? hello? Anyone there? :) > > > On Nov 21, 2016, at 7:47 AM, Jim Jagielski wrote: > > > > We have a few items in STATUS that, imo, should be tested, voted-on > > and the committed to the httpd-2.4 branch in anticipation of a new > > release SOON! > +1. There is a proposed mitigation of the many security defects intrinsic in HTTP server implementations sitting in STATUS with one vote, I'd think the project would want to correct this and any undisclosed vulnerabilities in the queue prior to T Sooner, the better. Cheers, Bill
Re: Random AH01842 errors in mod_session_crypto
On Fri, Dec 2, 2016 at 3:06 PM, Yann Ylavicwrote: > On Wed, Oct 5, 2016 at 12:23 PM, Yann Ylavic wrote: > > > > Patch attached, WDYT? > > Ping, probably is worth considering for 1.6 (even 1.5) ? > Provided you don't *break* the API contract with 1.x.x, corrections can be applied to 1.(current). This looks like it falls in that category. Things get wonky when you have duplicates-permitted and decide that's bad, so change the behavior to duplicates-disallowed. That would actually violate any 1.x release update. But as long as we are conforming the behavior to the published description, bug fixes are welcome.
Re: svn commit: r1772339 - /httpd/test/framework/trunk/t/conf/extra.conf.in
On Fri, Dec 2, 2016 at 6:06 AM,wrote: > Author: jim > Date: Fri Dec 2 12:06:30 2016 > New Revision: 1772339 > > URL: http://svn.apache.org/viewvc?rev=1772339=rev > Log: > This isn't in 2.4.24, yet. > Hmmm. Looks like a good case to be made for implementing AP_AC_HAVE_DIRECTIVE() /ducks
Re: JSON for mod_status
On Fri, Dec 2, 2016 at 3:28 AM, Stefan Eissing <stefan.eiss...@greenbytes.de > wrote: > > > Am 01.12.2016 um 23:14 schrieb Jim Jagielski <j...@jagunet.com>: > > > > > >> On Dec 1, 2016, at 2:16 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > >> > >> > >> Note that mod_bmx_status entries focus on what most management > >> frameworks are looking for, so thus far, it doesn't deliver an entire > >> dataset per connection or worker thread. It certainly could, of course. > > > > Nor does it expose anything from any other beans created > > by any other module. > > > > Can it? > > > > If so, exactly how. > > > >> > >> Again, it simply iterates the entire contents of each bean presented > >> to it's own print_bean_fn callback. If you have a look at; > >> https://github.com/hyperic/mod_bmx/blob/master/modules/bmx/mod_bmx.h > >> that should give you a somewhat higher altitude perspective on Aaron's > >> API design. > > > > So mod_bmx_status has to query mod_bmx for "all beans" ?? > > It seems to me from the name of the callback that this is for "printing" > stuff, either all or only the name matching (which is probably a URL > parameter. Fair enough. > > But as Jim pointed out, it would be more powerful if one could > a) detect which beans are potentially available > +1. This is possible today simply by requesting 'all'. But an actual query of modules and property names exposed across the entire bean factory would be very interesting. > b) retrieve and aggregate them > We can on a 1:1 basis today (either an entire module scope, or only named properties across all modules, or all of the above.) It would be nice to enhance this as a many:1 response of several targeted data points desired. So, we'd have BeanProviders that take over a namespace. They register at > server start. > > ProviderRegistry["proxy/balancer"] -> registered provider hook by the > module > ProviderRegistry["proxy/type/h2"]-> registered provider hook by > mod_proxy_http2 > ProviderRegistry["proxy/type/h2c"] -> registered provider hook by > mod_proxy_http2 > > One can query this registry. > One would also register the Property names exposed by the provider, no? > For lookups, there is a BeanContext that caches beans once retrieved. So > there is not need to look them up again in the same request. That is passed > to all registered bean provider queries, so that providers can also query > for other beans while assembling their own. > > A bean produced this way is then send to the client in the requested > format using a standard set of BeanToJSON, BeanToXML, BeanToHTML > serializers. The beans just need to be a slim wrapper around > apr_table_t/apr_array_header_t/const char*/int/apr_size_t/apr_off_ > t/double. > > The registry can be mapped to URL space, so > * https://localhost/status/proxy/balancer would list the bean there, > * https://localhost/status/proxy/type would list all beans registered > with that prefix. > The current syntax is; https://localhost/status/?proxy_balancer <https://localhost/status/proxy/balancer>;* https://localhost/status/?proxy_balancer <https://localhost/status/proxy/balancer>;Pool=app1,State=Error etc. I'd have no issue with enhancing this to support your proposed syntax. As the already adopted backend of one or more management agents, I'd like to see us continue to accept the existing syntax. Use whichever is most appropriate. The existing design expects the provider to choose a name once and remain consistent. So even as new property/value pairs can be added, the behavior of existing ones remains consistent. Otherwise the interface would have no value to a management agent / snmp replacement. Bill
Re: JSON for mod_status
On Thu, Dec 1, 2016 at 4:14 PM, Jim Jagielski <j...@jagunet.com> wrote: > > > On Dec 1, 2016, at 2:16 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > > > Note that mod_bmx_status entries focus on what most management > > frameworks are looking for, so thus far, it doesn't deliver an entire > > dataset per connection or worker thread. It certainly could, of course. > > Nor does it expose anything from any other beans created > by any other module. > > Can it? > > If so, exactly how. > > > > > Again, it simply iterates the entire contents of each bean presented > > to it's own print_bean_fn callback. If you have a look at; > > https://github.com/hyperic/mod_bmx/blob/master/modules/bmx/mod_bmx.h > > that should give you a somewhat higher altitude perspective on Aaron's > > API design. > > So mod_bmx_status has to query mod_bmx for "all beans" ?? > You are still thinking upside down. Think of mod_bmx as the data sink and mod_bmx_status as the data source. mod_bmx_status doesn't look at anything except the internals of httpd. Nobody queries mod_bmx_status directly except mod_bmx. The user agent queries mod_bmx.
Re: Time for 2.4.24!
On Thu, Dec 1, 2016 at 9:06 AM, Eric Covenerwrote: > Trying to look at the event stuff and the strict stuff. > > If any more experienced event folks are lurking, help on the review > would be great (And thanks to sf!) > I found the PR itself to be full of useful data for review (including some reporters' test results against the proposed patches).
Re: JSON for mod_status
On Thu, Dec 1, 2016 at 12:33 PM, Jim Jagielski <j...@jagunet.com> wrote: > > > On Dec 1, 2016, at 12:53 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > Finally the query fn in mod_bmx_status performs the callback indicated > > through its invocation to unspool the data in presentation format, which > > lives back in mod_bmx and behaves identically for every bmx extension. > > Adding the properties by type takes all of the onus off of each extension > > to know how to represent them once stored. E.g. our query_hook function > > can be as simple as; > > /* create the bean */ > > bmx_bean_create(_status_bean, bmx_status_objectname, r->pool); > > > > bmx_bean_prop_add(bmx_status_bean, > > bmx_property_string_create("ServerName", > >ap_get_server_name(r), > >r->pool));print_bean_fn(r, > bmx_status_bean); > > > > > > OK, this is the point I have the question about. From what > I can see, mod_bmx_status creates bmx_status_bean, adds stuff > to it, and then prints it out. > > But how does it get access to, for example, the beans created and > populated in mod_bmx_vhost? I understand that if you KNOW what > beans you are looking for, you can query them via mod_bmx, but > how do you know what beans have been added. > The beans are individually reflected to the print_bean_fn by each of the consumers who registered against the query_hook. mod_bmx then iterates the name of the bean and the property names to filter out what is wanted. If *everything* is wanted, simply don't give any query args to the bmx-handler request. > Right now, for example, we have a simple hook that mod_socache_redis > (for example) hooks into and once mod_status is done, it loops > through all registered hooks. But I don't see any similar type of > functionality in mod_bmx_status which allows mod_bmx_status to > reproduce the full status information that mod_status currently > does. > Note that mod_bmx_status entries focus on what most management frameworks are looking for, so thus far, it doesn't deliver an entire dataset per connection or worker thread. It certainly could, of course. One thing not implemented, json (or even bean) arrays, which might become very useful for tabular data from status, balancer, etc. So yeah, other modules create their beans. Fine. But how > do those beans get "registered" in mod_bmx_status so that they > can actually be displayed? Again, it simply iterates the entire contents of each bean presented to it's own print_bean_fn callback. If you have a look at; https://github.com/hyperic/mod_bmx/blob/master/modules/bmx/mod_bmx.h that should give you a somewhat higher altitude perspective on Aaron's API design. More background; https://github.com/hyperic/mod_bmx/blob/master/README-BMX.txt
Re: JSON for mod_status
On Thu, Dec 1, 2016 at 8:08 AM, Jim Jagielskiwrote: > My question is how do the beans, for example, from mod_bmx_vhost > get displayed via mod_bmx_status? > > I understand that one queries mod_bmx for specific beans, but at > the end of the day people still want/need a comprehensize > mod_status-like display. My expectation would be that mod_bmx_status > would provide that. > mod_bmx is the hub module which handles the bmx-handler generator. Configuration is as straightforward as compiling in or loading mod_bmx.so and following a typical server-status handler config; SetHandler bmx-handler Require ip 127.0.0.1 What mod_bmx reports depends on who hooks into it. You can run it by only loading mod_bmx_status, or only mod_bmx_vhost, or any combination of multiple providers. An obvious new extension would be mod_proxy_balancer pool load reporting. mod_bmx_status hooks into this framework by setting a server-lifetime bean indicating that ServerStatus is extended or not, and then hooks into the mod_bmx query hook during the pre-config phase; APR_OPTIONAL_HOOK(bmx, query_hook, bmx_status_query_hook, NULL, NULL, APR_HOOK_MIDDLE); Our mod_bmx_status query_hook handler is a fairly typical 'handler', of data instead of content; static int bmx_status_query_hook(request_rec *r, const struct bmx_objectname *query, bmx_bean_print print_bean_fn) We don't want to refresh our data if the query indicates that the request to the bmx-handler excluded server-status data, so there's a short circuit right off the bat. Provided some subset of server-status data is needed, this function continues by populating the data for mod_bmx to present (which it can further filter by item name without help from the extensions). Finally the query fn in mod_bmx_status performs the callback indicated through its invocation to unspool the data in presentation format, which lives back in mod_bmx and behaves identically for every bmx extension. Adding the properties by type takes all of the onus off of each extension to know how to represent them once stored. E.g. our query_hook function can be as simple as; /* create the bean */ bmx_bean_create(_status_bean, bmx_status_objectname, r->pool); bmx_bean_prop_add(bmx_status_bean, bmx_property_string_create("ServerName", ap_get_server_name(r), r->pool));print_bean_fn(r, bmx_status_bean); For Stefan's and others concerns, it would probably be most useful for Doug or Aaron to explain how they came to the bean datum model for representing bmx's results. .
Re: JSON for mod_status
You are looking at one fold of monitoring data. See, for example, mod_bmx_vhost which further extends many beans by hostname. The response generator and framework lives in mod_bmx itself. On Nov 30, 2016 14:35, "Jim Jagielski"wrote: One thing that I can't understand from an initial look is how the whole hook thing is. In mod_status, we have a hook that is run in mod_status and other modules use to supplement the mod_status output. With mod_bmx it looks like instead whatever "chunk" of data you want to make available, you put into a bean, but mod_bmx_status uses a local bean with no provision to print out other beans as well. How does one "extend" the info printed by mod_bmx_status via other beans added and populated by other modules?
Sub-modules viability (was: [Vote] Considering mod_bmx for adoption by HTTP Server Project)
On Wed, Nov 30, 2016 at 2:05 PM, Jim Jagielskiwrote: > > Module sub-modules NEVER get the love and attention they need > and warrant > It would be interesting to see this applied to fcgid on a fast-track. I don't know that ftp warrants it this moment. I don't know why arm4 is still one of our repositories with never-a-release. I made the direct-to-trunk-backport-2.4 proposal based on the efforts of Stefan who has continued to maintain a buildable external third party module of mod_h2, while keeping it in-sync with httpd 2.4. Maybe it is the better model. Older httpd flavors don't get the patch attention and backport considerations they need. Independently, Stefan can maintain a viable third party plug-in module for older flavors of httpd, but it can move forward with httpd core. No waiting on any backport votes, it simply happens when Stefan has cycles to do it. Contrawise, mod_h2 plugable add-on could have become a modules sub-project and available from the ASF. Either way, there is some level of synchronization required. Since Stefan is doing that all, it's truly his choice of what is simplest. Likewise, I'm happy to ensure mod_bmx on github continues to be a viable plug-in to httpd-2.2 or 2.4, and carry back the AL commits to the trunk/2.4 branch efforts at httpd. Having maintained a few sub-modules with mixed success, I'd be happy to see us retire the model, it simply doesn't work any better than landing on the perpetual 2.4.x release side-track. It seems like the entire effort has slipped the rail, but more on that in a few weeks when 2.4.24 is put to bed. In the meantime, there seems to already be consensus to shift mod_fcgid as well to trunk/.
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 2:10 PM, Jim Jagielskiwrote: > One final note: > > The reason I brought all this up is that right now, using mod_status > for actual automated status checking is downright painful. At > ApacheCon, due to, I guess, me adding the mod_status hooks to > the redis and memcache socache providers, people came up to > me and asked when, if ever, we would support JSON in mod_status. > ++1 :) S > > What I was hoping for was an implementation that could easily, > and in some reasonable timeframe, be made available to our > 2.4.x community. I don't want something that just ends up > in trunk, to sit and languish until the day when we eventually > decide on releasing 2.6/3.0. > Since it's written and is working just fine (and is part of the RHEL distribution) - let's bring bmx in, then think about how to refactor the bean data and status hooks to converge them in 2.next/3.0. Or we can play NIH. (Especially absurd since an httpd committer wrote the whole first implementation.)
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 1:58 PM, Jim Jagielskiwrote: > I'm not pushing back on mod_bmx at all... I just think > that there is a better understanding here about "mod_status > producing JSON" than what the changeover to mod_bmx would > be. > > What I would push back on would be having 2 implementations, > since that's just weird :) But if the BMX data format is > "better" then we should use that. > > As Nick said, what is key is that we should produce this > internal data in a known and easily parseable exchange > format, one that we can then directly send as a response > as well as one which we can, via a provider-like interface, > xlate to HTML and other "common" formats. Right now, I > think "plain", "json" and "html" are the only ones we > should worry about. > So to be clear, it is easy to flip mod_status's core data to any one of dozens of formats without any further debate. It is not so easy to carry along the third party hooked status representations of *their* data, not being able to predict how httpd 2.4.x will keep changing behavior during it's 'maintenance' lifespan, not knowing how they presented lists or tables and how they aught to be mapped to json tokens. Contra-wise, turning everything into json-always with 2.next or 3.0 isn't a solution either, since there's no way for mod_status to predict the best representation of the json output from any hooked third party modules. So the best solution, IMO, is to map as beans all of the data to be represented as raw json (and other raw formats), which can be filtered for the 'interesting' data that downstream monitoring clients want to consume, and leave the existing status hook for third party modules to map their data into presentation html (or xml, if we want to go that way.)
[Vote] Considering mod_bmx for adoption by HTTP Server Project
I'm pleased to conversations with the principal folks that Hyperic/VMW and Pivotal are interested in passing along the mod_bmx codebase to the httpd project, if we will have it. https://github.com/hyperic/mod_bmx Scoped by the Hyperic team to represent mod_status info in a flexible and extensible bean-registry of items, which are reported over http in a json-parsable format, and replaced the need for mod_snmp, it also extends status insight into specific vhost activity, although this measurement requires locking which may be too painful for some admins. Written initially by out own Aaron Bannert contracting to Hyperic, it has been maintained by Ryan Morgan and myself, along with other odd patches, including recent fixes by Maxime Beck and folks on jfclere's team. Since we are already spanning vendors, httpd seems like the perfect commons space to further build on this work. I have a champion to shepard code grant paperwork through VMW/Hyperic/Pivotal signatories, if we choose to accepting the donation. There seem to be two paths for us to accept this donation, so I'll put this out as a three choice answer; [ ] Accept mod_bmx into the httpd project as a module sub-project [ ] Accept mod_bmx into the httpd trunk (for possible backport consideration) [ ] Decline mod_bmx donation Votes? Please feel free to switch the Subject: line to launch into side-discussions about the source code, all inquiries are welcome.
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 1:41 PM, Jim Jagielski <j...@jagunet.com> wrote: > > On Nov 30, 2016, at 2:31 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > On Wed, Nov 30, 2016 at 1:27 PM, Jim Jagielski <j...@jagunet.com> wrote: > > > > > On Nov 30, 2016, at 1:56 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > > > > > > There's no way to anticipate the "right way" to map json > > > tables to presentation-level results. > > > > ??? > > > > Certainly that is the job of the xlate provider that > > would be used, wouldn't it. Or are you suggesting that > > JSON itself isn't suitable as a data format in this case? > > > > I'm asking how one would maintain the xslt for the half dozen > > json data providers of mod_status extensions within the stock > > distro, PLUS any mod_status extension providers in the third > > party world of modules? > > > > U the data providers that hook into mod_status simply > push JSON. Same w/ any other extension providers. That is, > ALL mod_status does is provide JSON data; mod_status_html takes > that JSON format and xlates it to HTML; mod_status_plain takes > it and xlates it to simple text. > > Where does JSON fail and bean succeed? > > Other than being something that I brought up, I'm not sure > I understand your resistance, or why/how BMX is "better" > than JSON. BMX is entireliy JSON. I have no argument with your suggestion, which is why I'm bringing it up (you are the only one on the private list to have pushed back against adopting mod_bmx, why so I'm really not clear.) As you can see in the sources, mod_bmx_status is interrogating the scoreboard, not mod_status, for the underlying data. Beans provide a completely extensible way to map additional data for mod_bmx to present. What I'm questioning is whether a transliteration of unknown-future data can ever succeed, or whether third party modules won't need both a presentation-layer and json/bean-layer to represent their data in user and machine usable formats. I'm not familiar with a good interface for json -> html, pointers would be appreciated. I'm fairly familiar with xml -> json + xml -> html using xslt. That's presuming you know the entire plane of data to represent.
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 1:27 PM, Jim Jagielski <j...@jagunet.com> wrote: > > > On Nov 30, 2016, at 1:56 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > > > There's no way to anticipate the "right way" to map json > > tables to presentation-level results. > > ??? > > Certainly that is the job of the xlate provider that > would be used, wouldn't it. Or are you suggesting that > JSON itself isn't suitable as a data format in this case? > If we have well-formed xml today out of mod_status, it's pretty straightforward to write the json transformation you want as an xslt through filtering, provided you plug in a libxslt transform.
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 1:27 PM, Jim Jagielski <j...@jagunet.com> wrote: > > > On Nov 30, 2016, at 1:56 PM, William A Rowe Jr <wr...@rowe-clan.net> > wrote: > > > > > > There's no way to anticipate the "right way" to map json > > tables to presentation-level results. > > ??? > > Certainly that is the job of the xlate provider that > would be used, wouldn't it. Or are you suggesting that > JSON itself isn't suitable as a data format in this case? > I'm asking how one would maintain the xslt for the half dozen json data providers of mod_status extensions within the stock distro, PLUS any mod_status extension providers in the third party world of modules?
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 1:22 PM, Jim Jagielskiwrote: > Any online examples of it running? It's been a SUPER long > time since I looked at it. And I can't recall what the bean > framework is or does... As with /server-status - not advised unless you EDONTCARE about host inspection. The bean framework allows you to expand the nodes known to the bmx data store, with new arbitrary fields and values. This 380-line module maps all of the conventional mod_status data into the status bean data for reporting through mod_bmx... https://github.com/hyperic/mod_bmx/blob/master/modules/bmx/mod_bmx_status.c Other modules can provide similar beans that can be globbed from the /bmx-status generated page based on pattern matching. There are few predefined constraints on the labeling.
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 12:38 PM, Nick Kewwrote: > On Wed, 2016-11-30 at 12:54 -0500, Jim Jagielski wrote: > > I'm thinking about adding JSON support to mod_status... > > the "plain" version output really stinks and lacks parity > > w/ the info we provide via HTML, and it would be nice > > to produce a really easily parseable format. > > > > Thoughts...? > > Ideally that should be the job of an output filter. > If mod_status were to generate a single standard-ish > format, like JSON or XML, a general-purpose filter > could generate other formats. > The underlying flaw is our extensible status page output. There's no way to anticipate the "right way" to map json tables to presentation-level results. And modules may express multiple tables. See the ssl session cache for one good example, a simply key-value table of some of the current metrics, and a longer table for data. So there either needs to be an extensible data engine as Aaron created, with each status module choosing how to represent it's component data, or mod_status needs two custom hooks, one chain for presentation data and one chain for json data. Existing mod_status extentions in modules only register the presentation hook, so they would not be invoked on a json query. mod_bmx avoids this quandary by placing a different generator for json queries complete with filtering. Based on the interest expressed on this thread, I'm happy to put mod_bmx up for a vote tomorrow after letting this thread run for a bit.
Re: JSON for mod_status
On Wed, Nov 30, 2016 at 11:54 AM, Jim Jagielskiwrote: > I'm thinking about adding JSON support to mod_status... > the "plain" version output really stinks and lacks parity > w/ the info we provide via HTML, and it would be nice > to produce a really easily parseable format. > > Thoughts...? > It's right here... https://github.com/hyperic/mod_bmx/blob/master/README-BMX.txt I'd brought this to the PMC's attention to see if there were sufficient maintainers and interested parties. In short, the original author was our own Bannert. In short, Pivotal is offering this to the HTTPD project under AL 2.0 (as it's already licensed). Right now, jfclere and I are the only two really looking at the code, although it has many commercial adopters (the preferred httpd introspection agent of the Hyperic monitoring software). What bannert coded was more than converting status -> json. He build a bean framework that is extensible to any other httpd module. It seems like that bean framework might even be a candidate to move into the httpd core with 2.next? As far as extentions, his most useful was the mod_bmx_vhost plug in. Taking it to the next step, that module should be able to shoot out a nice presentation table for mod_status users, while folks inspecting the bmx json bean interface already get these vhost-specific breakdowns.
Re: httpd test suite breakage
On Tue, Nov 29, 2016 at 7:25 AM, Petr Gajdos <pgaj...@suse.cz> wrote: > On Tue, Nov 29, 2016 at 01:09:25PM +, Joe Orton wrote: > > On Mon, Nov 28, 2016 at 05:16:12PM -0600, William A Rowe Jr wrote: > > > httpd: Syntax error on line 295 of > > > /home/wrowe/dev/test/test24-apr16-ossl102/t/conf/httpd.conf: Cannot > load > > > /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/test_ > session/.libs/mod_test_session.so > > > into server: > > > /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/test_ > session/.libs/mod_test_session.so: > > > undefined symbol: ap_hook_session_save > > > > > > Does anyone have an uncommitted test framework patch > > > up their sleeve to remedy? > > > > Modules using the session API can't be loaded if mod_session isn't > > loaded. > Of course. Note this threw me as mod_test_ssl doesn't suffer this same issue, since it correctly relied on optional hooks to avoid load order dependency (and emits errors in the log that it can't function if mod_ssl isn't loaded.) > > To make this "work" in the test framework we'd need to surround the > > generated LoadModule for mod_test_session with > > or something. I don't know if that can be done without hacking around > > inside Apache::Test. > > I was thinking about text file along each c-module which would express > the required modules which would be translated to . > It seems like something inside Apache::Test that indicated module dependencies for compiled test modules would be worthwhile. Looked some this morning, and noticed no special sauce other than #define HTTPD_TEST_REQUIRE_APACHE 2 which isn't quite the same thing. But it does look like any special sauce should be added to Apache-Test/lib/Apache/TestConfigC.pm unless we do some dependency logic. (Or simply fixed mod_session to not be load order dependent and use the optional entry point APIs.)
Re: httpd test suite breakage
My bad.. this is not against HEAD, this is Fedora 25. /sbin/httpd -v Server version: Apache/2.4.23 (Fedora) Server built: Jul 18 2016 15:38:14 Not so urgent for me now, but worth addressing I'd guess. On Mon, Nov 28, 2016 at 5:16 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > Against 2.4 HEAD... > > In file included from mod_test_session.c:61:0: > mod_test_session.c: In function ‘test_session_register_hooks’: > /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/apache_httpd_test.h:128:36: > warning: the comparison will always evaluate as ‘true’ for the address of > ‘test_session_handler’ will never be NULL [-Waddress] > if (APACHE_HTTPD_TEST_HANDLER != NULL) \ > ^ > mod_test_session.c:348:1: note: in expansion of macro > ‘APACHE_HTTPD_TEST_MODULE’ > APACHE_HTTPD_TEST_MODULE(test_session); > ^~~~ > /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/apache_httpd_test.h:136:39: > warning: the comparison will always evaluate as ‘true’ for the address of > ‘test_session_init’ will never be NULL [-Waddress] > if (APACHE_HTTPD_TEST_CHILD_INIT != NULL) \ >^ > mod_test_session.c:348:1: note: in expansion of macro > ‘APACHE_HTTPD_TEST_MODULE’ > APACHE_HTTPD_TEST_MODULE(test_session); > ^~~~ > > httpd: Syntax error on line 295 of /home/wrowe/dev/test/test24- > apr16-ossl102/t/conf/httpd.conf: Cannot load /home/wrowe/dev/test/test24- > apr16-ossl102/c-modules/test_session/.libs/mod_test_session.so into > server: /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/test_ > session/.libs/mod_test_session.so: undefined symbol: ap_hook_session_save > > Does anyone have an uncommitted test framework patch > up their sleeve to remedy? > > TIA, > > Bill >
httpd test suite breakage
Against 2.4 HEAD... In file included from mod_test_session.c:61:0: mod_test_session.c: In function ‘test_session_register_hooks’: /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/apache_httpd_test.h:128:36: warning: the comparison will always evaluate as ‘true’ for the address of ‘test_session_handler’ will never be NULL [-Waddress] if (APACHE_HTTPD_TEST_HANDLER != NULL) \ ^ mod_test_session.c:348:1: note: in expansion of macro ‘APACHE_HTTPD_TEST_MODULE’ APACHE_HTTPD_TEST_MODULE(test_session); ^~~~ /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/apache_httpd_test.h:136:39: warning: the comparison will always evaluate as ‘true’ for the address of ‘test_session_init’ will never be NULL [-Waddress] if (APACHE_HTTPD_TEST_CHILD_INIT != NULL) \ ^ mod_test_session.c:348:1: note: in expansion of macro ‘APACHE_HTTPD_TEST_MODULE’ APACHE_HTTPD_TEST_MODULE(test_session); ^~~~ httpd: Syntax error on line 295 of /home/wrowe/dev/test/test24-apr16-ossl102/t/conf/httpd.conf: Cannot load /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/test_session/.libs/mod_test_session.so into server: /home/wrowe/dev/test/test24-apr16-ossl102/c-modules/test_session/.libs/mod_test_session.so: undefined symbol: ap_hook_session_save Does anyone have an uncommitted test framework patch up their sleeve to remedy? TIA, Bill
Re: Quieting warnings?
*) Fix warnings with new compilers, warnings treated as error in maintainer-mode, standard c-89 is enforced trunk patch: http://svn.apache.org/r1702948 http://svn.apache.org/r1759415 2.4.x patch: http://home.apache.org/~ylavic/patches/httpd-2.4.x-r1702948_and_co.patch +1: ylavic, jorton, jchampion: r1702948 doesn't quite work as advertised. -Werror is never added to CFLAGS because when combined with -Wstrict-prototypes, the AC_LANG_PROGRAM won't compile (it uses a bare main()). ylavic: Maybe the -Werror case could be handled later, for now this series avoids a lot of "warning: 'aplog_module_index' defined but not used [-Wunused-const-variable=]" thanks to AP_MAYBE_UNUSED. jchampion: Fine by me. I just think the dead code should be removed from the backport in the meantime, so people don't have a false sense of security. I usually just read on beyond patches that are 'under discussion'. STATUS isn't a good bidirectional communications mechanism. jchampion, did you have a proposed alternative since you felt strongly about this issue? Cheers, Bill On Mon, Nov 28, 2016 at 1:37 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > Typical of a long batch of error noise on 2.4 HEAD (with the > http strict merge patch applied) in --enable-maintainer-mode... > > In file included from /home/wrowe/dev/httpd-2.4/modules/mappers/mod_vhost_ > alias.c:45:0: > /home/wrowe/dev/httpd-2.4/include/http_config.h:427:24: warning: > 'aplog_module_index' defined but not used [-Wunused-const-variable=] > static int * const aplog_module_index = &(foo##_module.module_index) > ^ > /home/wrowe/dev/httpd-2.4/include/http_config.h:437:5: note: in expansion > of macro 'APLOG_USE_MODULE' > APLOG_USE_MODULE(foo); \ > ^~~~ > /home/wrowe/dev/httpd-2.4/modules/mappers/mod_vhost_alias.c:447:1: note: > in expansion of macro 'AP_DECLARE_MODULE' > AP_DECLARE_MODULE(vhost_alias) = > ^ > > Has anyone investigated far enough for us to pacify gcc? >
Quieting warnings?
Typical of a long batch of error noise on 2.4 HEAD (with the http strict merge patch applied) in --enable-maintainer-mode... In file included from /home/wrowe/dev/httpd-2.4/modules/mappers/mod_vhost_alias.c:45:0: /home/wrowe/dev/httpd-2.4/include/http_config.h:427:24: warning: 'aplog_module_index' defined but not used [-Wunused-const-variable=] static int * const aplog_module_index = &(foo##_module.module_index) ^ /home/wrowe/dev/httpd-2.4/include/http_config.h:437:5: note: in expansion of macro 'APLOG_USE_MODULE' APLOG_USE_MODULE(foo); \ ^~~~ /home/wrowe/dev/httpd-2.4/modules/mappers/mod_vhost_alias.c:447:1: note: in expansion of macro 'AP_DECLARE_MODULE' AP_DECLARE_MODULE(vhost_alias) = ^ Has anyone investigated far enough for us to pacify gcc?
Re: svn commit: r1771372 - /httpd/httpd/branches/2.4.x-merge-http-strict/modules/http/http_filters.c
Please note this branch has strictly conformed to trunk activity. Please cite the backport origin svn rev no, or create one first, and then commit the backport from the svn commit. Thx, Bill On Fri, Nov 25, 2016 at 1:55 PM,wrote: > Author: rpluem > Date: Fri Nov 25 19:55:18 2016 > New Revision: 1771372 > > URL: http://svn.apache.org/viewvc?rev=1771372=rev > Log: > * Fix numbers count in comment. > > Modified: > httpd/httpd/branches/2.4.x-merge-http-strict/modules/ > http/http_filters.c > > Modified: httpd/httpd/branches/2.4.x-merge-http-strict/modules/ > http/http_filters.c > URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x- > merge-http-strict/modules/http/http_filters.c?rev= > 1771372=1771371=1771372=diff > > == > --- httpd/httpd/branches/2.4.x-merge-http-strict/modules/http/http_filters.c > (original) > +++ httpd/httpd/branches/2.4.x-merge-http-strict/modules/http/http_filters.c > Fri Nov 25 19:55:18 2016 > @@ -128,8 +128,8 @@ static apr_status_t bail_out_on_error(ht > * Parse a chunk line with optional extension, detect overflow. > * There are several error cases: > * 1) If the chunk link is misformatted, APR_EINVAL is returned. > - * 1) If the conversion would require too many bits, APR_EGENERAL is > returned. > - * 2) If the conversion used the correct number of bits, but an overflow > + * 2) If the conversion would require too many bits, APR_EGENERAL is > returned. > + * 3) If the conversion used the correct number of bits, but an overflow > * caused only the sign bit to flip, then APR_ENOSPC is returned. > * A negative chunk length always indicates an overflow error. > */ > > >
Re: svn commit: r1769677 - /httpd/httpd/branches/2.4.x/STATUS
On Mon, Nov 14, 2016 at 8:07 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote: > On Mon, Nov 14, 2016 at 1:02 PM, <wr...@apache.org> wrote: > >> Author: wrowe >> Date: Mon Nov 14 19:02:29 2016 >> New Revision: 1769677 >> >> URL: http://svn.apache.org/viewvc?rev=1769677=rev >> Log: >> Promote one patch, propose one historically tangled patch >> >> + *) Propose default strict RFC7230 behavior, and HttpProtocolOptions >> directive >> + to relax or further constrain some behaviors. >> + trunk patches: too numerous to list, see >> + svn log --stop-on-copy http://svn.apache.org/repos/as >> f/httpd/httpd/branches/2.4.x-merge-http-strict/ >> + 2.4.x patch: see >> + svn diff -r1767912:HEAD http://svn.apache.org/repos/as >> f/httpd/httpd/branches/2.4.x-merge-http-strict/ >> + +1: wrowe >> > > Patch attached. There were some incremental comments to the merge branch > that may or may not still apply, please reiterate your point. > The patch suggested by rpluem has been integrated, a small broken diff to the httpd.h is corrected. So I'm attaching a refreshed net diff. > I'm aware of one possible failing of this backport which I would like to > fix on > 2.4 after the backport is in... compilers may complain of assignment of the > r->protocol to a const char*, we may need apr_pstrcpy's scattered about. > In practice I've found no error here, not even a warning in maintainer mode. > Thanks to everyone who has contributed to this work on trunk/ or simply > providing feedback on dev@. > > The earliest i will commit this backport is noon-ish Tuesday from Spain. > Looking forward to some reviewers of this net patch so we can move forward. httpd-2.4-merge-http-strict.patch Description: Binary data
Re: svn commit: r1768036 - in /httpd/httpd/branches/2.4.x-merge-http-strict: ./ CHANGES include/ap_mmn.h include/http_core.h include/httpd.h modules/http/http_filters.c server/core.c server/protocol.c
On Nov 16, 2016 20:29, "Jacob Champion" <champio...@gmail.com> wrote: > > On 11/16/2016 05:01 AM, William A Rowe Jr wrote: >> >> We need to tolerate their presence. But we should ignore the guidance that >> sf originally quoted, that the URI host supersedes the Host header, it does >> not, they are two different entities it the proxy case > > > The "different entities" interpretation made sense to me too, but after re-reading the spec and seeing what the browsers actually do, I'm not so sure. (I'm a relative noob when it comes to the proxy sides of the spec, so corrections are appreciated.) > > From section 5.4: > >A client MUST send a Host header field in all HTTP/1.1 request >messages. If the target URI includes an authority component, then a >client MUST send a field-value for Host that is identical to that >authority component, excluding any userinfo subcomponent and its "@" >delimiter (Section 2.7.1). If the authority component is missing or >undefined for the target URI, then a client MUST send a Host header >field with an empty field-value. > > That's pretty absolute, and there doesn't seem to be an exception carved out for proxies. And in practice, after giving Chromium and Firefox a try with a local netcat "proxy", the Host headers they send are indeed derived from the target URI, *not* the hostname of the HTTP proxy. I'd like to see what other browsers do as well. > > This would appear to prevent name-based hosting of multiple proxies and origin servers at the same IP address, which seems strange to me, given that the Host header would be an appropriate resolution tool in that case... so I'm hoping there's something I'm missing. > > Maybe no one needs to do that? Does anyone have experience with a client/user-agent that makes use of name-based virtual proxies? > > --Jacob Doh, that's right. This is why we had such a horrid time with unwarranted TLS SNI host headers vs Host: headers for proxy requests through httpd:. I will need to review all three and compare them to the effect of this new patch. In the interim I suggest we withdraw this comparison, and bring it back up as a post 2.4.24 change. Thoughts?
Re: svn commit: r1769965 - /httpd/httpd/trunk/server/vhost.c
On Wed, Nov 16, 2016 at 1:52 PM, Ruediger Pluemwrote: > > On 11/16/2016 01:05 PM, wr...@apache.org wrote: > > Author: wrowe > > Date: Wed Nov 16 12:05:53 2016 > > New Revision: 1769965 > > > > URL: http://svn.apache.org/viewvc?rev=1769965=rev > > Log: > > Actually cause the Host header to be overridden, as noted by rpluem, > > and simplify now that there isn't a log-only mode. > > > > I believe this logic to be busted. Given this request; > > > > GET http://distant-host.com/ HTTP/1.1 > > Host: proxy-host > > > > we would now fail to evaluate the proxy-host virtual host rules. > > > > This seems like a breaking change to our config. mod_proxy already > > follows this rule of RFC7230 section 5.4; > > > >When a proxy receives a request with an absolute-form of > >request-target, the proxy MUST ignore the received Host header field > >(if any) and instead replace it with the host information of the > >request-target. A proxy that forwards such a request MUST generate a > >new Host field-value based on the received request-target rather than > >forward the received Host field-value. > > > > Section 5.5 of RFC7230 has this to say; > > > >Once the effective request URI has been constructed, an origin server > >needs to decide whether or not to provide service for that URI via > >the connection in which the request was received. For example, the > >request might have been misdirected, deliberately or accidentally, > >such that the information within a received request-target or Host > >header field differs from the host or port upon which the connection > >has been made. If the connection is from a trusted gateway, that > >inconsistency might be expected; otherwise, it might indicate an > >attempt to bypass security filters, trick the server into delivering > >non-public content, or poison a cache. See Section 9 for security > >considerations regarding message routing. > > > > Section 5.3.1 states; > > > >To allow for transition to the absolute-form for all requests in some > >future version of HTTP, a server MUST accept the absolute-form in > >requests, even though HTTP/1.1 clients will only send them in > >requests to proxies. > > > > It seems to me we should simply trust the Host: header and dump this > whole > > mess. If we want to reject requests in absolute form after the proxy > modules > > have had a chance to accept them, that wouldn't be a bad solution. > > > > Modified: > > httpd/httpd/trunk/server/vhost.c > > > > Modified: httpd/httpd/trunk/server/vhost.c > > URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/server/ > vhost.c?rev=1769965=1769964=1769965=diff > > > == > > --- httpd/httpd/trunk/server/vhost.c (original) > > +++ httpd/httpd/trunk/server/vhost.c Wed Nov 16 12:05:53 2016 > > @@ -1165,13 +1165,11 @@ AP_DECLARE(void) ap_update_vhost_from_he > > * request line. > > */ > > if (have_hostname_from_url && host_header != NULL) { > > -const char *info = "Would replace"; > > -const char *new = construct_host_header(r, is_v6literal); > > -apr_table_set(r->headers_in, "Host", r->hostname); > > IMHO the old code was wrong because r->hostname misses the surrounding [] > in case of IPV6 literals, > but otherwise I see no change in logic here: Host part of the request > still takes precedence over Host header. > Ok, I misread your original post, I thought you were pointing out that r->hostname is the Host: header value. > > -info = "Replacing"; > > +const char *repl = construct_host_header(r, is_v6literal); > > +apr_table_set(r->headers_in, "Host", repl); > > ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, APLOGNO(02417) > > - "%s Host header '%s' with host from request > uri: " > > - "'%s'", info, host_header, new); > > + "Replacing host header '%s' with host '%s' > given " > > + "in the request uri", host_header, repl); > > } > > } > > > > > > > > > > Doesn't this need to get added to the large conformance backport proposal? Added, or discarded entirely, let's resume on the other discussion thread.
Re: svn commit: r1768036 - in /httpd/httpd/branches/2.4.x-merge-http-strict: ./ CHANGES include/ap_mmn.h include/http_core.h include/httpd.h modules/http/http_filters.c server/core.c server/protocol.c
On Wed, Nov 16, 2016 at 1:32 PM, Ruediger Pluem <rpl...@apache.org> wrote: > > On 11/16/2016 01:08 PM, William A Rowe Jr wrote: > > > > Here's why I think the whole logic is busted and the preserve > r->hostname is > > the right thing to do for the outer request (not a child/client request > to any > > backend host)... > > > > I believe this logic to be busted. Given this request; > > > > GET http://distant-host.com/ HTTP/1.1 > > Host: proxy-host > > > > we would now fail to evaluate the proxy-host virtual host rules. > > > > This seems like a breaking change to our config. mod_proxy already > > follows this rule of RFC7230 section 5.4; > > > >When a proxy receives a request with an absolute-form of > >request-target, the proxy MUST ignore the received Host header field > >(if any) and instead replace it with the host information of the > >request-target. A proxy that forwards such a request MUST generate a > >new Host field-value based on the received request-target rather than > >forward the received Host field-value. > > > > Section 5.5 of RFC7230 has this to say; > > > >Once the effective request URI has been constructed, an origin server > >needs to decide whether or not to provide service for that URI via > >the connection in which the request was received. For example, the > >request might have been misdirected, deliberately or accidentally, > >such that the information within a received request-target or Host > >header field differs from the host or port upon which the connection > >has been made. If the connection is from a trusted gateway, that > >inconsistency might be expected; otherwise, it might indicate an > >attempt to bypass security filters, trick the server into delivering > >non-public content, or poison a cache. See Section 9 for security > >considerations regarding message routing. > > > > Section 5.3.1 states; > > > >To allow for transition to the absolute-form for all requests in some > >future version of HTTP, a server MUST accept the absolute-form in > >requests, even though HTTP/1.1 clients will only send them in > >requests to proxies. > > > > It seems to me we should simply trust the Host: header and dump this > whole > > mess. If we want to reject requests in absolute form after the proxy > modules > > have had a chance to accept them, that wouldn't be a bad solution. > > > Hm. I am confused now. 5.3.1 seem to state that we need to accept them in > any > case, even for non proxy requests. For the proxy case 5.4 seems to be > pretty clear > what to todo: Forget about the Host header, take it from the request. > For the other case we can IMHO decide what we want to do accept it (but > what has > precedence in this case host or request) or drop / error out. I guess both > behaviours > can make sense. The later one as a security check, the former one if we > are on a backend > system and have a trusted proxy / reverse proxy aka gateway in front of us. > We need to tolerate their presence. But we should ignore the guidance that sf originally quoted, that the URI host supersedes the Host header, it does not, they are two different entities it the proxy case, and in the non-proxy local resource case, a mismatch makes no sense in the first place. We could reject non-proxy requests in absolute form if they were to some local resource, but I think that is punitive. I'm not aware of any server that would refuse a request for http://example.com/x.html ... Host:example.com We could reject non-proxy requests in absolute form if the URI host did not match the specified Host: value. I'd initially wished it would, since these are logged as 404's and typically searches for open proxies. That confuses every new user/admin. But what about a client which resolved the uri as submitted but then generated the Host: after some dns fixup? Or we could continue to discard the target host specified in the local resource uri, as we do today. Can we get more eyeballs on this? Before backporting anything, I'd like to eliminate this specific test altogether from the backport, or at least refine it to be something meaningful.
Better ./configure --help wording for paths?
Can we agree on a keyword/wording convention here for httpd-2.5-dev? --with-apr=PATH prefix for installed APR or the full path to --with-pcre=PATHUse external PCRE library --with-valgrind[=DIR] Enable code to reduce valgrind false positives (optionally: set path to valgrind headers) --with-distcache=PATH Distcache installation directory --with-z=PATH use a specific zlib library --with-libxml2=DIR Specify location of libxml2 --with-brotli=PATH Brotli installation directory --with-lua=PATH Path to the Lua 5.2/5.1 prefix --with-serf=PATHSerf client library --with-ssl=PATH OpenSSL installation directory --with-nghttp2=PATH nghttp2 installation directory directory. Contrast to apr-2.0 and the autoconf keywords and text; Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [/usr/local/apr] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, `make install' will install all the files in `/usr/local/apr/bin', `/usr/local/apr/lib' etc. You can specify an installation prefix other than `/usr/local/apr' using `--prefix', for instance `--prefix=$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIRuser executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIRprogram executables [EPREFIX/libexec] --sysconfdir=DIRread-only single-machine data [PREFIX/etc] --sharedstatedir=DIRmodifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIRobject code libraries [EPREFIX/lib] --includedir=DIRC header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIRman documentation [DATAROOTDIR/man] --docdir=DIRdocumentation root [DATAROOTDIR/doc/PACKAGE] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIRdvi documentation [DOCDIR] --pdfdir=DIRpdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] --with-sysroot=DIR Search for dependent libraries within DIR --with-installbuilddir=DIR location to store APR build files (defaults to '${datadir}/build') --with-efence[=DIR] path to Electric Fence installation --with-valgrind[=DIR] Enable code to teach valgrind about apr pools (optionally: set path to valgrind headers) --with-egd[=DIR]use EGD-compatible socket --with-openssl=DIR specify location of OpenSSL --with-nss=DIR specify location of NSS --with-commoncrypto=DIR specify location of CommonCrypto --with-gdbm=DIR enable GDBM support --with-ndbm=PATHFind the NDBM header and library in `PATH/include' and `PATH/lib'. If PATH is of the form `HEADER:LIB', library in LIB. If you omit the `=PATH' part --with-berkeley-db=PATH Find the Berkeley DB header and library in `PATH/include' and `PATH/lib'. If PATH is of the `=PATH' part completely, the configure script will --with-pgsql=DIRspecify PostgreSQL location --with-mysql=DIRenable MySQL DBD driver --with-sqlite3=DIR enable sqlite3 DBD driver --with-sqlite2=DIR enable sqlite2 DBD driver --with-oracle-include=DIR path to Oracle include files --with-oracle=DIR enable Oracle DBD driver; giving ORACLE_HOME as DIR --with-odbc=DIR specify ODBC location --with-libxml2=DIR specify libxml2 location --with-expat=DIRspecify Expat location --with-iconv=DIRpath to iconv installation
Re: svn commit: r1768036 - in /httpd/httpd/branches/2.4.x-merge-http-strict: ./ CHANGES include/ap_mmn.h include/http_core.h include/httpd.h modules/http/http_filters.c server/core.c server/protocol.c
On Tue, Nov 8, 2016 at 1:39 PM, Ruediger Pluemwrote: > > On 11/04/2016 03:20 PM, wr...@apache.org wrote: > > Author: wrowe > > Date: Fri Nov 4 14:20:16 2016 > > New Revision: 1768036 > > > > URL: http://svn.apache.org/viewvc?rev=1768036=rev > > Log: > > Add an option to enforce stricter HTTP conformance > > > > This is a first stab, the checks will likely have to be revised. > > For now, we check > > > > * if the request line contains control characters > > * if the request uri has fragment or username/password > > * that the request method is standard or registered with > RegisterHttpMethod > > * that the request protocol is of the form HTTP/[1-9]+.[0-9]+, > >or missing for 0.9 > > * if there is garbage in the request line after the protocol > > * if any request header contains control characters > > * if any request header has an empty name > > * for the host name in the URL or Host header: > >- if an IPv4 dotted decimal address: Reject octal or hex values, > require > > exactly four parts > >- if a DNS host name: Reject non-alphanumeric characters besides '.' > and > > '-'. As a side effect, this rejects multiple Host headers. > > * if any response header contains control characters > > * if any response header has an empty name > > * that the Location response header (if present) has a valid scheme and > is > >absolute > > > > If we have a host name both from the URL and the Host header, we replace > the > > Host header with the value from the URL to enforce RFC conformance. > > > > There is a log-only mode, but the loglevels of the logged messages need > some > > thought/work. Currently, the checks for incoming data log for 'core' > and the > > checks for outgoing data log for 'http'. Maybe we need a way to > configure the > > loglevels separately from the core/http loglevels. > > > > change protocol number parsing in strict mode according to HTTPbis draft > > - only accept single digit version components > > - don't accept white-space after protocol specification > > > > Clean up comment, fix log tags. > > Submitted by: sf > > Backports: r1426877, r1426879, r1426988, r1426992 > > Modified: httpd/httpd/branches/2.4.x-merge-http-strict/server/vhost.c > > URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x- > merge-http-strict/server/vhost.c?rev=1768036= > 1768035=1768036=diff > > > == > > --- httpd/httpd/branches/2.4.x-merge-http-strict/server/vhost.c > (original) > > +++ httpd/httpd/branches/2.4.x-merge-http-strict/server/vhost.c Fri > Nov 4 14:20:16 2016 > > +if (conf->http_conformance & AP_HTTP_CONFORMANCE_STRICT) { > > +/* > > + * If we have both hostname from an absoluteURI and a Host > header, > > + * we must ignore the Host header (RFC 2616 5.2). > > + * To enforce this, we reset the Host header to the value from > the > > + * request line. > > + */ > > +if (have_hostname_from_url && host_header != NULL) { > > +const char *info = "Would replace"; > > +const char *new = construct_host_header(r, is_v6literal); > > +if (!(conf->http_conformance & > AP_HTTP_CONFORMANCE_LOGONLY)) { > > +apr_table_set(r->headers_in, "Host", r->hostname); > > Hm, why don't we use "new" here instead of r->hostname > > > +info = "Replacing"; > > +} > > +ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, APLOGNO(02417) > > + "%s Host header '%s' with host from request > uri: " > > + "'%s'", info, host_header, new); > > +} > > +} Good question, with LOGONLY no longer an option, all that logic got simpler. Here's why I think the whole logic is busted and the preserve r->hostname is the right thing to do for the outer request (not a child/client request to any backend host)... I believe this logic to be busted. Given this request; GET http://distant-host.com/ HTTP/1.1 Host: proxy-host we would now fail to evaluate the proxy-host virtual host rules. This seems like a breaking change to our config. mod_proxy already follows this rule of RFC7230 section 5.4; When a proxy receives a request with an absolute-form of request-target, the proxy MUST ignore the received Host header field (if any) and instead replace it with the host information of the request-target. A proxy that forwards such a request MUST generate a new Host field-value based on the received request-target rather than forward the received Host field-value. Section 5.5 of RFC7230 has this to say; Once the effective request URI has been constructed, an origin server needs to decide whether or not to provide service for that URI via the connection in which the request was received. For example, the request might have been misdirected, deliberately or accidentally, such
Re: Query on linking Apache Mailing list with GitHUub Commits
More importantly, the commit message for those lines modified should credit the original submitter(s)... Who may or may not even be committers to this project. On Nov 15, 2016 17:21, "Eric Covener"wrote: > On Tue, Nov 15, 2016 at 11:18 AM, Mehvish.Rashid > wrote: > > Am assuming that commit on the file means only that person has worked on > the > > code. Is it possible that more than one person is working on the code > before > > one single commit? How can I know how many people worked on a code file > > before the commit? > > "svn blame" on a file will tell you who has touched each line. svn > log will tell you similar. An archive of c...@httpd.apache.org would > tell you the same in more free-form way. >
Re: svn commit: r1769677 - /httpd/httpd/branches/2.4.x/STATUS
On Mon, Nov 14, 2016 at 1:02 PM,wrote: > Author: wrowe > Date: Mon Nov 14 19:02:29 2016 > New Revision: 1769677 > > URL: http://svn.apache.org/viewvc?rev=1769677=rev > Log: > Promote one patch, propose one historically tangled patch > > + *) Propose default strict RFC7230 behavior, and HttpProtocolOptions > directive > + to relax or further constrain some behaviors. > + trunk patches: too numerous to list, see > + svn log --stop-on-copy http://svn.apache.org/repos/ > asf/httpd/httpd/branches/2.4.x-merge-http-strict/ > + 2.4.x patch: see > + svn diff -r1767912:HEAD http://svn.apache.org/repos/ > asf/httpd/httpd/branches/2.4.x-merge-http-strict/ > + +1: wrowe > > Patch attached. There were some incremental comments to the merge branch that may or may not still apply, please reiterate your point. I'm aware of one possible failing of this backport which I would like to fix on 2.4 after the backport is in... compilers may complain of assignment of the r->protocol to a const char*, we may need apr_pstrcpy's scattered about. Thanks to everyone who has contributed to this work on trunk/ or simply providing feedback on dev@. The earliest i will commit this backport is noon-ish Tuesday from Spain. Index: docs/manual/mod/core.xml === --- docs/manual/mod/core.xml (revision 1767912) +++ docs/manual/mod/core.xml (revision 1769676) @@ -1239,6 +1239,82 @@ +HttpProtocolOptions +Modify restrictions on HTTP Request Messages +HttpProtocolOptions [Strict|Unsafe] [RegisteredMethods|LenientMethods] + [Allow0.9|Require1.0] +HttpProtocolOptions Strict LenientMethods Allow0.9 +server config +virtual host +2.2.32 or 2.4.24 and later + + +This directive changes the rules applied to the HTTP Request Line +(https://tools.ietf.org/html/rfc7230#section-3.1.1; + >RFC 7230 3.1.1) and the HTTP Request Header Fields +(https://tools.ietf.org/html/rfc7230#section-3.2; + >RFC 7230 3.2), which are now applied by default or using +the Strict option. Due to legacy modules, applications or +custom user-agents which must be deperecated the Unsafe +option has been added to revert to the legacy behaviors. These rules +are applied prior to request processing, so must be configured at the +global or default (first) matching virtual host section, by IP/port +interface (and not by name) to be honored. + +Prior to the introduction of this directive, the Apache HTTP Server +request message parsers were tolerant of a number of forms of input +which did not conform to the protocol. +https://tools.ietf.org/html/rfc7230#section-9.4; + >RFC 7230 9.4 Request Splitting and +https://tools.ietf.org/html/rfc7230#section-9.5; + >9.5 Response Smuggling call out only two of the potential +risks of accepting non-conformant request messages, while +https://tools.ietf.org/html/rfc7230#section-3.5; + >RFC 7230 3.5 "Message Parsing Robustness" identify the +risks of accepting obscure whitespace and request message formatting. +As of the introduction of this directive, all grammer rules of the +specification are enforced in the default Strict operating +mode, and the strict whitespace suggested by section 3.5 is enforced +and cannot be relaxed. + +Users are strongly cautioned against toggling the Unsafe +mode of operation, particularly on outward-facing, publicly accessible +server deployments. If an interface is required for faulty monitoring +or other custom service consumers running on an intranet, users should +toggle the Unsafe option only on a specific virtual host configured +to service their internal private network. + +Reviewing the messages logged to the ErrorLog, +configured with LogLevel debug level, +can help identify such faulty requests along with their origin. +Users should pay particular attention to the 400 responses in the access +log for invalid requests which were unexpectedly rejected. + +https://tools.ietf.org/html/rfc7231#section-4.1; + >RFC 7231 4.1 "Request Methods" "Overview" requires that +origin servers shall respond with an error when an unsupported method +is encountered in the request line. This already happens when the +LenientMethods option is used, but administrators may wish +to toggle the RegisteredMethods option and register any +non-standard methods using the RegisterHttpMethod +directive, particularly if the Unsafe option has been toggled. +The RegisteredMethods option should not +be toggled for forward proxy hosts, as the methods supported by the +origin servers are unknown to the proxy server. + +https://tools.ietf.org/html/rfc2616#section-19.6; + >RFC 2616 19.6 "Compatibility With Previous Versions" had +encouraged HTTP servers to support legacy HTTP/0.9 requests. RFC 7230 +superceeds this
Re: [PATCH 60223] non-empty OpenSSL error queue preventing non-blocking reads through mod_ssl
On Fri, Nov 11, 2016 at 9:01 AM, Paul Spanglerwrote: > On 10/17/2016 2:04 PM, Paul Spangler wrote: > >> Hello, >> >> Due to the way OpenSSL stores errors in a per-thread queue, functions >> such as SSL_read followed by SSL_get_error may not produce the desired >> result if the error queue is not empty prior to calling SSL_read[1]. For >> example, a non-blocking read reports that no data is available by >> setting up SSL_get_error to return SSL_ERROR_WANT_READ. But if an error >> is already present in the queue, SSL_get_error sees that error instead >> and returns SSL_ERROR_SSL. >> >> I found at least one case where the error queue may be non-empty prior >> to a non-blocking read[2] that involves combining mod_session_crypto >> (which leaves the error queue non-empty) and the third-party >> mod_websocket (which uses non-blocking reads), resulting in the >> connection being closed. I included a potential patch to mod_ssl for >> consideration on the bug report that simply clears the error queue prior >> to any of the three SSL_* calls that mod_ssl makes. An ideal fix might >> be to keep the error queue empty at all times (i.e. patch the APR crypto >> library), but I propose that this patch is more robust in a modular >> environment. >> >> Thanks for your consideration. >> >> [1] >> https://www.openssl.org/docs/manmaster/ssl/SSL_get_error.html#DESCRIPTION >> [2] https://bz.apache.org/bugzilla/show_bug.cgi?id=60223 >> >> Anyone have any thoughts on this small patch? It addresses an issue with > OpenSSL's per-thread state causing connections to fail. > > > Regards, > Paul Spangler > LabVIEW R > National Instruments > The patch looks reasonable to me. As you point out, fixing this in APR might be worthwhile, but doesn't protect mod_ssl from other OpenSSL consuming logic such as OpenLDAP auth, database connections or other plug-ins that may leave a lingering SSL_error state lying about.