Re: Need confirmation of memory leak using Apache 2.2.2.
I get it on Apache 2.0.59 as well. :-( I will thus be interested to see what others get, as this appears to be an existing mod_python issue. BTW, this is with worker MPM.

Graham

Graham Dumpleton wrote ..

I am using Apache 2.2.2 and, when using mod_python in a certain way, I am seeing significant ongoing increases in memory use by Apache child processes. Initially I thought I had stuffed up recent changes in mod_python 3.3 out of subversion trunk that I had been making, but I went back to mod_python 3.2.10 and am seeing the same problem. Can someone please run the following test and use top or some other means of monitoring memory use to see if you can duplicate the problem. The test is really quite simple:

  # .htaccess
  PythonFixupHandler handlers
  AddHandler mod_python .py
  PythonHandler handlers

  # handlers.py
  from mod_python import apache

  def handler(req):
      req.content_type = 'text/plain'
      req.write('handler')
      return apache.OK

  def fixuphandler(req):
      return apache.OK

The command I have been using to test is:

  ab -c 1 -n 5000 http://localhost:8082/~grahamd/MODPYTHON-155/hello.py

What is strange is that if I have only the fixup handler or the response handler invoked, and not both, then the memory leak isn't present. It is only when both are being invoked that it leaks memory. This occurs for me on Mac OS X 10.4, Python 2.3.5 and Apache 2.2.2, with either mod_python 3.2.10 or mod_python 3.3. I haven't yet tested with the older Apache 2.0.58 as I don't have it on my box at present, but will download it and see if that makes a difference. I guess what I am trying to work out is whether this is an issue with Apache 2.2.2 or whether it is mod_python and the problem has been there for a while. Thus if people with both Apache 2.0.X and Apache 2.2.X could test it, it would be great. Any help appreciated. Thanks.

Graham
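For anyone who wants something more repeatable than eyeballing top while the ab run above executes, a small script can sample the resident set size of the Apache children between runs. This is only an illustrative sketch, not part of the report: the helper names are made up, and the `ps -o rss= -C httpd` invocation is the Linux procps form (on Mac OS X, where the original test ran, you would use something like `ps -axo rss,comm` and filter instead).

```python
import subprocess

def rss_kb(ps_output):
    """Parse `ps -o rss=`-style output (one RSS value in KB per line)
    and return the total resident set size in KB."""
    return sum(int(field) for field in ps_output.split())

def sample_httpd_rss():
    # Linux procps syntax: print only the RSS column for every process
    # named httpd.  Adjust the command for your platform's ps.
    out = subprocess.run(["ps", "-o", "rss=", "-C", "httpd"],
                         capture_output=True, text=True).stdout
    return rss_kb(out)

# Take a sample before and after each ab run; a delta that keeps
# growing across repeated runs is the leak being described here.
```

A steadily climbing total across identical ab runs, rather than a one-off jump as caches warm up, is the signature to look for.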
Re: Need confirmation of memory leak using Apache 2.2.2.
Okay, found the source of the memory leak. The problem goes right back to 3.1.4, which also has the problem when tested.

The problem code is in python_handler() in 'src/mod_python.c'. Specifically the code does:

  if (!hle) {
      /* create a handler list object from dynamically registered handlers */
      request_obj->hlo = (hlistobject *)MpHList_FromHLEntry(dynhle, 1);
  }
  else {
      /* create a handler list object */
      request_obj->hlo = (hlistobject *)MpHList_FromHLEntry(hle, 1);

      /* add dynamically registered handlers, if any */
      if (dynhle) {
          MpHList_Append(request_obj->hlo, dynhle);
      }
  }

The problem is that request_obj->hlo can already be set by a prior phase's handler, and by simply assigning to request_obj->hlo you get a memory leak, as it is a Python object and it isn't being decref'd. Thus, before this 'if' statement, it would appear that the following should be inserted:

  if (request_obj->hlo)
      Py_DECREF(request_obj->hlo);

or:

  Py_XDECREF(request_obj->hlo);

Still need to do some more checking, but adding that seems to get rid of the problem.

Now what do we do about 3.2.10? Given that this thing leaks really badly when triggered, no one can be using multiple handler phases at the same time, so it may be safe to still release 3.2.10 and fix it in the next backport release and in 3.3. Comments?

Graham

On 31/07/2006, at 7:24 PM, Graham Dumpleton wrote:

The good news is that this isn't a problem introduced with mod_python 3.2.10, so it doesn't necessarily have to stop that being released. The bad news though is that it is also in mod_python 3.2.8, so it has been there for some time. The question now is whether anyone else sees it on other platforms or whether it is something specific to my machine.

Graham

On 31/07/2006, at 4:53 PM, Graham Dumpleton wrote:

I get it on Apache 2.0.59 as well. :-( I will thus be interested to see what others get, as this appears to be an existing mod_python issue. BTW, this is with worker MPM.

Graham

Graham Dumpleton wrote ..
I am using Apache 2.2.2 and, when using mod_python in a certain way, I am seeing significant ongoing increases in memory use by Apache child processes. Initially I thought I had stuffed up recent changes in mod_python 3.3 out of subversion trunk that I had been making, but I went back to mod_python 3.2.10 and am seeing the same problem. Can someone please run the following test and use top or some other means of monitoring memory use to see if you can duplicate the problem. The test is really quite simple:

  # .htaccess
  PythonFixupHandler handlers
  AddHandler mod_python .py
  PythonHandler handlers

  # handlers.py
  from mod_python import apache

  def handler(req):
      req.content_type = 'text/plain'
      req.write('handler')
      return apache.OK

  def fixuphandler(req):
      return apache.OK

The command I have been using to test is:

  ab -c 1 -n 5000 http://localhost:8082/~grahamd/MODPYTHON-155/hello.py

What is strange is that if I have only the fixup handler or the response handler invoked, and not both, then the memory leak isn't present. It is only when both are being invoked that it leaks memory. This occurs for me on Mac OS X 10.4, Python 2.3.5 and Apache 2.2.2, with either mod_python 3.2.10 or mod_python 3.3. I haven't yet tested with the older Apache 2.0.58 as I don't have it on my box at present, but will download it and see if that makes a difference. I guess what I am trying to work out is whether this is an issue with Apache 2.2.2 or whether it is mod_python and the problem has been there for a while. Thus if people with both Apache 2.0.X and Apache 2.2.X could test it, it would be great. Any help appreciated. Thanks.

Graham
[jira] Created: (MODPYTHON-181) Memory leak when using handlers in multiple phases at same time.
Memory leak when using handlers in multiple phases at same time.

Key: MODPYTHON-181
URL: http://issues.apache.org/jira/browse/MODPYTHON-181
Project: mod_python
Issue Type: Bug
Components: core
Affects Versions: 3.2.8, 3.1.4, 3.3
Reporter: Graham Dumpleton
Assigned To: Graham Dumpleton

When using handlers against multiple phases, ie.,

  # .htaccess
  PythonFixupHandler handlers
  AddHandler mod_python .py
  PythonHandler handlers

  # handlers.py
  from mod_python import apache

  def handler(req):
      req.content_type = 'text/plain'
      req.write('handler')
      return apache.OK

  def fixuphandler(req):
      return apache.OK

mod_python will leak memory on each request, with Apache child process sizes blowing out quite quickly.

The problem code is in python_handler() in 'src/mod_python.c'. Specifically the code does:

  if (!hle) {
      /* create a handler list object from dynamically registered handlers */
      request_obj->hlo = (hlistobject *)MpHList_FromHLEntry(dynhle, 1);
  }
  else {
      /* create a handler list object */
      request_obj->hlo = (hlistobject *)MpHList_FromHLEntry(hle, 1);

      /* add dynamically registered handlers, if any */
      if (dynhle) {
          MpHList_Append(request_obj->hlo, dynhle);
      }
  }

The problem is that request_obj->hlo can already be set by a prior phase's handler, and by simply assigning to request_obj->hlo you get a memory leak, as it refers to an existing Python object and it isn't being decref'd. Thus, before this 'if' statement, it would appear that the following should be inserted:

  if (request_obj->hlo)
      Py_DECREF(request_obj->hlo);

or:

  Py_XDECREF(request_obj->hlo);

--
This message is automatically generated by JIRA.
- If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
- For more information on JIRA, see: http://www.atlassian.com/software/jira
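The reference-counting slip behind this issue can be modelled outside of C: a slot that holds a counted reference is overwritten without first releasing the old reference, so the old object's count never reaches zero. The sketch below uses plain Python with an explicit counter standing in for Py_INCREF/Py_DECREF; the names (`Obj`, `assign_leaky`, `assign_fixed`) are illustrative, not mod_python code.

```python
class Obj:
    """Stand-in for a Python object with an explicit reference count,
    mimicking what the C API tracks via Py_INCREF/Py_DECREF."""
    def __init__(self):
        self.refcnt = 1          # the reference returned by creation

    def decref(self):
        self.refcnt -= 1

def assign_leaky(slot, new):
    # What python_handler() effectively did: blindly overwrite the
    # slot, losing the old reference without a decref.
    slot["hlo"] = new

def assign_fixed(slot, new):
    # The proposed fix: Py_XDECREF the old value before overwriting.
    if slot["hlo"] is not None:
        slot["hlo"].decref()
    slot["hlo"] = new

# Leaky variant: two phases each install a handler list.
slot = {"hlo": None}
first, second = Obj(), Obj()
assign_leaky(slot, first)
assign_leaky(slot, second)
# first.refcnt is still 1, so it can never be freed: one leak per
# extra phase, on every request.

# Fixed variant: the earlier handler list is released on overwrite.
slot = {"hlo": None}
first, second = Obj(), Obj()
assign_fixed(slot, first)
assign_fixed(slot, second)
# first.refcnt has dropped to 0; only the current list stays alive.
```

This also matches the observation in the thread that a single phase does not leak: with one phase the slot is only ever assigned once per request, so nothing is overwritten.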
[jira] Work started: (MODPYTHON-181) Memory leak when using handlers in multiple phases at same time.
[ http://issues.apache.org/jira/browse/MODPYTHON-181?page=all ]

Work on MODPYTHON-181 started by Graham Dumpleton.

Memory leak when using handlers in multiple phases at same time.

Key: MODPYTHON-181
URL: http://issues.apache.org/jira/browse/MODPYTHON-181
Project: mod_python
Issue Type: Bug
Components: core
Affects Versions: 3.2.8, 3.1.4, 3.3
Reporter: Graham Dumpleton
Assigned To: Graham Dumpleton
Re: Core Vote (Re: mod_python 3.2.9 available for testing)
We decided to fix the memory leak in parse_qsl and move on to 3.2.10, which has been tested and currently has +3 core votes. All we need now is the official release. One of these days I'll sort out my GPG keys so I can sign these things myself, but in the meantime we'll need your help, Grisha. The main 3.2.10 feature is Apache 2.2 support, plus bug fixes. Tarball and Win binaries are here:

  http://people.apache.org/~jgallacher/mod_python/dist/
  http://nicolas.lehuen.com/download/mod_python/

Jim

Gregory (Grisha) Trubetskoy wrote:

Sorry for the late response - I was trying to have a vacation - that's when you are geographically in a different place with slow internet access and read only some of your e-mail ;-) +1 for core vote (with the note about the 2.2.2 XP SP2 issue).

Grisha

On Sat, 8 Jul 2006, Graham Dumpleton wrote:

On 08/07/2006, at 4:26 AM, Jim Gallacher wrote:

Hi Grisha, here is the tally:

+1 FreeBSD 6.1-RELEASE-p2, Apache 2.2 (mpm-prefork), Python 2.4.3
+1 Linux Debian Sid, Apache 2.0.55 (mpm-worker), Python 2.3.5
+1 Linux Debian Sid, Apache 2.2.0 (mpm-worker), Python 2.4.2
+1 Linux Fedora Core 5 (i386), Apache 2.2.0 (mpm-prefork), Python 2.4.3
+1 Linux Slackware 10.2, Apache 2.0.55 (mpm-prefork), Python 2.4.1
+1 Linux Ubuntu 6.06, Apache 2.0.55 (mpm-worker), Python 2.4.3
+1 MacOSX 10.4.6 PPC, Apache-2.0.55 (mpm/worker), Python-2.3.5
+1 MacOSX 10.4.6 PPC, Apache-2.2.1 (mpm/worker), Python-2.3.5
+1 MacOSX 10.4.7 Intel, Apache-2.0.55 (mpm/prefork), Python-2.4.2
+1 Windows XP SP2, Apache 2.0.58 (mpm_winnt), Python 2.4.3
-1 Windows XP SP2, Apache 2.2.2 (mpm_winnt), Python 2.4.3

The -1 was from Nicolas, with the following comment:

Only two tests fail, but with a segfault: test_srv_register_cleanup and test_apache_register_cleanup. This is not really surprising... I think we should go ahead and release the 3.2.9 version, while filing a known bug regarding the fact that we drop the support for those two functions. If we accept this, then it's a +1.
The issue with server cleanups failing and why is covered by:

  http://issues.apache.org/jira/browse/MODPYTHON-109

The last test results were submitted July 1, so I think we may as well have a core vote.

Jim

Gregory (Grisha) Trubetskoy wrote:

I'm barely keeping my head above water right now with work, so not really following the list - if someone could please ping me when/if you think we're ready for the core group vote and we have a tally. Thanks!

Grisha

-- Forwarded message --
Date: Sat, 01 Jul 2006 23:18:05 -0400
From: Jorey Bump [EMAIL PROTECTED]
To: python-dev@httpd.apache.org
Subject: Re: mod_python 3.2.9 available for testing

+1 Linux Slackware 10.2, Apache 2.0.55 (mpm-prefork), Python 2.4.1

Jim Gallacher wrote:

The mod_python 3.2.9 tarball is available for testing. This tarball is unchanged from 3.2.9-rc3, but should be retested anyway - just in case something went pear-shaped in the process of tagging and packaging. Here are the rules: in order for a file to be officially announced, it has to be tested by developers on the dev list. Anyone subscribed to this list can (and should feel obligated to :-) ) test it, and provide feedback *to _this_ list*! (Not the [EMAIL PROTECTED] list, and preferably not me personally). The files are (temporarily) available here:

  http://people.apache.org/~jgallacher/mod_python/dist/
  http://people.apache.org/~jgallacher/mod_python/dist/mod_python-3.2.9.tgz
  http://people.apache.org/~jgallacher/mod_python/dist/mod_python-3.2.9.tgz.md5

Please download it, then do the usual:

  $ ./configure --with-apxs=/wherever/it/is
  $ make
  $ (su)
  # make install

Then (as non-root user!):

  $ cd test
  $ python test.py

And see if any tests fail. If they pass, send a +1 to the list; if they fail, send the details (the versions of OS, Apache, Apache-mpm, Python, the test output, and suggestions, if any).
Please present your test results in the following format:

  +1 OS version, Apache version (apache mpm), Python Version

For example:

  +1 Linux Debian Sid, Apache 2.0.55 (mpm-worker), Python 2.3.5

Presenting your information in a consistent format will help in tabulating the results. You can include additional information in each section, just don't use extra commas. There is no need to include the mod_python version in this string as that information is available in the email subject. Who knows, one day I may actually write a script to extract this information automatically. :)

Thank you for your assistance,
Jim Gallacher
Re: Need confirmation of memory leak using Apache 2.2.2.
Graham Dumpleton wrote:

Okay, found the source of the memory leak. The problem goes right back to 3.1.4 which also has the problem when tested. ... Now what do we do about 3.2.10? Given that this thing leaks really badly when triggered shows that no one must be using multiple handler phases at the same time, so may be safe to still release 3.2.10 and we fix it in next backport release and 3.3.

Since this is a confirmed non-regression, especially one which has existed for a long time, I don't think it makes sense to call off 3.2.10 - especially not at this late stage when all that is still needed is official rubber-stamping of the release. There are two very important new features coming in this release:

* Apache 2.2 support
* don't die in configure when using a recent bash

Max
Re: Need confirmation of memory leak using Apache 2.2.2.
Here is further confirmation that it leaks like crazy for:

  mod_python 3.2.10, Linux Ubuntu 6.06, Apache 2.0.55 (mpm-worker), Python 2.4.3

Jim

Graham Dumpleton wrote:

I get it on Apache 2.0.59 as well. :-( I will thus be interested to see what others get, as this appears to be an existing mod_python issue. BTW, this is with worker MPM.

Graham
Re: Need confirmation of memory leak using Apache 2.2.2.
Max Bowsher wrote:

Graham Dumpleton wrote:

Okay, found the source of the memory leak. The problem goes right back to 3.1.4 which also has the problem when tested. ... Now what do we do about 3.2.10? Given that this thing leaks really badly when triggered shows that no one must be using multiple handler phases at the same time, so may be safe to still release 3.2.10 and we fix it in next backport release and 3.3.

Since this is a confirmed non-regression, especially one which has existed for a long time, I don't think it makes sense to call off 3.2.10 - especially not at this late stage when all that is still needed is official rubber-stamping of the release. There are two very important new features coming in this release:

* Apache 2.2 support
* don't die in configure when using a recent bash

Agreed. Hopefully Grisha will have a chance to sign and upload the 3.2.10 tarball in the near future.

Jim
Re: Core Vote (Re: mod_python 3.2.9 available for testing)
Nicolas Lehuen wrote ..

Note that the problem with Apache 2.2 on Windows XP SP2 seems to have disappeared, though I can't see how this is possible, unless Graham fixed something :). The problem was more probably due to an Apache 2.2 setup glitch.

Not necessarily a glitch. The whole problem with the registration of server cleanup functions was that whether it would cause a problem/hang/crash was random. It just depended on what the Apache child process was doing at the time the signal came from the parent process to shut it down. In some respects the Win32 platform would have fared a bit better than UNIX, as on Win32 a signal handler is actually executed as a distinct thread, whereas on UNIX it just suspends all running code and then blindly executes the signal handler code. We still need to disable the server cleanup execution; it just has too much potential for problems.

Graham

Regards, Nicolas

2006/7/31, Jim Gallacher [EMAIL PROTECTED]:

We decided to fix the memory leak in parse_qsl and move on to 3.2.10, which has been tested and currently has +3 core votes. All we need now is the official release. One of these days I'll sort out my GPG keys so I can sign these things myself but in the mean time we'll need your help, Grisha. Main 3.2.10 feature is Apache 2.2 support, plus bug fixes. Tarball and Win binaries are here:

  http://people.apache.org/~jgallacher/mod_python/dist/
  http://nicolas.lehuen.com/download/mod_python/

Jim

Gregory (Grisha) Trubetskoy wrote:

Sorry for the late response - I was trying to have a vacation - that's when you are geographically in a different place with slow internet access and read only some of your e-mail ;-) +1 for core vote (with the note about the 2.2.2 XP SP2 issue).
[jira] Resolved: (MODPYTHON-181) Memory leak when using handlers in multiple phases at same time.
[ http://issues.apache.org/jira/browse/MODPYTHON-181?page=all ]

Graham Dumpleton resolved MODPYTHON-181.

Fix Version/s: 3.3
Resolution: Fixed

Memory leak when using handlers in multiple phases at same time.

Key: MODPYTHON-181
URL: http://issues.apache.org/jira/browse/MODPYTHON-181
Project: mod_python
Issue Type: Bug
Components: core
Affects Versions: 3.1.4, 3.3, 3.2.8
Reporter: Graham Dumpleton
Assigned To: Graham Dumpleton
Fix For: 3.3
Re: mod_python 3.2.10 core vote
Core +1 from me. I will take care of the signing, etc, some time tomorrow.

P.S. In order for you to be able to sign, you need to meet in person someone (or probably more than one person) from the ASF. ApacheCon is the best place, and members do not have to pay the conference fee (at least I think that is still true) ;-)

Grisha

On Thu, 27 Jul 2006, Nicolas Lehuen wrote:

Just to make sure, I've reinstalled my Python 2.3 test environment... So even if I've already voted, I've got an additional:

+1 Windows 2000 Server SP4, Apache 2.0.58 (mpm-winnt), Python 2.3.5

Regards, Nicolas

2006/7/26, Nicolas Lehuen [EMAIL PROTECTED]:

+1 too.

2006/7/26, Jim Gallacher [EMAIL PROTECTED]:

I think it's time for a core vote on the 3.2.10 release, as no more test results have appeared since Saturday. This vote is for the mod_python core only (Jim, Graham, Grisha and Nicolas). I am: +1 release now.

Jim

Test summary:

+1 Fedora Core 5, Apache 2.2.0 (mpm-prefork), Python 2.4.3
+1 FreeBSD 6.1-RELEASE-p2 (i386), Apache 2.2.2 (mpm-prefork), python-2.4.3
+1 Gentoo 2006.0 (x86_64), Apache 2.2.2 (mpm-prefork), python-2.4.3
+1 Linux Slackware 10.1, Apache 2.0.55 (mpm-prefork), Python 2.4.1
+1 Linux Debian Sid, Apache 2.0.55 (mpm-prefork), Python 2.3.5
+1 Linux Debian Sid, Apache 2.2.0 (mpm-worker), Python 2.4.2
+1 Linux Ubuntu 6.06 Dapper Drake, Apache 2.0.55 (mpm-worker), Python 2.4.3
+1 MacOSX 10.4.7 Intel, Apache 2.0.55 (mpm-prefork), Python 2.4.3
+1 MacOSX 10.4.7 PPC, Apache 2.2.1 (mpm-prefork), Python 2.3.5
+1 MacOSX 10.4.7 PPC, Apache 2.2.1 (mpm-worker), Python 2.3.5
+1 Windows XP SP2, Apache 2.0.58 (mpm-winnt), Python 2.4.3
Re: [RELEASE CANDIDATE] libapreq2 2.08-RC4
On Tue, 25 Jul 2006, Randy Kobes wrote:

On Tue, 25 Jul 2006, Steve Hay wrote:

Yes, that works for me! I tried the individual test and the whole test suite dozens of times over and didn't get a single failure. I'm not sure how it makes any difference, though, or exactly what it does. I searched the whole of my httpd-2.2.2 folder and only found one use of it (actually, of its new name, APR_FOPEN_SHARELOCK), relating to sdbm files. What am I missing?

I'm baffled now, too - as far as I can see, apr only uses APR_FOPEN_SHARELOCK in sdbm files, and neither mod_perl nor libapreq2 seems to use it. But it does make a difference - although I don't see as many failures as you do, without APR_FOPEN_SHARELOCK I definitely get temp files left over.

Is the change safe, or does it introduce any security issues with temporary spool files being shared somehow?

That I'm not sure of, especially now that I'm not sure what it's affecting ...

I still haven't been able to track down why the use of APR_FOPEN_SHARELOCK works in cleaning up the temp files. I did try a simple C apr-based program that just opens a temp file in the same way as is done within apreq_file_mktemp(), with the registered apreq_file_cleanup(), writes some random text to it, and then closes it - in this case the temp files were cleaned up with or without APR_FOPEN_SHARELOCK, and also with or without APR_FILE_NOCLEANUP. So something more complex is involved. Nevertheless, unless someone objects in the next day or so, I'd like to commit this change, as I think leaving temp files lying around is a worse problem.

-- best regards, Randy
Re: [RELEASE CANDIDATE] libapreq2 2.08-RC4
Nevertheless, unless someone objects in the next day or so, I'd like to commit this change, as I think leaving temp files lying around is a worse problem. No objection here :) -- Philip M. Gollucci ([EMAIL PROTECTED]) 323.219.4708 Consultant / http://p6m7g8.net/Resume/resume.shtml Senior Software Engineer - TicketMaster - http://ticketmaster.com 1024D/A79997FA F357 0FDD 2301 6296 690F 6A47 D55A 7172 A799 97F It takes a minute to have a crush on someone, an hour to like someone, and a day to love someone, but it takes a lifetime to forget someone...
Re: New Windows build - Apache 2.2.3
Oh sorry that I wasn't very clear on the unix source ^^ It's a bit outdated, but this should help you:

  http://www.blackdot.be/?inc=apache/unix2win/index.htm

You need to manually add apr, apr-util... and convert the source.

On 7/30/06, hunter [EMAIL PROTECTED] wrote:

On 7/30/06, Jorge Schrauwen [EMAIL PROTECTED] wrote:

ah using visual studio 2005? Try using the unix source instead... wrowe posted a notice about this.

On 7/30/06, hunter [EMAIL PROTECTED] wrote:

I am getting an error while building Apache 2.2.3 for Windows...

  Creating library .\Release\mod_unique_id.lib and object .\Release\mod_unique_id.exp
  nmake -nologo -f mod_usertrack.mak CFG=mod_usertrack - Win32 Release RECURSE=0
  rc.exe /l 0x409 /foDebug/mod_usertrack.res /i ../../include /i ../../srclib/apr/include /i \asf-build\build-2.2.3\build\win32 /d _DEBUG /d BIN_NAME=mod_usertrack.so /d LONG_NAME=usertrack_module for Apache ..\..\build\win32\httpd.rc
  fatal error RC1109: error creating Debug/mod_usertrack.res
  NMAKE : fatal error U1077: 'rc.exe' : return code '0x1' Stop.
  NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.
  NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.

Any suggestions...

Chris Lewis

-- ~Jorge

I may not be correct, but I understood Bill's email to say that the problem affected those who compile with the IDE, while I compile from the command line. I tried compiling with SDK 2003 R2 (newer compiler) but that made no difference. Trying to compile with unix source results in an immediate failure to find srclib, due to forward slashes that are not recognized by Windows. I think there is something wrong with mod_usertrack.mak - notice the /d _DEBUG. I have tried to fix the makefile but have still not fixed it.

Chris Lewis

-- ~Jorge
Re: New Windows build - Apache 2.2.3
Thanks Chris .. researching. This is what I was talking about by having too many versions of Visual Studio, each with a peculiar quirky requirement for /d VAR=Long String Value syntax. Only a custom build step, I'm thinking, will save us from this rc hell.

hunter wrote:

I am getting an error while building Apache 2.2.3 for Windows...

  rc.exe /l 0x409 /foDebug/mod_usertrack.res /i ../../include /i ../../srclib/apr/include /i \asf-build\build-2.2.3\build\win32 /d _DEBUG /d BIN_NAME=mod_usertrack.so /d LONG_NAME=usertrack_module for Apache ..\..\build\win32\httpd.rc
  fatal error RC1109: error creating Debug/mod_usertrack.res
  NMAKE : fatal error U1077: 'rc.exe' : return code '0x1' Stop.
  NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.
Re: svn commit: r426604 - in /httpd/httpd/branches/httpd-proxy-scoreboard: modules/proxy/ support/
Jim Jagielski wrote: I thought that this was about abstracting out scoreboard so that other modules could have scoreboard-like access without mucking around with the real scoreboard... +1. The proxy could just use this mechanism. We need to separate the two issues. I am all in favor of a generic scoreboard, that, in the future, the real scoreboard might use. -- Brian Akins Chief Operations Engineer Turner Digital Media Technologies
Re: svn commit: r426604 - in /httpd/httpd/branches/httpd-proxy-scoreboard:
Brian Akins wrote: Jim Jagielski wrote: I thought that this was about abstracting out scoreboard so that other modules could have scoreboard-like access without mucking around with the real scoreboard... +1. The proxy could just use this mechanism. We need to separate the two issues. I am all in favor of a generic scoreboard, that, in the future, the real scoreboard might use. Agreed. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
load balancer cluster set
I'm trying to figure out which impl of the LB cluster set makes the most sense and would appreciate the feedback. Basically, I see 2 different methods: 1. Members in all cluster sets which have the same or lower set numbers are checked 2. Only members in a specific set number are checked. If none are usable, skip to the next cluster set. In other words, let's assume members a, b and c are in set 0, d, e and f are in set 1, and g, h and i are in set 2. We check a, b and c and they are not usable, so we now start checking set 1. Should we re-check the members in set 0 (maybe they are usable now) or just check members of set 1 (logically, the question is whether we are doing a <= set# or == set#)? I have both methods coded and am flip-flopping on which makes the most sense. I'm leaning towards #1 (<= set#). Comments?
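A minimal sketch of the two selection strategies, assuming a flat array of members tagged with their set number (the struct layout and function names are hypothetical illustrations, not the actual mod_proxy_balancer code):

```c
#include <stddef.h>

typedef struct {
    int lbset;   /* cluster set number the member belongs to */
    int usable;  /* nonzero if the member can currently take traffic */
} member;

/* Method 1 ("<= set#"): every election rescans from set 0 upward, so a
 * recovered lower set is picked up again on the next request. */
static member *pick_cumulative(member *m, size_t n, int max_set)
{
    for (int set = 0; set <= max_set; set++)
        for (size_t i = 0; i < n; i++)
            if (m[i].lbset <= set && m[i].usable)
                return &m[i];
    return NULL;   /* no usable member in any set */
}

/* Method 2 ("== set#"): remember which set we are on and only fail
 * forward; a recovered lower set is never revisited. */
static member *pick_sticky(member *m, size_t n, int *active_set, int max_set)
{
    for (; *active_set <= max_set; (*active_set)++)
        for (size_t i = 0; i < n; i++)
            if (m[i].lbset == *active_set && m[i].usable)
                return &m[i];
    return NULL;
}
```

With the scenario above (set 0 down, set 1 up), both methods fail over identically; the difference shows when set 0 comes back: pick_cumulative returns to it on the next election, while pick_sticky stays on set 1 for good.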
Backport PCKS#7 patch to 2.2?
Will it be OK to do this? Cheers, Ben. -- http://www.apache-ssl.org/ben.html http://www.links.org/ There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff
Re: load balancer cluster set
On Mon, 2006-31-07 at 10:08 -0400, Jim Jagielski wrote: I'm trying to figure out which impl of the the LB cluster set makes the most sense and would appreciate the feedback. snip Comments? Are you implementing load balancing/clustering in Apache HTTP Server ? Why ? -- --gh
Re: Backport PCKS#7 patch to 2.2?
Please add it to the STATUS file of 2.2.x for voting. Regards Rüdiger -Ursprüngliche Nachricht- Von: Ben Laurie Gesendet: Montag, 31. Juli 2006 16:13 An: Apache List Betreff: Backport PCKS#7 patch to 2.2? Will it be OK to do this? Cheers, Ben. -- http://www.apache-ssl.org/ben.html http://www.links.org/ There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff
Re: load balancer cluster set
-Ursprüngliche Nachricht- Von: Jim Jagielski In other words, let's assume members a, b and c are in set 0, d, e and f are in set 1, and g, h and i are in set 2. We check a, b and c and they are not usable, so we now start checking set 1. Should we re-check the members in set 0 (maybe they are usable now) or just check members of set 1 (logically, the question is whether we are doing a <= set# or == set#)? I have both methods coded and am flip-flopping on which makes the most sense. I'm leaning towards #1 (<= set#). I would also lean to #1 as this means that once cluster set 0 failed and is back again we are using it again, which seems natural to me. OTOH I guess we need to consider session stickiness in this case. So sessions that have been migrated to set 1 should stay there until they vanish or someone knocks them out by disabling this cluster set (BTW: <feature-creep>will it be possible to disable complete cluster sets via the manager?</feature-creep>) and thus forcing them back to cluster set 0. Regards Rüdiger
Re: load balancer cluster set
On Mon, July 31, 2006 4:29 pm, Guy Hulbert wrote: Are you implementing load balancing/clustering in Apache HTTP Server ? It was implemented quite a while ago. Why ? Because it's useful? Regards, Graham --
Re: load balancer cluster set
On Jul 31, 2006, at 10:51 AM, Plüm, Rüdiger, VF EITO wrote: -Ursprüngliche Nachricht- Von: Jim Jagielski In other words, let's assume members a, b and c are in set 0, d, e and f are in set 1, and g, h and i are in set 2. We check a, b and c and they are not usable, so we now start checking set 1. Should we re-check the members in set 0 (maybe they are usable now) or just check members of set 1 (logically, the question is whether we are doing a <= set# or == set#)? I have both methods coded and am flip-flopping on which makes the most sense. I'm leaning towards #1 (<= set#). (BTW: <feature-creep>will it be possible to disable complete cluster sets via the manager?</feature-creep>) and thus forcing them back to cluster set 0. At present, we are more member-centric than set-centric. You can disable individual members of a set, but not a whole set. If useful, this could be added at some point...
Re: load balancer cluster set
On Jul 31, 2006, at 10:29 AM, Guy Hulbert wrote: On Mon, 2006-31-07 at 10:08 -0400, Jim Jagielski wrote: I'm trying to figure out which impl of the LB cluster set makes the most sense and would appreciate the feedback. snip Comments? Are you implementing load balancing/clustering in Apache HTTP Server ? This is part of the Apache 2.2.x release. Why ? People want it.
Re: load balancer cluster set
On Mon, 2006-31-07 at 11:18 -0400, Jim Jagielski wrote: Why ? People want it. Thought so :-( -- --gh
Re: load balancer cluster set
Guy Hulbert wrote: On Mon, 2006-31-07 at 11:18 -0400, Jim Jagielski wrote: Why ? People want it. Thought so :-( Why :-( ?? -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: load balancer cluster set
On Mon, 2006-31-07 at 16:54 +0200, Graham Leggett wrote: Why ? Because it's useful? Nope. Load balancing really belongs at the network layer. IBM released free load-balancing software for linux and windows about 1997. My former employer's integration group (about 3 people) got a fully redundant implementation running (on 4 pcs) in about 4 months. The company abandoned the free s/w version for hardware implementations on Cisco gear (and others) within about 6 months. I'm sure the price for proprietary hardware has dropped substantially since then. But, I suppose, if people want it ... -- --gh
Re: load balancer cluster set
On Mon, July 31, 2006 5:32 pm, Guy Hulbert wrote: People want it. Thought so :-( Why the :-(...? httpd tries to deliver what people will find useful, and load balancing is a very useful part of a multi tier webserver architecture. Regards, Graham --
Re: load balancer cluster set
On Mon, 2006-31-07 at 17:42 +0200, Graham Leggett wrote: On Mon, July 31, 2006 5:32 pm, Guy Hulbert wrote: People want it. Thought so :-( Why the :-(...? httpd tries to deliver what people will find useful, and load balancing is a very useful part of a multi tier webserver architecture. Despite the technical criticism I already posted, I suppose I might find it useful to have a cheap load-balancing solution at some time in the future. However, I see the 'perchild' mpm as a much more pressing need. I looked into WebDav about 12 months ago and several people were looking for this functionality. I have looked at the alternatives and none of them are really attractive. I would also like to see some low-level technical documentation, but I am afraid that I will find that the code is the documentation. I will write some before I do any work on 'perchild' ... assuming I actually try to do this ... it won't be a quick project for me. Regards, Graham -- -- --gh
Re: load balancer cluster set
On Mon, July 31, 2006 5:42 pm, Guy Hulbert wrote: Nope. Load balancing really belongs at the network layer. I disagree. Load balancing should happen at the layer most capable of making the most effective balancing decisions. At the network layer, your metrics are pretty much volume of data or response time of TCP transaction, and for many purposes these metrics are fine. For many other purposes, lots of data or a long time does not mean a loaded server, and you need a better tuned metric that more accurately represents your real load. But, I suppose, if people want it ... One size doesn't fit all. Regards, Graham --
Re: load balancer cluster set
Graham Leggett wrote: On Mon, July 31, 2006 5:32 pm, Guy Hulbert wrote: People want it. Thought so :-( Why the :-(...? httpd tries to deliver what people will find useful, and load balancing is a very useful part of a multi tier webserver architecture. Still not sure why that's a bad thing -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: load balancer cluster set
Guy Hulbert wrote: On Mon, 2006-31-07 at 16:54 +0200, Graham Leggett wrote: Why ? Because it's useful? Nope. Load balancing really belongs at the network layer. IBM released free load-balancing software for linux and windows about 1997. My former employer's integration group (about 3 people) got a fully redundant implementation running (on 4 pcs) in about 4 months. The company abandoned the free s/w version for hardware implementations on Cisco gear (and others) within about 6 months. I'm sure the price for proprietary hardware has dropped substantially since then. But, I suppose, if people want it ... People want to simplify things. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: load balancer cluster set
Jim Jagielski wrote: I'm trying to figure out which impl of the LB cluster set makes the most sense and would appreciate the feedback. Basically, I see 2 different methods: 1. Members in all cluster sets which have the same or lower set numbers are checked 2. Only members in a specific set number are checked. If none are usable, skip to the next cluster set. In other words, let's assume members a, b and c are in set 0, d, e and f are in set 1, and g, h and i are in set 2. We check a, b and c and they are not usable, so we now start checking set 1. Should we re-check the members in set 0 (maybe they are usable now) or just check members of set 1 (logically, the question is whether we are doing a <= set# or == set#)? I have both methods coded and am flip-flopping on which makes the most sense. I'm leaning towards #1 (<= set#). Comments? Something I planned to implement:

<Proxy balancer://clusterName#groupRoute1>
    BalancerMember .. 1.1
    BalancerMember .. 1.2
</Proxy>
<Proxy balancer://clusterName#groupRoute2>
    BalancerMember .. 2.1
    BalancerMember .. 2.2
</Proxy>
<Proxy balancer://clusterName#groupRoute3>
    BalancerMember .. 3.1
    BalancerMember .. 3.2
</Proxy>

In case you have session stickiness, where jvmRoute is equal for all group members, and all members from groupRoute1 fail, groupRoute1 will always be favored, depending on the retry timeout. Now if all members from groupRoute2 fail, the next election will still first try to check the corresponding sticky route members (if they are ready for retry). So, if you always first try the corresponding members of the session route balancer, you will always favor them over the others. Think this is close to your #1. Next step is to add the shared memory slot for the balancer, so it can be dynamically maintained. -- Mladen.
Re: load balancer cluster set
On Mon, 2006-31-07 at 12:04 -0400, Jim Jagielski wrote: Nope. Load balancing really belongs at the network layer. snip But, I suppose, if people want it ... People want to simplify things. The simple solution is to buy a bigger piece of hardware or outsource the problem to the relevant experts. Trying to do meaningful load-balancing within an application will not be simple. At the router it is simple. All the required data is present in one spot. Look. I really don't want to discourage you. Especially since it has been claimed that the work has already been done. The real danger, I see, is that you try to become all things to all people when there do not seem to be resources to solve problems which are very specific to the core application. -- --gh
Re: load balancer cluster set
On Mon, July 31, 2006 6:16 pm, Guy Hulbert wrote: At the network layer, your metrics are pretty much volume of data or Nope. Routers can look at anything in the packets which is not encrypted. They can also measure server response (by packet stats) directly or via SNMP. There are all sorts of things that *cannot* be done on the server without introducing all sorts of p2p communications requirements. I'm sure they can. This doesn't make them the right solution for all cases. In a multi tier architecture, you already have front end servers implementing URL strategies, common logging, all sorts of other things. Adding an extra router layer to handle load balancing, when your already existing frontend can do this job is not only extra cost, but extra complexity and an additional point of failure. Regards, Graham --
Re: load balancer cluster set
On Mon, Jul 31, 2006 at 12:22:03PM -0400, Guy Hulbert wrote: The simple solution is to buy a bigger piece of hardware or outsource the problem to the relevant experts. Trying to do meaningful load-balancing within an application will not be simple. At the router it is simple. All the required data is present in one spot. Load-balancing can be implemented at any arbitrary point in the stack (Ethernet/IP/DNS/TCP/HTTP/Application) and each has its own problems and features. There is nothing particularly appealing about doing it at the routing layer (though it does present a few novel options like using anycast or a TCP redirect), and doing it there has a few problems of its own. Either way, the more options and the more flexibility, the better. In the real world, you may find that many operations use multiple load-balancing techniques together (e.g. Google uses DNS, L2, L3 and L4 load-balancing). -- Colm MacCárthaigh Public Key: [EMAIL PROTECTED]
Re: load balancer cluster set
On Mon, July 31, 2006 6:22 pm, Guy Hulbert wrote: The real danger, I see, is that you try to become all things to all people when there does not seem to be resources to solve problems which are very specific to the core application. Apache httpd is capable not only of switching things off, but removing unnecessary features entirely as the admin sees fit, so this is a non-problem. I get the sense that you would rather the developers scratch your itch (in the form of perchild), rather than theirs (in the form of lb). Getting perchild going would be great, but I don't see it as any more or less important than lb. Regards, Graham --
Re: load balancer cluster set
Guy Hulbert wrote: However, you may not be able to wait until the linux router project picks this up (but it might be worth looking to see what is available). Most of the load-balancing we are discussing on this list is not for directly customer facing applications. These are proxies for application servers generally, but they need to be highly available. We are not trying to replace Cisco CSM's. But a hardware HTTP-only aware $20k device is not needed when I just need to load balance an app across 4 tomcat instances, for example. -- Brian Akins Chief Operations Engineer Turner Digital Media Technologies
Re: load balancer cluster set
Graham. I already accept that this seems to be a fait accompli. So I am just arguing for entertainment purposes. If the solution is a p2p one then it might be somewhat interesting. Otherwise, it just seems (to me) to be re-inventing the wheel ... potentially very badly. Adding load-balancing/clustering to software projects seems to be popular (i know of others :-) lately ... it seems like the idea that every end-user application adds features until it can do email. On Mon, 2006-31-07 at 18:26 +0200, Graham Leggett wrote: On Mon, July 31, 2006 6:16 pm, Guy Hulbert wrote: At the network layer, your metrics are pretty much volume of data or Nope. Routers can look at anything in the packets which is not encrypted. They can also measure server response (by packet stats) directly or via SNMP. There are all sorts of things that *cannot* be done on the server without introducing all sorts of p2p communications requirements. I'm sure they can. This doesn't make them the right solution for all cases. In a multi tier architecture, you already have front end servers implementing URL strategies, common logging, all sorts of other things. The 1997 system I referenced was already a multi-tier architecture. The integration group was implementing systems on large world-wide private networks. Adding an extra router layer to handle load balancing, when your already existing frontend can do this job is not only extra cost, but extra complexity and an additional point of failure. Without knowing the specific network involved this is just wanking. The implementation of the IBM software solution I described previously required 4 PCs precisely because of the problem of redundancy, monitoring and failover. The PCs were paired with a heartbeat running on loop-back interfaces. Do you know anyone running an apache service over the internet without a router somewhere? I doubt that IP via carrier pigeon has sufficient bandwidth.
My only interest in this is that you are putting all the additional complexity into the Apache server. Regards, Graham -- -- --gh
Re: load balancer cluster set
I didn't read this very carefully. On Mon, 2006-31-07 at 18:26 +0200, Graham Leggett wrote: I'm sure they can. This doesn't make them the right solution for all cases. In a multi tier architecture, you already have front end servers implementing URL strategies, common logging, all sorts of other things. Adding an extra router layer to handle load balancing, when your already snip This seems reasonable. Given paragraph 2 (URL strategies etc) Not for the reasons I've omitted (and responded to separately). However, I still don't think this will scale the way router-based solutions can (already :-). -- --gh
Re: load balancer cluster set
My only interest in this is you are putting all the additional complexity into the Apache server. Considering the very common usage of Apache being used as a reverse proxy and the need for URL-specific forwarding, adding a cluster-like ability to Apache is the obvious next step. Will it remove the need for others? Not at all. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: load balancer cluster set
On Mon, 2006-31-07 at 18:31 +0200, Graham Leggett wrote: I get the sense that you would rather the developers scratch your itch Their itch is not a problem for me ... and it isn't something I would necessarily use apache for ... though for a small to medium scale setup it might be very useful. (in the form of perchild), rather than theirs (in the form of lb). Absolutely :-). I have no intention of writing any code for perchild if someone else (undoubtedly far more qualified than I) happens to want to do it. After looking at the code from subversion and having thought a little more about 'perchild' I can see a few difficulties and I can see good reasons why it may not have been worked on. The reason I am interested in perchild is that combined with WebDav and Reiser4 it will be possible to create general business applications which make subversion look like a toy. Perchild looks like the missing piece. It is extremely inconvenient for everything on the back-end to be owned by one user. The client-side support for WebDav has been present in windows since 1998. For some reason, Microsoft just seems to have stopped there. -- --gh
Re: load balancer cluster set
On Mon, July 31, 2006 6:39 pm, Guy Hulbert wrote: I already accept that this seems to fait-accomplis. So I am just arguing for entertainment purposes. Which in turn means you're just wasting people's time. Regards, Graham --
Re: load balancer cluster set
On Mon, 2006-31-07 at 12:50 -0400, Jim Jagielski wrote: My only interest in this is you are putting all the additional complexity into the Apache server. Considering the very common usage of Apache being used as a reverse proxy and the need for URL-specific forwarding, adding a cluster-like ability to Apache is the obvious next step. Oh well. If it is obvious then ok :-). Will it remove the need for others? Not at all. If it is p2p then it might ... in the long run. -- --gh
Re: load balancer cluster set
Guy Hulbert wrote: Absolutely :-). I have no intention of writing any code for perchild if someone else (undoubtedly far more qualified than I) happens to want to do it. After looking at the code from subversion and having thought a little more about 'perchild' I can see a few difficulties and I can see good reasons why it may not have been worked on. perchild is an MPM that would be very useful if it was ever done. However, to make it very portable is also not trivial, and it requires additional APR capability which would need to be added as well... One reason for a generic scoreboard would be to help make perchild easier, since we could store the passed fd's in this location alleviating some of the current problems. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: load balancer cluster set
On Mon, 2006-31-07 at 17:30 +0100, Colm MacCarthaigh wrote: Either way, the more options and the more flexibility, the better. This is not true. There is always a limit. The difficult part is to know when you've reached it, of course. Also, it is a design choice. For example, perl (TMTOWTDI) versus python. -- --gh
Re: load balancer cluster set
On Mon, 2006-31-07 at 19:00 +0200, Graham Leggett wrote: On Mon, July 31, 2006 6:39 pm, Guy Hulbert wrote: I already accept that this seems to fait-accomplis. So I am just arguing for entertainment purposes. Which in turn means you're just wasting people's time. It's your choice whether to respond or not. The exchange is very valuable to me since I am learning a lot more about the project than I would in any other way. If I do decide to put work into perchild it will be a very big investment from my pov ... Regards, Graham -- -- --gh
Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c
Mladen Turk wrote: [EMAIL PROTECTED] wrote: Author: jim Compiles/builds clean: passes test framework as well as more normal usage tests ;)

-    char            timeout_set;
+    char     timeout_set;
-    char            acquire_set;
-    apr_size_t      recv_buffer_size;
-    char            recv_buffer_size_set;
-    apr_size_t      io_buffer_size;
-    char            io_buffer_size_set;
-    char            keepalive;
-    char            keepalive_set;
+    char     acquire_set;
+    apr_size_t recv_buffer_size;
+    char     recv_buffer_size_set;
+    apr_size_t io_buffer_size;
+    char     io_buffer_size_set;
+    char     keepalive;
+    char     keepalive_set;

Any particular reason for this formatting :) ? just lining up with the ones above it... Check out the elements above that and you'll see that after some longer ones were pushed to the right, we forgot to reset to the left. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: load balancer cluster set
On Mon, 2006-31-07 at 13:05 -0400, Jim Jagielski wrote: One reason for a generic scoreboard would be to help make perchild easier, since we could store the passed fd's in this location alleviating some of the current problems. Thanks. I've seen all the traffic on the scoreboard and this is very useful context ... -- --gh
Scoreboard was Re: load balancer cluster set
I've seen all the traffic on the scoreboard and this is very useful context ... Also, I am using a similar scoreboard mechanism to collect lots of per worker stats without the extendedstatus overhead. -- Brian Akins Chief Operations Engineer Turner Digital Media Technologies
Re: Backport PCKS#7 patch to 2.2?
Plüm wrote: Please add it to the STATUS file of 2.2.x for voting. Done. Regards Rüdiger -Ursprüngliche Nachricht- Von: Ben Laurie Gesendet: Montag, 31. Juli 2006 16:13 An: Apache List Betreff: Backport PCKS#7 patch to 2.2? Will it be OK to do this? Cheers, Ben. -- http://www.apache-ssl.org/ben.html http://www.links.org/ There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff -- http://www.apache-ssl.org/ben.html http://www.links.org/ There is no limit to what a man can do or how far he can go if he doesn't mind who gets the credit. - Robert Woodruff
Re: load balancer cluster set
On Mon, July 31, 2006 6:43 pm, Guy Hulbert wrote: This seems reasonable. Given paragraph 2 (URL strategies etc) Not for the reasons I've omitted (and responded to separately). However, I still don't think this will scale the way router-based solutions can (already :-). Users of mod_backhand (for httpd v1.3) would disagree, it's a similar solution that has been around for years. The lb support in v2.x will hopefully eventually allow users of mod_backhand to migrate to v2.x from v1.3. Regards, Graham --
Re: Scoreboard was Re: load balancer cluster set
On Mon, 2006-31-07 at 13:21 -0400, Brian Akins wrote: I've seen all the traffic on the scoreboard and this is very useful context ... Also, I am using a similar scoreboard mechanism to collect lots of per worker stats without the extendedstatus overhead. I've been following the discussion as much as I am able. What (I think) I really need to understand is how the request handling and thread pool code interact. For 'perchild' I would also need to understand how setuid and threading work together. Looking at the example configs, I've been guessing that the 'perchild' server forks several threaded processes, one per required UID, but at least one comment I saw recently indicates I might be entirely wrong. I wonder if there is any apache-specific documentation available at this level of detail? I have unix-specific references. -- --gh
Re: load balancer cluster set
On Mon, 2006-31-07 at 19:34 +0200, Graham Leggett wrote: On Mon, July 31, 2006 6:43 pm, Guy Hulbert wrote: This seems reasonable. Given paragraph 2 (URL strategies etc) Not for the reasons I've omitted (and responded to separately). However, I still don't think this will scale the way router-based solutions can (already :-). Users of mod_backhand (for httpd v1.3) would disagree, it's a similar solution that has been around for years. The lb support in v2.x will hopefully eventually allow users of mod_backhand to migrate to v2.x from v1.3. Is google using mod_backhand ? That's the ultimate case, after all :-) Regards, Graham -- -- --gh
Re: load balancer cluster set
Guy Hulbert wrote: That's the ultimate case, after all :-) Not necessarily. Google's answer is to throw tons of hardware at stuff. Which is great if you have unlimited space, power, and cooling. Some other sites do some rather interesting things with a relatively small number of servers -- Brian Akins Chief Operations Engineer Turner Digital Media Technologies
Re: load balancer cluster set
On Mon, 2006-31-07 at 19:34 +0200, Graham Leggett wrote: Users of mod_backhand (for httpd v1.3) would disagree, it's a similar Greenspun: http://philip.greenspun.com/scratch/scaling.adp Asks the right question: How are load balancers actually built? and suggests: zeus, mod_backhand, and router solutions but unfortunately does not give a direct answer. However, two paragraphs down: Failover from a broken load balancer to a working one is essentially a network configuration challenge, beyond the scope of this textbook. Basically what is required are two identical load balancers and cooperation with the next routing link in the chain that connects your server farm to the public Internet. Those upstream routers must know how to route requests for the same IP address to one or the other load balancer depending upon which is up and running. What keeps this from becoming an endless spiral of load balancing is that the upstream routers aren't actually looking into the TCP packets to find the GET request. They're doing the much simpler job of IP routing. This points up the difficulty of trying to solve the problem at the application level. My point was that free routing solutions to this problem were already available since 1997. solution that has been around for years. The lb support in v2.x will The mod_backhand site seems to date since 2000 and Greenspun's article is dated 2003, which also seems to be the latest release of mod_backhand ... hopefully eventually allow users of mod_backhand to migrate to v2.x from v1.3. ... it certainly seems to be important to create the migration path but you have yet to convince me that the scalability is the same. However, you have certainly convinced me to try the apache solution once it is available ... I have a customer who might need it in a year or so. -- --gh
Re: load balancer cluster set
On Mon, 2006-31-07 at 13:54 -0400, Brian Akins wrote: Guy Hulbert wrote: That's the ultimate case, after all :-) Not necessarily. Google's answer is to throw tons of hardware at stuff. The point of contention was scalability ... from a human point of view it is really annoying to have to solve a problem twice but from the business pov, outgrowing your load balancer might only be a good thing. -- --gh
Re: load balancer cluster set
On 7/31/06, Guy Hulbert [EMAIL PROTECTED] wrote: On Mon, 2006-31-07 at 13:54 -0400, Brian Akins wrote: Guy Hulbert wrote: That's the ultimate case, after all :-) Not necessarily. Google's answer is to throw tons of hardware at stuff. The point of contention was scalability ... from a human point of view it is really annoying to have to solve a problem twice but from the business pov, outgrowing your load balancer might only be a good thing. Oh please, 99.% of users have nowhere near the scalability constraints that google operates under. Are you saying that because some do we shouldn't provide solutions that work for the rest? -garrett
Re: load balancer cluster set
Guy Hulbert wrote: The point of contention was scalability ... from a human point of view it is really annoying to have to solve a problem twice but from the business pov, outgrowing your load balancer might only be a good thing. Yes. But most load balancers can only do layer 7 load balancing. Sometimes it is necessary to have very application-specific routing. Also, in general, most hardware load balancers base their algorithms on things such as response time. Sometimes, it is necessary to know the general health of the backend servers. -- Brian Akins Chief Operations Engineer Turner Digital Media Technologies
Re: load balancer cluster set
Jim Jagielski wrote: I'm trying to figure out which impl of the LB cluster set makes the most sense and would appreciate the feedback. Basically, I see 2 different methods: 1. Members in all cluster sets which have the same or lower set numbers are checked 2. Only members in a specific set number are checked. If none are usable, skip to the next cluster set. We have two different use cases for grouping. One is the case where the targets keep some state and replicate the state only to some of the other targets. If the set of targets is split into disjoint replication groups, it would make sense to use any other member of the same replication group in case the sticky member is dead. This situation might be used e.g. for a tomcat cluster, where we can only do one-to-all replication. So a huge cluster needs to be split into disjoint replication groups. So for a sticky situation and a request that contains a target ID, I think 2 makes the most sense. In case the backends use a more elaborate replication scheme, mod_proxy_balancer would need some additional way of getting the information about replication members, like encoding them into the Cookie. Unfortunately, there's no standard for this. If we are in a non-sticky session, or the request has no target ID, we are back to pure load-balancing (no routing). In this case I think there should be a way of expressing preferences for target workers. That's closer to number 1. For mod_jk 1.2.18 we included distance as a measurement of preference for the non-sticky case (and the case where we are sticky, but the whole cluster set is down), and we have had domain for about 2 years to configure replication sets. I assume this is what Mladen is after. So my answer would be: some of 1 and some of 2, depending on the request info and the target status. I would love it if someone came up with a more consistent model. Rainer
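To make the grouping concrete, a sketch of how such sets might be declared in httpd configuration, assuming a per-member set parameter on BalancerMember (the parameter name "lbset" and the hostnames are illustrative assumptions, not settled syntax from this thread):

```apache
<Proxy balancer://appcluster>
    # Replication group / set 0: preferred members
    BalancerMember http://app1.example.com:8080 route=app1 lbset=0
    BalancerMember http://app2.example.com:8080 route=app2 lbset=0
    # Set 1: only tried when no member of set 0 is usable
    BalancerMember http://app3.example.com:8080 route=app3 lbset=1
    BalancerMember http://app4.example.com:8080 route=app4 lbset=1
</Proxy>
ProxyPass /app balancer://appcluster stickysession=JSESSIONID
```

Under method #1 (<= set#), a request would prefer any usable member of set 0 and fall back to set 1 only while set 0 is entirely down; sticky requests carrying a route would still be pinned to their member first.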
Re: load balancer cluster set
On Mon, 2006-31-07 at 14:02 -0400, Garrett Rooney wrote: On 7/31/06, Guy Hulbert [EMAIL PROTECTED] wrote: On Mon, 2006-31-07 at 13:54 -0400, Brian Akins wrote: Guy Hulbert wrote: That's the ultimate case, after all :-) Not necessarily. Google's answer is to throw tons of hardware at stuff. The point of contention was scalability ... from a human point of view snip Oh please, 99.% of users have nowhere near the scalability constraints that google operates under. Are you saying that because some do we shouldn't provide solutions that work for the rest? -garrett Nope. Graham asserted that mod_backhand was sufficiently scalable ... which I inferred to mean sufficiently scalable to make a router-based solution unnecessary. For practical use, it seems to be the best solution available for a small-scale site. The commercial solutions do not seem to have changed since 1997 ... it is more disappointing that the linux-router project does not seem to have come far enough yet to solve this problem properly. At least it did not turn up obviously in the responses to 'google: mod_backhand scalable'. -- --gh
Re: load balancer cluster set
My experience: some organisations have a network group that is able to understand application communication behaviour; they do a very good job in making most of these features available via their load balancer appliances and then benefit from their central administration, GUIs etc. On the other hand, in some organisations there is a deep split between the server/app guys and the network guys, and you will not succeed in making the network side use the high-level features of their gear. So in principle most can be done on both sides, but often it's the experience of the people that decides where to actually build the solution. I did both solutions successfully and even had companies move from one to the other when they changed their organization. I think it's not worth discussing technically where the features belong. In practice, it's not really a technical question. Just my point of view. Rainer

Brian Akins wrote: Guy Hulbert wrote: The point of contention was scalability ... from a human point of view it is really annoying to have to solve a problem twice but from the business pov, outgrowing your load balancer might only be a good thing. Yes. But most load balancers can only do layer 7 load balancing. Sometimes it is necessary to have very application specific routing. Also, in general, most hardware load balancers base their algorithms on things such as response time. Sometimes, it is necessary to know the general health of the backend servers.
Re: load balancer cluster set
On Mon, 2006-31-07 at 20:15 +0200, Rainer Jung wrote: So in principle most can be done on both sides, but often it's the experience of the people that decides where to actually build the solution. Yup. I did both solutions successfully and even had companies move from one to the other when they changed their organization. Yup. I think it's not worth discussing technically where the features belong. In practice, it's not really a technical question. Yup.

It seems that linux router is the wrong name. Here is the correct project: http://www.linuxvirtualserver.org/ I really have not looked seriously at load balancing for about 10 years. It seems that mod_backhand is a good solution that is out there, but it needs to be ported to apache2 because people already need it. If it were *me* writing the code, I would still look to see whether there is a reasonable alternative ... Anyhow, I apologize for the long digression ... it's keeping me from working too :-). -- --gh
Re: load balancer cluster set
FWIW, this seems much more likely: http://www.ultramonkey.org/about.shtml In particular: http://www.ultramonkey.org/3/installation-debian.sarge.html On Mon, 2006-31-07 at 14:29 -0400, Guy Hulbert wrote: It seems that linux router is the wrong name. Here is the correct project: http://www.linuxvirtualserver.org/ snip If it were *me* writing the code, I would still look to see whether there is a reasonable alternative ... Anyhow, I apologize for the long digression ... it's keeping me from working too :-). again -- --gh
Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c modules/proxy/mod_proxy.h modules/proxy/mod_proxy_balancer.c
On 07/31/2006 07:01 PM, [EMAIL PROTECTED] wrote: Author: jim Date: Mon Jul 31 10:01:40 2006 New Revision: 427172 URL: http://svn.apache.org/viewvc?rev=427172&view=rev Log: Add in a very simple balancer set concept, which allows for members to be assigned to a particular cluster set such that members in lower-numbered sets are checked/used before those in higher ones. Also bundled in this are some HTML cleanups for the balancer manager UI. Sorry for the mixins :) Compiles/builds clean: passes test framework as well as more normal usage tests ;)

+    do {
+        while (!mycandidate && !checked_standby) {
+            worker = (proxy_worker *)balancer->workers->elts;
+            for (i = 0; i < balancer->workers->nelts; i++, worker++) {
+                if (!checking_standby) {    /* first time through */
+                    if (worker->s->lbset > max_lbset)
+                        max_lbset = worker->s->lbset;
+                }
+                if (worker->s->lbset != cur_lbset)
+                    continue;
+                if ( (checking_standby ? !PROXY_WORKER_IS_STANDBY(worker) : PROXY_WORKER_IS_STANDBY(worker)) )
+                    continue;
+                /* If the worker is in error state run
+                 * retry on that worker. It will be marked as
+                 * operational if the retry timeout is elapsed.
+                 * The worker might still be unusable, but we try
+                 * anyway.
+                 */
+                if (!PROXY_WORKER_IS_USABLE(worker))
+                    ap_proxy_retry_worker("BALANCER", worker, r->server);
+                /* Take into calculation only the workers that are
+                 * not in error state or not disabled.
+                 */
+                if (PROXY_WORKER_IS_USABLE(worker)) {
+                    mytraffic = (worker->s->transferred/worker->s->lbfactor)
+                                + (worker->s->read/worker->s->lbfactor);
+                    if (!mycandidate || mytraffic < curmin) {
+                        mycandidate = worker;
+                        curmin = mytraffic;
+                    }
+                }
+            }
+            checked_standby = checking_standby++;
+        }
-        checked_standby = checking_standby++;
-    }
+        cur_lbset++;
+    } while (cur_lbset < max_lbset && !mycandidate);

Shouldn't that be

    while (cur_lbset <= max_lbset && !mycandidate);

(same question also for the other algorithm)? I guess otherwise we would not check for the workers with lbset == max_lbset. Regards Rüdiger
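For readers following the thread, the set concept is configured per balancer member. A hedged sketch of such a configuration: the BalancerMember directive is standard mod_proxy_balancer, the lbset parameter is the one added by this commit, and the balancer name and hostnames are made up for illustration.

```apache
<Proxy balancer://mycluster>
    # Primary cluster set: checked first
    BalancerMember http://app1.example.com:8080 lbset=0
    BalancerMember http://app2.example.com:8080 lbset=0
    # Fallback cluster set: only consulted when no usable
    # worker remains in set 0
    BalancerMember http://backup1.example.com:8080 lbset=1
</Proxy>
ProxyPass /app balancer://mycluster/
```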
Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c
Ruediger Pluem wrote: Shouldn't that be while (cur_lbset <= max_lbset && !mycandidate); (same question also for the other algorithm)? I guess otherwise we would not check for the workers with lbset == max_lbset. No, since we do the test at the end, after we've incremented. If the current set is 3 and the max is 3, we want to stop. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c
On 07/31/2006 09:53 PM, Jim Jagielski wrote: Ruediger Pluem wrote: Shouldn't that be while (cur_lbset <= max_lbset && !mycandidate); (same question also for the other algorithm)? I guess otherwise we would not check for the workers with lbset == max_lbset. No, since we do the test at the end, after we've incremented. If the current set is 3 and the max is 3, we want to stop.

Maybe I am confused, but we have not run the loop with current set 3, as we have just incremented it at the end. So I try an example (maybe I prove myself wrong and make a fool out of me, but that would help also :-). Let's assume the maximum lbset is 1.

1. We start the outer while loop with cur_lbset and max_lbset 0
2. In the inner while loop
   1. we iterate over all workers of the balancer, but consider only those who are not standby and who have an lbset of 0
   2. we iterate over all workers of the balancer, but consider only those who are standby and who have an lbset of 0
3. Now we leave the inner while loop and let's assume that we found no candidate
4. Now we increase cur_lbset to 1
5. Now we have to leave the outer while loop, because cur_lbset < max_lbset is not true, because cur_lbset == max_lbset

So we did not check for the workers with lbset 1.

BTW: Don't we need to reset checked_standby and checking_standby to zero in the outer while loop?

+    do {
+        while (!mycandidate && !checked_standby) {
+            worker = (proxy_worker *)balancer->workers->elts;
+            for (i = 0; i < balancer->workers->nelts; i++, worker++) {
+                if (!checking_standby) {    /* first time through */
+                    if (worker->s->lbset > max_lbset)
+                        max_lbset = worker->s->lbset;
+                }
+                if (worker->s->lbset != cur_lbset)
+                    continue;
+                if ( (checking_standby ? !PROXY_WORKER_IS_STANDBY(worker) : PROXY_WORKER_IS_STANDBY(worker)) )
+                    continue;
+                /* If the worker is in error state run
+                 * retry on that worker. It will be marked as
+                 * operational if the retry timeout is elapsed.
+                 * The worker might still be unusable, but we try
+                 * anyway.
+                 */
+                if (!PROXY_WORKER_IS_USABLE(worker))
+                    ap_proxy_retry_worker("BALANCER", worker, r->server);
+                /* Take into calculation only the workers that are
+                 * not in error state or not disabled.
+                 */
+                if (PROXY_WORKER_IS_USABLE(worker)) {
+                    mytraffic = (worker->s->transferred/worker->s->lbfactor)
+                                + (worker->s->read/worker->s->lbfactor);
+                    if (!mycandidate || mytraffic < curmin) {
+                        mycandidate = worker;
+                        curmin = mytraffic;
+                    }
+                }
+            }
+            checked_standby = checking_standby++;
+        }
-        checked_standby = checking_standby++;
-    }
+        cur_lbset++;
+    } while (cur_lbset < max_lbset && !mycandidate);

Regards Rüdiger
Re: New Windows build - Apache 2.2.3
At AL there are reports that also with the VC2005 IDE the 2.2.3 Windows source gives issues. Is it an idea to revert back to the 2.2.2 method? There we had no reports like this. Indeed the unix source builds fine with the VC2005 IDE. Steffen

- Original Message - From: William A. Rowe, Jr. [EMAIL PROTECTED] To: dev@httpd.apache.org Sent: Monday, July 31, 2006 08:55 Subject: Re: New Windows build - Apache 2.2.3 Thanks Chris .. researching. This is what I was talking about by having too many versions of Visual Studio, each with a peculiar quirky requirement for /d "VAR=Long String Value" syntax. Only a custom build step, I'm thinking, will save us from this rc hell.

hunter wrote: I am getting an error while building Apache 2.2.3 for Windows...

    rc.exe /l 0x409 /foDebug/mod_usertrack.res /i "../../include" /i "../../srclib/apr/include" /i "\asf-build\build-2.2.3\build\win32" /d "_DEBUG" /d "BIN_NAME=mod_usertrack.so" /d "LONG_NAME=usertrack_module for Apache" ..\..\build\win32\httpd.rc
    fatal error RC1109: error creating Debug/mod_usertrack.res
    NMAKE : fatal error U1077: 'rc.exe' : return code '0x1' Stop.
    NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.
Re: New Windows build - Apache 2.2.3
Steffen wrote: At AL there are reports that also with the VC2005 IDE the 2.2.3 Windows source gives issues. Is it an idea to revert back to the 2.2.2 method? There we had no reports like this.

Nope - the old version required awk to -build- the sources. Now, awk is only needed to customize the .conf scripts, and in its absence, well, do it yourself by hand. awk's no longer required. The solution, as mentioned several times, is a custom build step to invoke rc.exe, where none of the versions of Visual Studio will corrupt the cmd.

Indeed the unix source builds fine with the VC2005 IDE.

So does the windows bundle; grab http://svn.apache.org/repos/asf/apr/apr/trunk/build/cvtdsp.pl - and in the httpd-2.2.3 tree, just invoke perl cvtdsp.pl -2005 This won't be necessary in 2.2.4 - wouldn't have been in 2.2.3 if this security issue hadn't popped up. Bill
Re: New Windows build - Apache 2.2.3
I understand now and I missed the script. I have reports that all now is building fine with the VC2005 IDE with the Windows source after executing the script. Thanks! Steffen

- Original Message - From: William A. Rowe, Jr. [EMAIL PROTECTED] To: dev@httpd.apache.org Sent: Monday, July 31, 2006 23:25 Subject: Re: New Windows build - Apache 2.2.3 Steffen wrote: At AL there are reports that also with the VC2005 IDE the 2.2.3 Windows source gives issues. Is it an idea to revert back to the 2.2.2 method? There we had no reports like this. Nope - the old version required awk to -build- the sources. Now, awk is only needed to customize the .conf scripts, and in its absence, well, do it yourself by hand. awk's no longer required. The solution, as mentioned several times, is a custom build step to invoke rc.exe, where none of the versions of Visual Studio will corrupt the cmd. Indeed the unix source builds fine with the VC2005 IDE. So does the windows bundle; grab http://svn.apache.org/repos/asf/apr/apr/trunk/build/cvtdsp.pl - and in the httpd-2.2.3 tree, just invoke perl cvtdsp.pl -2005 This won't be necessary in 2.2.4 - wouldn't have been in 2.2.3 if this security issue hadn't popped up. Bill
Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c
Let me double check... -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: svn commit: r427172 - in /httpd/httpd/trunk: CHANGES modules/proxy/mod_proxy.c
Good catch by Ruediger. Fixed. Jim Jagielski wrote: Let me double check... -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ If you can dodge a wrench, you can dodge a ball.
Re: New Windows build - Apache 2.2.3
On 7/31/06, Steffen [EMAIL PROTECTED] wrote: I understand now and I missed the script. I have reports that all now is building fine with the VC2005 IDE with the Windows source after executing the script. Thanks! Steffen - Original Message - From: William A. Rowe, Jr. [EMAIL PROTECTED] To: dev@httpd.apache.org Sent: Monday, July 31, 2006 23:25 Subject: Re: New Windows build - Apache 2.2.3 Steffen wrote: At AL there are reports that also with the VC2005 IDE the 2.2.3 Windows source gives issues. Is it an idea to revert back to the 2.2.2 method? There we had no reports like this. Nope - the old version required awk to -build- the sources. Now, awk is only needed to customize the .conf scripts, and in its absence, well, do it yourself by hand. awk's no longer required. The solution, as mentioned several times, is a custom build step to invoke rc.exe, where none of the versions of Visual Studio will corrupt the cmd. Indeed the unix source builds fine with the VC2005 IDE. So does the windows bundle; grab http://svn.apache.org/repos/asf/apr/apr/trunk/build/cvtdsp.pl - and in the httpd-2.2.3 tree, just invoke perl cvtdsp.pl -2005 This won't be necessary in 2.2.4 - wouldn't have been in 2.2.3 if this security issue hadn't popped up. Bill

The error(s) that I am getting are not caused by the defines. I still get the error I indicated in my first email.

    Creating library .\Release\mod_unique_id.lib and object .\Release\mod_unique_id.exp
    nmake -nologo -f mod_usertrack.mak CFG="mod_usertrack - Win32 Release" RECURSE=0
    rc.exe /l 0x409 /foDebug/mod_usertrack.res /i "../../include" /i "../../srclib/apr/include" /i "\asf-build\build-2.2.3\build\win32" /d "_DEBUG" /d "BIN_NAME=mod_usertrack.so" /d "LONG_NAME=usertrack_module for Apache" ..\..\build\win32\httpd.rc
    fatal error RC1109: error creating Debug/mod_usertrack.res
    NMAKE : fatal error U1077: 'rc.exe' : return code '0x1' Stop.
    NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.
NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.

Notice that it is trying to build debug during a release build. I fixed the makefile and it continued until it got a similar error with htpasswd.mak. I fixed this makefile and it failed in makefile.win with this error message.

    D:\build\httpd-2.2.3>(copy docs\conf\extra\httpd-vhosts.conf.in c:\apache\conf\extra\httpd-vhosts.conf.default 0<.y
    awk -f <<script.awk docs/conf/extra/httpd-vhosts.conf.in "c:\apache" 1>c:\apache\conf\extra\httpd-vhosts.conf.default )
            1 file(s) copied.
    NMAKE : fatal error U1077: 'for' : return code '0x1' Stop.
    NMAKE : fatal error U1077: 'C:\MSVC7\Vc7\bin\nmake.exe' : return code '0x2' Stop.

I have attached the fixed makefiles. I still need help with the last error; I don't understand what the makefile is doing - the build is complete and it is editing the confs. But I am looking at it to see if I can figure it out. Chris Lewis

# Microsoft Developer Studio Generated NMAKE File, Based on mod_usertrack.dsp
!IF "$(CFG)" == ""
CFG=mod_usertrack - Win32 Release
!MESSAGE No configuration specified. Defaulting to mod_usertrack - Win32 Release.
!ENDIF

!IF "$(CFG)" != "mod_usertrack - Win32 Release" && "$(CFG)" != "mod_usertrack - Win32 Debug"
!MESSAGE Invalid configuration "$(CFG)" specified.
!MESSAGE You can specify a configuration when running NMAKE
!MESSAGE by defining the macro CFG on the command line. For example:
!MESSAGE
!MESSAGE NMAKE /f "mod_usertrack.mak" CFG="mod_usertrack - Win32 Release"
!MESSAGE
!MESSAGE Possible choices for configuration are:
!MESSAGE
!MESSAGE "mod_usertrack - Win32 Release" (based on "Win32 (x86) Dynamic-Link Library")
!MESSAGE "mod_usertrack - Win32 Debug" (based on "Win32 (x86) Dynamic-Link Library")
!MESSAGE
!ERROR An invalid configuration is specified.
!ENDIF

!IF "$(OS)" == "Windows_NT"
NULL=
!ELSE
NULL=nul
!ENDIF

!IF "$(CFG)" == "mod_usertrack - Win32 Release"

OUTDIR=.\Release
INTDIR=.\Release
# Begin Custom Macros
OutDir=.\Release
# End Custom Macros

!IF "$(RECURSE)" == "0"

ALL : "$(OUTDIR)\mod_usertrack.so"

!ELSE

ALL : "libhttpd - Win32 Release" "libaprutil - Win32 Release" "libapr - Win32 Release" "$(OUTDIR)\mod_usertrack.so"

!ENDIF

!IF "$(RECURSE)" == "1"
CLEAN :"libapr - Win32 ReleaseCLEAN" "libaprutil - Win32 ReleaseCLEAN" "libhttpd - Win32 ReleaseCLEAN"
!ELSE
CLEAN :
!ENDIF
	-@erase "$(INTDIR)\mod_usertrack.obj"
	-@erase "$(INTDIR)\mod_usertrack.res"
	-@erase "$(INTDIR)\mod_usertrack_src.idb"
	-@erase "$(INTDIR)\mod_usertrack_src.pdb"
	-@erase "$(OUTDIR)\mod_usertrack.exp"
	-@erase "$(OUTDIR)\mod_usertrack.lib"
	-@erase "$(OUTDIR)\mod_usertrack.pdb"
	-@erase "$(OUTDIR)\mod_usertrack.so"

"$(OUTDIR)" :
	if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"

CPP=cl.exe
CPP_PROJ=/nologo /MD /W3 /Zi /O2 /Oy- /I "../../include" /I "../../srclib/apr/include" /I "../../srclib/apr-util/include" /D "NDEBUG" /D "WIN32" /D