[STATUS] (flood) Wed Mar 12 23:45:47 EST 2003
flood STATUS: -*-text-*-
Last modified at [$Date: 2002/09/06 10:24:42 $]

Release:
    1.0: Released July 23, 2002
    milestone-03: Tagged January 16, 2002
    ASF-transfer: Released July 17, 2001
    milestone-02: Tagged August 13, 2001
    milestone-01: Tagged July 11, 2001 (tag lost during transfer)

RELEASE SHOWSTOPPERS:

    * Everything needs to work perfectly

Other bugs that need fixing:

    * I get a SIGBUS on Darwin with our examples/round-robin-ssl.xml
      config, on the second URL. I'm using OpenSSL 0.9.6c 21 dec 2001.

    * iPlanet sends Content-length - there is a hack in there now to
      recognize it. However, all HTTP headers need to be normalized
      before checking their values. This isn't easy to do. Grr.

    * OpenSSL 0.9.6 segfaults under high load. Upgrade to OpenSSL 0.9.6b.
      Aaron says: I just found a big bug that might have been causing
      this all along (we weren't closing ssl sockets). How can I
      reproduce the problem you were seeing to verify if this was the fix?

    * SEGVs when /tmp/.rnd doesn't exist are bad. Make it configurable
      and at least bomb with a good error message. (See Doug's patch.)
      Status: This is fixed, no?

    * If APR has disabled threads, flood should as well. We might want to
      have an enable/disable parameter that does this also, providing an
      error if threads are desired but not available.

    * flood needs to clear pools more often. With a long-running test it
      can chew up memory very quickly. We should just bite the bullet and
      create/destroy/clear pools for each level of our model: farm,
      farmer, profile, url/request-cycle, etc.

    * APR needs to have a unified interface for ephemeral port
      exhaustion, but apparently Solaris and Linux return different
      errors at the moment. Fix this in APR, then take advantage of it
      in flood.

    * The examples/analyze-relative scripts fail when there are fewer
      than 5 unique URLs.
Other features that need writing:

    * More analysis and graphing scripts are needed.

    * Write a robust tool (using tethereal perhaps) to take network dumps
      and convert them to flood's XML format.
      Status: Justin volunteers. Aaron had a script somewhere that is a start.

    * Get chunked encoding support working.
      Status: Justin volunteers. He got sidetracked by the httpd
      implementation of input filtering and never finished this. This is
      a stopgap until apr-serf is completed.

    * Maybe we should make randfile and capath runtime directives that
      come out of the XML, instead of autoconf parameters.

    * We are using apr_os_thread_current() and getpid() in some places
      when what we really want is a GUID. The GUID will be used to
      correlate raw output data with each farmer. We may wish to print a
      unique ID for each of farm, farmer, profile, and url to help in
      postprocessing.

    * We are using strtol() in some places and strtoll() in others. Pick
      one (Aaron says strtol(), but he's not sure).

    * Validation of responses (known C-L, specific strings in response).
      Status: Justin volunteers.

    * HTTP error codes (i.e. teach it about 302s).
      Justin says: Yeah, this won't be with round_robin as implemented.
      Need a linked-list-based profile where we can insert new URLs into
      the sequence.

    * Farmer (single thread, multiple profiles)
      Status: Aaron says: If you have threads, then any Farmer can be run
      as part of any Farm. If you don't have threads, you can currently
      only run one Farmer named Joe right now (this will be changed so
      that if you don't have threads, flood will attempt to run all
      Farmers in serial under one process).

    * Collective (single host, multiple farms)
      This is a number of Farms that have been fork()ed into child
      processes.

    * Megaconglomerate (multiple hosts each running a collective)
      This is a number of Collectives running on a number of hosts,
      invoked via RSH/SSH or maybe even some proprietary mechanism.
    * Other types of urllists
      a) Random / Random-weighted
      b) Sequenced (useful with cookie propagation)
      c) Round-robin
      d) Chaining of the above strategies
      Status: Round-robin is complete.

    * Other types of reports
      Status: Aaron says: simple reports are functional. Justin added a
      new type that simply prints the approximate timestamp when the test
      was run, and the result as OK/FAIL; it is called easy reports (see
      flood_easy_reports.h). Furthermore, simple_reports and easy_reports
      both print out the current requesting URI line.

Documentation that needs writing:

    * Documentation?
Re: flood proxy (was: [STATUS] (flood))
--On Thursday, March 13, 2003 10:46 AM +0100 Jacek Prucia [EMAIL PROTECTED] wrote:

Wouldn't it be better if we used a proxy instead of all-purpose network software? I was thinking about mod_proxy_flood.so with some function attached to request forwarding and a simple response handler which could allow users to:

Yup.

It would be cool if we could extend mod_proxy, but if this is impossible (because of some technical issues I'm not aware of) we might do our own thing. Writing a small, customized proxy in C/APR, Perl, Python, whatever shouldn't be all that hard.

If you use httpd-2.0 as a framework, I think you could get away with something like the following: install an input filter that intercepts all inbound data and writes the input body to a file in a special format that is essentially like flood's URL XML syntax for a urllist.

If you'd like to pursue this, let me know and I can try to give more specifics.

-- justin
Re: flood proxy (was: [STATUS] (flood))
On Thursday, March 13, 2003, at 01:46 AM, Jacek Prucia wrote:

* Write robust tool (using tethereal perhaps) to take network dumps and convert them to flood's XML format. Status: Justin volunteers. Aaron had a script somewhere that is a start.

Wouldn't it be better if we used a proxy instead of all-purpose network software? I was thinking about mod_proxy_flood.so with some function attached to request forwarding and a simple response handler which could allow users to:

1. enable/disable flood proxy
2. edit gathered urls (only delete for now, later full edit)
3. dump flood file

Not a bad idea. Things like tethereal and tcptrace are definitely, like you say, all-purpose, but for just collecting URLs and timestamps, that sounds like a good idea to me.

-aaron
RE: [PATCH] openssl versions?
-----Original Message-----
From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED]

Yes - all the way back. They provided patches for the older versions, but RSA seems to be less and less enthusiastic about patching the ancient 2001 and prior releases, e.g. 1.2/1.3. :)

Yup. I had a similar experience.

I'd be happy to see us support the '2.0 generation' - if we focus on 2.3, yet provide mechanics to deal with the fixups to 2.1/2.2 and maybe even 2.0, then I'd be happy with that.

How about just 2.3 ONLY? The problem is there are no data points available to see if anybody (other than you and me) is interested in getting mod_ssl to work with 2.3. Further, I'd not be surprised if RSA tries to push people to use the newer releases - in which case, we can just enable for 2.3 and later.

Anyways, nice patch - I'd prefer if you would follow the Fix One Thing rule of committing this patch; e.g. take it back apart and have each commit labeled as to its single purpose. But at least here, I'm +1 for this to go into 2.1-dev and I'll help continue to review/improve it in-tree.

Oh yes, definitely - I'll break it into small patches and commit them separately. I had no intention of committing it to 2.0-dev.

-Madhu
RE: [PATCH] openssl configuration (v2)
-----Original Message-----
From: Justin Erenkrantz [mailto:[EMAIL PROTECTED]

Also, when you commit, please just toss the old macro. There is zero sense in keeping the cruft around. -- justin

Sure, will do.

P.S. Madhu, *please*, *please*, *please* use unified diffs in the future.

Sorry about that. The unified diff for that particular piece was pretty confusing, so I just used the regular diff and added the unified diff as an attachment. Point taken; I will try to use unified diffs for future posts.

-Madhu
Re: cvs commit: httpd-2.0 CHANGES
* Aaron Bannert ([EMAIL PROTECTED]) wrote:

On Wednesday, March 12, 2003, at 12:51 PM, Oden Eriksson wrote:

Anyway..., despite this enormous disregard or whatever it is called, Mandrake Linux will be the first distribution shipping apache2 (my packaging), _plus_ a whole bunch of other server-like stuff I have packaged.

Um, hasn't RH8.0 been shipping Apache 2 for a few months in the standard distro?

*cough* Debian since 2.0.35 or so *cough* ;-)

Anyway, enough of this. Packaging is useful, although most ASF *coders* will necessarily be using source builds, so packages may slip under their radar.

Cheers, -Thom
[PATCH] resend: fix fd leaks
[Resend. There are currently two outstanding fixes for public security
issues in the 2.0 stable branch: this, and the escaping of untrusted
request data in mod_log_config which Andre forward-ported from 1.3.]

Hi,

here is a version of the patch in #17206 which removes the current fd
leaks. Most of these were introduced in this commit:

http://marc.theaimsgroup.com/?l=apache-cvs&m=99531770520998&w=2

though the pod leak has been around longer. I haven't checked whether the
mod_file_cache change in that commit should be reverted as well. The
patch is against 2.0 HEAD.

Submitted by: Christian Kratzer, Bjoern A. Zeeb

Index: modules/loggers/mod_log_config.c
===================================================================
RCS file: /store/cvs/root/httpd-2.0/modules/loggers/mod_log_config.c,v
retrieving revision 1.99
diff -u -r1.99 mod_log_config.c
--- modules/loggers/mod_log_config.c    14 Feb 2003 04:17:34 -0000    1.99
+++ modules/loggers/mod_log_config.c    6 Mar 2003 15:38:07 -0000
@@ -1291,7 +1291,6 @@
                          "could not open transfer log file %s.", fname);
             return NULL;
         }
-        apr_file_inherit_set(fd);
         return fd;
     }
 }

Index: modules/mappers/mod_rewrite.c
===================================================================
RCS file: /store/cvs/root/httpd-2.0/modules/mappers/mod_rewrite.c,v
retrieving revision 1.148
diff -u -r1.148 mod_rewrite.c
--- modules/mappers/mod_rewrite.c    1 Mar 2003 18:35:50 -0000    1.148
+++ modules/mappers/mod_rewrite.c    6 Mar 2003 15:38:08 -0000
@@ -3429,7 +3429,6 @@
                          "file %s", fname);
             exit(1);
         }
-        apr_file_inherit_set(conf->rewritelogfp);
     }
     return;
 }

Index: server/log.c
===================================================================
RCS file: /store/cvs/root/httpd-2.0/server/log.c,v
retrieving revision 1.130
diff -u -r1.130 log.c
--- server/log.c    10 Feb 2003 16:27:28 -0000    1.130
+++ server/log.c    6 Mar 2003 15:38:08 -0000
@@ -320,8 +320,6 @@
                      ap_server_argv0, fname);
         return DONE;
     }
-
-        apr_file_inherit_set(s->error_log);
     }
     return OK;

Index: server/mpm_common.c
===================================================================
RCS file: /store/cvs/root/httpd-2.0/server/mpm_common.c,v
retrieving revision 1.103
diff -u -r1.103 mpm_common.c
--- server/mpm_common.c    3 Feb 2003 17:53:19 -0000    1.103
+++ server/mpm_common.c    6 Mar 2003 15:38:08 -0000
@@ -410,6 +410,10 @@
     apr_sockaddr_info_get(&(*pod)->sa, ap_listeners->bind_addr->hostname,
                           APR_UNSPEC, ap_listeners->bind_addr->port, 0, p);

+    /* close these before exec. */
+    apr_file_unset_inherit((*pod)->pod_in);
+    apr_file_unset_inherit((*pod)->pod_out);
+
     return APR_SUCCESS;
 }

Index: server/mpm/worker/pod.c
===================================================================
RCS file: /store/cvs/root/httpd-2.0/server/mpm/worker/pod.c,v
retrieving revision 1.7
diff -u -r1.7 pod.c
--- server/mpm/worker/pod.c    3 Feb 2003 17:53:26 -0000    1.7
+++ server/mpm/worker/pod.c    6 Mar 2003 15:38:08 -0000
@@ -76,6 +76,10 @@
      */
     (*pod)->p = p;

+    /* close these before exec. */
+    apr_file_unset_inherit((*pod)->pod_in);
+    apr_file_unset_inherit((*pod)->pod_out);
+
     return APR_SUCCESS;
 }
Advanced Mass Hosting Module
Resending this to this list as I got no response on the users list.

Currently, we are using flat config files generated by our website provisioning software to support our mass-hosted customers. The reason for doing it this way, and not using the mod_vhost_alias module, is that we need to be able to turn on/off CGI, PHP, Java, shtml etc. on a per-vhost basis. We need the power that having a distinct VirtualHost directive for each site gives you. Is there a better way?

What I have in mind is a module that fits in with our current LDAP-based infrastructure. Currently, LDAP services our mail users, and I would like to see the Apache mass hosting configuration held in LDAP as well. In this way, we can just scale by adding more apache servers, mounting the shared docroot and pointing them to the LDAP server.

The LDAP entry would look something like this:

# www.example.com, base
dn: uid=www.example.com, o=base
siteGidNumber: 10045
siteUidNumber: 10045
objectClass: top
objectClass: apacheVhost
serverName: www.example.com
serverAlias: example.com
serverAlias: another.example.com
docRoot: /data/web/04/09/example.com/www
vhostStatus: enabled
phpStatus: enabled
shtmlStatus: enabled
cgiStatus: enabled
dataOutSoftLimit: 100 (in bytes per month)
dataOutHardLimit: 1000
dataInSoftLimit: 100
dataInHardLimit: 1000
dataThrottleRate: 100 (in bits/sec)

Then, as a request came in, the imaginary mod_advanced_masshosting module would first check whether it had the information about the domain already cached in memory (to avoid hitting LDAP for every HTTP request, which would be a Bad Idea) and then, if not, it would grab the entry from LDAP, cache it, and service the incoming requests. The cache itself would need to be shared among the actual child apache processes somehow.

In addition to these features, the module would keep track of the amount of data transferred in/out for each vhost and apply a soft/hard limit when the limits defined in the LDAP entry were reached.

The amount of actual data transferred would periodically be written to either a GDBM file or even to an LDAP entry (not sure what is best - probably LDAP, for consistency) and the data would also need to be shared among any servers in a cluster somehow. This would enable ISPs to bill on a per-vhost basis fairly accurately, and to limit abusive sites.

Now, I've looked around for something like this, and as far as I can see, there isn't anything that does vhosting quite like this, except for the commercial systems out there such as Zeus.

Do people think this is a good approach? Will another method give me what I want? (LDAP is not a dependency, just a nice-to-have.)

Finally, I am thinking about starting an Open Source project to write this module. My C is pretty primitive right now, though I have got simple LDAP lookup code working already (just not in Apache, yet). Would anyone else see this as a worthwhile project for Apache? It certainly would solve our problems, but it sometimes feels like I'm trying to fix a simple problem with something very heavy - though implemented correctly, I don't think performance will be a problem.

Comments gratefully received :)

Regards, Nathan.

--
Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan - http://www.valuecommerce.ne.jp

If you think nobody cares if you're alive, try missing a couple of car payments.
Re: why installing *.exp files on platforms where they aren't needed?
Stas Bekman wrote:

I get these 3 files installed together with other httpd-2/apr files on Linux:

~/httpd/prefork/modules/httpd.exp
~/httpd/prefork/lib/apr.exp
~/httpd/prefork/lib/aprutil.exp

Wouldn't it make sense to not install them on OSes where they have no use?

These are needed only on AIX.
Re: AuthLDAPCertDBPath ???
Trevor Hurst wrote:

I'm wondering if I should see mod_ldap in the static listing of the modules I compiled in? I don't see mod_ldap, but a few others such as mod_auth_ldap.c and util_ldap.c. Am I missing something?

util_ldap.c is the same as mod_ldap; it's just named funny. Look in the source code for util_ldap.c - you should see the directives towards the end of the file. If you don't, you need to check out the latest version from CVS.

Regards,
Graham
--
[EMAIL PROTECTED]
There's a moon over Bourbon Street tonight...
Re: Kerberos authentication
Dirk-Willem van Gulik wrote:

On Wed, 12 Mar 2003, Bill Stoddard wrote:

Anyone have any first-hand experience with Kerberos authentication in the server?

.. well - we have ripped code out of telnet(d) from KTH (their Heimdal) on *BSD to do this for a finance customer - who had some (silly but golden) policy which made Kerberos the only acceptable auth method across certain internal network boundaries. But we only did auth, nothing else, and only between an apache server and an apache proxy - not between server and client. Nor did we do anything like the '-x' from telnetd for encryption. It worked well, fast and reliably - which was a surprise, as the use you now make of Kerberos is quite different than, say, for telnet or an X display: lots of concurrent auths for lots of connections.

See also http://modauthkerb.sourceforge.net/ which is a local kerb auth (i.e. the password goes basic auth over http) and http://meta.cesnet.cz/software/heimdal/mod_auth_kerb.c which is a hack on the above for the real thing. (It is listed on that page - but not linked in.)

Do you need it for anything specific? Can I help?

I got a question from a colleague about getting 'Negotiate' working with IE. My short answer was 'I have no idea', but it looked interesting enough to ask the folks on [EMAIL PROTECTED]

Bill
Re: [PATCH] openssl versions?
Hi there,

Thanks for filling in the SSL-C bits Madhu, looks like a clean fit.

* MATHIHALLI,MADHUSUDAN (HP-Cupertino,ex1) ([EMAIL PROTECTED]) wrote:

-----Original Message-----
From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED]

[snip]

Anyways, nice patch - I'd prefer if you would follow the Fix One Thing rule of committing this patch; e.g. take it back apart and have each commit labeled as to its single purpose. But at least here, I'm +1 for this to go into 2.1-dev and I'll help continue to review/improve it in-tree.

Oh yes, definitely - I'll break it into small patches and commit them separately. I had no intention of committing it to 2.0-dev.

I would, for my part, be quite cautious about putting the new detection code into any existing stable branch. There's a certain amount of apples and oranges involved here - the new detection is essentially autoconf all the way, and the existing stuff contains lots of hard-coded paths and *file* checks (as opposed to $(CPP)/$(CC) tests). The autoconf approach is clearly preferable, but whether this has the capacity to bite anyone whose build system or installation target is dependent on the oddities of the existing behaviour is another question.

Cheers,
Geoff

--
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/
Re: cvs commit: httpd-2.0/server/mpm/winnt child.c mpm_winnt.c mpm_winnt.h
[EMAIL PROTECTED] wrote:

ake 2003/03/04 14:15:52

Modified: .        CHANGES
          server/mpm/winnt child.c mpm_winnt.c mpm_winnt.h
Log:
Added the WindowsSocketsWorkaround directive for Windows NT/2000/XP to work around problems with certain VPN and Firewall products that have buggy AcceptEx implementations

Revision  Changes  Path
1.1103    +5 -0    httpd-2.0/CHANGES

<snip>

static const command_rec winnt_cmds[] = {
LISTEN_COMMANDS,
@@ -224,6 +243,9 @@
     "Number of threads each child creates" ),
 AP_INIT_TAKE1("ThreadLimit", set_thread_limit, NULL, RSRC_CONF,
     "Maximum worker threads in a server for this run of Apache"),
+AP_INIT_TAKE1("WindowsSocketsWorkaround", set_sockets_workaround, NULL, RSRC_CONF,
+    "Set \"on\" to work around buggy Winsock provider implementations of certain VPN or Firewall software"),
+
 { NULL }
};

Rather than WindowsSocketsWorkaround, why not WinUseWinsock1 or something similar? It would be better, I think, if the directive somehow indicated exactly what it was doing (causing the winnt MPM to use the select/accept winsock1 calls rather than AcceptEx, a winsock2 call).

Bill
Re: cvs commit: httpd-2.0/server/mpm/winnt child.c mpm_winnt.c mpm_winnt.h
At 12:57 PM 3/13/2003, Bill Stoddard wrote:

<snip>

Rather than WindowsSocketsWorkaround, why not WinUseWinsock1 or something similar? It would be better, I think, if the directive somehow indicated exactly what it was doing (causing the winnt MPM to use the select/accept winsock1 calls rather than AcceptEx, a winsock2 call).

That would be a misnomer - since our handle inheritance requires winsock2.

What about WindowsUseAcceptEx on|off? That's really descriptive of what the workaround does. Or we could call it WindowsFastSockets on|off, which is the effect of the workaround.

I was also looking at this entire patch - it seems silly to retest for both the version of Windows and this flag throughout the code. Why not simply initialize it in the register_hooks() call based on the OS version? Then let the directive override its default value. We should prevent Win9x users from enabling the flag, however :-)

Bill
Re: cvs commit: httpd-2.0/server/mpm/winnt child.c mpm_winnt.c mpm_winnt.h
William A. Rowe, Jr. wrote:

<snip>

What about WindowsUseAcceptEx on|off? That's really descriptive of what the workaround does. Or we could call it WindowsFastSockets on|off, which is the effect of the workaround.

I was also looking at this entire patch - it seems silly to retest for both the version of Windows and this flag throughout the code. Why not simply initialize it in the register_hooks() call based on the OS version? Then let the directive override its default value.

That might be cool. Post a patch and I'll review it.

We should prevent Win9x users from enabling the flag, however :-)

WindowsUseAcceptEx is much better, I think, since it is on by default on systems which support it. Another (better?) suggestion: WindowsDisableAcceptEx (with no arguments)?

Bill
Re: cvs commit: httpd-2.0/server/mpm/winnt child.c mpm_winnt.c mpm_winnt.h
Rather than WindowsSocketsWorkaround, why not WinUseWinsock1 or something similar? It would be better, I think, if the directive somehow indicated exactly what it was doing (causing the winnt MPM to use the select/accept winsock1 calls rather than AcceptEx, a winsock2 call).

We're still using winsock2 with this directive; it's our use of some of the MS winsock extension calls (mswsock.dll) that are biting us. In fact, I wrestled with the same question myself before deciding that OtherBill's suggestion was probably the best. If we were to be more to the point, maybe something like WindowsSocketsDontUseAcceptEx - but apart from the length, how many webmasters are likely to know what AcceptEx is?

I'm open to renaming if we can come up with something more suitable,

Allan
Servlet API changes and mod_proxy...
The upcoming servlet API will include some new methods to retrieve the connection IP and port in the case of proxied HTTP requests. It will basically be required to obtain the following:

- IP + port of the remote client
- IP + port requested by the client
- IP + port where the request was received by the server

(The latter are different in the case of a proxied request, because you can get both the IP + port of the proxy server where the original request was received and the IP + port of the servlet container where the proxied request was forwarded to.)

Would it make any sense to extend the already-present capability of mod_proxy with X-Forwarded-... headers to include the original client port number and the IP + port of the Apache instance receiving the original request? The X-Forwarded-For would contain something like 192.168.1.2:19876 (instead of just the first bit before the ':'), and I would need to add another header containing the IP of the client connection. I was thinking about something like:

X-Forwarded-Address: 10.0.0.1:80

What do you guys think?

Pier
Re: Servlet API changes and mod_proxy...
Pier Fumagalli [EMAIL PROTECTED] wrote:

<snip>

Would it make any sense to extend the already-present capability of mod_proxy with X-Forwarded-... headers to include the original client port number and the IP + port of the Apache instance receiving the original request? The X-Forwarded-For would contain something like 192.168.1.2:19876 (instead of just the first bit before the ':'), and I would need to add another header containing the IP of the client connection (I was thinking about something like: X-Forwarded-Address: 10.0.0.1:80). What do you guys think?

Forget about the second part... That should be the Via header, right?

Pier
Sending multiple buffers using writev and WSASend
When sending multiple buckets on a socket, do writev and WSASend create a packet per buffer, or a single packet for all buffers? I would assume the answer might depend on the operating system, or maybe on the size of the buffers. I'm interested in Windows 2000 and Linux.

Juan C. Rivera
Citrix Systems, Inc
Tel: (954) 229-6391
Re: Servlet API changes and mod_proxy...
Pier Fumagalli wrote:

The X-Forwarded-For would contain something like 192.168.1.2:19876 (instead of just the first bit before the ':'), and I would need to add another header containing the IP of the client connection (I was thinking about something like: X-Forwarded-Address: 10.0.0.1:80). What do you guys think?

I'm not sure this is a good thing, especially since this header (and the 'Via' one) has been around for a long time, and we might break things by changing what consumers expect. It might just be easier to add an X-Forwarded-Port header.
need to read error_logs from httpd ?
Hi,

is there a need for httpd to open the error_log for reading?

--- httpd-2.0/server/log.c.orig    Thu Mar 13 19:47:20 2003
+++ httpd-2.0/server/log.c    Thu Mar 13 19:47:48 2003
@@ -313,7 +313,7 @@
         return DONE;
     }
     if ((rc = apr_file_open(&s->error_log, fname,
-                            APR_APPEND | APR_READ | APR_WRITE | APR_CREATE,
+                            APR_APPEND | APR_WRITE | APR_CREATE,
                             APR_OS_DEFAULT, p)) != APR_SUCCESS) {
         ap_log_error(APLOG_MARK, APLOG_STARTUP, rc, NULL,
                      "%s: could not open error log file %s.",

--
Bjoern A. Zeeb    bzeeb at Zabbadoz dot NeT
56 69 73 69 74    http://www.zabbadoz.net/
Remove entries or Move entries in CHANGES
Hi,

On a lighter note :) ... I would think "Move entries to the current ..." would be more appropriate than "Remove ...".

-Madhu

$ head -3 CHANGES
Changes with Apache 2.1.0-dev

[Remove entries to the current 2.0 section below, when backported]
Re: Advanced Mass Hosting Module
I would also love to see such a module available, and I'm very willing to contribute in any way I can; however, I'm skill-less in the C arena :( Good luck.

Tim

----- Original Message -----
From: Nathan Ollerenshaw [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 10:27 PM
Subject: Advanced Mass Hosting Module

<snip>
RE: Advanced Mass Hosting Module
These are neat ideas. At a few companies I've worked for we already do similar things but we have scripts that generate the httpd.conf files and distribute them out to the web servers and gracefully restart. Adding a new web server machine to the mix is as simple as adding the host name to the distribution script. What you're talking about doing sounds like a lot more complexity to achieve a similar thing, and more complexity means there's a lot more that can go wrong. For instance, what are you going to do if the LDAP server is down, are many not-yet-cached virtual hosts just going to fail? In our scenario it's solved simply and easily by the generation script simply failing and nothing being copied (but at least the web servers keep working fine with the last config revision, so not many/any end user web surfers will notice the outage). Dave -Original Message- From: Nathan Ollerenshaw [mailto:[EMAIL PROTECTED] Sent: Thursday, March 13, 2003 3:28 AM To: [EMAIL PROTECTED] Subject: Advanced Mass Hosting Module Resending this to this list as I got no response on users list. Currently, we are using flat config files generated by our website provisioning software to support our mass hosted customers. The reason for doing it this way, and not using the mod_vhost_alias module is because we need to be able to turn on/off CGI, PHP, Java, shtml etc on a per vhost basis. We need the power that having a distinct VirtualHost directive for each site gives you. Is there a better way? What I have in mind is a module that fits in with our current LDAP based infrastructure. Currently, LDAP services our mail users, and I would like to see the Apache mass hosting configuration held in LDAP as well. In this way, we can just scale by adding more apache servers, mounting the shared docroot and pointing them to the LDAP server. 
The LDAP entry would look something like this:

# www.example.com, base
dn: uid=www.example.com, o=base
siteGidNumber: 10045
siteUidNumber: 10045
objectClass: top
objectClass: apacheVhost
serverName: www.example.com
serverAlias: example.com
serverAlias: another.example.com
docRoot: /data/web/04/09/example.com/www
vhostStatus: enabled
phpStatus: enabled
shtmlStatus: enabled
cgiStatus: enabled
dataOutSoftLimit: 100 (in bytes per month)
dataOutHardLimit: 1000
dataInSoftLimit: 100
dataInHardLimit: 1000
dataThrottleRate: 100 (in bits/sec)

Then, as a request came in, the imaginary mod_advanced_masshosting module would first check whether it had the information about the domain already cached in memory (to avoid hitting LDAP for every HTTP request, which would be a Bad Idea); if not, it would grab the entry from LDAP, cache it, and service the incoming requests. The cache itself would need to be shared among the actual child Apache processes somehow.

In addition to these features, the module would keep track of the amount of data transferred in and out for each vhost and apply a soft/hard limit when the limits defined in the LDAP entry were reached. The amount of actual data transferred would periodically be written to either a GDBM file or even to an LDAP entry (not sure what is best - probably LDAP for consistency), and the data would also need to be shared among any servers in a cluster somehow. This would enable ISPs to bill on a per-vhost basis fairly accurately, and to limit abusive sites.

Now, I've looked around for something like this, and as far as I can see, there isn't anything that does vhosting quite like this, except for commercial systems such as Zeus. Do people think this is a good approach? Will another method give me what I want? (LDAP is not a dependency, just a nice-to-have.)

Finally, I am thinking about starting an Open Source project to write this module.
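The lookup flow described above (check an in-memory cache first, fall back to LDAP only on a miss) could be sketched roughly as follows. This is a minimal, single-process illustration, not module code: every name here (vhost_rec, vhost_lookup, the stubbed ldap_fetch_vhost) is hypothetical, and a real module would use APR pools, an actual LDAP search against the apacheVhost entry, and some shared-memory story for the cache.

```c
/* Hypothetical sketch of the cache-then-LDAP vhost lookup.
 * The LDAP fetch is stubbed out; only the caching logic is real. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct vhost_rec {
    char server_name[256];
    char doc_root[256];
    int  php_enabled;        /* would come from phpStatus: enabled */
    struct vhost_rec *next;  /* chained cache entries */
} vhost_rec;

static vhost_rec *cache_head = NULL;

/* Stand-in for the LDAP search; a real implementation would query
 * uid=<hostname>,o=base and map the apacheVhost attributes. */
static vhost_rec *ldap_fetch_vhost(const char *host)
{
    vhost_rec *v = calloc(1, sizeof(*v));
    if (!v)
        return NULL;
    snprintf(v->server_name, sizeof(v->server_name), "%s", host);
    snprintf(v->doc_root, sizeof(v->doc_root), "/data/web/%s/www", host);
    v->php_enabled = 1;
    return v;
}

/* Check the in-memory cache first; only fall back to LDAP on a miss,
 * so we avoid a directory round trip on every HTTP request. */
vhost_rec *vhost_lookup(const char *host)
{
    vhost_rec *v;
    for (v = cache_head; v; v = v->next)
        if (strcmp(v->server_name, host) == 0)
            return v;                 /* cache hit */
    v = ldap_fetch_vhost(host);       /* cache miss: hit the backend */
    if (v) {
        v->next = cache_head;         /* insert at head of chain */
        cache_head = v;
    }
    return v;
}
```

A linked list is obviously not what you would ship (a hash table keyed on Host: plus an expiry time would be the minimum), but it shows where the "shared among the child processes somehow" problem lives: cache_head is per-process unless it is moved into shared memory.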
My C is pretty primitive right now, though I have got simple LDAP lookup code working already (just not in Apache, yet). Would anyone else see this as a worthwhile project for Apache? It would certainly solve our problems, but it sometimes feels like I'm trying to fix a simple problem with something very heavy - though if implemented correctly, I don't think performance will be a problem.

Comments gratefully received :)

Regards, Nathan.

--
Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan - http://www.valuecommerce.ne.jp

If you think nobody cares if you're alive, try missing a couple of car payments.
Re: Advanced Mass Hosting Module
On Thu, Mar 13, 2003 at 04:55:19PM -0800, David Burry wrote:

These are neat ideas. At a few companies I've worked for we already do similar things but we have scripts that generate the httpd.conf files and distribute them out to the web servers and gracefully restart. Adding a new web server machine to the mix is as simple as adding the host name to the distribution script.

I've done the same in the past. It works fine, but becomes unwieldy when you're talking about thousands of sites per server. Graceful restarts also take a nontrivial amount of time in this environment.

What you're talking about doing sounds like a lot more complexity to achieve a similar thing, and more complexity means there's a lot more that can go wrong. For instance, what are you going to do if the LDAP server is down, are many not-yet-cached virtual hosts just going to fail?

Redundant LDAP servers? Or even pluggable backends - keep a DBM-format copy on the local filesystem as a backup. I imagine many people would be happy with a default vhost specified in the config, which could display an "Oops! Something's broken!" page.

In my experience, the 80:20 rule definitely applies here - and I would be inclined to suggest the ratio is even more severe. That is, more than 80% of the vhosts contribute less than 20% of the load. While the dynamic reconfiguration afforded by this proposal is a big win, I'm more impressed with the opportunity to minimise the amount of wasted resources in large environments.

I'm interested to hear whether this is feasible for development against 2.0, as I don't believe the current architecture allows for plugging in this sort of functionality as a 3rd-party module.

Zac
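The pluggable-backend idea Zac floats (try LDAP, fall back to a local DBM-format copy, and finally to a default "Oops" vhost) might look roughly like this. The two backends here are stubs and every name is hypothetical; the point is only the fallback chain, not the storage formats.

```c
/* Hypothetical sketch of a backend fallback chain for vhost lookup.
 * Each backend returns a docroot, or NULL if it is down or has no
 * entry for this host. */
#include <stddef.h>
#include <string.h>

typedef const char *(*backend_fn)(const char *host);

/* Stubs standing in for "query the LDAP server" and "read the local
 * DBM copy".  Here we pretend LDAP is down and the DBM copy works. */
static const char *ldap_backend(const char *host)
{
    (void)host;
    return NULL;                       /* simulate an LDAP outage */
}

static const char *dbm_backend(const char *host)
{
    (void)host;
    return "/data/web/cached/www";     /* served from the local copy */
}

/* Backends are tried in order; NULL terminates the chain. */
static backend_fn backends[] = { ldap_backend, dbm_backend, NULL };

/* Walk the chain; if nothing answers, serve the default vhost
 * (which would carry the "Oops! Something's broken!" page)
 * instead of failing the request outright. */
const char *resolve_docroot(const char *host)
{
    backend_fn *b;
    for (b = backends; *b; b++) {
        const char *root = (*b)(host);
        if (root)
            return root;
    }
    return "/data/web/default/www";    /* default vhost fallback */
}
```

The attraction of this shape is that "what happens when LDAP is down" stops being a special case: it is just the first backend declining to answer.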
Re: Advanced Mass Hosting Module
Resending this to this list as I got no response on the users list.

Sorry, I missed the original version of this post.

Currently, we are using flat config files generated by our website provisioning software to support our mass hosted customers. The reason for doing it this way, and not using the mod_vhost_alias module, is because we need to be able to turn on/off CGI, PHP, Java, shtml etc on a per vhost basis. We need the power that having a distinct VirtualHost directive for each site gives you. Is there a better way?

The mod_vhost_alias way came from a heritage of very basic web site provisioning, with little change in architecture since 1996. The model was abusing the filesystem as a database -- we were using permissions on users' home directories to record whether they had been barred or had exceeded their quota. We also abused the DNS as a database, which is where "UseCanonicalName DNS" came from. From a more recent perspective this is foolish (or at least naive).

In addition to these features, the module would keep track of the amount of data transferred in and out for each vhost and apply a soft/hard limit when the limits defined in the LDAP entry were reached. The amount of actual data transferred would periodically be written to either a GDBM file or even to an LDAP entry (not sure what is best - probably LDAP for consistency) and the data would also need to be shared among any servers in a cluster somehow. This would enable ISPs to bill on a per vhost basis fairly accurately, and limit abusive sites.

This part of it should be separate from the vhosting side of things. How you provision a web site is independent of how you accumulate stats on it. It's a logging module, which is naturally separate from a URI-to-filename mapping module -- though a proper vhosting module needs to hook into the DirectoryWalk side of things to do permissions.

Will another method give me what I want?
(LDAP is not a dependency, just a nice-to-have)

Clever application of .htaccess files, directory sections containing AllowOverride directives, etc. *may* be good enough, but it's a very blunt tool. Sounds like you're aiming for something good. Lots of people have asked me for a database-driven mod_vhost_alias (which misses the point, but) so there is a clear need.

Don't worry too much about the project management side of things -- just write the code and the docs and publish it, then keep polishing and answering emails.

Tony.
--
f.a.n.finch [EMAIL PROTECTED] http://dotat.at/
BERWICK ON TWEED TO WHITBY: SOUTHEAST 2 OR 3, INCREASING 4 PERHAPS 5. FAIR. MODERATE OR GOOD. SLIGHT, INCREASING MODERATE LATER.
Re: Advanced Mass Hosting Module
On Friday, March 14, 2003, at 10:15 AM, Zac Stevens wrote:

On Thu, Mar 13, 2003 at 04:55:19PM -0800, David Burry wrote:

These are neat ideas. At a few companies I've worked for we already do similar things but we have scripts that generate the httpd.conf files and distribute them out to the web servers and gracefully restart. Adding a new web server machine to the mix is as simple as adding the host name to the distribution script.

I've done the same in the past. It works fine, but becomes unwieldy when you're talking about thousands of sites per server. Graceful restarts also take a nontrivial amount of time in this environment.

Even a few hundred sites are now taking an inordinate time to do a graceful - our config is on NFS, with a separate file for each site, a design decision that I am beginning to regret... I did some testing, but I didn't account for the fact that I'd be loading the configs over NFS. Not great.

What you're talking about doing sounds like a lot more complexity to achieve a similar thing, and more complexity means there's a lot more that can go wrong. For instance, what are you going to do if the LDAP server is down, are many not-yet-cached virtual hosts just going to fail?

Redundant LDAP servers? Or even pluggable backends - keep a DBM-format copy on the local filesystem as a backup. I imagine many people would be happy with a default vhost specified in the config, which could display an "Oops! Something's broken!" page.

We use redundancy everywhere; the backend LDAP is no exception to this rule. The main reason for LDAP is that we have a front-end provisioning system that creates accounts for FTP and Email in LDAP; it would be nice to keep the website configurations in there too, without the provisioning system having to write Apache config files. You're right, of course.
Some form of graceful failure would be needed, but it would probably be a 'Temporarily Unavailable' error with a custom error page in Japanese and English (most of our customers are Japanese).

In my experience, the 80:20 rule definitely applies here - and I would be inclined to suggest the ratio is even more severe. That is, more than 80% of the vhosts contribute less than 20% of the load. While the dynamic reconfiguration afforded by this proposal is a big win, I'm more impressed with the opportunity to minimise the amount of wasted resources in large environments.

This equates with my experience too. It irks me that Apache spends a large amount of time and memory holding the configuration for a bunch of sites that only get hit maybe once a day (when the owner loads the page to see if the hit counter has increased - HAH!)

I'm interested to hear whether this is feasible for development against 2.0, as I don't believe the current architecture allows for plugging in this sort of functionality as a 3rd-party module.

I was looking at implementing it in the URI-to-filename translation phase. Any memory malloc'd for an in-memory cache would only be accessible by that particular child, but that would not be so bad for a v1.0 implementation of the module. In the future, we might look at shmem or something like that. Even a DB file held on a ramdisk might be acceptable (if a little perverse).

Nathan.

--
Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan - http://www.valuecommerce.ne.jp

In the days, When we were swinging from the trees
I was a monkey, Stealing honey from a swarm of bees
I could taste, I could taste you even then
And I would chase you down the wind
Re: Advanced Mass Hosting Module
On Friday, March 14, 2003, at 09:00 AM, Tim Nagel wrote:

I would also love to see such a module available, and I'm very willing to contribute in any way I can; however, I'm skill-less in the C arena :(

Learn C, and you're on the team! Good luck, Tim.

Nathan.

--
Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan - http://www.valuecommerce.ne.jp

I'm your blubber boy you should rub me
The sun beat me down too viciously
I fell into the ground to what I used to be
I've melted away I'm nothing again
Re: Advanced Mass Hosting Module
On Friday, March 14, 2003, at 09:55 AM, David Burry wrote:

These are neat ideas. At a few companies I've worked for we already do similar things but we have scripts that generate the httpd.conf files and distribute them out to the web servers and gracefully restart. Adding a new web server machine to the mix is as simple as adding the host name to the distribution script.

Yup. Not too dissimilar to what we use right now. We have a shared NFS filesystem mounted on all the Apache servers with a single-level tree of config files, one per domain. Apache just includes the base directory. This sucks, performance-wise. Convenience-wise, it's great. The NFS server is a High Availability setup, so that's cool. And even if I was worried about the NFS going away and the server not being able to read its configs, the point is moot - the NFS server also holds the docs.

What you're talking about doing sounds like a lot more complexity to achieve a similar thing, and more complexity means there's a lot more that can go wrong. For instance, what are you going to do if the LDAP server is down, are many not-yet-cached virtual hosts just going to fail?

Normally, I'd agree. But as was mentioned before, you have to load thousands, or if you're really lucky, tens of thousands of virtual hosts into your Apache daemon. Eventually the daemon starts using an inordinate amount of RAM just to load all those configurations into memory, and reloading takes an age. At least with 1.3, I saw massive memory usage when loading 5,000 virtual hosts in a test. I am not sure about 2.0. Besides, I don't want to have to keep restarting my Apache daemon *every time* someone wants to enable/disable PHP on their site. It ruins the uptime! ;)
In our scenario it's solved simply and easily by the generation script simply failing and nothing being copied (but at least the web servers keep working fine with the last config revision, so not many/any end-user web surfers will notice the outage).

Have more than one LDAP server :) This is easy to do, LDAP allows for it, and as long as the client software is smart (stops trying to use a borked LDAP server) you won't even notice the failure of a back-end LDAP slave. Besides, LDAP is much-maligned. I've been running LDAP in production systems for a long time now, and I've never had one just up and die on me. The ability to store all your configuration data in one place outweighs the inconvenience of having to manage another set of servers.

Nathan.

--
Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan - http://www.valuecommerce.ne.jp

In the days, When we were swinging from the trees
I was a monkey, Stealing honey from a swarm of bees
I could taste, I could taste you even then
And I would chase you down the wind
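The "smart client" behaviour described above (stop trying a borked LDAP server and move on to the next one) can be sketched roughly as follows. A minimal illustration only: the server names and every function name here are hypothetical, and a real client would also retry a downed server after some timeout rather than requiring an explicit mark-up call.

```c
/* Hypothetical sketch of smart-client LDAP server failover:
 * keep a list of servers, remember which ones are marked down,
 * and always pick the first one still believed healthy. */
#include <stddef.h>
#include <string.h>

#define NSERVERS 3

static const char *servers[NSERVERS] = {
    "ldap1.example.com", "ldap2.example.com", "ldap3.example.com"
};
static int server_down[NSERVERS];   /* 1 = marked borked, skip it */

/* Called when a query to servers[idx] fails (connection refused,
 * timeout, ...); subsequent lookups will skip this server. */
void mark_server_down(int idx)
{
    if (idx >= 0 && idx < NSERVERS)
        server_down[idx] = 1;
}

/* Called when a health check shows servers[idx] is back. */
void mark_server_up(int idx)
{
    if (idx >= 0 && idx < NSERVERS)
        server_down[idx] = 0;
}

/* Return the first server not marked down, or NULL if all are down
 * (at which point the module should fall back to its default vhost
 * or error page rather than hanging on a dead directory). */
const char *pick_ldap_server(void)
{
    int i;
    for (i = 0; i < NSERVERS; i++)
        if (!server_down[i])
            return servers[i];
    return NULL;
}
```

With this shape the failure of one back-end slave costs, at worst, one failed connection attempt before the client moves on, which is why Nathan's point holds that clients won't even notice.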