Re: Events, Destruction and Locking

2009-07-08 Thread Bojan Smojver
On Wed, 2009-07-08 at 22:53 -0400, Paul Querna wrote:
> But the event mpm doesn't have an accept mutex :D

Yeah, I know. I was talking about making prefork async too.

-- 
Bojan



Re: Events, Destruction and Locking

2009-07-08 Thread Paul Querna
On Wed, Jul 8, 2009 at 9:11 PM, Bojan Smojver wrote:
> On Wed, 2009-07-08 at 11:01 +1000, Bojan Smojver wrote:
>> So, the loop would be:
>>
>> - poll()
>> - try assembling a full request from data read so far
>>   - process if successful
>>   - go back to poll() if not
>>
>> Too naive?
>
> I see that we'd most likely get stuck with the accept mutex (i.e. if
> another process had it, we would not be poll()-ing already accepted fds
> any more).
>

But the event mpm doesn't have an accept mutex :D

> We could work around this by using apr_proc_mutex_trylock() if there are
> any already accepted fds. If this fails, we just poll() already accepted
> fds (i.e. someone is already poll()-ing to accept()). Otherwise, we
> poll() the lot.
>
> --
> Bojan
>
>


Re: segmentation fault in worker.c

2009-07-08 Thread Andrej van der Zee
Hi,

> I guess you are hit by this:
>
> https://issues.apache.org/bugzilla/show_bug.cgi?id=46467
>

Indeed so it was!

Cheers,
Andrej


Re: Events, Destruction and Locking

2009-07-08 Thread Bojan Smojver
On Wed, 2009-07-08 at 11:01 +1000, Bojan Smojver wrote:
> So, the loop would be:
> 
> - poll()
> - try assembling a full request from data read so far
>   - process if successful
>   - go back to poll() if not
> 
> Too naive?

I see that we'd most likely get stuck with the accept mutex (i.e. if
another process had it, we would not be poll()-ing already accepted fds
any more).

We could work around this by using apr_proc_mutex_trylock() if there are
any already accepted fds. If this fails, we just poll() already accepted
fds (i.e. someone is already poll()-ing to accept()). Otherwise, we
poll() the lot.

-- 
Bojan



new Hook doesn't work

2009-07-08 Thread h iroshan
Hi All,

I created a new hook for an additional task in mod_proxy_balancer. I
followed the steps at
http://213.11.80.10/manual/developer/hooks.html closely. I register my hook
as follows:

static void ap_proxy_balancer_register_hook(apr_pool_t *p)
{
    /* additional code goes here */
    proxy_hook_my_request(proxy_balancer_my_request, NULL, NULL,
                          APR_HOOK_MIDDLE);
}

Using the apxs tool, this module and mod_proxy compile without any
errors. But when the server runs, the proxy_balancer_my_request function
never executes, while all the other hooks run their functions without any
problem. There is no error message in the error log either. I want to know
why this proxy_balancer_my_request function never executes.

Please help me,
Iroshan.


Re: Events, Destruction and Locking

2009-07-08 Thread Graham Dumpleton
2009/7/9 Rainer Jung :
> On 08.07.2009 15:55, Paul Querna wrote:
>> On Wed, Jul 8, 2009 at 3:05 AM, Graham
>> Dumpleton wrote:
>>> 2009/7/8 Graham Leggett :
 Paul Querna wrote:

> It breaks the 1:1: connection mapping to thread (or process) model
> which is critical to low memory footprint, with thousands of
> connections, maybe I'm just insane, but all of the servers taking
> market share, like lighttpd, nginx, etc, all use this model.
>
> It also prevents all variations of the slowaris stupidity, because its
> damn hard to overwhelm the actual connection processing if its all
> async, and doesn't block a worker.
 But as you've pointed out, it makes our heads bleed, and locks slow us 
 down.

 At the lowest level, the event loop should be completely async, and be
 capable of supporting an arbitrary (probably very high) number of
 concurrent connections.

 If one connection slows or stops (deliberately or otherwise), it won't
 block any other connections on the same event loop, which will continue
 as normal.
>>> But which for a multiprocess web server screws up if you then have a
>>> blocking type model for an application running on top. Specifically,
>>> the greedy nature of accepting connections may mean a process accepts
>>> more connections which it has high level threads to handle. If the
>>> high level threads end up blocking, then any accepted connections for
>>> the blocking high level application, for which request headers are
>>> still being read, or are pending, will be blocked as well even though
>>> another server process may be idle. In the current Apache model a
>>> process will only accept connections if it knows it is able to process
>>> it at that time. If a process doesn't have the threads available, then
>>> a different process would pick it up instead. I have previously
>>> commented how this causes problems with nginx for potentially blocking
>>> applications running in nginx worker processes. See:
>>>
>>>  http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html
>>>
>>> To prevent this you are forced to run event driven system for
>>> everything and blocking type applications can't be run in same
>>> process. Thus, anything like that has to be shoved out into a separate
>>> process. FASTCGI was mentioned for that, but frankly I believed
>>> FASTCGI is getting a bit crufty these days. It perhaps really needs to
>>> be modernised, with the byte protocol layout simplified to get rid of
>>> these varying size length indicator bytes. This may have been
>>> warranted when networks were slower and amount of body data being
>>> passed around less, but I can't see that that extra complexity is
>>> warranted any more. FASTCGI also can't handle things like end to end
>>> 100-continue processing and perhaps has other problems as well in
>>> respect of handling logging outside of request context etc etc.
>>>
>>> So, I personally would really love to see a good review of FASTCGI,
>>> AJP and any other similar/pertinent protocols done to distill what in
>>> these modern times is required and would be a better mechanism. The
>>> implementations of FASTCGI could also perhaps be modernised. Of
>>> course, even though FASTCGI may not be the most elegant of systems,
>>> probably too entrenched to get rid of it. The only way perhaps might
>>> be if a improved version formed the basis of any internal
>>> communications for a completely restructured internal model for Apache
>>> 3.0 based on serf which had segregation between processes handling
>>> static files and applications, with user separation etc etc.
>>
>> TBH, I think the best way to modernize FastCGI or AJP is to just proxy
>> HTTP over a daemon socket, then you solve all the protocol issues...
>> and just treat it like another reverse proxy.  The part we really need
>> to write is the backend process manager, to spawn/kill more of these
>> workers.
>
> Though there is one nice feature in the AJP protocol: since it knows
> it's serving via a reverse proxy, the back end patches some
> communication data like it were the front end. So if the context on the
> back end asks for port, protocol, host name etc. it automatically gets
> the data that looks like the one of the front end. That way cookies,
> self-referencing links etc. work right.
>
> Most of that can be simulated by appropriate configuration with HTTP to
> (yes, there are a lot of proxy options for this), but in AJP its
> automatic. Some parts are not configurable right now, like e.g. the
> client IP. You always have to introduce something that's aware e.g. of
> the X-Forwarded-For header. Another example would be whether the
> communication to the reverse proxy was via https. You can transport all
> that info va custom headers, but the backend usually doesn't know how to
> handle it.

Yes, these are the sort of things which would be nice to be
transparent. Paul's comment is valid though in

Re: Help with worker.c

2009-07-08 Thread Graham Dumpleton
In case you haven't already found it, ensure you have a read of:

  
http://www.fmc-modeling.org/category/projects/apache/amp/4_3Multitasking_server.html

It may not address the specific question, but certainly will give you
a better overall picture.

The rest of that book is also worth reading as well.

Graham

2009/7/8 ricardo13 :
>
> Hi,
>
> I'm trying understand worker.c module.
> My doubt is about operation push() and pop().
>
> Push() add a socket in array fd_queue_t->data and Pop() retrieve a socket
> for processing.
>
> But what's the order of PUSH() ?? It adds in final queue ??
> And POP() ?? Retrieve a socket only before (elem =
> &queue->data[--queue->nelts];) ??
>
> Thank you
> Ricardo
> --
> View this message in context: 
> http://www.nabble.com/Help-with-worker.c-tp24389140p24389140.html
> Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.
>
>


Help with worker.c

2009-07-08 Thread ricardo figueiredo
Hi,

I'm trying to understand the worker.c module.
My doubt is about the push() and pop() operations.

push() adds a socket to the fd_queue_t->data array and pop() retrieves a
socket for processing.

But in what order does push() insert? Does it add at the end of the queue?
And does pop() only retrieve the last element (elem =
&queue->data[--queue->nelts];)?

Is it FIFO or LIFO (Last In, First Out)?

Also, I cannot increase the length of the queue: I print
worker_queue->nelts, it doesn't exceed a value of 1, and the CPU usage is
low. I developed an application in PHP with a loop until 1.

I've already configured the MPM Worker directives (StartServers,
ThreadsPerChild, ServerLimit, etc...).

My idea is to develop new algorithms or another array (fd_queue_t->data)
to prioritize requests in MPM Worker.

-- 
Thank You

Ricardo


Re: Events, Destruction and Locking

2009-07-08 Thread Rainer Jung
On 08.07.2009 15:55, Paul Querna wrote:
> On Wed, Jul 8, 2009 at 3:05 AM, Graham
> Dumpleton wrote:
>> 2009/7/8 Graham Leggett :
>>> Paul Querna wrote:
>>>
 It breaks the 1:1: connection mapping to thread (or process) model
 which is critical to low memory footprint, with thousands of
 connections, maybe I'm just insane, but all of the servers taking
 market share, like lighttpd, nginx, etc, all use this model.

 It also prevents all variations of the slowaris stupidity, because its
 damn hard to overwhelm the actual connection processing if its all
 async, and doesn't block a worker.
>>> But as you've pointed out, it makes our heads bleed, and locks slow us down.
>>>
>>> At the lowest level, the event loop should be completely async, and be
>>> capable of supporting an arbitrary (probably very high) number of
>>> concurrent connections.
>>>
>>> If one connection slows or stops (deliberately or otherwise), it won't
>>> block any other connections on the same event loop, which will continue
>>> as normal.
>> But which for a multiprocess web server screws up if you then have a
>> blocking type model for an application running on top. Specifically,
>> the greedy nature of accepting connections may mean a process accepts
>> more connections which it has high level threads to handle. If the
>> high level threads end up blocking, then any accepted connections for
>> the blocking high level application, for which request headers are
>> still being read, or are pending, will be blocked as well even though
>> another server process may be idle. In the current Apache model a
>> process will only accept connections if it knows it is able to process
>> it at that time. If a process doesn't have the threads available, then
>> a different process would pick it up instead. I have previously
>> commented how this causes problems with nginx for potentially blocking
>> applications running in nginx worker processes. See:
>>
>>  http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html
>>
>> To prevent this you are forced to run event driven system for
>> everything and blocking type applications can't be run in same
>> process. Thus, anything like that has to be shoved out into a separate
>> process. FASTCGI was mentioned for that, but frankly I believed
>> FASTCGI is getting a bit crufty these days. It perhaps really needs to
>> be modernised, with the byte protocol layout simplified to get rid of
>> these varying size length indicator bytes. This may have been
>> warranted when networks were slower and amount of body data being
>> passed around less, but I can't see that that extra complexity is
>> warranted any more. FASTCGI also can't handle things like end to end
>> 100-continue processing and perhaps has other problems as well in
>> respect of handling logging outside of request context etc etc.
>>
>> So, I personally would really love to see a good review of FASTCGI,
>> AJP and any other similar/pertinent protocols done to distill what in
>> these modern times is required and would be a better mechanism. The
>> implementations of FASTCGI could also perhaps be modernised. Of
>> course, even though FASTCGI may not be the most elegant of systems,
>> probably too entrenched to get rid of it. The only way perhaps might
>> be if a improved version formed the basis of any internal
>> communications for a completely restructured internal model for Apache
>> 3.0 based on serf which had segregation between processes handling
>> static files and applications, with user separation etc etc.
> 
> TBH, I think the best way to modernize FastCGI or AJP is to just proxy
> HTTP over a daemon socket, then you solve all the protocol issues...
> and just treat it like another reverse proxy.  The part we really need
> to write is the backend process manager, to spawn/kill more of these
> workers.

Though there is one nice feature in the AJP protocol: since it knows
it's serving via a reverse proxy, the back end patches some
communication data as if it were the front end. So if the context on the
back end asks for the port, protocol, host name etc., it automatically
gets data that looks like the front end's. That way cookies,
self-referencing links etc. work right.

Most of that can be simulated by appropriate configuration with HTTP too
(yes, there are a lot of proxy options for this), but in AJP it's
automatic. Some parts are not configurable right now, e.g. the
client IP: you always have to introduce something that's aware of
the X-Forwarded-For header. Another example would be whether the
communication to the reverse proxy was via https. You can transport all
that info via custom headers, but the backend usually doesn't know how to
handle it.
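As a rough illustration, part of what AJP does automatically can be approximated over HTTP with configuration along these lines (hostnames are illustrative, and this is not a complete equivalent — e.g. the client IP still only arrives as X-Forwarded-For):

```
# on the front-end reverse proxy
ProxyPass        /app http://backend.example:8080/app
ProxyPassReverse /app http://backend.example:8080/app
ProxyPreserveHost On   # back end sees the original Host: header

# mod_proxy_http adds X-Forwarded-For / X-Forwarded-Host / X-Forwarded-Server;
# whether the client connection was https must be passed by hand (mod_headers):
RequestHeader set X-Forwarded-Proto "https" env=HTTPS
```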

Regards,

Rainer


Re: Help with worker.c

2009-07-08 Thread ricardo13

Hi,

I cannot increase the length of the queue.
I print worker_queue->nelts and it doesn't exceed a value of 1, and the
CPU usage is low.
I did an application in PHP; the application has a loop until 1.

I've configured the MPM Worker directives (StartServers, ThreadsPerChild,
ServerLimit, etc...)

Thank you
Ricardo


ricardo13 wrote:
> 
> Hi,
> 
> I'm trying understand worker.c module.
> My doubt is about operation push() and pop().
> 
> Push() add a socket in array fd_queue_t->data and Pop() retrieve a socket
> for processing.
> 
> But what's the order of PUSH() ?? It adds in final queue ??
> And POP() ?? Retrieve a socket only before (elem =
> &queue->data[--queue->nelts];) ??
> 
> Thank you
> Ricardo
> 

-- 
View this message in context: 
http://www.nabble.com/Help-with-worker.c-tp24389140p24397052.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: need some help from an awk wizard ...

2009-07-08 Thread Guenter Knauf
Hi,
Guenter Knauf schrieb:
> thanks in advance for any help with this!
Thanks to Rainer Jung, who helped me find the right track, I finally
solved this.

Gün.




Re: Help with worker.c

2009-07-08 Thread ricardo13



Jorge Schrauwen-3 wrote:
> 
> If I'm not mistaken it's FIFO (First In, First Out).
> 
> ~Jorge

I think it's LIFO (Last In, First Out), because if, for example, 3
requests arrive, three push() operations are done, and the last request
will be processed before the others.

That is how my programming logic sees it; I didn't understand why it
would be FIFO.

I want to understand the FIFO behaviour, but I cannot.

Thank you.
Ricardo
> 
> 
> 
> On Wed, Jul 8, 2009 at 12:39 PM, ricardo13
> wrote:
>>
>> Hi,
>>
>> I'm trying understand worker.c module.
>> My doubt is about operation push() and pop().
>>
>> Push() add a socket in array fd_queue_t->data and Pop() retrieve a socket
>> for processing.
>>
>> But what's the order of PUSH() ?? It adds in final queue ??
>> And POP() ?? Retrieve a socket only before (elem =
>> &queue->data[--queue->nelts];) ??
>>
>> Thank you
>> Ricardo
>> --
>> View this message in context:
>> http://www.nabble.com/Help-with-worker.c-tp24389140p24389140.html
>> Sent from the Apache HTTP Server - Dev mailing list archive at
>> Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Help-with-worker.c-tp24389140p24392802.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: Help with worker.c

2009-07-08 Thread Jorge Schrauwen
If I'm not mistaken it's FIFO (First In, First Out).

~Jorge



On Wed, Jul 8, 2009 at 12:39 PM, ricardo13 wrote:
>
> Hi,
>
> I'm trying understand worker.c module.
> My doubt is about operation push() and pop().
>
> Push() add a socket in array fd_queue_t->data and Pop() retrieve a socket
> for processing.
>
> But what's the order of PUSH() ?? It adds in final queue ??
> And POP() ?? Retrieve a socket only before (elem =
> &queue->data[--queue->nelts];) ??
>
> Thank you
> Ricardo
> --
> View this message in context: 
> http://www.nabble.com/Help-with-worker.c-tp24389140p24389140.html
> Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.
>
>


Re: Events, Destruction and Locking

2009-07-08 Thread Paul Querna
On Wed, Jul 8, 2009 at 3:05 AM, Graham
Dumpleton wrote:
> 2009/7/8 Graham Leggett :
>> Paul Querna wrote:
>>
>>> It breaks the 1:1: connection mapping to thread (or process) model
>>> which is critical to low memory footprint, with thousands of
>>> connections, maybe I'm just insane, but all of the servers taking
>>> market share, like lighttpd, nginx, etc, all use this model.
>>>
>>> It also prevents all variations of the slowaris stupidity, because its
>>> damn hard to overwhelm the actual connection processing if its all
>>> async, and doesn't block a worker.
>>
>> But as you've pointed out, it makes our heads bleed, and locks slow us down.
>>
>> At the lowest level, the event loop should be completely async, and be
>> capable of supporting an arbitrary (probably very high) number of
>> concurrent connections.
>>
>> If one connection slows or stops (deliberately or otherwise), it won't
>> block any other connections on the same event loop, which will continue
>> as normal.
>
> But which for a multiprocess web server screws up if you then have a
> blocking type model for an application running on top. Specifically,
> the greedy nature of accepting connections may mean a process accepts
> more connections which it has high level threads to handle. If the
> high level threads end up blocking, then any accepted connections for
> the blocking high level application, for which request headers are
> still being read, or are pending, will be blocked as well even though
> another server process may be idle. In the current Apache model a
> process will only accept connections if it knows it is able to process
> it at that time. If a process doesn't have the threads available, then
> a different process would pick it up instead. I have previously
> commented how this causes problems with nginx for potentially blocking
> applications running in nginx worker processes. See:
>
>  http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html
>
> To prevent this you are forced to run event driven system for
> everything and blocking type applications can't be run in same
> process. Thus, anything like that has to be shoved out into a separate
> process. FASTCGI was mentioned for that, but frankly I believed
> FASTCGI is getting a bit crufty these days. It perhaps really needs to
> be modernised, with the byte protocol layout simplified to get rid of
> these varying size length indicator bytes. This may have been
> warranted when networks were slower and amount of body data being
> passed around less, but I can't see that that extra complexity is
> warranted any more. FASTCGI also can't handle things like end to end
> 100-continue processing and perhaps has other problems as well in
> respect of handling logging outside of request context etc etc.
>
> So, I personally would really love to see a good review of FASTCGI,
> AJP and any other similar/pertinent protocols done to distill what in
> these modern times is required and would be a better mechanism. The
> implementations of FASTCGI could also perhaps be modernised. Of
> course, even though FASTCGI may not be the most elegant of systems,
> probably too entrenched to get rid of it. The only way perhaps might
> be if a improved version formed the basis of any internal
> communications for a completely restructured internal model for Apache
> 3.0 based on serf which had segregation between processes handling
> static files and applications, with user separation etc etc.

TBH, I think the best way to modernize FastCGI or AJP is to just proxy
HTTP over a daemon socket, then you solve all the protocol issues...
and just treat it like another reverse proxy.  The part we really need
to write is the backend process manager, to spawn/kill more of these
workers.


Help with worker.c

2009-07-08 Thread ricardo13

Hi,

I'm trying to understand the worker.c module.
My doubt is about the push() and pop() operations.

push() adds a socket to the fd_queue_t->data array and pop() retrieves a
socket for processing.

But in what order does push() insert? Does it add at the end of the queue?
And does pop() only retrieve the last element (elem =
&queue->data[--queue->nelts];)?

Thank you
Ricardo
-- 
View this message in context: 
http://www.nabble.com/Help-with-worker.c-tp24389140p24389140.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



RE: segmentation fault in worker.c

2009-07-08 Thread Plüm, Rüdiger, VF-Group
 

> -----Original Message-----
> From: Andrej van der Zee
> Sent: Mittwoch, 8. Juli 2009 06:19
> To: dev@httpd.apache.org
> Subject: segmentation fault in worker.c
> 
> Hi,
> 
> I compiled httpd-2.2.11 with "./configure --with-included-apr
> --enable-ssl --disable-cgi --disable-cgid --with-mpm=prefork
> --enable-status". HTTP requests seem to be processed fine from a users
> point of view, but I get many segfaults in my apache log when I
> seriously increase the workload. Here a trace from gdb:
> 
> Core was generated by `/usr/local/apache2/bin/httpd -k start'.
> Program terminated with signal 11, Segmentation fault.
> [New process 9935]
> #0  apr_pollset_add (pollset=0x0, descriptor=0xbf8225dc) at
> poll/unix/epoll.c:150
> 150   if (pollset->flags & APR_POLLSET_NOCOPY) {
> (gdb) print pollset
> $1 = (apr_pollset_t *) 0x0
> (gdb) bt
> #0  apr_pollset_add (pollset=0x0, descriptor=0xbf8225dc) at
> poll/unix/epoll.c:150
> #1  0x080c2c41 in child_main (child_num_arg=) at
> prefork.c:532
> #2  0x080c30f3 in make_child (s=0x9c849a8, slot=138) at prefork.c:746
> #3  0x080c3ef8 in ap_mpm_run (_pconf=0x9c7d0a8, plog=0x9cbb1a0,
> s=0x9c849a8) at prefork.c:881
> #4  0x0806e808 in main (argc=164081968, argv=0xbf822904) at main.c:740
> (gdb)

I guess you are hit by this:

https://issues.apache.org/bugzilla/show_bug.cgi?id=46467

Regards

Rüdiger


Re: svn commit: r791617 - in /httpd/httpd/trunk/modules: cluster/mod_heartmonitor.c proxy/balancers/mod_lbmethod_heartbeat.c

2009-07-08 Thread jean-frederic clere

On 07/07/2009 09:05 PM, Ruediger Pluem wrote:


On 07/06/2009 11:14 PM, jfcl...@apache.org wrote:

Author: jfclere
Date: Mon Jul  6 21:14:21 2009
New Revision: 791617

URL: http://svn.apache.org/viewvc?rev=791617&view=rev
Log:
Add use slotmem. Directive HeartbeatMaxServers > 10 to activate the logic.
Otherwise it uses the file logic to store the heartbeats.

Modified:
 httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c
 httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c

Modified: httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c?rev=791617&r1=791616&r2=791617&view=diff
==
--- httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c (original)
+++ httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c Mon Jul  6 21:14:21 
2009




@@ -440,7 +530,17 @@
  return HTTP_INTERNAL_SERVER_ERROR;
  }
  apr_brigade_flatten(input_brigade, buf,&len);
-hm_processmsg(ctx, r->pool, r->connection->remote_addr, buf, len);
+
+/* we can't use hm_processmsg because it uses hm_get_server() */
+buf[len] = '\0';
+tbl = apr_table_make(r->pool, 10);
+qs_to_table(buf, tbl, r->pool);
+apr_sockaddr_ip_get(&ip, r->connection->remote_addr);
+hmserver.ip = ip;
+hmserver.busy = atoi(apr_table_get(tbl, "busy"));
+hmserver.ready = atoi(apr_table_get(tbl, "ready"));
+hmserver.seen = apr_time_now();
+hm_slotmem_update_stat(&hmserver, r);


Sorry for being confused, but this means that we are storing the data in 
different
locations dependent on whether we use the handler or the UDP listener and more 
so
we provide them in different locations for other modules to use (sharedmem / 
file).
Does this make sense?


The file logic for the handler is tricky, I need to work on it.


IMHO we should either provide them in both locations (sharedmem / file) no 
matter
which source contributed it or we should make it configurable where this 
information
is offered.


Yep.





Modified: httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c?rev=791617&r1=791616&r2=791617&view=diff
==
--- httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c 
(original)
+++ httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c Mon Jul  
6 21:14:21 2009



@@ -39,9 +47,20 @@
  int busy;
  int ready;
  int seen;
+int id;
  proxy_worker *worker;
  } hb_server_t;

+#define MAXIPSIZE  64
+typedef struct hm_slot_server_t
+{
+char ip[MAXIPSIZE];
+int busy;
+int ready;
+apr_time_t seen;
+int id;
+} hm_slot_server_t;
+


Shouldn't these things go to a common include file?
I guess defining them in each file is waiting for a missed-to-update
error to happen.


I will move that in a common include file.

Cheers

Jean-Frederic


Re: Events, Destruction and Locking

2009-07-08 Thread Graham Dumpleton
2009/7/8 Graham Leggett :
> Paul Querna wrote:
>
>> It breaks the 1:1: connection mapping to thread (or process) model
>> which is critical to low memory footprint, with thousands of
>> connections, maybe I'm just insane, but all of the servers taking
>> market share, like lighttpd, nginx, etc, all use this model.
>>
>> It also prevents all variations of the slowaris stupidity, because its
>> damn hard to overwhelm the actual connection processing if its all
>> async, and doesn't block a worker.
>
> But as you've pointed out, it makes our heads bleed, and locks slow us down.
>
> At the lowest level, the event loop should be completely async, and be
> capable of supporting an arbitrary (probably very high) number of
> concurrent connections.
>
> If one connection slows or stops (deliberately or otherwise), it won't
> block any other connections on the same event loop, which will continue
> as normal.

But which for a multiprocess web server screws up if you then have a
blocking type model for an application running on top. Specifically,
the greedy nature of accepting connections may mean a process accepts
more connections than it has high-level threads to handle. If the
high level threads end up blocking, then any accepted connections for
the blocking high level application, for which request headers are
still being read, or are pending, will be blocked as well even though
another server process may be idle. In the current Apache model a
process will only accept connections if it knows it is able to process
it at that time. If a process doesn't have the threads available, then
a different process would pick it up instead. I have previously
commented how this causes problems with nginx for potentially blocking
applications running in nginx worker processes. See:

  http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html

To prevent this you are forced to run an event-driven system for
everything, and blocking-type applications can't be run in the same
process. Thus, anything like that has to be shoved out into a separate
process. FASTCGI was mentioned for that, but frankly I believe
FASTCGI is getting a bit crufty these days. It perhaps really needs to
be modernised, with the byte protocol layout simplified to get rid of
these varying size length indicator bytes. This may have been
warranted when networks were slower and amount of body data being
passed around less, but I can't see that that extra complexity is
warranted any more. FASTCGI also can't handle things like end to end
100-continue processing and perhaps has other problems as well in
respect of handling logging outside of request context etc etc.

So, I personally would really love to see a good review of FASTCGI,
AJP and any other similar/pertinent protocols done to distill what in
these modern times is required and would be a better mechanism. The
implementations of FASTCGI could also perhaps be modernised. Of
course, even though FASTCGI may not be the most elegant of systems,
it is probably too entrenched to get rid of. The only way perhaps might
be if an improved version formed the basis of any internal
communications for a completely restructured internal model for Apache
3.0 based on serf which had segregation between processes handling
static files and applications, with user separation etc etc.

Graham