I have a patch for conn.c and ns.h which enables the functionality of the 
previous conn.c patch to be handled in a module, and/or written more clearly. 
The patch is an extension of the external Ns_Conn* API, best seen in context 
using the patch of ns.h:

Index: ns.h
===================================================================
RCS file: /cvsroot/aolserver/aolserver/include/ns.h,v
retrieving revision 1.86
diff -u -r1.86 ns.h
--- ns.h        7 Jul 2006 03:27:22 -0000       1.86
+++ ns.h        8 Oct 2007 14:11:36 -0000
@@ -659,6 +659,7 @@
 NS_EXTERN char *Ns_ConnServer(Ns_Conn *conn);
 NS_EXTERN int Ns_ConnResponseStatus(Ns_Conn *conn);
 NS_EXTERN int Ns_ConnContentSent(Ns_Conn *conn);
+NS_EXTERN int Ns_ConnSetContentSent(Ns_Conn *conn, int nContentSent);
 NS_EXTERN int Ns_ConnResponseLength(Ns_Conn *conn);
 NS_EXTERN Ns_Time *Ns_ConnStartTime(Ns_Conn *conn);
 NS_EXTERN char *Ns_ConnPeer(Ns_Conn *conn);
@@ -666,7 +667,10 @@
 NS_EXTERN char *Ns_ConnLocation(Ns_Conn *conn);
 NS_EXTERN char *Ns_ConnHost(Ns_Conn *conn);
 NS_EXTERN int Ns_ConnPort(Ns_Conn *conn);
-NS_EXTERN int Ns_ConnSock(Ns_Conn *conn);
+NS_EXTERN SOCKET Ns_ConnSock(Ns_Conn *conn);
+NS_EXTERN int NS_ConnSetSock(Ns_Conn *conn, SOCKET sock);
+NS_EXTERN int Ns_ConnSetSockInvalid(Ns_Conn *conn);
 NS_EXTERN char *Ns_ConnDriverName(Ns_Conn *conn);
 NS_EXTERN void *Ns_ConnDriverContext(Ns_Conn *conn);
 NS_EXTERN int Ns_ConnGetKeepAliveFlag(Ns_Conn *conn);


I also updated the module to use the new API so that it functions the same as 
ns_conn channel and ns_conn contentsentlength. 

Everything is available at http://rmadilo.com/files/nsbgwrite/ 

There is also an associated Tcl module which creates the worker thread using 
thread::create, storing the id in an nsv_array, along with a counter to track 
the number of uses. An example tcl page takes requests and passes them off to 
the worker thread. The tcl page works with both [ns_conn channel] and 
[ns_bgwrite channel]. 

I used apache-bench to saturate the driver, threadpool and worker queues, 
using the default threadpool with a maxthreads of 10.

The summary of requests/sec for concurrencies from 1-100 (28k image):

Requests per second:    163.53 [#/sec] (mean) -n 1 -c 1
Requests per second:    187.54 [#/sec] (mean)
Requests per second:    217.56 [#/sec] (mean)
Requests per second:    309.01 [#/sec] (mean)
Requests per second:    318.85 [#/sec] (mean)
Requests per second:    300.49 [#/sec] (mean)
Requests per second:    284.14 [#/sec] (mean)
Requests per second:    409.18 [#/sec] (mean)
Requests per second:    484.74 [#/sec] (mean)
Requests per second:    400.49 [#/sec] (mean)
Requests per second:    421.17 [#/sec] (mean)
Requests per second:    386.16 [#/sec] (mean)
Requests per second:    387.18 [#/sec] (mean)
Requests per second:    540.77 [#/sec] (mean)
Requests per second:    509.06 [#/sec] (mean)
Requests per second:    497.58 [#/sec] (mean)
Requests per second:    390.64 [#/sec] (mean)
Requests per second:    407.27 [#/sec] (mean)
Requests per second:    398.91 [#/sec] (mean)
Requests per second:    725.58 [#/sec] (mean)
Requests per second:    765.01 [#/sec] (mean) -n 2000 -c 20
Requests per second:    761.57 [#/sec] (mean) -n 2000 -c 40
Requests per second:    685.17 [#/sec] (mean)
Requests per second:    724.04 [#/sec] (mean) -n 2000 -c 75
Requests per second:    588.27 [#/sec] (mean)
Requests per second:    766.57 [#/sec] (mean) -n 4000 -c 100

Full results and summaries are at:
http://rmadilo.com/files/nsbgwrite/logs/

Everything seems to work pretty well. Although the response time goes up with 
concurrency, the timing in the server log (log.txt) shows that this is mostly 
due to time spent waiting in a queue, not to processing time, as the time 
spent in both the tcl page and the worker thread is about equal to or less 
than the average response time. 

One issue which seems strange is that with concurrency > 1, additional 
requests show up in the logs (actually (concurrency - 1) additional 
requests). These end in an error: the sock is not available. apache-bench 
doesn't report sending these requests and then disconnecting, but they show 
up as early as the prequeue stage in the driver thread. It may be a bug in 
apache-bench. Whatever the cause, it doesn't appear to be anything more than 
the client disconnecting at some intermediate step. The exact same behavior 
is seen with [ns_conn channel] using the CVS head version. Or I have a bug in 
my tcl code.

New APIs:

Ns_ConnSetSock can be used for two purposes. One is to change the sock used by
a connection. The other is to signal that the sock is no longer valid, so that 
the connection pipeline will do something different. (I'm not sure exactly 
what is different, but at the least the Conn doesn't try to close the sock, 
since it no longer knows about it. The Ns_Sock still exists, however.)

Ns_ConnSetSockInvalid is an easier-to-understand version of Ns_ConnSetSock: it 
simply calls Ns_ConnSetSock(conn, INVALID_SOCKET). 

The other new API is Ns_ConnSetContentSent. I'm not too sure about this API, 
but it does allow you to log the expected number of bytes sent to the client 
when you handle the return outside of a conn. There are no other good options 
at the moment; the access log module looks pretty unfriendly to additional 
inputs. 

ns_conn:

Instead of being such an ass about this, I'll remove my objection to the new 
ns_conn channel, etc. Tcl API, but I think both should be deprecated for 
future use and replaced with one or more modules which use the new Ns_Conn 
APIs. The main reason is that ns_conn channel does way too much to belong in 
conn.c. The real missing feature was the ability to invalidate the sock (or 
make Ns_Sock think the socket was invalid) so that it wouldn't be cleaned up 
immediately (and to restore the conn sock if something goes wrong while 
creating a channel). 

async without Thread package?:

After looking at too much code, task.c and tclhttp.c (which provides ns_http) 
caught my eye. It turns out that tclhttp.c is a very complicated example of 
how to use task.c (Ns_Task). But the purpose of ns_http is very different 
from handing off client returns to a single worker thread. In fact, ns_http 
does a lot of _extra_ work to ensure that each http request is handled by a 
single thread (not the queue thread). But multiple threads could all work at 
the same time using the same Ns_Task queue.

So, the bottom line appears to be that Ns_Task could be used to create a 
worker thread. Adding a task to a queue triggers the event loop if it isn't 
already running. It should be very easy to write the callback for the queue 
to initialize by transferring the sock, if it hasn't already been transferred, 
opening the file to send, creating the headers, etc. 

The task event loop works by doing one callback operation for each sock that 
is ready, such as copying the read buffer to the write buffer. The fact that 
the callback proc only has to do very simple operations makes it easier to 
program, but this was far from apparent in the ns_http example, which uses a 
blocking wait for reading. Ns_Task separates the 'what to do' from the 'where 
to do it' and 'when to do it'.

tom jackson


--
AOLserver - http://www.aolserver.com/
