Dear all,
The idea of controlling the number of running threads via queuing
latency is interesting, but I have to look into the details before I
can comment on this.
Before one can consider controlling the number of running threads via
queuing latency, one has to improve the awareness in NaviServer of the
various phases of a request's lifetime.
In the experimental version, we now record the following time stamps:
- acceptTime (the time a socket was accepted)
- requestQueueTime (the time the request was queued; previously startTime)
- requestDequeueTime (the time the request was dequeued)
The difference between requestQueueTime and acceptTime is the setup cost
and depends on the amount of work the driver does. For instance, nssock
of NaviServer performs read-ahead, while nsssl does not and passes the
connection on right away. So the previously used startTime (which is
actually the time the request was queued) was not correct for drivers
with read-ahead. In the experimental version, [ns_conn start] now always
returns the accept time.
The next paragraph uses the term endTime, which is the time when a
connection thread is done with a request (either the content was
delivered, or the content was handed over to a writer thread).
The difference between requestDequeueTime and requestQueueTime is the
time spent in the queue. The difference between endTime and
requestDequeueTime is the pure runtime, and the difference between
endTime and acceptTime is the totalTime. As a rough approximation, the
time between requestDequeueTime and acceptTime is mostly influenced by
the server setup, and the runtime by the application. I say
"approximation" since the runtime of other requests influences the
queuing time, as we will see below.
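In Tcl notation, a minimal sketch (the variable names stand for the
recorded time stamps; they are not all exposed as commands yet):

   # per-request timings derived from the recorded time stamps
   set queueTime [expr {$requestDequeueTime - $requestQueueTime}]
   set runTime   [expr {$endTime - $requestDequeueTime}]
   set totalTime [expr {$endTime - $acceptTime}]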
Consider a server with two running connection threads receiving 6
requests, where requests 2-5 arrive within a very short time. The
first three requests are directly assigned to connection threads and
have fromQueue == 0. These have queuing times between 88 and 110
microseconds, which include signal sending/receiving, the thread change,
and the initial setup in the connection thread. The runtimes in this
example are pretty bad, in the range of 0.24 to 3.8 seconds elapsed time.
[1] waiting 0 current 2 idle 1 ncons 999 fromQueue 0 accept 0.000000 queue 0.000110 run 0.637781 total 0.637891
[2] waiting 3 current 2 idle 0 ncons 998 fromQueue 0 accept 0.000000 queue 0.000090 run 0.245030 total 0.245120
[3] waiting 2 current 2 idle 0 ncons 987 fromQueue 0 accept 0.000000 queue 0.000088 run 0.432421 total 0.432509
[4] waiting 1 current 2 idle 0 ncons 997 fromQueue 1 accept 0.000000 queue 0.244246 run 0.249208 total 0.493454
[5] waiting 0 current 2 idle 0 ncons 986 fromQueue 1 accept 0.000000 queue 0.431545 run 3.713331 total 4.144876
[6] waiting 0 current 2 idle 0 ncons 996 fromQueue 1 accept 0.000000 queue 0.480382 run 3.799818 total 4.280200
Requests [4, 5, 6] were queued and have queuing times between 0.2 and
0.5 seconds. These queuing times are pretty much the runtimes of [2, 3,
4] (e.g., the queuing time of [4], 0.244246 secs, is essentially the
runtime of [2], 0.245030 secs); therefore, the runtime determines the
queuing time. For example, the totalTime of request [4] was 0.493454
secs; half of that time it was waiting in the queue. Request [4] can
consider itself lucky that it was not scheduled after [5] or [6], in
which case its totalTime would likely have been in the range of 4 secs
(10 times slower). Low waiting times are essential for good performance.
This example shows quite well the importance of async delivery
mechanisms like the writer thread or bgdelivery in OpenACS. A file
delivered by a connection thread over a slow Internet connection might
block later requests substantially (as in the cases above). This is even
more important for today's web sites, where a single view might entail
60+ embedded requests for js, css, images, ..., and where it is not
feasible to configure hundreds of connection threads.
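For illustration, a minimal config sketch for enabling writer threads in
the nssock driver (the values are made up for this example; check the
driver documentation for the exact parameter names and defaults):

   ns_section ns/server/${server}/module/nssock
   # number of writer threads for asynchronous delivery (illustrative)
   ns_param writerthreads 2
   # hand responses larger than this size (in bytes) to a writer thread
   ns_param writersize 4096

With such a setup, large or slow deliveries are handed over to a writer
thread, and the connection thread becomes free for the next queued
request.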
Before going further into detail, I'll provide further introspection
mechanisms in the experimental version:
- [ns_server stats] ... adding the total waiting time
- [ns_conn queuetime] ... time spent in the queue
- [ns_conn dequeue] ... time stamp when the request actually starts to
run (similar to [ns_conn start])
The first can be used for server monitoring, the latter two for
single-connection introspection. The queuetime can be useful for better
awareness and for optional output in the access log, and the dequeue
time stamp for application-level profiling, as the base for a difference
with the current time.
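A minimal usage sketch (assuming the new [ns_conn dequeue] subcommand
from above; ns_time is the existing time API):

   # e.g. at the end of a page script: pure runtime of this request so far
   set elapsed [ns_time diff [ns_time get] [ns_conn dequeue]]
   ns_log notice "runtime so far: [ns_time format $elapsed] secs"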
Further wishes, suggestions, comments?
-gustaf neumann