Dear all,

again some update:

The mechanism sketched below now works in the regression test as well.
There is now a backchannel in place that lets conn threads notify the
driver to check the liveliness of the server. This backchannel also makes
the timeout-based liveliness checking obsolete. By using the
lowwatermark parameter to control thread creation, resource
consumption went down significantly without sacrificing speed for this
setup.
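
To illustrate the idea, here is a rough sketch of such a backchannel in the
style of the classic self-pipe pattern (the names and structure below are
made up for illustration, not the actual NaviServer code): a conn thread
writes a single byte into a pipe whose read end the driver polls together
with its listen sockets, so the driver wakes up immediately and can re-check
the pool state instead of relying on a poll timeout.

   /* Illustrative sketch only; names/structure are not the actual code. */
   #include <poll.h>
   #include <unistd.h>

   static int backchannel[2];  /* [0]: read end (driver), [1]: write end (conn threads) */

   static void NotifyDriver(void)
   {
       char one = 0;

       (void) write(backchannel[1], &one, 1);    /* called from a conn thread */
   }

   static void DriverLoop(int listenSock)
   {
       struct pollfd fds[2];

       (void) pipe(backchannel);                 /* normally done once at startup */
       fds[0].fd = listenSock;      fds[0].events = POLLIN;
       fds[1].fd = backchannel[0];  fds[1].events = POLLIN;

       for (;;) {
           (void) poll(fds, 2, -1);              /* no liveliness timeout needed */
           if (fds[1].revents & POLLIN) {
               char buf[64];

               (void) read(backchannel[0], buf, sizeof(buf));   /* drain notifications */
               /* re-check server/pool liveliness, create conn threads if needed */
           }
           if (fds[0].revents & POLLIN) {
               /* accept the connection and dispatch or queue it */
           }
       }
   }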

Here is some data from next-scripting.org, which is a rather idle site 
with real-world traffic (including bots etc.). The server is configured 
with minthreads = 2, runs 2 drivers (nssock + nsssl) and uses a writer 
thread.

before (creating threads when idle == 0, running the server for 2 days)
   requests 10468, connthreads 267
   total cputime 00:10:32

new (creating threads when queue >= 5, running the server for 2 days)
   requests 10104, connthreads 27
   total cputime 00:06:14

One can see that the number of create operations for connection threads 
went down by a factor of 10 (from 267 to 27), and that the CPU 
consumption was reduced by about 40% (from 632 to 374 secs of cputime; 
since thread initialization costs about 0.64 secs in this configuration, 
the 240 avoided thread creates account for roughly 154 secs of that 
difference). One can get a behavior similar to idle == 0 by setting the 
low water mark to 0.

The shutdown mechanism is now adjusted to the new infrastructure 
(connection threads have their own condition variables, so one can no 
longer use the old broadcast to all conn threads).
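
For the record, a rough sketch of what this means in code (again with
made-up names, not the actual implementation): with a private condition
variable per conn thread, shutdown has to walk the list of conn threads and
signal each one individually, whereas formerly a single broadcast on the
shared pool condition was enough.

   /* Illustrative sketch only: per-thread condition variables replace the
      former single broadcast on a condition shared by all conn threads. */
   #include <pthread.h>
   #include <stdbool.h>

   typedef struct ConnThread {
       pthread_mutex_t    lock;
       pthread_cond_t     cond;      /* private condition this thread waits on */
       bool               stop;
       struct ConnThread *nextPtr;
   } ConnThread;

   static void ShutdownConnThreads(ConnThread *firstPtr)
   {
       ConnThread *t;

       for (t = firstPtr; t != NULL; t = t->nextPtr) {
           pthread_mutex_lock(&t->lock);
           t->stop = true;
           pthread_cond_signal(&t->cond);        /* wake exactly this thread */
           pthread_mutex_unlock(&t->lock);
       }
       /* formerly: one pthread_cond_broadcast() on the shared condition,
          which no longer reaches threads waiting on their own cond vars */
   }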

-gustaf neumann


Am 07.11.12 02:54, schrieb Gustaf Neumann:
> Some update: after some more testing with the new code, I still think 
> the version is promising, but it needs a few tweaks. I have started to 
> address the thread creation.
>
> To sum up the thread creation behavior/configuration of naviserver-tip:
>
>   - minthreads (try to keep at least minthreads threads idle)
>   - spread (fight against thread mass extinction due to round robin)
>   - threadtimeout (useless due to round robin)
>   - connsperthread (the only parameter effectively controlling the 
> lifespan of a conn thread)
>   - maxconnections (controls the maximum number of connections in the 
> waiting queue, including running threads)
>   - concurrentcreatethreshold (percentage of the waiting queue being 
> full at which threads are created concurrently)
>
> Due to the policy of keeping at least minthreads threads idle, threads 
> are preallocated when the load is high, and by construction the number 
> of threads never falls under minthreads. Threads stop mostly due to connsperthread.
>
> Naviserver with thread queue (fork)
>
>   - minthreads (try to keep at least minthreads threads idle)
>   - threadtimeout (works effectively, default 120 secs)
>   - connsperthread (as before, just not varied via spread)
>   - maxconnections (as before; maybe rename to "queuesize")
>   - lowwatermark (new)
>   - highwatermark (was concurrentcreatethreshold)
>
> The parameter "spread" is already deleted, since the enqueueing takes 
> care of a certain distribution, at least when several threads are 
> created. Threads are now often deleted before reaching connsperthread 
> due to the timeout. Experiments furthermore show that the rather 
> aggressive preallocation policy with minthreads idle threads now causes 
> many more thread destroy and thread create operations than before. With 
> OpenACS, thread creation is compute-intensive (about 1 sec).
>
> In the experimental version, connections are only queued when no 
> connection thread is available (the tip version places every 
> connection into the queue). Queueing happens with "bulky" requests, 
> when e.g. a page view causes a bunch (on average 5, often 10+, sometimes 
> 50+) of requests for embedded resources (style files, javascript, 
> images). It seems that permitting a few queued requests is often a 
> good idea, since the connection threads will typically pick these up 
> very quickly.
>
> To make the aggressiveness of the thread creation policy better 
> configurable, the experimental version bases it solely on the number 
> of queued requests, using two parameters:
>
>   - lowwatermark (if the actual queue size is below this value, don't 
> try to create threads; default 5%)
>   - highwatermark (if the actual queue size is above this value, allow 
> parallel thread creates; default 80%)
>
> To increase the aggressiveness, one could set lowwatermark to e.g. 0, 
> causing thread creates whenever a connection is queued. Increasing 
> the lowwatermark reduces the willingness to create new threads. The 
> highwatermark might be useful for benchmark situations, where the 
> queue fills up quickly.
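>
> In pseudo-C, the thread-creation decision looks roughly like this 
> (simplified and with made-up names, not the actual code); it is only 
> evaluated when a connection could not be handed directly to an idle 
> conn thread and had to be queued:
>
>    /* Simplified sketch of the watermark-based creation decision. */
>    typedef struct Pool {
>        int queued;         /* connections currently in the waiting queue     */
>        int queueSize;      /* capacity of the waiting queue (maxconnections) */
>        int creating;       /* conn thread creates currently in progress      */
>        int lowwatermark;   /* percent; below this, do not create (default 5) */
>        int highwatermark;  /* percent; above this, allow parallel creates (default 80) */
>    } Pool;
>
>    static int WantCreateThread(const Pool *poolPtr)
>    {
>        int fill = (poolPtr->queued * 100) / poolPtr->queueSize;  /* queueSize > 0 */
>
>        if (poolPtr->queued == 0) {
>            return 0;                 /* nothing waiting */
>        }
>        if (fill < poolPtr->lowwatermark) {
>            return 0;                 /* a few queued requests are acceptable */
>        }
>        if (poolPtr->creating > 0 && fill < poolPtr->highwatermark) {
>            return 0;                 /* one create at a time is enough */
>        }
>        return 1;                     /* create a thread (possibly in parallel) */
>    }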
>
> The default values seem to work quite well; they are currently used on 
> http://next-scripting.org. However, we still need some more experiments 
> on different sites to get a better understanding.
>
> Hmm, a final comment: for the regression test, I had to add a policy to 
> create threads when all connection threads are busy. The config file 
> of the regression test uses connsperthread 0 (which is the default, 
> but not very good as such), causing every connection thread to exit 
> after every request. So, when a request comes in, we may have a thread 
> busy but nothing queued, and so there would seem to be no need to 
> create a new thread. However, when that conn thread exits, the single 
> request would not be processed.
>
> So, much more testing is needed.
> -gustaf neumann
>
> Am 01.11.12 20:17, schrieb Gustaf Neumann:
>> Dear all,
>>
>> There is now a version on bitbucket which works quite nicely
>> and stably, as far as I can tell. I have split up the rather
>> coarse lock over all pools and introduced finer locks for the
>> waiting queue (wqueue) and thread queue (tqueue) per pool.
>> The changes lead to significantly finer lock granularity and
>> improve scalability.
>>
>> I have tested this new version with a synthetic load of 120
>> requests per second, some slower requests and some faster
>> ones, and it appears to be pretty stable. This load keeps
>> about 20 connection threads quite busy on my home machine.
>> The contention on the new locks is very low: in this test
>> we saw 12 busy locks out of 217,000 locks on the waiting
>> queue, and 9 busy locks out of 83,000 locks on the thread
>> queue. These figures are much better than in current
>> NaviServer, which on the same test has 248,000 locks on the
>> queue, with 190 busy ones. The total waiting time for locks
>> is reduced by a factor of 10. One has to add that it was not
>> so bad before either. The benefit will be larger when
>> multiple pools are used.
>>
>> Finally, I think the code is clearer than before, where the
>> lock duration was quite tricky to determine.
>>
>> opinions?
>> -gustaf neumann
>>
>> PS: For the changes, see:
>> https://bitbucket.org/gustafn/naviserver-connthreadqueue/changesets
>>
>> PS2: have not addressed the server exit signaling yet.
>>
>> On 29.10.12 13:41, Gustaf Neumann wrote:
>>> A version of this is in the following fork:
>>>
>>> https://bitbucket.org/gustafn/naviserver-connthreadqueue/changesets
>>>
>>> So far, the contention on the pool mutex is quite high, but
>>> I think it can be improved. Currently the pool mutex is used
>>> primarily for conn thread life-cycle management, and it is
>>> needed by the main/driver/spooler threads as well as by the
>>> connection threads to update the idle/running/... counters
>>> needed for controlling thread creation etc. Differentiating
>>> these mutexes should help.
>>>
>>> I have not addressed the termination signaling yet, but that's
>>> rather simple.
>>>
>>> -gustaf neumann
>>>
>>> On 28.10.12 03:08, Gustaf Neumann wrote:
>>>> I've just implemented a lightweight version of the above (just
>>>> a few lines of code) by extending the connThread Arg
>>>> structure; ....
