Gustaf Neumann wrote:
> On 11.10.12 19:42, Jeff Rogers wrote:
>> I'll clean up my testcases and add them.
> great!

Hrm.  I have a completely reproducible case, but I'm not sure how to 
actually write a .test file for it.  Maybe you can give me some 
pointers.  Here's the setup:

config file includes:
ns_section      "ns/server/default"
ns_param        maxthreads      2
ns_param        maxconnections  1000
ns_param        connsperthread  50

ns_section      "ns/server/default/adp"
ns_param        map             "/*.adp"

The pool settings greatly simplify reproduction, but I'm fairly certain 
they are not strictly necessary.  I'm otherwise using simple-config.tcl.

In the server root are two sample pages:
slow.adp
---
<% after 500 %>
ok
---

fast.txt
---
ok
---


test script "dorequests.tcl"
---
package require http 2

set slow_url "http://127.0.0.1:8080/slow.adp"
set slow_count 4
set fast_url "http://127.0.0.1:8080/fast.txt"
set fast_count 500

# Async completion callback: report and free the token.
proc cleanup {tok} {
    puts "$tok completed"
    http::cleanup $tok
}

# Tie up the conn threads with slow requests first ...
for {set c 0} {$c < $slow_count} {incr c} {
    http::geturl $slow_url -command cleanup
}
puts "started $slow_count slow requests"

# ... then flood the queue with fast ones.
for {set c 0} {$c < $fast_count} {incr c} {
    http::geturl $fast_url -command cleanup
}
puts "started $fast_count fast requests"

vwait forever
---

Start the server, then run "tclsh dorequests.tcl".  The "slow" requests 
tie up the running conn threads for a bit while the "fast" requests fill 
up the connection queue.  The slow requests eventually complete, and 
each worker then processes its connsperthread quota (plus the random 
spread) and exits, leaving some 300 or so requests sitting in the queue 
with nothing to process them.  A new incoming request starts one worker, 
which processes the next cpt+spread queued requests and exits, and so 
forth.
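
To watch the stall as it happens, here's a rough sketch that can be run 
from the nscp control shell while dorequests.tcl is going - assuming 
"ns_server waiting" reports queued connections and "ns_server threads" 
reports the thread counts, which is my reading of the docs:
---
# Poll queue depth and thread stats once a second.  When the stall
# hits, "waiting" stays high while no conn threads remain to drain it.
while 1 {
    puts "waiting=[ns_server waiting] threads=[ns_server threads]"
    ns_sleep 1
}
---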

Limiting maxconnections helps somewhat, because it ensures that any new 
thread created as a result of a new incoming connection can process the 
entire queue before exiting.  However, since the spread is random, it's 
unlikely but still possible in certain configurations for all threads to 
finish their overtime work with connections still in the queue.
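
As a back-of-the-envelope check (a sketch only - I'm guessing the spread 
is roughly +/-20% of connsperthread, so treat the numbers as 
illustrative):
---
set connsperthread 50
set spread 0.2          ;# assumed worst-case fractional spread (a guess)
set maxconnections 40

# A thread might exit after as few as connsperthread*(1-spread)
# requests, so one respawned thread is guaranteed to drain the queue
# only if the queue can never hold more than that.
set minconns [expr {int($connsperthread * (1 - $spread))}]
if {$maxconnections <= $minconns} {
    puts "one new thread can always drain the queue"
} else {
    puts "a stall is still possible in the worst case"
}
---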

> I looked a little into the request-burst problem and got a better
> understanding. The problem actually happens when the server runs
> out of threads and more than maxconn requests arrive constantly
> within the timespan of a thread creation.

I think it's when many requests arrive within the timespan of a 
thread's lifetime, not just its creation.

> This served us well over the last
> two years with busy real-world traffic. However, under
> benchmark situations, the server can be flooded so fast that the
> queuing during the serialized thread creation causes the problem.

Since new connections respawn threads, it's nearly impossible to run 
into the problem on a consistently busy server; it takes a burst 
followed by a lull to strand requests in the queue.

> So, the behavior should be at least parameterized and tailorable.
> I have added a proposal to hg, which is just a few lines of code
> that allows parallel thread creation above a certain threshold.
> The threshold should become a parameter. If the threshold
> is set large enough, the thread serialization happens; if it is
> small, parallel thread creation is allowed. Ideas for a
> good name for the parameter are very welcome.
>
> Please test whether this helps for your test cases as well...

Thanks, I'll take a look.

-J

