Hi Lance,

Lance Hendrix wrote:
> In my attempts to investigate and work on issue #7490, I decided to do
> some load testing yesterday. The preliminary results of that testing
> can be found on the wiki at http://trac.edgewall.org/wiki/Performance.
> Everything except the "Testing Methodology" section is accurate (I
> decided to change the methodology somewhat). The current results are
> only for Trac-0.11-stable and I am also planning to test
> Trac-0.10-stable and the current trunk as I really want to see how much
> of a (real) difference there is between 0.11 and 0.10.

The difference will depend a lot on the nature of the page. The bigger the output, the more difference you'll see. Try for example rendering /browser/file10k.txt, /browser/file20k.txt, etc., with those files containing some dummy text (so as to take highlighting out of the picture) and taking 10k, 20k and so on... For 100k, the difference should be quite visible.
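To generate those dummy files, something as simple as the following untested snippet should do (the names and sizes just match the examples above); the files then need to be added to the repository behind the test environment so that /browser can render them:

    # create plain text files of roughly 10k, 20k and 100k
    line = "just some dummy text, nothing the syntax highlighter cares about\n"
    for size in (10, 20, 100):          # sizes in kilobytes
        f = open("file%dk.txt" % size, "w")
        while f.tell() < size * 1024:
            f.write(line)
        f.close()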
> One item that surprised me was that when running the "tracd" server as
> opposed to the mod_python, it appears that "tracd" is only single
> threaded. That is, on my dual core test machine, it only appeared to
> use about 50% or only 1/2 of both cores, as it did seem that Linux was
> able to bounce the workload between the processors, but never able to
> fully utilize both cores. Is this expected behavior?

Yes, that's one "feature" of Python, the GIL (Global Interpreter Lock). There are actually multiple threads, but only one at a time can run while holding the GIL; the other threads are usually either waiting on that lock or waiting for some system call to finish.

Note that you can simulate the mod_python behavior with multiple tracd processes running on different ports behind a load balancer (there must be a recipe for that on the web... oh great, there's even one on t.e.o ;-) see http://trac.edgewall.org/wiki/TracNginxRecipe).

I just noticed your reply to my "tracd --http11" suggestion on the Performance page, and I'm sure that if you try it, it should also go a long way toward improving tracd performance.
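Off the top of my head, starting one tracd per core for such a setup could look like the completely untested sketch below; the path, ports and process count are made up, and the balancer in front of them is the one described on the TracNginxRecipe page:

    import subprocess

    ENV = "/path/to/projenv"     # made-up path, use your Trac environment
    BASE_PORT = 8000
    NPROCS = 2                   # one tracd per core

    # one tracd per core on consecutive ports; each process has its own
    # interpreter and therefore its own GIL (--http11 being the option
    # suggested on the Performance page)
    procs = [subprocess.Popen(["tracd", "--http11",
                               "--port", str(BASE_PORT + i), ENV])
             for i in range(NPROCS)]
    for p in procs:
        p.wait()

Memory use of course grows with the number of processes, which is the same trade-off you measured with mod_python.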
> The actual
> throughput results also bore this out in that mod_python was able to
> achieve just over 80% more throughput (able to leverage both cores,
> which from experience doesn't mean 100%) than tracd, obviously at a
> greater memory cost (more processes/threads = greater memory consumption
> and more process swapping for the kernel scheduler).
>
> Just wanted to be sure that I hadn't done something "hinky" to tracd
> that would skew the results. Otherwise, if you get a chance, and have
> an interest, let me know what you think about these results.

What you did with tracd seems fine to me. Please also include on the page the "System Information" from the /about page (you need TRAC_ADMIN permission in order to see it).

> As indicated in my email on the user thread, I will also replicate this
> testing on a server using Windows to see if that makes a difference.

Hint: pick different hardware then, so that this doesn't turn into a Windows vs. Linux comparison (nah, I'm joking; I'm actually interested to know what performs best for Trac).

> All of this was in an attempt to replicate the issue described in ticket
> #7490 and as yet, I haven't been able to do so. As a matter of fact,
> both tracd and trac with mod_python worked flawlessly (I was really
> impressed) as I really ended up pushing them pretty hard. FYI, my final
> test for mod_python (where I finally started to get some http errors
> returned) required me to simulate 160 users with "0" think time. That
> is, the test harness was simultaneously sending requests for the three
> test pages from 160 different threads (equates to about 480 simultaneous
> requests), which did start to return a significant number of http
> errors, but no 500 errors and no errors in the trac log (I had expected
> some kind of application error, but surprisingly no)! The prior test
> with 80 simulated users (240 simultaneous requests) returned absolutely
> 0 (none, zip, nada) http or application errors; however, as you would
> expect the response times had climbed significantly, but the Trac/Apache
> were still able to service 100% of the requests, which is again
> impressive!
>
> BTW, these loads were sustained over a 10 minute period!

That's nice to hear! But OTOH, you're requesting /timeline, /roadmap and / (so usually wiki/WikiStart). As you said, all those requests are "Read Only", which is actually the nicest test case. When stress-testing, you should also try to be nasty... so do some POST requests as well, and there things generally get more interesting error-wise... For example, my completely unscientific stress-testing methodology is usually to prepare a dozen tabs in Firefox involving updates (e.g. timeline "days back" changes + Update, custom queries with various criteria + Update, etc.) and then do "Reload All Tabs", sometimes triggering the reload again while the previous requests are still being processed. If you find a way to automate such a scenario, then we could also get some interesting numbers (a rough sketch of what I have in mind follows after my signature). When using the browser I of course can't get any reliable numbers, but I do get the failures ;-)

--
Christian
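P.S. Here is roughly what I mean by automating the nasty part: a bunch of threads hammering a few update-style URLs concurrently, POSTs included. This is a completely untested sketch; the base URL, the query parameters and the thread/round counts are all made up for illustration, and a real Trac POST will also want a valid form token, which I'm ignoring here, so adapt it before trusting any numbers coming out of it:

    import threading, urllib, urllib2

    BASE = "http://localhost:8000/projenv"   # made-up URL, adjust to your setup

    # a few "update"-style requests, roughly what I do by hand in the
    # browser tabs; the parameters are only illustrative
    REQUESTS = [
        (BASE + "/timeline", urllib.urlencode({"daysback": "90"})),
        (BASE + "/query", urllib.urlencode({"status": "new"})),
        (BASE + "/timeline", None),          # a plain GET in the mix
    ]

    def hammer(rounds):
        for i in range(rounds):
            for url, data in REQUESTS:
                try:
                    # passing data makes urlopen issue a POST
                    urllib2.urlopen(url, data).read()
                except urllib2.HTTPError, e:
                    print "%s -> HTTP %s" % (url, e.code)

    threads = [threading.Thread(target=hammer, args=(10,)) for i in range(80)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()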