Re: updated worker, threadpool, and leader/follower performance comparisons
Damn...this is getting close. It looks like it's getting there. I can't help but think the final outcome might be a choice of worker OR leader/follower. I'll take a hit in CPU to have a closer average load between worker and leader/follower, with the requests/sec being what they are. Ohh..if only.. :)

On Sun, 2002-04-28 at 23:02, Brian Pane wrote:

> With a single listener port (I'll run multi-listener tests later today),
>
>   MPM          Requests     Mean resp.   CPU    CPU
>   type         per second   time (ms)    load   utilization
>   ----------------------------------------------------------
>   worker          1250        37.4       6.1    65%
>   leader          1175        40.0       5.6    61%
>   threadpool      1012        47.1       4.2    47%
>
> with two listeners,
>
>   MPM          Requests     Mean resp.   CPU    CPU
>   type         per second   time (ms)    load   utilization
>   ----------------------------------------------------------
>   worker          1071        44.3       4.1    51%
>   leader           964        49.4       3.9    46%
>   threadpool       997        47.8       3.9    46%

--
Austin Gonyou
Systems Architect, CCNA
Coremetrics, Inc.
Phone: 512-698-7250
email: [EMAIL PROTECTED]

"It is the part of a good shepherd to shear his flock, not to skin it."
  -- Latin Proverb
updated worker, threadpool, and leader/follower performance comparisons
I just ran some tests of the latest code base (including all of Aaron's and my changes to worker and threadpool) to compare the performance of the thread-based Unix MPMs.

The test config is the same one I used for my last round of tests, except that I reduced the file size from 10KB to 0KB. Some of the worker and leader/follower tests had been saturating the available network bandwidth, and using a zero-length file was the easiest way to remove this bottleneck while still putting a lot of stress on the thread synchronization code in the MPMs.

With a single listener port (I'll run multi-listener tests later today),

  MPM          Requests     Mean resp.   CPU    CPU
  type         per second   time (ms)    load   utilization
  ----------------------------------------------------------
  worker          1250        37.4       6.1    65%
  leader          1175        40.0       5.6    61%
  threadpool      1012        47.1       4.2    47%

--Brian
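[For anyone trying to reproduce a run like this: the MPM is chosen when httpd is built. The commands below are only a sketch; the thread doesn't say which load generator or parameters Brian used, so ab, the hostname, and the numbers here are assumptions, not the actual test config.]

    # select the MPM at build time (worker, leader, or threadpool)
    ./configure --with-mpm=worker
    make && make install

    # hammer a zero-length file to stress the MPM's thread
    # synchronization rather than the network (tool and parameters
    # are illustrative only)
    ab -n 100000 -c 100 http://testserver/zero.html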
Re: updated worker, threadpool, and leader/follower performance comparisons
Also, it looks like the tweaks to worker to reduce the time spent in mutex-protected code may have worked. In this test case, the mutex lock/wakeup calls aren't as prominent as they used to be.

    syscall            seconds   calls  errors
    read                 21.22    3611     902
    open                  4.51     905
    close                 8.32    1802
    brk                    .07       6
    stat                  3.96     905
    lseek                 5.15     903
    fcntl                 9.70    1815
    lwp_park              2.72    1042
    lwp_unpark           15.08     970
    poll                  5.45     900
    writev                4.14     902
    lwp_mutex_wakeup       .46      83
    lwp_mutex_lock         .70      71
    fstat64              11.69    1815
    accept               15.08     908
    shutdown              4.26     902
    getsockname           3.96     908
    getsockopt           11.17    1815
    setsockopt            6.04     907
                       -------   -----
    sys totals:         133.68   21170     902
    usr time:             3.93
    elapsed:             49.28
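[This profile looks like Solaris truss -c output; a guess at how it was collected is something like "truss -c -f -p <pid>" against a running child. The low lwp_mutex_lock/lwp_mutex_wakeup counts fit the technique of shrinking the critical section: hold the queue mutex only long enough to pop a connection, and do all the socket work outside the lock. Below is a minimal pthreads sketch of that general idea, not the actual worker code, which uses APR's fd queue; the listener thread that pushes connections and signals the condition variable is omitted.]

    /* Sketch only: pop work under the mutex, process it unlocked. */
    #include <pthread.h>
    #include <stddef.h>

    typedef struct conn conn_t;
    struct conn { conn_t *next; int fd; };

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  queue_nonempty = PTHREAD_COND_INITIALIZER;
    static conn_t *queue_head;

    extern void process_connection(conn_t *c);  /* lock-free work */

    void *worker_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            conn_t *c;

            pthread_mutex_lock(&queue_lock);
            while (queue_head == NULL)
                pthread_cond_wait(&queue_nonempty, &queue_lock);
            c = queue_head;        /* the pop is the only work     */
            queue_head = c->next;  /* done while the mutex is held */
            pthread_mutex_unlock(&queue_lock);

            process_connection(c); /* read/writev/shutdown etc.
                                      all happen outside the lock  */
        }
        return NULL;
    }

The shorter the locked region, the less often threads collide on the mutex, which is what shows up as fewer lwp_mutex_lock/lwp_mutex_wakeup calls in the trace above.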
Re: updated worker, threadpool, and leader/follower performance comparisons
With a single listener port (I'll run multi-listener tests later today),

  MPM          Requests     Mean resp.   CPU    CPU
  type         per second   time (ms)    load   utilization
  ----------------------------------------------------------
  worker          1250        37.4       6.1    65%
  leader          1175        40.0       5.6    61%
  threadpool      1012        47.1       4.2    47%

with two listeners,

  MPM          Requests     Mean resp.   CPU    CPU
  type         per second   time (ms)    load   utilization
  ----------------------------------------------------------
  worker          1071        44.3       4.1    51%
  leader           964        49.4       3.9    46%
  threadpool       997        47.8       3.9    46%
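[The single- vs. two-listener cases differ only in the number of Listen directives in httpd.conf; a hypothetical fragment follows, with port numbers made up rather than taken from the test config.]

    # single-listener case
    Listen 8080

    # two-listener case: with more than one Listen directive, the
    # acceptor has to multiplex across listening sockets instead of
    # blocking in accept() on a single one
    Listen 8080
    Listen 8081

That extra multiplexing step is one plausible reason the two-listener numbers come in below the single-listener ones.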