On Sun, Sep 16, 2001 at 08:13:25PM -0700, Justin Erenkrantz wrote:
> On Sun, Sep 16, 2001 at 07:59:19PM -0700, Aaron Bannert wrote:
> > I don't think it's a quirk of the thread library, I think it's
> > fully expected. For the sake of others, here's an excerpt from the
> > Solaris 8 pthread_setconcurrency(3THR) man page:
>
> In testlockperf, you are assuming that all of the threads have
> started and will compete for the locks. In an M:N implementation,
> this assumption is false. You end up executing in serial rather
> than in parallel. This only occurs because you never hit a
> user-scheduler entry point in testlockperf. In the case of an MPM,
> you will be hitting them left and right. =-)
>
> Therefore, you need to devise a strategy within testlockperf to
> ensure that all of the threads are ready to compete before
> continuing the test. The suggested sleep is one way - condition
> variables *may* be possible, but it isn't completely obvious to
> me how that would work. -- justin
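It's not hard to see how the condition variables would work: a "start
gate", roughly like the sketch below. This is plain pthreads, not
actual testlockperf code, and every name in it (worker, open_gate, the
gate globals) is invented:

    #include <pthread.h>

    #define NTHREADS 10

    static pthread_mutex_t gate_lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  all_ready  = PTHREAD_COND_INITIALIZER; /* main waits   */
    static pthread_cond_t  start_cond = PTHREAD_COND_INITIALIZER; /* workers wait */
    static int ready_count = 0;
    static int go = 0;

    static void *worker(void *arg)
    {
        pthread_mutex_lock(&gate_lock);
        if (++ready_count == NTHREADS)
            pthread_cond_signal(&all_ready);  /* last one in wakes main */
        while (!go)
            pthread_cond_wait(&start_cond, &gate_lock);
        pthread_mutex_unlock(&gate_lock);

        /* ...the timed lock/unlock loop would start here... */
        return NULL;
    }

    /* Main thread, after pthread_create()ing all NTHREADS workers: */
    static void open_gate(void)
    {
        pthread_mutex_lock(&gate_lock);
        while (ready_count < NTHREADS)
            pthread_cond_wait(&all_ready, &gate_lock);
        go = 1;                               /* start the clock here  */
        pthread_cond_broadcast(&start_cond);  /* release all at once   */
        pthread_mutex_unlock(&gate_lock);
    }

Each worker has to run far enough to bump ready_count before the gate
opens, and pthread_cond_wait() is itself a user-scheduler entry point,
so by the time the broadcast fires every thread really exists and has
been scheduled at least once.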
Agreed, but instead of adding a sleep we should:

a) call pthread_setconcurrency()
b) devise a more life-like test
c) not do anything, because it's working fine

testlockperf is really just trying to gauge the overhead of the mutex
routines, and I think it does a very good job of that. The secondary
purpose of testlockperf is to compare the old locking API to the new
one.

> P.S. If you are running a site where you get 50,000 hits a minute,
> you shouldn't have MRPC at 10,000. I'd be curious to see what
> cnet runs with.

You're not going to get 50,000 hits a minute on any box that only has
~32,000 ephemeral ports and the Maximum Segment Lifetime set to
anything normal (like 2 minutes). My default Sol8 install can only
take 32k (non-keepalive) hits in 4 minutes before all the sockets are
sitting in TIME_WAIT.

-aaron
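P.S. For (a), the call itself is trivial; the only question is what
level to pass. Something like this before the pthread_create() loop
(again just a sketch, and NTHREADS is whatever the test spawns):

    /* Hint to an M:N thread library that we want at least NTHREADS
     * kernel execution vehicles (LWPs on Solaris), so the workers can
     * actually run in parallel. A 1:1 implementation ignores this. */
    pthread_setconcurrency(NTHREADS);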
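P.P.S. Spelling out the arithmetic on that last point: every
non-keepalive hit ties up one ephemeral port for 2*MSL of TIME_WAIT.
With MSL at the usual 2 minutes, that's 4 minutes per port, so ~32,000
ports give a sustained ceiling of about 32,000 / 4 = 8,000 new
connections a minute, nowhere near 50,000.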
