Rainer Toebbicke wrote:
We've been playing around with an 8-core file server serving a massive number of clients (thousands) in a stress test. The result was pathetic despite cranking the server up to 512 threads and never vbusying any requests:

the simple scanning of the incoming call queue to enforce the per-service thread quota monopolized about 12% of the CPU (we typically ran with 3000 calls waiting for a thread). On an 8-core machine that means essentially one complete CPU was spent in that loop, which runs under a lock.

As a quick hack we removed the per-service thread-quota handling, short-circuiting the queue scan, and got 30% more traffic out of the box than before. The only expense is that xstat is now about as slow as the rest.

Now, one fix would be to postulate that all calls are served first-come-first-served regardless of rx service. But this of course means a change in semantics.

A more sophisticated solution would be to implement a separate incoming call queue per service, and start threads directly on the service they are meant for. This preserves the original semantics.

My question is: does anybody care for the "sophisticated" solution? As far as I can see, "servers" normally use one rx service for stats and another for normal serving, and similarly for voting and serving in the ubik case. The only slightly more elaborate one is the kaserver. All of those probably run fine with a single thread pool and first-come-first-served.

Any opinions?

I haven't looked at this closely at all. However, if the only purpose of the queue scan is to determine the number of queued calls per service, then per-service counters should be maintained when inserting and removing items from the queue. This would avoid the scan.

I would also implement separate queues for each service.

In your performance analysis, do you have any numbers on how long worker threads are blocked while processing requests?

Jeffrey Altman

