:of other connections. My solution was the same as Matt's :-)
:(I'm not happy about the extra context switching that it requires but
:I was more interested in working code than performance; I haven't
:benchmarked it.)
:
:Tony.

    Yah, neither was I, but I figured that the overhead was (A) deterministic,
    and (B) absorbed under heavy loads because the subprocess in question was
    probably already in a run state under those conditions.  So the method
    scales to load quite well and gives us loads of other features.  For 
    example, I could do real-time reverse DNS lookups with a single cache
    (in the main acceptor process) and a pool of DNS lookup subprocesses
    that I communicated with over pipes.  Thus the main load-bearing threads
    had very small core loops, which was good for the L1/L2 CPU caches.
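
    (A rough sketch of the shape of that setup, not Matt's actual code and
    with made-up names: a parent acceptor forks a few lookup children and
    keeps a pipe pair to each, so the blocking getnameinfo() call only ever
    runs inside a subprocess, never in the main loop.)

/* Sketch only: pool of reverse-DNS subprocesses fed over pipes. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NWORKERS 4              /* size of the lookup pool (made up) */

struct worker {
    pid_t pid;
    int   to_child;             /* parent writes dotted-quad addresses here */
    int   from_child;           /* parent reads hostnames back from here    */
};

/* Child: block in getnameinfo() so the parent never has to. */
static void
worker_loop(int in, int out)
{
    char addr[64], host[NI_MAXHOST];
    FILE *rd = fdopen(in, "r");
    FILE *wr = fdopen(out, "w");

    while (fgets(addr, sizeof(addr), rd) != NULL) {
        struct sockaddr_in sin;

        addr[strcspn(addr, "\n")] = '\0';
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        inet_pton(AF_INET, addr, &sin.sin_addr);

        if (getnameinfo((struct sockaddr *)&sin, sizeof(sin),
            host, sizeof(host), NULL, 0, 0) != 0)
            snprintf(host, sizeof(host), "%s", addr);   /* fall back to IP */
        fprintf(wr, "%s\n", host);
        fflush(wr);
    }
    _exit(0);
}

/* Parent: one request pipe and one reply pipe per child. */
static int
spawn_worker(struct worker *w)
{
    int req[2], rep[2];

    if (pipe(req) == -1 || pipe(rep) == -1)
        return (-1);
    switch ((w->pid = fork())) {
    case -1:
        return (-1);
    case 0:                             /* child keeps its ends only */
        close(req[1]);
        close(rep[0]);
        worker_loop(req[0], rep[1]);
        /* NOTREACHED */
    default:                            /* parent keeps the other ends */
        close(req[0]);
        close(rep[1]);
        w->to_child = req[1];
        w->from_child = rep[0];
        return (0);
    }
}

int
main(void)
{
    struct worker pool[NWORKERS];
    char host[NI_MAXHOST + 1];
    FILE *rd;

    for (int i = 0; i < NWORKERS; i++)
        if (spawn_worker(&pool[i]) == -1)
            err(1, "spawn_worker");

    /* A real acceptor would consult its cache first and hand misses to
       the pool round-robin; here we just push one lookup at worker 0. */
    dprintf(pool[0].to_child, "198.51.100.1\n");
    rd = fdopen(pool[0].from_child, "r");
    if (fgets(host, sizeof(host), rd) != NULL)
        printf("resolved to %s", host);
    return (0);
}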

    It's kinda funny how something you might expect to generate more overhead
    can actually generate less.

                                                -Matt



