In <[EMAIL PROTECTED]>, Daniel Taylor <[EMAIL PROTECTED]> typed:
> data/second), a lot of memcpy()s, and doesn't scale
> very well. Also, adding a packet to N queues is
> expensive because it needs to acquire and release
> N mutex locks (one for each client queue.)
You can't escape that with this architecture. In particular:
> Each
> enqueue bumps the refcount, each dequeue decreases it;
> when the refcount drops to 0, the packet is free()'d
> (by whomever happened to dequeue it last).
These operations have to be locked, so per packet you end up acquiring
and releasing a single mutex N+1 times.
The FSM model suggested earlier works well, though I tend to call it
the async I/O model, because all your I/O is done asynchronously. You
track the state of each socket, and events on the socket trigger state
transitions for that socket. Programming a single execution path is a
bit more complicated, because the state has to be tracked explicitly
instead of being implicit in the program counter, but *all* the
concurrency issues go away, so overall it's a win.
<mike
--
Mike Meyer <[EMAIL PROTECTED]> http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
_______________________________________________
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers