So, I have two pools of shared buffers: pool A, which is a set of buffers for uncompressed data, and pool B, for compressed data. I have three sets of threads.
Thread set 1 pulls an empty buffer from pool A and fills it with uncompressed data. Thread set 2 is handed a recently filled buffer from pool A; it pulls a buffer from pool B, compresses the data from the A buffer into the B buffer, and then returns the A buffer, cleared, to pool A. Thread set 3 is a single thread that is continually handed compressed buffers by thread set 2 and writes them out. Once a buffer has been output, it returns that buffer, cleared, to pool B.

Can anybody describe a scheme that will allow thread sets 1 and 2 to scale?

Also, suppose that for pools A and B I'm using shared queues that are just C++ STL lists. When I pop from the front, I take a lock so that removal is deterministic. When I enqueue at the back, I take a separate lock so that the list's internals are respected (I don't want two threads receiving iterators to the same front node; that would probably corrupt the container, lose data, or both). Is this the appropriate way to go about it?

Thread sets 1 and 2 will likely each have more than one thread, but there's no guarantee that they will have equal numbers of threads. I was reading the ZeroMQ guide, specifically the part about multithreading and message passing, and I was wondering what approach message passing should take when data is inherently shared between threads.
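For what it's worth, two independent locks on a single std::list are risky: when the list is empty or holds one element, a front pop and a back push can touch the same internal nodes, so the two critical sections are not actually independent. The classic two-lock queue (Michael & Scott) gets around this with a permanent dummy node on a hand-rolled linked list, not an STL container. A simpler approach that is definitely safe is one mutex plus a condition variable around the whole list. Below is a minimal sketch; the class name and interface are my own invention, not anything from the STL or ZeroMQ:

```cpp
#include <condition_variable>
#include <list>
#include <mutex>

// Hypothetical single-mutex blocking queue. One lock guards both ends,
// so the empty/one-element corner cases of std::list are never a problem.
template <typename T>
class BlockingQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push_back(std::move(item));
        }
        cv_.notify_one();  // wake one waiting consumer
    }

    // Blocks until an item is available, then removes and returns it.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop_front();
        return item;
    }

private:
    std::list<T> items_;
    std::mutex mutex_;
    std::condition_variable cv_;
};
```

A single uncontended mutex is cheap; the two-lock refinement only pays off once a queue is measurably hot, and at that point a purpose-built MPMC queue is usually the better investment than patching std::list.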
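On the scaling question: the ZeroMQ-style answer is that the data is never really "shared" — exactly one thread owns a buffer at any moment, and ownership moves by passing a pointer through a queue. With that invariant, thread sets 1 and 2 scale by simply adding threads, because no buffer needs any locking of its own. Here is a self-contained sketch of the three-stage pipeline under that assumption; all names (`Channel`, `run_pipeline`) are hypothetical and the "fill" and "compress" steps are simulated:

```cpp
#include <atomic>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

struct Buffer { std::vector<char> data; };

// Minimal blocking channel; stands in for any thread-safe queue.
template <typename T>
class Channel {
public:
    void put(T v) {
        { std::lock_guard<std::mutex> l(m_); q_.push_back(std::move(v)); }
        cv_.notify_one();
    }
    T take() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop_front();
        return v;
    }
private:
    std::deque<T> q_; std::mutex m_; std::condition_variable cv_;
};

// poolA: empty uncompressed buffers; filled: set 1 -> set 2;
// poolB: empty compressed buffers;  done: set 2 -> set 3.
int run_pipeline(int producers, int compressors, int items) {
    Channel<Buffer*> poolA, filled, poolB, done;
    std::vector<Buffer> bufsA(4), bufsB(4);
    for (auto& b : bufsA) poolA.put(&b);
    for (auto& b : bufsB) poolB.put(&b);

    std::atomic<int> next{0}, remaining{items};
    std::vector<std::thread> set1, set2;

    // Thread set 1: claim a work item, fill an empty A buffer with it.
    for (int t = 0; t < producers; ++t)
        set1.emplace_back([&] {
            while (next.fetch_add(1) < items) {
                Buffer* a = poolA.take();
                a->data.assign(64, 'x');      // pretend this is real input
                filled.put(a);
            }
        });

    // Thread set 2: compress an A buffer into a B buffer, recycle A.
    for (int t = 0; t < compressors; ++t)
        set2.emplace_back([&] {
            while (remaining.fetch_sub(1) > 0) {
                Buffer* a = filled.take();
                Buffer* b = poolB.take();
                b->data.assign(a->data.size() / 2, 'z');  // pretend compression
                a->data.clear();
                poolA.put(a);                 // A buffer returns, cleared
                done.put(b);
            }
        });

    // Thread set 3: the single output thread (run inline here).
    int outputs = 0;
    for (int i = 0; i < items; ++i) {
        Buffer* b = done.take();
        ++outputs;                            // pretend we wrote b->data out
        b->data.clear();
        poolB.put(b);                         // B buffer returns, cleared
    }
    for (auto& t : set1) t.join();
    for (auto& t : set2) t.join();
    return outputs;
}
```

In actual ZeroMQ you would replace each `Channel<Buffer*>` with an `inproc://` PUSH/PULL socket pair and send the `Buffer*` as the message body; the sockets then double as the fair-queuing mechanism across however many threads sets 1 and 2 happen to have.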
_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
