On May 6, 2007, at 4:13, reffael caspi wrote:

I don’t see where the client sends parallel requests to all the buckets.

It's up to the client to decide how it wants its operations processed. I can't speak to the design of the clients you were looking at, but this is a basic overview of mine:

        There is a single IO thread handling all IO for all connections.

        Each connection has an input queue, a read queue, and a write queue.
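To make that concrete, here is a rough Java sketch of the per-connection bookkeeping. The class and field names are made up purely for illustration; they are not the actual types in my client.

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// One instance per server connection (hypothetical names).
final class MemcachedNode {
    final SocketChannel channel;   // the single TCP connection to this server
    // Callers may enqueue from any thread; only the IO thread drains it.
    final Queue<Operation> inputQueue = new ConcurrentLinkedQueue<>();
    // These two are touched only by the single IO thread, so plain queues suffice.
    final Queue<Operation> writeQueue = new ArrayDeque<>();
    final Queue<Operation> readQueue = new ArrayDeque<>();

    MemcachedNode(SocketChannel channel) {
        this.channel = channel;
    }
}

// Minimal operation placeholder used by the other sketches.
abstract class Operation {
    abstract ByteBuffer buffer();                // wire bytes this op wants to send
    abstract void readFromBuffer(ByteBuffer b);  // parse response bytes for this op
}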

On the caller side, an operation is constructed, a server is selected via the current hash function, that operation is added to the appropriate server's input queue, and the IO selector is interrupted. Note that in the case of a multi-key get, multiple operations may be created (one for each destination server).
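In the same sketchy style, the caller-side path might look like the following. It reuses the hypothetical MemcachedNode and Operation types above, and the simple modulo hash stands in for whatever hash function is actually configured.

import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class CallerSide {
    private final List<MemcachedNode> nodes;
    private final Selector selector;

    CallerSide(List<MemcachedNode> nodes, Selector selector) {
        this.nodes = nodes;
        this.selector = selector;
    }

    // Single-key path: hash to a node, enqueue, and interrupt the selector.
    void enqueue(String key, Operation op) {
        nodeFor(key).inputQueue.add(op);
        selector.wakeup();   // wakes the IO thread so the op gets picked up
    }

    // Multi-key get: one get operation is created per destination server.
    void enqueueMultiGet(Collection<String> keys, OperationFactory factory) {
        Map<MemcachedNode, List<String>> byNode = new HashMap<>();
        for (String key : keys) {
            byNode.computeIfAbsent(nodeFor(key), n -> new ArrayList<>()).add(key);
        }
        byNode.forEach((n, ks) -> n.inputQueue.add(factory.get(ks)));
        selector.wakeup();
    }

    private MemcachedNode nodeFor(String key) {
        return nodes.get(Math.floorMod(key.hashCode(), nodes.size()));
    }
}

// Hypothetical factory for building get operations over a set of keys.
interface OperationFactory {
    Operation get(List<String> keys);
}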

In the IO thread, IO is handled for whatever reads and writes are available (as well as connection management), and then it loops. Before going back into the selector, each connection's input queue is processed.
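The loop itself looks roughly like this, simplified: reconnect/error handling and response parsing are elided, and the method names are mine, not the client's.

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.List;

final class IoLoop implements Runnable {
    private final Selector selector;
    private final List<MemcachedNode> nodes;
    private volatile boolean running = true;

    IoLoop(Selector selector, List<MemcachedNode> nodes) {
        this.selector = selector;
        this.nodes = nodes;
    }

    public void run() {
        while (running) {
            try {
                // Drain every connection's input queue before blocking in select().
                for (MemcachedNode node : nodes) {
                    processInputQueue(node);       // see the next sketch
                }
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // Each channel is registered with its node as the attachment.
                    MemcachedNode node = (MemcachedNode) key.attachment();
                    if (key.isReadable()) { handleRead(node); }
                    if (key.isWritable()) { handleWrite(node); }
                }
            } catch (IOException e) {
                // Real code would do connection management/reconnects here.
            }
        }
    }

    private void processInputQueue(MemcachedNode node) { /* next sketch */ }
    private void handleRead(MemcachedNode node) { /* feed bytes to readQueue ops */ }
    private void handleWrite(MemcachedNode node) { /* see the write sketch below */ }
}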

Input queue processing involves transforming operations into buffers (which are placed on the write queue), as well as optimizing sequential get operations into a single, deduplicated get op on the wire.
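For instance, input queue processing could be sketched like this. Two simplifications: whole operations (rather than their buffers) are queued here and drained later by the write path, and the real client also has to route the merged get's results back to each original caller's callback, which I'm glossing over.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

final class InputQueueProcessor {
    private final OperationFactory factory;   // hypothetical, from the caller sketch

    InputQueueProcessor(OperationFactory factory) {
        this.factory = factory;
    }

    void process(MemcachedNode node) {
        Operation op;
        while ((op = node.inputQueue.poll()) != null) {
            if (op instanceof GetOperation) {
                // Collapse a run of sequential gets into one deduplicated key set.
                Set<String> keys = new LinkedHashSet<>(((GetOperation) op).keys());
                while (node.inputQueue.peek() instanceof GetOperation) {
                    keys.addAll(((GetOperation) node.inputQueue.poll()).keys());
                }
                node.writeQueue.add(factory.get(new ArrayList<>(keys)));
            } else {
                // Everything else goes onto the write queue as-is; its buffer()
                // is drained by the write path.
                node.writeQueue.add(op);
            }
        }
    }
}

// Marker type for get operations in this sketch.
abstract class GetOperation extends Operation {
    abstract List<String> keys();
}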

Once we have write ops, it's just a matter of the selector informing us that a given channel is write-ready. When writing, a network buffer is filled by concatenating sequential write ops so as to send them as efficiently as possible. Write ops whose buffers are drained get a state change and are then added to the read op queue to wait for their responses.
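The write side might look roughly like this. I'm assuming buffer() keeps its position across calls, and I'm not showing the bookkeeping for partial socket writes.

import java.io.IOException;
import java.nio.ByteBuffer;

final class WriteHandler {
    // One reusable network buffer per connection would be more realistic;
    // a single one keeps the sketch short.
    private final ByteBuffer netBuf = ByteBuffer.allocate(16 * 1024);

    void handleWrite(MemcachedNode node) throws IOException {
        netBuf.clear();
        Operation op;
        // Concatenate bytes from sequential write ops until the buffer is full.
        while ((op = node.writeQueue.peek()) != null && netBuf.hasRemaining()) {
            ByteBuffer opBuf = op.buffer();
            while (opBuf.hasRemaining() && netBuf.hasRemaining()) {
                netBuf.put(opBuf.get());
            }
            if (!opBuf.hasRemaining()) {
                // Fully drained: state change, then wait for its response.
                node.readQueue.add(node.writeQueue.poll());
            }
        }
        netBuf.flip();
        while (netBuf.hasRemaining() && node.channel.write(netBuf) > 0) {
            // Keep writing while the socket accepts more bytes.
        }
        // Anything still left in netBuf would have to be carried over to the
        // next write-ready event; that bookkeeping is elided here.
    }
}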


Sorry for the excessive background, but I think it helps in understanding how the answer to your question applies to my client. For example:

If I issue two get requests from two threads, one for key ``a'' and one for key ``b'', and those keys respectively hash to servers 11 and 25, the two operations will get queued concurrently, both writes will happen around the same time (although sequentially), and the selector will be waiting for results from both.
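As a toy demonstration using the hypothetical sketches above (no real sockets, everything in one default package), this only shows that the two keys land on different nodes' input queues after concurrent enqueues:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class TwoGetsExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        List<MemcachedNode> nodes = new ArrayList<>();
        for (int i = 0; i < 30; i++) {
            nodes.add(new MemcachedNode(null));   // channels omitted for the demo
        }
        CallerSide client = new CallerSide(nodes, Selector.open());

        Thread t1 = new Thread(() -> client.enqueue("a", new NoopOperation()));
        Thread t2 = new Thread(() -> client.enqueue("b", new NoopOperation()));
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        for (int i = 0; i < nodes.size(); i++) {
            if (!nodes.get(i).inputQueue.isEmpty()) {
                System.out.println("node " + i + " has a queued get");
            }
        }
    }
}

// Trivial stand-in operation for the demo.
class NoopOperation extends Operation {
    ByteBuffer buffer() { return ByteBuffer.allocate(0); }
    void readFromBuffer(ByteBuffer b) { }
}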

        If you want more detail, you can get my client here:

        <http://bleu.west.spy.net/~dustin/projects/memcached/>

--
Dustin Sallings

