At least in my experience at Facebook, 1 request != 1 packet. That is, if
you send several/many requests to the same memcached box quickly, they will
tend to go out in the same packet or group of packets, so you still get the
benefits of fewer packets (and in fact, we take advantage of this because
it is very important at very high request rates -- e.g., over 1M gets per
second). The same thing happens on reply -- the results tend to come back
in just one packet (or more, if the replies are larger than a packet). At
Facebook, our main way of talking to memcached (mcrouter) doesn't even
support multi-gets on the client side, and it *doesn't matter* because the
batching happens anyway.
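
If you want to see that effect for yourself, here's a minimal sketch (nothing
to do with mcrouter, just plain Python, and the host/port and key names are
assumptions) that pipelines several independent ASCII gets in one write. Since
the requests are queued back-to-back, the kernel will typically coalesce them
into one packet or a small number of packets:

    # Sketch only: assumes a memcached instance on 127.0.0.1:11211.
    import socket

    keys = ["key1", "key2", "key3"]   # hypothetical keys

    with socket.create_connection(("127.0.0.1", 11211)) as sock:
        # One write carrying several independent single-key get requests.
        payload = b"".join(b"get " + k.encode() + b"\r\n" for k in keys)
        sock.sendall(payload)

        # Each get terminates its reply with "END\r\n" (hit or miss),
        # so read until we've seen one terminator per request.
        buf = b""
        while buf.count(b"END\r\n") < len(keys):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
        print(buf.decode(errors="replace"))

Watching the connection with tcpdump while this runs should show the requests
leaving together, as long as nothing delays the writes.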

I don't have any experience with the memcached-defined binary protocol, but
I think there's probably something similar going on here. You can verify by
using a tool like tcpdump or ngrep to see what goes into each packet when
you do a series of gets to the same box over the binary protocol. My bet is
that you'll see them going in the same packet (as long as there aren't any
delays in sending them out from your client application). That being said,
I'd love to see what you learn if you do this experiment.
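
For reference, here's the rough shape of that experiment as I'd set it up
(again only a sketch; the host/port and keys are assumptions): build the
24-byte binary request headers by hand, send getQ for every key but the last
and a plain get for the final one, all in a single write, and watch the wire
with something like "tcpdump -i lo -X port 11211":

    # Sketch only: hand-rolled binary-protocol requests (getQ + final get),
    # written all at once so you can see how they land in packets.
    import socket
    import struct

    HEADER = struct.Struct(">BBHBBHIIQ")   # 24-byte binary request header
    MAGIC_REQUEST = 0x80
    OP_GET, OP_GETQ = 0x00, 0x09

    def get_request(key, quiet):
        opcode = OP_GETQ if quiet else OP_GET
        return HEADER.pack(MAGIC_REQUEST, opcode, len(key),
                           0, 0, 0,        # extras length, data type, vbucket
                           len(key),       # total body length (key only)
                           0, 0) + key     # opaque, CAS

    keys = [b"key1", b"key2", b"key3"]     # hypothetical keys
    payload = b"".join(get_request(k, quiet=(i < len(keys) - 1))
                       for i, k in enumerate(keys))

    with socket.create_connection(("127.0.0.1", 11211)) as sock:
        sock.sendall(payload)              # every request in a single write
        # The quiet gets stay silent on a miss, but the final (non-quiet)
        # get always answers, so one read is enough for a quick look.
        print(sock.recv(65536))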

Cheers,

~Ryan


On Wed, May 7, 2014 at 1:24 AM, Byung-chul Hong <[email protected]> wrote:

> Hello,
>
> I'm currently evaluating the performance of the memcached server using
> several client workloads, and I have a question about the multi-get
> implementation in the binary protocol.
> As far as I know, in the ASCII protocol we can send multiple keys in a
> single request packet to implement a multi-get.
>
> But in the binary protocol, it seems that we have to send multiple request
> packets (one request packet per key) to implement a multi-get.
> Even if we send multiple getQ requests and then a get for the last key, we
> only save response packets for cache misses.
> If I understand correctly, a multi-get in the binary protocol cannot reduce
> the number of request packets, and it also cannot reduce the number of
> response packets if the hit ratio is very high (e.g., a 99% get hit rate).
>
> If the performance bottleneck is on the network side rather than on the
> CPU, I think reducing the number of packets is still very important, but I
> don't understand why the binary protocol doesn't address this.
> Am I missing something?
>
> Thanks in advance,
> Byungchul.
>
