The first is correct -- mcrouters won't send out multi-gets. Specifically,
mcrouter will accept multi-gets on the server side. That is, it will
correctly parse a command like "get key1 key2 key3\r\n", but when it
forwards the requests, it sends them out as "get key1\r\nget key2\r\nget
key3\r\n", even if they all go to the same memcached server. We considered
changing this a few times, but found that it increased complexity
significantly and really didn't matter for the way we used memcache at
Facebook.
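
To make that concrete, the fan-out looks roughly like this (a simplified
sketch in Python, not mcrouter's actual code):

    def split_multiget(command):
        # "get key1 key2 key3\r\n" -> ["get key1\r\n", "get key2\r\n", ...]
        verb, *keys = command.strip().split()
        return ["%s %s\r\n" % (verb, key) for key in keys]

    print(split_multiget("get key1 key2 key3\r\n"))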

On Mon, Jan 5, 2015 at 1:30 PM, Yongming Shen <[email protected]> wrote:

> Hi Ryan, by "mcrouter doesn't even support multi-gets on the client side",
> do you mean mcrouters won't send multi-gets to memcached servers, or
> frontend servers won't send multi-gets to mcrouters, or both?
>
>
> On Wednesday, May 7, 2014 5:10:15 PM UTC-4, Ryan McElroy wrote:
>>
>> At least in my experience at Facebook, 1 request != 1 packet. That is, if
>> you send several/many requests to the same memcached box quickly, they will
>> tend to go out in the same packet or group of packets, so you still get the
>> benefits of fewer packets (and in fact, we take advantage of this because
>> it is very important at very high request rates -- e.g., over 1M gets per
>> second). The same thing happens on reply -- the results tend to come back
>> in just one packet (or more, if the replies are larger than a packet). At
>> Facebook, our main way of talking to memcached (mcrouter) doesn't even
>> support multi-gets on the client side, and it *doesn't matter* because the
>> batching happens anyway.
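>>
>> A minimal way to see this for yourself (a sketch in Python, assuming a
>> memcached instance listening locally on the default port 11211):
>>
>>     import socket
>>
>>     s = socket.create_connection(("127.0.0.1", 11211))
>>     # Three separate ASCII "get" requests, pipelined on one connection.
>>     reqs = b"".join(("get %s\r\n" % k).encode()
>>                     for k in ("key1", "key2", "key3"))
>>     s.sendall(reqs)       # typically goes out as a single TCP segment
>>     print(s.recv(65536))  # replies usually come back batched in one packet
>>                           # as well (memcached ends each with END\r\n)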
>>
>> I don't have any experience with the memcached-defined binary protocol,
>> but I think there's probably something similar going on here. You can
>> verify by using a tool like tcpdump or ngrep to see what goes into each
>> packet when you do a series of gets to the same box over the binary
>> protocol. My bet is that you'll see them going out in the same packet (as
>> long
>> as there aren't any delays in sending them out from your client
>> application). That being said, I'd love to see what you learn if you do
>> this experiment.
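>>
>> For example, on a local test setup something like this will show each
>> packet's payload (assuming memcached on the default port 11211):
>>
>>     sudo tcpdump -i lo -nn -X 'port 11211'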
>>
>> Cheers,
>>
>> ~Ryan
>>
>>
>> On Wed, May 7, 2014 at 1:24 AM, Byung-chul Hong <[email protected]>
>> wrote:
>>
>>> Hello,
>>>
>>> I'm currently evaluating the performance of a memcached server using
>>> several client workloads.
>>> I have a question about the multi-get implementation in the binary protocol.
>>> As far as I know, in the ASCII protocol we can send multiple keys in a
>>> single request packet to implement a multi-get.
>>>
>>> But in the binary protocol, it seems that we have to send multiple request
>>> packets (one request packet per key) to implement a multi-get.
>>> Even if we send multiple getQ requests followed by a plain get for the last
>>> key, we only save response packets in the cache-miss case.
>>> If I understand correctly, a multi-get in the binary protocol cannot reduce
>>> the number of request packets, and it also cannot reduce the number of
>>> response packets if the hit ratio is very high (e.g., a 99% get hit rate).
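>>>
>>> To illustrate the pattern I mean (a rough Python sketch of the binary
>>> protocol framing, assuming a local memcached on port 11211; 0x09 and 0x00
>>> are the getQ and get opcodes):
>>>
>>>     import socket, struct
>>>
>>>     GETQ, GET = 0x09, 0x00
>>>
>>>     def bin_get(key, opcode):
>>>         # 24-byte request header: magic, opcode, key length, extras
>>>         # length, data type, vbucket, total body length, opaque, cas --
>>>         # followed by the key itself.
>>>         return struct.pack("!BBHBBHIIQ", 0x80, opcode, len(key), 0, 0, 0,
>>>                            len(key), 0, 0) + key
>>>
>>>     keys = [b"key1", b"key2", b"key3"]
>>>     # Quiet gets for all but the last key, then a plain get to flush replies.
>>>     buf = b"".join(bin_get(k, GETQ) for k in keys[:-1]) + bin_get(keys[-1], GET)
>>>
>>>     s = socket.create_connection(("127.0.0.1", 11211))
>>>     s.sendall(buf)   # one write; in practice still one packet on the wire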
>>>
>>> If the performance bottleneck is on the network side rather than the CPU,
>>> I think reducing the number of packets is still very important, but I
>>> don't understand why the binary protocol doesn't address this.
>>> Am I missing something?
>>>
>>> Thanks in advance,
>>> Byungchul.
>>>
