Hi Dormando,

So, I've tested memcached by sending raw byte blocks directly to it, in the 
order I expected the interaction to happen, and indeed it works as you said 
it would: 
https://gist.github.com/diogobaeder/4982425
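In case it helps anyone following along, here's a minimal sketch of how those byte blocks can be built: a batch of getkq requests with a trailing no-op, packed according to the 24-byte binary protocol header. This is my own illustration (the function names are mine, not from the gist), just standard-library Python:

```python
import struct

# 24-byte binary protocol header: magic, opcode, key length, extras length,
# data type, vbucket id, total body length, opaque, CAS
HEADER = ">BBHBBHIIQ"
REQ_MAGIC = 0x80
OP_GETKQ = 0x0D   # quiet get-with-key
OP_NOOP = 0x0A

def getkq(key: bytes, opaque: int) -> bytes:
    # getkq carries no extras; the request body is just the key itself
    return struct.pack(HEADER, REQ_MAGIC, OP_GETKQ, len(key), 0, 0, 0,
                       len(key), opaque, 0) + key

def multiget_batch(keys) -> bytes:
    # One getkq per key, plus the terminating no-op, all in one buffer,
    # so the whole batch can go out in a single write()
    batch = b"".join(getkq(k, i) for i, k in enumerate(keys))
    return batch + struct.pack(HEADER, REQ_MAGIC, OP_NOOP, 0, 0, 0, 0, 0, 0, 0)

batch = multiget_batch([b"foo", b"foo2"])
# send with e.g. sock.sendall(batch), then read until the no-op response
```

Because the no-op is tacked onto the same buffer, the client never needs a second write, which is exactly the point Dormando made below.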

I have no clue why libmemcached does that switch in the middle, but I 
understood what you said about not expecting things to happen in an exact 
order - the server can start responding at any moment after it receives the 
first request block. So, I guess what I need to do now is redesign my proxy 
to allow for this flexible ordering: instead of doing the whole multi-get 
interaction in one shot, distribute it across events.
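As a sketch of what that event-driven side might look like - again my own illustration, assuming the response header layout from the BinaryProtocolRevamped page - the reader just consumes whatever complete responses have arrived so far, and treats the no-op response as the end-of-batch marker:

```python
import struct

HEADER = ">BBHBBHIIQ"  # same 24-byte header layout as the requests
RES_MAGIC = 0x81
OP_NOOP = 0x0A

def parse_responses(buf: bytes):
    """Consume complete responses from buf; the no-op response marks
    end-of-batch. Returns (hits, done), where hits is a list of
    (key, value) pairs and done says whether the no-op was seen."""
    hits = []
    pos = 0
    while pos + 24 <= len(buf):
        (magic, opcode, keylen, extlen, _dtype, _status,
         bodylen, _opaque, _cas) = struct.unpack_from(HEADER, buf, pos)
        assert magic == RES_MAGIC
        if pos + 24 + bodylen > len(buf):
            break  # partial response; wait for more bytes from the socket
        if opcode == OP_NOOP:
            return hits, True  # end of batch: no more responses pending
        body = buf[pos + 24 : pos + 24 + bodylen]
        # getkq response body = extras (flags), then key, then value
        hits.append((body[extlen : extlen + keylen], body[extlen + keylen :]))
        pos += 24 + bodylen
    return hits, False  # batch not finished yet; keep the event loop going
```

Since getkq responses are quiet, missing keys simply produce nothing, so the no-op is the only reliable signal that the batch is over - the loop just keeps feeding bytes in until done comes back True.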

It might also be nice to update that protocol documentation with the things 
you've explained here; it would light the way for anyone who ventures into 
this part of building a memcached client. What do you think? For example, 
the documentation says nothing about the no-op signaling end-of-batch.

Thanks for the help, man! :-)

Cheers,

Diogo



On Monday, February 18, 2013 9:45:33 PM UTC-3, Dormando wrote:
>
> > However, after trying to follow this behaviour in a proxy I'm building, 
> > this order of interactions is not being respected. So, what I did 
> > afterwards, to confirm that something strange was going on, was to fire 
> > up Wireshark and listen for memcached requests and responses. Here's a 
> > sample of the request and response blocks sent between the client and 
> > the server: 
> > http://pastebin.ubuntu.com/1679323/ 
> > This was tested with pylibmc, in binary mode. After setting "foo" to 
> > "bar", and "foo2" to "bar2", I tried to multi-get "foo" and "foo2". I 
> > also tested with more keys after this sample, and this is the behaviour 
> > I'm getting: 
> >  1. The client sends a getkq for each desired key, in a batch; 
> >  2. The server sends the getkq response for the first key; 
> >  3. The client sends (and the server reads) the no-op request; 
> >  4. The server sends the rest of the keys as getkq responses, in a 
> >     batch; 
> >  5. The server sends the no-op response. 
> > This is really weird to me, since the server responds with the first 
> > key's value without even having received the no-op request. 
>
> So this is a bit weird in the protocol documentation, but basically: the 
> server's free to start sending responses as soon as it gets requests. The 
> reason the no-op packet is stacked at the end (a non-quiet get should work 
> as well) is that once you see the response to that no-op, you can be sure 
> there are no other responses waiting. Since a response to getq is 
> "optional", you don't need to look for a 'miss' response. 
>
> So a client *can*: Send all key requests in one batch, along with the 
> no-op packet in the same write. 
>
> For some reason libmemcached *does*: batch all of the requests, then do 
> the no-op in a second write? (this isn't bad, nothing's waiting on it, it 
> just doesn't tack it on in the same write). It's a waste of a packet on 
> the wire. 
>
> Then it can read back whatever. 
>
> It might look mixed up if you're doing this over localhost and there's no 
> lag. Try it over the internet or use a qdisc to add an artificial delay. 
> It should line up more closely with what you expect. 
>
> > So, I have two questions: 
> >  1. Is my understanding of how the protocol works, for multi-get, 
> >     correct? The documentation I found for it 
> >     (http://code.google.com/p/memcached/wiki/BinaryProtocolRevamped) 
> >     doesn't seem very up-to-date and doesn't reflect that, but I've 
> >     read somewhere else (and experimented with) that no-ops are the 
> >     blocks that signal an end-of-batch; 
> >  2. If my understanding is right, then is this a problem in the server 
> >     or in the client? I'm guessing it's in the server, since it starts 
> >     responding without even getting the no-op request. I can provide 
> >     more complete details of this interaction data (with timestamps 
> >     included), if you need me to. 
> > Thanks for the help, 
>
> Have you verified that it's an actual problem, or is it just that the 
> server responded in a different order than you expected? I don't think 
> anything's blocking, by your description, but it might look that way 
> given how split up things are. 
>
> With the ASCII protocol the server can't start responding until it's read 
> the newline at the end. I'd like it better if libmemcached packed that 
> no-op into the same write, and if the server cork'ed writes so it would 
> at least only send completed packets unless nothing else is in the queue 
> to process. However, like most holidays I am sad and pony-less, unless I 
> go get that pony myself. 
>
> -Dormando 
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.

