Re: Strange behaviour on get-multi over binary protocol

2013-02-19 Thread Brian Aker
Hi,

On Feb 18, 2013, at 7:11 PM, dormando dorma...@rydia.net wrote:

 1) libmemcached requests 2 keys
 2) memcached responds with 1 key
 3) libmemcached sends no-op packet
 4) memcached responds with 2nd key, no-op packet

Assuming one server and the binary protocol... a single write to the socket 
would be made for both keys, under both the binary and ASCII protocols 
(assuming the keys are small enough to fit into the default buffer, which is 
the case with the trunk version of memcached).

The code bunches up as many requests as will fit into the buffer before 
flushing. Looking through the code, I didn't find anything that should be 
fouling that up (though I did find that we were incrementing the packet 
counter more than we should, but that won't really affect anything).

Anyway, everything looks to be ok.

Cheers,
-Brian



Re: Strange behaviour on get-multi over binary protocol

2013-02-19 Thread Brian Aker
Hi,

On Feb 19, 2013, at 12:14 AM, dormando dorma...@rydia.net wrote:

 Both keys go out okay, but the no-op at the end seems to go out in a
 separate packet. I've noticed this on several installs using libmemcached,
 verified with tcpdump/etc.

I didn't write this part of the binary code; Trond did. I am not sure why the 
NOOP is required. I would think that a simple flush of the buffer would be fine.

Cheers,
-Brian



Re: Strange behaviour on get-multi over binary protocol

2013-02-19 Thread Trond Norbye
It's been a while since I looked at that code, but if my memory is correct 
we're using the 'quiet' mode of the get requests, so that it won't send 'not 
found' responses. The noop is then used as an internal marker, so that on the 
receiving side you know you've received all of the responses from the server.

But I might be remembering this wrong; after all, it's been a few years since 
I last looked at the code.

Trond



On Tue, Feb 19, 2013 at 5:07 PM, Brian Aker br...@tangent.org wrote:

 Hi,

 On Feb 19, 2013, at 12:14 AM, dormando dorma...@rydia.net wrote:

  Both keys go out okay, but the no-op at the end seems to go out in a
  separate packet. I've noticed this on several installs using
 libmemcached,
  verified with tcpdump/etc.

 I didn't write this part of the  binary code, Trond did. I am not sure why
 the NOOP is required. I would think that a simple flush of the buffer would
 be fine.

 Cheers,
 -Brian




-- 
Trond Norbye





Re: Strange behaviour on get-multi over binary protocol

2013-02-19 Thread dormando
This is correct. You use the no-op packet to be sure you're not waiting
for any more responses, since you're not going to get miss packets for
missing keys.

No reason for it to be a separate write/packet though.

On Tue, 19 Feb 2013, Trond Norbye wrote:

 It's been a while since I looked at that code, but if my memory is correct 
 we're using the 'quiet' mode of the get requests, so that it won't send 'not 
 found' responses. The noop is then used as an internal marker, so that on the 
 receiving side you know you've received all of the responses from the server.
 But I might be remembering this wrong; after all, it's been a few years since 
 I last looked at the code.

 Trond







Re: Strange behaviour on get-multi over binary protocol

2013-02-19 Thread Brian Aker
Agreed, I'll take a look and see why that is happening.

From looking at the code I can see where it is happening; I just need to find 
out if there was a reason for it. The default value for io_key_prefetch is 
zero, which is what is causing the flush to happen:

http://docs.libmemcached.org/memcached_behavior.html?highlight=memcached_behavior_io_key_prefetch#MEMCACHED_BEHAVIOR_IO_KEY_PREFETCH

It would be interesting to see what would happen if the original reporter of 
this issue modified that value upward.

Cheers,
-Brian





Re: Strange behaviour on get-multi over binary protocol

2013-02-19 Thread Diogo Baeder
Hi guys,

Good news: everything's working here now; with your help I figured out how 
to deal with this situation. However, I opted to force the ordering of the 
requests and responses in my proxy, because of the proxy's specific needs. 
(I could explain in detail, but I think it would just be noise in the 
discussion.)

Brian,

I've tried setting that behavior to different values (0, 1, 100 and 1000), 
and they all behave the same: I get a response from memcached for the first 
key before the no-op request is sent. (In other words, the same behavior I 
noticed in the beginning.) I've done this by setting the _io_key_prefetch 
behavior in pylibmc, which should map to MEMCACHED_BEHAVIOR_IO_KEY_PREFETCH. 
(If you try io_key_prefetch, without the initial underscore, it breaks.)
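
(For reference, the test looks roughly like this; the address and keys are 
just the ones from my earlier example, and the exact behaviors call may 
differ slightly:)

    import pylibmc

    mc = pylibmc.Client(["127.0.0.1:11211"], binary=True)
    mc.behaviors["_io_key_prefetch"] = 100  # should map to MEMCACHED_BEHAVIOR_IO_KEY_PREFETCH
    mc.set("foo", "bar")
    mc.set("foo2", "bar2")
    print(mc.get_multi(["foo", "foo2"]))    # the first response still arrives before the no-op goes out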

Thanks a lot for the help, guys! You're amazing! :-)

Cheers!

__
Diogo Baeder
http://diogobaeder.com.br


On Tue, Feb 19, 2013 at 8:00 PM, Brian Aker br...@tangent.org wrote:

 Agreed, I'll take a look and see why that is happening.

 From looking at the code I can see where it is happening, I just need to
 find out if there was a reason for it. The default value for
 io_key_prefetch is zero, which is what is causing the flush to happen:


 http://docs.libmemcached.org/memcached_behavior.html?highlight=memcached_behavior_io_key_prefetch#MEMCACHED_BEHAVIOR_IO_KEY_PREFETCH

 It would be interesting to see what would happen if the original reporter
 of this issue modified that value upward.

 Cheers,
 -Brian





Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread dormando
 However, after trying to follow this behaviour in a proxy I'm building, this 
 order of interactions is not being respected; So, what I did afterwards, to 
 assert that something strange was going on, was to fire up Wireshark and 
 listen for memcached requests and
 responses. Here's a sample of the request and response blocks sent between 
 the client and the server:
 http://pastebin.ubuntu.com/1679323/
 This was tested with pylibmc, in binary mode; after setting "foo" to 
 "bar" and "foo2" to "bar2", I tried to multi-get "foo" and "foo2". I also 
 tested with more keys after this sample, and this is the behaviour I'm 
 getting:
  1. The client sends a getkq for each desired key, in a batch;
  2. The server sends the getkq response for the first key;
  3. The client sends (and the server reads) the no-op request;
  4. The server sends the rest of the keys as getkq responses, in a batch;
  5. The server sends the no-op response.
 This is really weird to me, since the first key's value comes back before the 
 server has even received the no-op request.

So this is a bit weird in the documentation of the protocol, but basically: 
the server's free to start sending responses as soon as it gets requests. The 
reason the no-op packet is stacked at the end (a non-quiet get should work as 
well) is that once you see the response to that no-op, you can be sure there 
are no other responses waiting. Since with getq a response is optional, you 
don't need to look for a 'miss' response.

So a client *can*: send all the key requests in one batch, along with the 
no-op packet, in the same write.
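
(As a rough sketch only, and not libmemcached's actual code: this is what 
packing the getkq requests and the trailing no-op into one buffer can look 
like in Python, using the documented 24-byte request header, getkq opcode 
0x0d and noop opcode 0x0a.)

    import struct

    # magic, opcode, key length, extras length, data type, vbucket, body length, opaque, cas
    REQUEST_HEADER = struct.Struct("!BBHBBHIIQ")
    OP_GETKQ, OP_NOOP = 0x0D, 0x0A

    def multiget_request(keys):
        """One getkq per key plus the terminating noop, all in a single buffer."""
        buf = b""
        for opaque, key in enumerate(keys):
            buf += REQUEST_HEADER.pack(0x80, OP_GETKQ, len(key), 0, 0, 0,
                                       len(key), opaque, 0) + key
        return buf + REQUEST_HEADER.pack(0x80, OP_NOOP, 0, 0, 0, 0, 0, 0, 0)

    # sock.sendall(multiget_request([b"foo", b"foo2"]))  # one write, no separate no-op packet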

For some reason libmemcached *does*: batch all of the requests, then send the 
no-op in a second write. (This isn't bad, nothing's waiting on it, it just 
doesn't tack it onto the same write.) It's a waste of a packet on the wire.

Then it can read back whatever.

It might look mixed up if you're doing this over localhost and there's no 
lag. Try it over the internet, or use a qdisc to add an artificial delay; it 
should line up more with what you expect.

 So, I have two questions:
  1. Is my understanding of how the protocol works for multi-get correct? 
 The documentation I found for it 
 (http://code.google.com/p/memcached/wiki/BinaryProtocolRevamped) doesn't seem 
 very up-to-date and doesn't describe this, but I've read somewhere else (and 
 experimented with the idea) that no-ops are the blocks that signal an 
 end-of-batch;
  2. If my understanding is right, then is this a problem in the server or in 
 the client? I'm guessing it's in the server, since it starts responding 
 without even getting the no-op request. I can provide more complete details 
 of this interaction (with timestamps included), if you need me to.
 Thanks for the help,

Have you verified that it's an actual problem, or is it just that the server 
responded in a different order than you expected? I don't think anything's 
blocking, going by your description, but it might look that way given how 
split up things are.

With the ASCII protocol the server can't start responding until it's read the 
newline at the end. I'd like it better if libmemcached packed that no-op in, 
and if the server corked its writes so it would at least only send completed 
packets unless nothing else is in the queue to process. However, like most 
holidays I am sad and pony-less, unless I go get that pony myself.
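
(For what it's worth, the corking idea sketched in Python; TCP_CORK is 
Linux-only, and this only illustrates the concept rather than how memcached 
currently does its writes.)

    import socket

    def corked_sendall(sock, chunks):
        """Coalesce several small writes into full packets (Linux TCP_CORK)."""
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
        try:
            for chunk in chunks:
                sock.sendall(chunk)
        finally:
            # removing the cork flushes whatever is still buffered in the kernel
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)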

-Dormando





Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Hi Dormando,

So, I've tested memcached by sending raw byte blocks to it in the order I 
expected, and indeed it works as you said it would: 
https://gist.github.com/diogobaeder/4982425

I have no clue why libmemcached does that switch in the middle, but I 
understood what you said about not expecting things to happen in an exact 
order (the server can start responding at any moment after it receives the 
first request block). So I guess what I need to do now is redesign my proxy 
to handle this flexible ordering, and not do the whole multi-get interaction 
in one shot but distribute it across events.

Maybe it would be nice to update that protocol documentation with the things 
you told me; it would light the path for whoever wants to venture into 
building a memcached client. What do you think? For example, it says nothing 
about the no-op signaling end-of-batch.

Thanks for the help, man! :-)

Cheers,

Diogo







Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Brian Aker
Hi,

On Feb 18, 2013, at 6:03 PM, Diogo Baeder diogobae...@gmail.com wrote:

 I have no clue why libmemcached does that switch in the middle, but I 
 understood what you said about not expecting things to happen in an exact 
 order 

Are you sure the data is on the same server? Libmemcached hands back whatever 
returned first, and when the keys are spread across a number of servers... 
well, who knows which one will respond quickest.

If you want ordering, then issue single blocking gets (one after another).
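
(In pylibmc terms, assuming mc is the client from your test, that would be 
something like the following, at the cost of one round trip per key:)

    # one blocking round trip per key: ordering is guaranteed, pipelining is lost
    values = {key: mc.get(key) for key in ("foo", "foo2")}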

Cheers,
-Brian



Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread dormando
 On Feb 18, 2013, at 6:03 PM, Diogo Baeder diogobae...@gmail.com wrote:

  I have no clue why libmemcached does that switch in the middle, but I 
  understood what you said about not expecting things to happen in an exact 
  order

 Are you sure the data is on the same server? Libmemcached hands back whatever 
 returned first, and when the keys are spread across a number of servers... 
 well, who knows which one will respond quickest.

 If you want ordering, then issue single blocking gets (one after another).

That's not quite what he meant.

He's seeing:

1) libmemcached requests 2 keys
2) memcached responds with 1 key
3) libmemcached sends no-op packet
4) memcached responds with 2nd key, no-op packet

This is just due to the test being run over localhost. libmemcached sends
the final no-op packet in a second write, so memcached has a chance to
start responding before receiving it.

If libmemcached sent everything as one write he probably wouldn't have
noticed until testing with much larger multigets.

It's not a problem, it's just not what he was expecting, I guess? The only 
actual problem here is both libmemcached and memcached generating a suboptimal 
number of syscalls and wire packets.





Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Yep, agreed, Dormando: not a problem, just different from my initial 
expectations. I'll just have to figure out how to use Tornado in my favor to 
build this part and deal correctly with the asynchronicity. :-)

Cheers!

__
Diogo Baeder
http://diogobaeder.com.br






Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Guess what: I just built a fake memcached server that answers hardcoded 
values (the same as before) to a get_multi op from pylibmc, with normally 
ordered batches (2 requests, 1 no-op, then 2 responses, 1 no-op), and it 
worked. So, in the end, it seems that forcing the ordering is not what's 
causing my trouble; it's something else I haven't spotted yet. In other 
words, libmemcached seems to play just fine with ordered batches.
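
(A rough sketch of the idea, not my actual code: the fake server collects the 
getkq hits and only answers, in order, once the client's no-op arrives. It's 
hardcoded for this one exchange, and any other opcode is read and dropped.)

    import socket
    import struct

    HEADER = struct.Struct("!BBHBBHIIQ")  # same 24-byte layout for requests and responses
    OP_GETKQ, OP_NOOP = 0x0D, 0x0A
    DATA = {b"foo": b"bar", b"foo2": b"bar2"}

    def read_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("client went away")
            buf += chunk
        return buf

    def response(opcode, key=b"", value=b"", opaque=0):
        extras = struct.pack("!I", 0) if opcode == OP_GETKQ else b""  # 4-byte flags
        body = extras + key + value
        return HEADER.pack(0x81, opcode, len(key), len(extras), 0, 0,
                           len(body), opaque, 0) + body

    def serve(conn):
        pending = b""
        while True:
            (_, opcode, keylen, extlen, _, _,
             bodylen, opaque, _) = HEADER.unpack(read_exact(conn, 24))
            body = read_exact(conn, bodylen) if bodylen else b""
            if opcode == OP_GETKQ:
                key = body[extlen:extlen + keylen]
                if key in DATA:
                    pending += response(OP_GETKQ, key, DATA[key], opaque)
            elif opcode == OP_NOOP:
                # answer all hits plus the terminating no-op only after the client's no-op
                conn.sendall(pending + response(OP_NOOP, opaque=opaque))
                pending = b""

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 11211))
    srv.listen(1)
    conn, _addr = srv.accept()
    serve(conn)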

Not sure if this adds anything here, but I thought you might want to know.
:-)

Cheers!

__
Diogo Baeder
http://diogobaeder.com.br

