Server did not respond

2013-02-18 Thread Swapnil Baheti
Hi all,

I have installed memcached on one of our Ubuntu servers, and I have also 
installed the PHP extension for memcached.

Currently the memcached service status shows it as running, and phpMemcachedAdmin 
is visible in the browser, but when I click on "see server stats" it gives the 
message *Server ##.##.##.## did not respond*.

I am unable to figure out what's wrong with it.

Kindly reply. Thanks in advance.

Regards,
Swapnil. 





Re: memcached 1.4.15, high load, infinite loop on epoll_wait(3, {}, 32, 10) = 0 0.010073

2013-02-18 Thread lam
the same again, right now:
 
Thread 10 (Thread 0x7fbeab1c1700 (LWP 15511)):
#0  0x7fbeabdc718b in pthread_mutex_trylock () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x004130a8 in item_get ()
#2  0x00409795 in process_command ()
#3  0x0040b61a in drive_machine ()
#4  0x7fbeac1ef744 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#5  0x004129ad in worker_libevent ()
#6  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#7  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#8  0x in ?? ()
Thread 9 (Thread 0x7fbeaa9c0700 (LWP 15512)):
#0  0x7fbeabdc718b in pthread_mutex_trylock () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x004130a8 in item_get ()
#2  0x00409b56 in process_command ()
#3  0x0040b61a in drive_machine ()
#4  0x7fbeac1ef744 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#5  0x004129ad in worker_libevent ()
#6  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#7  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#8  0x in ?? ()
Thread 8 (Thread 0x7fbeaa1bf700 (LWP 15513)):
#0  0x7fbeabdc718b in pthread_mutex_trylock () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x004130a8 in item_get ()
#2  0x00409b56 in process_command ()
#3  0x0040b61a in drive_machine ()
#4  0x7fbeac1ef744 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#5  0x004129ad in worker_libevent ()
#6  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#7  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#8  0x in ?? ()
Thread 7 (Thread 0x7fbea99be700 (LWP 15514)):
#0  0x7fbeabdc718b in pthread_mutex_trylock () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x004130a8 in item_get ()
#2  0x00409795 in process_command ()
#3  0x0040b61a in drive_machine ()
#4  0x7fbeac1ef744 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#5  0x004129ad in worker_libevent ()
#6  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#7  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#8  0x in ?? ()
Thread 6 (Thread 0x7fbea91bd700 (LWP 15515)):
#0  0x7fbeabaf2353 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fbeac203883 in ?? () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#2  0x7fbeac1ef450 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#3  0x004129ad in worker_libevent ()
#4  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#5  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x in ?? ()
Thread 5 (Thread 0x7fbea89bc700 (LWP 15516)):
#0  0x7fbeabdc718b in pthread_mutex_trylock () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x004130a8 in item_get ()
#2  0x00409795 in process_command ()
#3  0x0040b61a in drive_machine ()
#4  0x7fbeac1ef744 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#5  0x004129ad in worker_libevent ()
#6  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#7  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#8  0x in ?? ()
Thread 4 (Thread 0x7fbea81bb700 (LWP 15517)):
#0  0x7fbeabaf2353 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7fbeac203883 in ?? () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#2  0x7fbeac1ef450 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#3  0x004129ad in worker_libevent ()
#4  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#5  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x in ?? ()
Thread 3 (Thread 0x7fbea79ba700 (LWP 15518)):
#0  0x0041269d in assoc_find ()
#1  0x00411e52 in do_item_get ()
#2  0x004130b9 in item_get ()
#3  0x00409795 in process_command ()
#4  0x0040b61a in drive_machine ()
#5  0x7fbeac1ef744 in event_base_loop () from 
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
#6  0x004129ad in worker_libevent ()
#7  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#8  0x7fbeabaf1cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#9  0x in ?? ()
Thread 2 (Thread 0x7fbea71b9700 (LWP 15519)):
#0  0x7fbeabdc8d84 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00412444 in assoc_maintenance_thread ()
#2  0x7fbeabdc4e9a in start_thread () from 
/lib/x86_64-linux-gnu/libpthread.so.0
#3  0x7fbeabaf1cbd in clone () from 

Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Hi guys,

I'm seeing rather strange behaviour when I issue a get-multi against 
memcached, and I'm not sure whether the problem is my understanding of how 
the protocol is specified or an issue with either pylibmc or libmemcached.

What I expect:
As I understand it, for multi-get, the expected communication (sketched in 
raw packets just below the list) is:

   1. The client sends a getq/getkq request for each desired key, in a batch;
   2. The client sends a no-op request, to signal that there are no more 
   keys to request;
   3. The server stacks the previous requests and looks up whichever of the 
   requested keys exist;
   4. The server responds for all retrieved keys with getq/getkq 
   responses, in a batch;
   5. The server sends a no-op response, to signal that there are no more 
   keys to send.
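
For concreteness, here is a rough packet-level sketch of steps 1-2. This is 
illustrative Python only, not what pylibmc or libmemcached actually emit; the 
helper names are made up, while the 24-byte request header layout and the 
getkq (0x0d) and no-op (0x0a) opcodes come from the binary protocol spec:

import struct

MAGIC_REQUEST = 0x80
OP_GETKQ = 0x0d   # quiet get that echoes the key back with the value
OP_NOOP = 0x0a

def getkq_packet(key):
    # 24-byte request header: magic, opcode, key length, extras length,
    # data type, vbucket id, total body length, opaque, CAS
    header = struct.pack('>BBHBBHIIQ',
                         MAGIC_REQUEST, OP_GETKQ, len(key),
                         0, 0, 0, len(key), 0, 0)
    return header + key

def noop_packet():
    return struct.pack('>BBHBBHIIQ',
                       MAGIC_REQUEST, OP_NOOP, 0, 0, 0, 0, 0, 0, 0)

# Steps 1-2: one getkq per key, then a no-op to mark the end of the batch.
batch = getkq_packet(b'foo') + getkq_packet(b'foo2') + noop_packet()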

However, in a proxy I'm building, this order of interactions is not being 
respected when I try to follow that behaviour. So, to confirm that something 
strange was going on, I fired up Wireshark and listened for memcached 
requests and responses. Here's a sample of the request and response blocks 
sent between the client and the server:
http://pastebin.ubuntu.com/1679323/
This was tested with pylibmc, in binary mode: after setting "foo" to 
"bar" and "foo2" to "bar2", I tried to multi-get "foo" and "foo2". I also 
tested with more keys after this sample, and this is the behaviour I'm 
getting:

   1. The client sends a getkq request for each desired key, in a batch;
   2. The server sends the getkq response for the first key;
   3. The client sends (and the server reads) the no-op request;
   4. The server sends the rest of the keys as getkq responses, in a batch;
   5. The server sends the no-op response.

This is really weird to me, since the value of the first key is returned 
before the server has even received the no-op request.

So, I have two questions:

   1. Is my understanding of how the protocol works for multi-get 
   correct? The documentation I found for it (
   http://code.google.com/p/memcached/wiki/BinaryProtocolRevamped) doesn't 
   seem very up to date and doesn't describe this, but I've read elsewhere 
   (and confirmed experimentally) that no-ops are the packets that signal 
   end-of-batch;
   2. If my understanding is right, is this a problem in the server or 
   in the client? I'm guessing it's in the server, since it starts responding 
   without even getting the no-op request. I can provide more complete details 
   of this interaction data (with timestamps included) if you need me to.

Thanks for the help,

Diogo





Re: memcached 1.4.15, high load, infinite loop on epoll_wait(3, {}, 32, 10) = 0 0.010073

2013-02-18 Thread dormando
Hey,

Looks like your paste got a bit weird; I see thread 3 twice in the second
one? What were the exact commands you ran for each dump?

Is there any chance you could run the memcached-debug binary from the
build tree and get another stack dump? It might help, though this might be
enough information already.

Another interesting thing would be a stack dump, then 'continue' for a
bit, then stack dump again. See if it's making progress at all, or if the
hung thread moved around.

I see a bunch of stuff hanging on what's likely the central hash lock, and
then one thread sitting in assoc_find(), except as far as I know
assoc_find() itself can't block.

It's not clear to me if the other threads are waiting on the item lock
table or the global lock, so it'd be nice to find that out.

One thing you can try is setting the -o hashpower= value on start. Take
one of your servers that's been running a while and get the hash power
level from it, and use that to seed the new instance.

The new hash table expansion code is a bit tricky and could break like
this. So if you take one instance and pre-size the hash table, and that
stops it from hanging every day, please let us know. That'll help narrow
it down.
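
As a hedged illustration of that pre-sizing idea (the hash_power_level stat 
and the -o hashpower= option are as documented for 1.4.x; the host/port and 
helper name below are made-up examples), one way to read the level from a 
long-running instance over the text protocol is:

import socket

def hash_power_level(host='127.0.0.1', port=11211):
    # Ask a running instance for its hash table size exponent via "stats".
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(b'stats\r\n')
        data = b''
        while not data.endswith(b'END\r\n'):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
        for line in data.decode('ascii', 'replace').splitlines():
            # expected form: "STAT hash_power_level 22"
            if line.startswith('STAT hash_power_level'):
                return int(line.split()[-1])
    finally:
        sock.close()

level = hash_power_level()
if level is not None:
    print('seed the new instance with: memcached -o hashpower=%d ...' % level)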

On Mon, 18 Feb 2013, lam wrote:

 [quoted stack dump from the previous message snipped]

Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread dormando
 However, after trying to follow this behaviour in a proxy I'm building, this 
 order of interactions is not being respected; So, what I did afterwards, to 
 assert that something strange was going on, was to fire up Wireshark and 
 listen for memcached requests and
 responses. Here's a sample of the request and response blocks sent between 
 the client and the server:
 http://pastebin.ubuntu.com/1679323/
 This was tested with pylibmc, in binary mode, and after setting foo to 
 bar, and foo2 to bar2, I tried to multi-get foo and foo2. I also 
 tested with more keys after this sample, and this is the behaviour I'm 
 getting:
  1. The client sends a getkq for each desired key, in a batch;
  2. The server sends the getkq response for the first key;
  3. The client sends (and the server reads) the no-op request;
  4. The server sends the rest of the keys as getkq responses, in a batch;
  5. The server sends the no-op request.
 This is really weird for me, since the first key value is responded without 
 the server even having received the no-op request.

So this is a bit weird in the documentation of the protocol, but basically:
the server is free to start sending responses as soon as it gets requests.
The reason the no-op packet is stacked at the end (a non-quiet get should
work as well) is that once you see the response to that no-op, you can be
sure there are no other responses waiting. Since a getq response is
optional, you don't need to look for a 'miss' response.

So a client *can*: Send all key requests in one batch, along with the
no-op packet in the same write.

For some reason libmemcached *does*: batch all of the requests, then do
the no-op in a second write? (this isn't bad, nothing's waiting on it, it
just doesn't tack it on in the same write). It's a waste of a packet on
the wire.

Then it can read back whatever.

It might look mixed up if you're doing this over localhost and there's no
lag. Try it over the internet or use a qdisc to add an artificial delay.
It should line up more along the way you expect.
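
Here's a minimal, hypothetical sketch of the pattern described above: send
every getkq plus the trailing no-op in one write, then read responses until
the no-op response comes back. Illustrative Python only, not libmemcached
code; the host/port and helper names are made up:

import socket
import struct

HEADER = struct.Struct('>BBHBBHIIQ')  # 24-byte binary protocol header
OP_GETKQ, OP_NOOP = 0x0d, 0x0a

def request(opcode, key=b''):
    # request header: magic 0x80, opcode, key len, extras len, data type,
    # vbucket, total body len, opaque, cas
    return HEADER.pack(0x80, opcode, len(key), 0, 0, 0, len(key), 0, 0) + key

def recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('server closed the connection')
        buf += chunk
    return buf

def multi_get(keys, host='127.0.0.1', port=11211):
    sock = socket.create_connection((host, port))
    try:
        # Everything, including the trailing no-op, goes out in one write.
        sock.sendall(b''.join(request(OP_GETKQ, k) for k in keys)
                     + request(OP_NOOP))
        results = {}
        while True:
            (_, opcode, keylen, extlen, _, status,
             bodylen, _, _) = HEADER.unpack(recv_exact(sock, HEADER.size))
            body = recv_exact(sock, bodylen)
            if opcode == OP_NOOP:
                return results          # no-op response: nothing more is coming
            if opcode == OP_GETKQ and status == 0:
                # response body: 4 extras bytes (flags), then key, then value
                key = body[extlen:extlen + keylen]
                results[key] = body[extlen + keylen:]
    finally:
        sock.close()

print(multi_get([b'foo', b'foo2']))

Because getkq misses produce no response at all, only the keys that exist
show up before the no-op response, regardless of the order the server chose
to send them in.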

 So, I have two questions:
  1. Is my understanding of how the protocol works, for multi-get, correct? 
 The documentation I found for it 
 (http://code.google.com/p/memcached/wiki/BinaryProtocolRevamped) doesn't seem 
 very up-to-date, and doesn't respect that, but I've read somewhere else (and
 experimented with) that no-ops are the blocks that signal an end-of-batch;
  2. If my understanding is right, then is this a problem in the server or in 
 the client? I'm guessing it's in the server, since it starts responding 
 without even getting the no-op request. I can provide more complete details 
 of this interaction data (with timestamps
 included), if you need me to.
 Thanks for the help,

Have you verified that it's an actual problem, or is it just that the server
responded in a different order than you expected? I don't think anything's
blocking, by your description, but it might look that way given how split
up things are.

With the ASCII protocol the server can't start responding until it has read
the newline at the end. I'd like it better if libmemcached packed that no-op
into the same write, and if the server cork'ed writes so it would at least
only send completed packets unless nothing else is in the queue to process.
However, like most holidays I am sad and pony-less, unless I go get that
pony myself.

-Dormando





Re: memcached 1.4.15, high load, infinite loop on epoll_wait(3, {}, 32, 10) = 0 0.010073

2013-02-18 Thread lam


OK, I will run memcached-debug on all nodes
and set hashpower to 22/23 on one of them.

Thank you





Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Hi Dormando,

So, I've tested memcached by sending raw byte blocks to it in the order I 
expected it to behave, and indeed it works, as you said it would: 
https://gist.github.com/diogobaeder/4982425

I have no clue why libmemcached does that switch in the middle, but I 
understood what you said about not expecting things to happen in an exact 
order: the server can start responding at any moment after it receives the 
first request block. So I guess what I need to do now is redesign my proxy 
to account for this flexible ordering, and not do the whole multi-get 
interaction in one shot, but distribute it across events.

It might also be nice to update the protocol documentation with the things 
you told me; that would light the path for whoever wants to venture into 
this part of building a memcached client. What do you think? For example, it 
currently says nothing about the no-op signaling end-of-batch.

Thanks for the help, man! :-)

Cheers,

Diogo



On Monday, February 18, 2013 9:45:33 PM UTC-3, Dormando wrote:

 [quoted reply from dormando snipped]






Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Brian Aker
Hi,

On Feb 18, 2013, at 6:03 PM, Diogo Baeder diogobae...@gmail.com wrote:

 I have no clue why libmemcached does that switch in the middle, but I 
 understood what you said about not expecting things to happen in an exact 
 order 

Are you sure the data is on the same server? libmemcached returns whatever 
comes back first, and when the keys are spread across a number of servers... 
well, who knows which one will respond the quickest.

If you want ordering, issue single blocking gets (one after another).

Cheers,
-Brian



Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread dormando
 On Feb 18, 2013, at 6:03 PM, Diogo Baeder diogobae...@gmail.com wrote:

  I have no clue why libmemcached does that switch in the middle, but I 
  understood what you said about not expecting things to happen in an exact 
  order

 Are you sure the data is on the same server? Libmemcached responds back with 
 whatever returned first, which when spread across a number of servers,... 
 well who knows who might respond back the quickest.

 If you want order, then issue a single blocking get (one after another).

That's not quite what he meant.

He's seeing:

1) libmemcached requests 2 keys
2) memcached responds with 1 key
3) libmemcached sends no-op packet
4) memcached responds with 2nd key, no-op packet

This is just due to the test being run over localhost. libmemcached sends
the final no-op packet in a second write, so memcached has a chance to
start responding before receiving it.

If libmemcached sent everything as one write he probably wouldn't have
noticed until testing with much larger multigets.

It's not a problem; it's just not what he was expecting, I guess? The only
actual problem here is that both libmemcached and memcached generate a
suboptimal number of syscalls and packets on the wire.





Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Yep, agreed, Dormando, not a problem, just different from my initial
expectations. I'll just have to figure out how to use Tornado in my favor,
to build this part, and deal correctly with the asynchronicity. :-)

Cheers!

__
Diogo Baeder
http://diogobaeder.com.br


On Tue, Feb 19, 2013 at 12:11 AM, dormando dorma...@rydia.net wrote:

 [quoted reply snipped]








Re: Strange behaviour on get-multi over binary protocol

2013-02-18 Thread Diogo Baeder
Guess what: I just built a fake memcached server to answer hardcoded values
(the same as before) to a get_multi from pylibmc, with normally ordered
batches (2 requests, 1 no-op, 2 responses, 1 no-op), and it worked. So, in
the end, it seems that forcing the ordering is not what is causing my
trouble; something else is failing that I haven't noticed yet. In other
words, libmemcached seems to play just fine with ordered batches.

Not sure if this adds anything here, but I thought you might want to know.
:-)

Cheers!

__
Diogo Baeder
http://diogobaeder.com.br


On Tue, Feb 19, 2013 at 12:15 AM, Diogo Baeder diogobae...@gmail.com wrote:

 [quoted reply snipped]

