Re: memcached scalability and availability

2015-06-10 Thread Ryan McElroy
memcached is not itself a distributed system, and therefore it doesn't
really make sense to talk about the scalability and availability of
memcached.

However, memcached can be an important piece of a distributed system --
Facebook has published some numbers about its usage of memcached:

https://cs.uwaterloo.ca/~brecht/courses/854-Emerging-2014/readings/key-value/fb-memcached-nsdi-2013.pdf
https://code.facebook.com/posts/296442737213493/introducing-mcrouter-a-memcached-protocol-router-for-scaling-memcached-deployments/

Among the highlights:
* mcrouter (a memcached protocol router for building distributed systems)
handles around 5 billion requests per second
* a single memcached box can serve 2-4 million requests per second
* as a whole, Facebook stores trillions of items in memcached
* Facebook builds a highly available system via failover and re-routing in
the mcrouter layer
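
For the original question of what to measure: memcached's stats command
exposes counters (cmd_get, get_hits, get_misses, evictions, uptime) from
which hit ratio and request rate can be derived. A minimal parsing sketch;
the sample numbers below are made up for illustration, not real measurements:

```python
def parse_stats(text):
    """Parse the 'STAT name value' lines of a memcached `stats` response."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

def hit_ratio(stats):
    hits = int(stats["get_hits"])
    misses = int(stats["get_misses"])
    total = hits + misses
    return hits / total if total else 0.0

# Sample response text (illustrative numbers only).
sample = """STAT uptime 3600
STAT cmd_get 120000
STAT get_hits 114000
STAT get_misses 6000
STAT evictions 42
END"""

stats = parse_stats(sample)
print("hit ratio:", hit_ratio(stats))                          # 0.95
print("gets/sec:", int(stats["cmd_get"]) / int(stats["uptime"]))
```

Tracking hit ratio, evictions, and requests per second over time is usually
what clients mean by "scalability and availability numbers" for a cache tier.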

~Ryan

On Wed, Jun 10, 2015 at 11:50 AM, sanjivkho...@gmail.com wrote:

 Hello,

 Can anyone share specific metrics that we can track to demonstrate
 scalability and availability of memcached deployment? My client is asking
 us to publish numbers on scalability and availability. I need to figure out
 what to measure.

 Thanks,
 Sanjiv


  --

 ---
 You received this message because you are subscribed to the Google Groups
 memcached group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to memcached+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.




Re: Possible to have dependent key expiration?

2015-04-03 Thread Ryan McElroy
memcached does not support any feature like this.
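
A common application-level workaround (not a memcached feature) is to embed a
generation counter for B and C into A's key, so bumping B or C makes A's old
entry unreachable; the stale item simply ages out via its normal TTL. A sketch
using a plain dict as a stand-in for the cache client (all names illustrative):

```python
cache = {}  # stand-in for a memcached client

def generation(name):
    # Each dependency keeps a counter; bumping it invalidates dependents.
    return cache.setdefault("gen:" + name, 0)

def bump(name):
    cache["gen:" + name] = generation(name) + 1

def dependent_key(base, deps):
    # A's effective key encodes the current generations of B and C.
    return base + ":" + ".".join(str(generation(d)) for d in deps)

cache[dependent_key("A", ["B", "C"])] = "value-of-A"
assert cache.get(dependent_key("A", ["B", "C"])) == "value-of-A"

bump("B")  # changing B makes A's old key unreachable (a logical miss)
assert cache.get(dependent_key("A", ["B", "C"])) is None
```

The one-hour upper bound is then handled by setting A's normal expiration.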

On Fri, Apr 3, 2015 at 12:08 PM, Kevin Burton ronald.kevin.bur...@gmail.com
 wrote:

 If I have keys A, B, C and I want key A to expire in an hour OR expire
 immediately if the value for B or C changes. Is that possible?





Re: cache decr NOT_FOUND

2015-03-15 Thread Ryan McElroy
That's a pretty ancient version of memcached; have you considered
upgrading? You're running 1.4.13, which was released three years ago, while
the current version is 1.4.22. I don't know whether this particular issue
is related to your memcached version, but it's one avenue to explore.
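
For what it's worth, NOT_FOUND from decr usually means the key disappeared
between operations -- expired, flushed, or evicted under memory pressure
(eviction can happen even with a non-expiring timeout). A sketch of
memcached's counter semantics with a dict standing in for the server:

```python
cache = {}  # stand-in; real memcached items can also be evicted under memory pressure

def incr(key, delta=1):
    if key not in cache:
        return "NOT_FOUND"   # incr/decr never create a missing key
    cache[key] = int(cache[key]) + delta
    return cache[key]

def decr(key, delta=1):
    if key not in cache:
        return "NOT_FOUND"
    cache[key] = max(0, int(cache[key]) - delta)  # decr floors at zero
    return cache[key]

cache["counter"] = 1
assert incr("counter") == 2
assert decr("counter") == 1
del cache["counter"]          # simulate eviction or expiry
assert decr("counter") == "NOT_FOUND"
```

If the log shows incr succeeding and the immediately following decr failing,
checking the server's evictions counter would be a reasonable next step.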



On Thu, Mar 12, 2015 at 4:13 AM, lsmit...@hare.demon.co.uk wrote:


 Hi: I have a django web server view that increments/decrements a
 memcached counter.  Intermittently decr receives a NOT_FOUND status
 from memcached. I'm sure the correct key is being used. The cache
 timeout is set to non-expiring, but varying the timeout doesn't make
 any difference. incr never fails, only decr and only intermittently.

 Any ideas of what's going on?

 From the /var/log/memcached.log:

 28 incr :1:whois_proxy_whois_whoisxmlapi 1
 28 1
 28 decr :1:whois_proxy_whois_whoisxmlapi 1
 28 NOT_FOUND
 28 connection closed.

 memcached: 1.4.13
 ubuntu: 14.10
 django: 1.6.2
 python-memcached: 1.53
 python: 2.7

 Thanks for any help.



 --
 Les Smithson





Re: Starting memcached: Item max size cannot be less than 1024 bytes

2015-03-04 Thread Ryan McElroy
We need more info -- for example, what command line is memcached being started with? That error usually means the item max size option (-I) is being passed a value below 1024 bytes.

On Wed, Mar 4, 2015 at 1:54 PM, Vinit chouhan vinit.g...@gmail.com wrote:

 Hello guys, I am trying to start memcached server on centOS 6.6 and it is
 showing me following errorStarting memcached: Item max size cannot be
 less than 1024 bytes
 http://stackoverflow.com/questions/28861854/starting-memcached-item-max-size-cannot-be-less-than-1024-bytes
 I have tried to change MAXITEMSIZE into /etc/init.d/memcached but still
 the same error. Please help me out because my websites are down (memcached
 is session handler on my server).
 Thanks in advance






Re: Is there a way to work out when a key was written to memcache and calculate the age of the oldest key on our memcache?

2015-01-16 Thread Ryan McElroy
As was answered, there's no built-in way to determine when a key was set.
Furthermore, memcached itself doesn't track when the oldest key changes.
The only way to approximate the behavior you want is to store the set times
in each value, get all possible keys, and then do the comparison yourself.
This will not be fast or efficient.

I suggest taking a step back and talking about the higher-level goal
you're trying to accomplish that makes you think you need the time the
oldest key was set. There may be a good way to accomplish your actual goal
without this information.
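
If set times really are needed, the usual approach is to store one alongside
each value at write time; the "oldest key" question then becomes a scan over
keys the application already knows about. A dict-based sketch (a real client
would serialize the timestamp/value pair):

```python
import time

cache = {}  # stand-in for a memcached client

def set_with_time(key, value):
    # Store (set_time, value) so the write time travels with the data.
    cache[key] = (time.time(), value)

def oldest(keys):
    # Requires knowing the candidate keys up front; memcached cannot enumerate them.
    present = [(cache[k][0], k) for k in keys if k in cache]
    return min(present)[1] if present else None

set_with_time("a", 1)
set_with_time("b", 2)
oldest_key = oldest(["a", "b", "missing"])
print(oldest_key)  # "a" was set first
```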

~Ryan
On Jan 16, 2015 2:24 AM, Gurdipe Dosanjh gurd...@veeqo.com wrote:

 Hi All,

 Thank you for the updates.

 I have been doing a lot of reading on memcached and I am trying to find a
 way I can find out what is the oldest key.

 Is there a way I can do this?

 Kind Regards

 Gurdipe

 Kind Regards

 Gurdipe

 Email: gurd...@veeqo.com
 Mobile: 07879682511
 Home: 01656749236
 Skype: gurdipe_veeqo
 Linkedin: gurdipe
 Dropbox: gurd...@veepo.com


 On 12 January 2015 at 20:49, 'Jay Grizzard' via memcached 
 memcached@googlegroups.com wrote:

 Ack! You are, of course, right. I looked at the protocol documentation
 and completely failed to engage my brain enough to realize that the
 protocol documentation is… imprecise. Or at least unclear. Or at least
 lacks an appropriate definition of ‘age’.

 My bad!

 -j

 On Mon, Jan 12, 2015 at 12:14 PM, dormando dorma...@rydia.net wrote:

 The only data stored are when the item expires, and when the last time it
 was accessed.

 The age field (and evicted_time) is how long ago the oldest item in the
 LRU was accessed. You can roughly tell how wide your LRU is with that.

 On Mon, 12 Jan 2015, 'Jay Grizzard' via memcached wrote:

  I don’t think there’s a way to figure out when a given key was
 written. If you really needed that, you could write it as part of the data
 you
  stored, or use the ‘flags’ field to store a unixtime timestamp.
  You can get the age of the oldest key, on a per-slab basis, with
 ‘stats items’ and looking at the ‘age’ field. If you want the overall
 oldest age,
  you’ll have to find the oldest age value amongst all the slabs.
 
  Do note, though, that if you have evictions going on, ‘oldest’ is kind
 of dubious, if you’re trying to use it as a “anything newer than this
  exists”, since evictions happen in lru order and per-slab, so younger
 items can disappear before older ones, if they’re in a different slab or
 have
  been accessed more recently. (Don’t know if that’s what you’re doing,
 but just in case you are…)
 
  -j
 
 
  On Mon, Jan 12, 2015 at 9:34 AM, Gurdipe Dosanjh gurd...@veeqo.com
 wrote:
Hi All,
 
  I am new  to memcache and need to know is there a where to work out
 when the key was written to memcache and calculate the age of the oldest
  key on our memcache?
 
  Kind Regards
 
  Gurdipe
 
 
 
 
 








Re: Noob question so go easy

2015-01-14 Thread Ryan McElroy
Redundancy is not the only reason for having multiple servers. Another
common use of multiple servers is sharding: if the data you want to cache
won't fit on one server, you can split it across multiple servers. At
Facebook, a front-end cluster will have hundreds to thousands of cache
servers with the data split across them in this way. We achieve redundancy
by having multiple clusters, each with a similar number of memcached
machines.
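
The split described here is purely client-side: the client hashes each key to
pick a server. A minimal modulo-sharding sketch with made-up addresses (real
clients typically use consistent hashing so fewer keys move when the pool
changes):

```python
import hashlib

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]  # example pool

def pick_server(key, pool):
    # Stable hash -> index: the same key always maps to the same server.
    digest = hashlib.md5(key.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

assert pick_server("user:42", servers) == pick_server("user:42", servers)

counts = {s: 0 for s in servers}
for i in range(999):
    counts[pick_server("key:%d" % i, servers)] += 1
print(counts)  # keys spread roughly evenly across the pool
```

Because each key lives on exactly one server (per cluster), the stale-replica
scenario in the question doesn't arise within a single pool.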

Cheers,

~Ryan

On Wed, Jan 14, 2015 at 6:19 PM, Les Mikesell lesmikes...@gmail.com wrote:

 On Wed, Jan 14, 2015 at 8:01 PM, gunna lfkentw...@gmail.com wrote:
  I'm doing a large amount of reading on the subject but had a question
 about
  something.  Firstly I believe while you can have an array or memcached
  server they do not replicate.  If they don't replicate then what is the
  reason you would have an array?  Is it purely for redundancy?  What
 happens
  if a query is run on server 1 and loaded into memcahed but on server 2
 that
  query is not as current as server 1.  If server 1 fails and you go to
 server
  2 the query will be in cache but be stale?
 

 You use multiple servers so you still have some running in case one or
 a few fail.   You can either use a client hashing strategy that simply
 fails and pulls from the persistent database for the percentage of
 servers that are down, or you can use one that rebalances across the
 remaining servers.  When you cache something you set how long that
 value is allowed to be used.  Even in the rare case of rebalancing and
 servers going in and out of the cluster such that a client queries a
 server that does not have the latest value, it still won't return
 something older than the time you set as ok to reuse it.

 --
Les Mikesell
 lesmikes...@gmail.com





Re: Multi-get implementation in binary protocol

2015-01-05 Thread Ryan McElroy
The first is correct -- mcrouter won't send out multi-gets. Specifically,
mcrouter will accept multi-gets on the server side. That is, it will
correctly parse a command like get key1 key2 key3\r\n, but when it sends
the requests out, it will send them as get key1\r\nget key2\r\nget
key3\r\n, even if they all go to the same memcached server. We considered
changing this a few times, but found that it increased complexity
significantly and really didn't matter for the way we used memcache at
Facebook.
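
The batching effect can be seen by assembling several single-key ascii gets
into one buffer before a single send; over a socket they would typically
leave in one packet. A sketch of just the request bytes (no live server):

```python
def batched_gets(keys):
    # One buffer of back-to-back single-key gets, flushed with a single send().
    return b"".join(b"get %s\r\n" % k.encode() for k in keys)

buf = batched_gets(["key1", "key2", "key3"])
print(buf)  # b'get key1\r\nget key2\r\nget key3\r\n'
```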

On Mon, Jan 5, 2015 at 1:30 PM, Yongming Shen symi...@gmail.com wrote:

 Hi Ryan, by mcrouter doesn't even support multi-gets on the client side,
 do you mean mcrouters won't send multi-gets to memcached servers, or
 frontend servers won't send multi-gets to mcrouters, or both?


 On Wednesday, May 7, 2014 5:10:15 PM UTC-4, Ryan McElroy wrote:

 At least in my experience at Facebook, 1 request != 1 packet. That is, if
 you send several/many requests to the same memcached box quickly, they will
 tend to go out in the same packet or group of packets, so you still get the
 benefits of fewer packets (and in fact, we take advantage of this because
 it is very important at very high request rates -- eg, over 1M gets per
 second). The same thing happens on reply -- the results tend to come back
 in just one packet (or more, if the replies are larger than a packet). At
 Facebook, our main way of talking to memcached (mcrouter) doesn't even
 support multi-gets on the client side, and it *doesn't matter* because the
 batching happens anyway.

 I don't have any experience with the memcached-defined binary protocol,
 but I think there's probably something similar going on here. You can
 verify by using a tool like tcpdump or ngrep to see what goes into each
 packet when you do a series of gets of the same box over the binary
 protocol. My bet is that you'll see them going in the same packet (as long
 as there aren't any delays in sending them out from your client
 application). That being said, I'd love to see what you learn if you do
 this experiment.

 Cheers,

 ~Ryan


 On Wed, May 7, 2014 at 1:24 AM, Byung-chul Hong byungch...@gmail.com
 wrote:

 Hello,

 For now, I'm trying to evaluate the performance of memcached server by
 using several client workloads.
 I have a question about multi-get implementation in binary protocol.
 As I know, in ascii protocol, we can send multiple keys in a single
 request packet to implement multi-get.

 But, in a binary protocol, it seems that we should send multiple request
 packets (one request packet per key) to implement multi-get.
 Even though we send multiple getQ, then sends get for the last key, we
 only can save the number of response packets only for cache miss.
 If I understand correctly, multi-get in binary protocol cannot reduce
 the number of request packets, and
 it also cannot reduce the number of response packets if hit-ratio is
 very high (like 99% get hit).

 If the performance bottleneck is on the network side not on the CPU, I
 think reducing the number of packets is still very important,
 but I don't understand why the binary protocol doesn't care about this.
 I missed something?

 Thanks in advance,
 Byungchul.







Re: How to store the key size more than 250 characters in memcached using java code?

2014-12-30 Thread Ryan McElroy
You can't; it's a deliberate limitation of memcached. What you can do is use md5 
or sha1 to hash your string and use that as the key.

Long keys are usually a sign of misuse of memcached, though, so it often helps 
to take a step back and evaluate what you actually need to put into your key.
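
A sketch of the hashing approach: derive a fixed-length cache key from the
long logical key, keeping a short readable prefix for debuggability. A sha1
hex digest is 40 characters, well under the 250-byte limit (the prefix name
below is illustrative):

```python
import hashlib

MAX_KEY = 250  # memcached's key-length limit

def cache_key(logical_key, prefix="tenant"):
    candidate = prefix + ":" + logical_key
    if len(candidate) <= MAX_KEY:
        return candidate           # short keys pass through unchanged
    digest = hashlib.sha1(logical_key.encode("utf-8")).hexdigest()
    return prefix + ":" + digest   # fixed-length; collision odds are negligible

long_key = "TENANT_PARENTOBJECTID='...'" * 50   # stands in for the 1184-char key
k = cache_key(long_key)
print(k, len(k))
```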

~Ryan (mobile)

 On Dec 30, 2014, at 6:33 AM, sekhar mekala msekhar...@gmail.com wrote:
 
 HI Team,
 
 
  I need to store the key size more than 250 characters in memcached server.
 
 I am tried with my code:
 
 code:
  MemcachedClient client = new MemcachedClient(new InetSocketAddress("192.168.7.134", 11211));
  String key = "292FEC76-5F1C-486F-85A5-09D88096F098_VirtualizationObjectNamesInfo_CustomerManagementObject#292FEC76-5F1C-486F-85A5-09D88096F098#" +
  "(TENANT_PARENTOBJECTID='FFCEF160-BAEE-4876-9C45-8D179DC90481'$OR$TENANT_PARENTOBJECTID='B813170B-3390-4B51-9E6C-81EF3D44BB94'$" +
  "OR$TENANT_PARENTOBJECTID='CD8C669F-A8AA-4987-9B71-ABFBF5207014'$OR$TENANT_PARENTOBJECTID='373F4223-BD1E-4B4C-AE86-F0B09F3D7FBE'$" +
  "OR$TENANT_PARENTOBJECTID='1DE854CA-2E51-4B6B-A3F2-2EC1A9626D1D'$OR$TENANT_PARENTOBJECTID='32C4B07F-804B-45E1-A594-03472945FC1E'$OR" +
  "$TENANT_PARENTOBJECTID='608E4495-8FF9-49DB-8476-CAFCFF84A5B1'$OR" +
  "$TENANT_PARENTOBJECTID='ACFEB0D5-586E-45C1-A3B4-24781C1D07A5')#(TENANT_CHILDOBJECTID='FFCEF160-BAEE-4876-9C45-8D179DC90481'$OR" +
  "$TENANT_CHILDOBJECTID='B813170B-3390-4B51-9E6C-81EF3D44BB94'$OR$TENANT_CHILDOBJECTID='CD8C669F-A8AA-4987-9B71-ABFBF5207014'$OR" +
  "$TENANT_CHILDOBJECTID='373F4223-BD1E-4B4C-AE86-F0B09F3D7FBE'$OR$TENANT_CHILDOBJECTID='1DE854CA-2E51-4B6B-A3F2-2EC1A9626D1D'$OR" +
  "$TENANT_CHILDOBJECTID='32C4B07F-804B-45E1-A594-03472945FC1E'$OR$TENANT_CHILDOBJECTID='608E4495-8FF9-49DB-8476-CAFCFF84A5B1'$OR" +
  "$TENANT_CHILDOBJECTID='ACFEB0D5-586E-45C1-A3B4-24781C1D07A5')#978754F1-AE6D-4201-85B3-0008BD3EE190_v1.0";

 System.out.println("Key size: " + key.length());
 client.set(key, 3600, "teststse");
 System.out.println(client.get(key));
 
 Error:
 2014-12-30 19:20:57.944 INFO net.spy.memcached.MemcachedConnection:  Added 
 {QA sa=/192.168.7.134:11211, #Rops=0, #Wops=0, #iq=0, topRop=null, 
 topWop=null, toWrite=0, interested=0} to connect queue
 Key size:1184
 2014-12-30 19:20:57.947 INFO net.spy.memcached.MemcachedConnection:  
 Connection state changed for sun.nio.ch.SelectionKeyImpl@6c6455ae
 Exception in thread "main" java.lang.IllegalArgumentException: Key is too long (maxlen = 250)
   at net.spy.memcached.util.StringUtils.validateKey(StringUtils.java:69)
   at 
 net.spy.memcached.MemcachedConnection.enqueueOperation(MemcachedConnection.java:745)
   at 
 net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:310)
   at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:844)
   at 
 MemcachedConnectionPooling.ClearedMemcachedServer.main(ClearedMemcachedServer.java:42)
 
 
 
 Client: spymemcached
 Server: memcached
 
 plz help me.
 Thanks in advace
 
 Sekhar.
 
 
 



Re: What happens when a memcached server is down and the next fetch from cache misses?

2014-12-04 Thread Ryan McElroy
All the logic you're describing is up to the client implementation; the servers 
just respond to requests they receive.

Some clients might failover a get to another memcached server; others might 
return an error.

Clients like mcrouter have configurable options on how to behave. Probably many 
other clients are configurable as well.

~Ryan (mobile)

 On Dec 4, 2014, at 8:52 AM, Gerard Ottaviano gjo...@comcast.net wrote:
 
 Hi, new to memcached.  When a memcached server (#1) goes down and a 
 subsequent fetch from cache to that server (#1) results in a miss and that 
 results in a fetch to the backend, does that now create a new cache entry on 
 one of the other memcached servers (#2) that is still available?  If so, when 
 the down server (#1) becomes available again, and the next fetch of that same 
 entry from the cache is requested, how does it determine from which server to 
 get it, now that it exists on 2 servers?  Do we need to save the hashes which 
 contain the server on which that cached entry exists, so we get the most 
 recent one?  Thanks.



Re: does repcached(replication of memcached) has Encryption function when replicate data

2014-10-14 Thread Ryan McElroy
repcached is a separate project, so you should probably direct your queries
about it to that project.

However, it does not look like repcached does any encryption, based on my
glance at the project page and a quick search of the code.

~Ryan
~Ryan

On Mon, Oct 13, 2014 at 11:27 PM, Sean Lin seanlin...@gmail.com wrote:

 does repcached(replication of memcached)  has Encryption function when
 replicate data
 thanks!





Re: memcached and entry size 1Mo

2014-09-10 Thread Ryan McElroy
The real issue, in my opinion, is that if you have values anywhere close to 1MB 
in size, what you probably actually have are values that are *unlimited* in 
size, so expanding the size memcached can store is just a stopgap measure that 
doesn't solve the underlying problem.

Most of the best practices for memcached (e.g. using it as a demand-filled, 
look-aside cache and using deletes for consistency) work best when data is 
chunked into small units that you fetch several of, rather than one large set 
of data that you only use parts of.

Your mileage may vary, so do what makes sense for your use case, but without 
info about the specific things you're trying to accomplish, this is the best 
generic advice I have to give people wondering about the size limitations.

I am personally unaware of performance implications.
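
The chunking approach mentioned above can be sketched as follows: split an
oversized value across several entries under derived keys, plus a small index
entry recording the chunk count. Dicts stand in for the client, and the chunk
size below is an assumption; a real implementation would also need all chunks'
TTLs to match.

```python
CHUNK = 1024 * 512  # assumed chunk size, well under the 1MB item limit
cache = {}          # stand-in for a memcached client

def set_big(key, data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    for n, c in enumerate(chunks):
        cache["%s:%d" % (key, n)] = c
    cache[key] = len(chunks)  # index entry records the chunk count

def get_big(key):
    n = cache.get(key)
    if n is None:
        return None
    parts = [cache.get("%s:%d" % (key, i)) for i in range(n)]
    if any(p is None for p in parts):  # a missing chunk invalidates the whole value
        return None
    return b"".join(parts)

blob = b"x" * (3 * CHUNK + 10)
set_big("bigval", blob)
assert get_big("bigval") == blob
```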

~Ryan (mobile)

 On Sep 10, 2014, at 8:52 AM, Olivier Martin ekk...@gmail.com wrote:
 
 Hi all
 
 I am trying to find feedbacks on using memcached with entry size superior to 
 1Mo, the default value.
 Since 1.4.2., it is possible to change this setting by configuration but i 
 can't find any feedback on the performance and throughput.
 I have seen a couple posts prior to 2010 who said that is not a good idea to 
 increase this value and to prefer chucking the value in several memcached 
 entries.
 Do someone have some relevant information, benchmark, graphics on this use 
 case? Is keeping the default value still a best practice?
 
 Thanks for your feedback 



Re: How to update a database using memcached

2014-08-02 Thread Ryan McElroy
Standard memcached itself can't update the database for you, since it only
responds to requests from other servers and never initiates its own. The
way I've seen this solved is to have your application first modify the
database, then delete the value from memcached. Then on the read path, you
try to read from memcached and in the case of a miss, you read from the
database and write that back to memcached. This is what is known as a
look-aside caching strategy.

An alternative is to have your application update both the database and the
cache, but if you have racing writes, this is an easy way to end up with
cache inconsistencies.
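
The look-aside pattern described above, as a sketch (dicts stand in for the
database and for a memcached client):

```python
db = {"user:1": "Alice"}       # stand-in for the database
cache = {}                     # stand-in for memcached

def write(key, value):
    db[key] = value            # 1. modify the database first
    cache.pop(key, None)       # 2. then delete (not update) the cached copy

def read(key):
    value = cache.get(key)
    if value is None:          # cache miss: fall back to the database...
        value = db.get(key)
        if value is not None:
            cache[key] = value # ...and refill the cache for the next reader
    return value

assert read("user:1") == "Alice"   # miss, filled from the db
write("user:1", "Bob")             # db updated, cache entry deleted
assert read("user:1") == "Bob"     # next read refills with the new value
```

Deleting rather than updating on the write path is what avoids the racing-write
inconsistency mentioned above.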


On Fri, Aug 1, 2014 at 5:20 AM, Tristup Ghosh tris...@gmail.com wrote:

 Hi,

 My question is how to update a value to db by using the memcached. If I am
 set a new value to cache using my application and I want it to be updated
 to database, is it possible if yes please help.
 Thanks in advance.

 Regards
 Tristup Ghosh





Re: How to use memcache in php code

2014-07-10 Thread Ryan McElroy
Thanks for pointing to the better implementation :)

~Ryan (mobile)

 On Jul 10, 2014, at 7:00 PM, Denis Samoylov samoi...@gmail.com wrote:
 
 (this is what happens when FB guys with their superior mcrouter suggest 
 something :) - no office, just joke)
 
 i would recommend to use Memcached client in PECL (not Memcache). The one 
 with d is based on libmemcached and much better (we run both of the in 
 parallel for long time :) )
 
 here is the direct link
 http://pecl.php.net/package/memcached
 it has examples and installation instructions.
 
 
 
 
 On Wednesday, July 9, 2014 1:50:28 AM UTC-7, Venkada Ramanujam wrote:
 Thanks Ryan.
 
 Thanks,
 Venkadaramanujam
 
 
 
 
 On Wed, Jul 9, 2014 at 2:10 PM, Ryan McElroy rya...@gmail.com wrote:
 First you need to install a memcache library for PHP. Instruction for one 
 choice are here: http://www.php.net/manual/en/memcache.installation.php
 
 Next, I suggest you start with the example on the PHP website: 
 http://www.php.net/manual/en/memcache.examples-overview.php
 
 ~Ryan
 
 
 
 On Wed, Jul 9, 2014 at 1:24 AM, Venkada Ramanujam venkadar...@gmail.com 
 wrote:
 Hi to all,
 
I have working on small project in php running in localhost. I want 
 to use memcache technique. I don't have any idea about memcache.
 Could anyone please tell me how to start work with memcache ?. It is 
 better, if someone give some example codes and integration procedure.
 
 Thanks,
 Venkadaramanujam
 
 



Re: How to use memcache in php code

2014-07-09 Thread Ryan McElroy
First you need to install a memcache library for PHP. Instructions for one
choice are here: http://www.php.net/manual/en/memcache.installation.php

Next, I suggest you start with the example on the PHP website:
http://www.php.net/manual/en/memcache.examples-overview.php

~Ryan



On Wed, Jul 9, 2014 at 1:24 AM, Venkada Ramanujam 
venkadaramanu...@gmail.com wrote:

 Hi to all,

I have working on small project in php running in localhost. I want
 to use memcache technique. I don't have any idea about memcache.
 Could anyone please tell me how to start work with memcache ?. It is
 better, if someone give some example codes and integration procedure.

 Thanks,
 Venkadaramanujam





Re: why I forbid the virtual memory by -k , memcached still used virtual memory

2014-07-08 Thread Ryan McElroy
They are measuring the same memory in this case. Resident is the only
number you really need to worry about. See:
http://www.darkcoding.net/software/resident-and-virtual-memory-on-linux-a-short-example/

~Ryan




On Tue, Jul 8, 2014 at 12:42 AM, 李鑫 xinster...@gmail.com wrote:

 Hi,
 I set the memcached max bytes to 12GB and I want to forbid virtual memory,
  so I added the -k option. But the result is that physical memory is 12GB and
  virtual memory is 12.2GB. I don't know why memcached costs about 24GB,
   more than double the max bytes.
this is the memcached config :
   root 21075 1  4 Jun19 ?19:03:09
 /opt/apps/memcached/bin/memcached -u root -l 10.10.85.65 -p 11211 -m 12288
 -c 4096 -t 4 -P /opt/logs/memcached/11211.pid -k -d

this is memory status by command :  top  -d 1 -p 21075 :


 https://lh6.googleusercontent.com/-LnsjV_8lEwg/U7ufoK7yofI/AIg/hrfI-Cjfgvw/s1600/QQ%E5%9B%BE%E7%89%8720140708153645.jpg


   Is this situation OK? And does memcached still use virtual memory?





Re: Memcached - keeping active items in cache

2014-06-16 Thread Ryan McElroy
Check out the touch command here: 
https://github.com/memcached/memcached/blob/master/doc/protocol.txt

It lets you update expiration time and should work for your purpose.
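Per doc/protocol.txt, the text-protocol form is touch <key> <exptime>\r\n, answered with TOUCHED\r\n on success. The following sketch (Python, illustration only; no real server is contacted, and the key below is made up) just builds the command bytes a client would send:

```python
# Sketch: building a memcached text-protocol "touch" command, which pushes
# back an item's expiration time without fetching or rewriting its value.
# exptime follows the usual memcached rules: relative seconds if <= 30 days,
# otherwise an absolute unix timestamp; 0 means never expire.

def build_touch(key: str, exptime: int) -> bytes:
    # memcached keys must not contain whitespace and max out at 250 bytes
    if " " in key or len(key) > 250:
        raise ValueError("invalid memcached key")
    return f"touch {key} {exptime}\r\n".encode("ascii")

print(build_touch("session:abc123", 1800))  # b'touch session:abc123 1800\r\n'
```

A client would write these bytes on its connection and read back "TOUCHED\r\n" (or "NOT_FOUND\r\n").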

~Ryan (mobile)

 On Jun 16, 2014, at 3:22 PM, Caroline Beltran caroline.d.belt...@gmail.com 
 wrote:
 
 I am thinking about using Memcache and have read that you can set an 
 expiration time for your items but an expiry is not a good option for storing 
 sessions keys (in my opinion) because I don't want sessions to expire after a 
 pre-specified time such as 30 minutes.
 
 Instead, I want sessions to expire 30 minutes after the last session 
 activity. So for example, I want the session to remain in cache for the 
 duration of the user's visit and end 30 minutes after the web browser is closed.
 
 
 Please advise if Memcache would allow me to do this. Thank you.
 



Re: Memcached - keeping active items in cache

2014-06-16 Thread Ryan McElroy
What you're describing is more of the behavior of the LRU part of the cache, 
which is not directly exposed to clients. Each time a key is accessed, it is 
moved to the front of the LRU list so it will be the last to be evicted, but 
this does not affect expiration time.

I don't know of a way to update expiration time on get -- I expect it doesn't 
exist because updating an item can be quite a bit more expensive than fetching 
an item due to the need to acquire additional locks when doing the write.

Probably doing a probabilistic touch after each get (say 10% of the time) would 
be sufficient for your needs, but only touching the key after each get would 
guarantee the behavior you're looking for.
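The probabilistic-touch idea can be sketched as follows; the cache object is a dict-based stand-in for a real memcached client (not the real API), and the 10% figure is just the example probability from above:

```python
# Sketch of "probabilistic touch": after each cache hit, extend the TTL only
# some fraction of the time, trading an exact sliding window for far fewer
# writes. FakeCache is an in-process stand-in for a memcached client.
import random
import time

SESSION_TTL = 30 * 60    # 30-minute sliding session window
TOUCH_PROBABILITY = 0.1  # extend the TTL on roughly 10% of gets

class FakeCache:
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._data[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._data.get(key)
        if item is None or item[1] < time.time():
            return None
        return item[0]

    def touch(self, key, ttl):
        item = self._data.get(key)
        if item is not None:
            self._data[key] = (item[0], time.time() + ttl)

def get_session(cache, key, rng=random.random):
    value = cache.get(key)
    # Touching on every get guarantees the sliding window; probabilistic
    # touching is usually close enough and avoids a write per read.
    if value is not None and rng() < TOUCH_PROBABILITY:
        cache.touch(key, SESSION_TTL)
    return value
```

With a real client, cache.touch would issue the protocol "touch" command described earlier in the thread.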

~Ryan (mobile)

 On Jun 16, 2014, at 4:44 PM, Caroline Beltran caroline.d.belt...@gmail.com 
 wrote:
 
 Ryan, thank you for your suggestion, I looked at the link you sent and I 
 think that does what I would need.  If I understand correctly, my application 
 would periodically issue a 'touch' causing memcached to forward the 
 expiration time.
 
 Just out of curiosity, do you know if it is possible for memcached to 
 automatically touch itself after every time a key is looked up?  Thanks again.
 
 On Monday, June 16, 2014 3:38:16 PM UTC-5, Ryan McElroy wrote:
 Check out the touch command here: 
 https://github.com/memcached/memcached/blob/master/doc/protocol.txt
 
 It lets you update expiration time and should work for your purpose.
 
 ~Ryan (mobile)
 
 On Jun 16, 2014, at 3:22 PM, Caroline Beltran caroline@gmail.com 
 wrote:
 
 I am thinking about using Memcache and have read that you can set an 
 expiration time for your items but an expiry is not a good option for 
 storing sessions keys (in my opinion) because I don't want sessions to 
 expire after a pre-specified time such as 30 minutes.
 
 Instead, I want sessions to expire 30 minutes after the last session 
 activity. So for example, I want the session to remain in cache for the 
 duration of the user's visit and end 30 minutes after the web browser is closed.
 
 
 Please advise if Memcache would allow me to do this. Thank you.
 
 -- 
 
 --- 
 You received this message because you are subscribed to the Google Groups 
 memcached group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to memcached+...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.
 
 -- 
 
 --- 
 You received this message because you are subscribed to the Google Groups 
 memcached group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to memcached+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
memcached group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Stats in a clustered environment - Getting keys

2014-05-19 Thread Ryan McElroy
I took a quick look at the code, specifically here:
https://code.google.com/p/beitmemcached/source/browse/trunk/ClientLibrary/MemcachedClient.cs#354

As far as I can tell, the client is probably doing the right thing (eg,
sharding across memcached instances as expected). I didn't see options to
even attempt replication.

I think this comes down to a bug in your code or error in methodology, or a
bug in the library. I can't tell which from the information you provided.

I do most of my coding in PHP, C, and Python, so I can't help much with
.NET stuff, sorry.

~Ryan



On Mon, May 19, 2014 at 9:53 AM, Jonathan Minond jmin...@gmail.com wrote:

 Hi Ryan,

 Thanks for getting  back to me.
 I have the code for the BtIT client, so I can look... can you give me a
 hint what I would be looking for?
 It's an open source imp (
 https://code.google.com/p/beitmemcached/source/browse/ )
 If not, do you have a .NET client you would recommend, or is more widely
 used/supported perhaps?
 All of my interaction with BT / Memcached is wrapped in one library, so I
 can swap the back end client fairly easily if that would be better.




 On Sat, May 17, 2014 at 9:19 PM, Ryan McElroy ryan...@gmail.com wrote:

 memcached itself knows nothing about other nodes in a system. How the
 keys are distributed is entirely dependent upon your client implementation.
 I'm not familiar with BeIT client, but from reading through the wiki page,
 I would expect it to be splitting the keys among your memcache servers
 approximately equally. I say approximately, because hashing functions are
 probabilisitic. With only 27 keys, I wouldn't be surprised to see
 significant deviation here. At large numbers of keys, I would expect pretty
 even distribution though.

 I think more important than how you are fetching keys from each server is
 how you're using the BeIT client -- which you don't show here. Do you set
 it up to do replication or sharding? If replication, what you're seeing is
 expected. If sharding, I'd say it's unexpected.

 You can figure out what it is doing by using a packet sniffer (eg, ngrep,
 wireshark) and seeing when the client sets keys to which boxes.

 ~Ryan


 On Thu, May 15, 2014 at 1:39 PM, Jonathan Minond jmin...@gmail.comwrote:

 I am seeing something that struck me as a little odd.

 From my reading, as I understand, in a memcached environment, each
 memcache node contains a portion of the objects in the cluster.

 So, I would expect something like if I have 27 keys and 3 nodes.

 Each node is holding ~9 keys/objects is that correct to assume?

 So, to test out...
  <add key="MemCached.Endpoint"
  value="server1:11211,server2:11211,server3:11211" />

 As a client, I am using the BeIT Memcached Client for .NET (
 code.google.com/p/beitmemcached/)

 To get the keys, I am using Telnet, to get slabs, and then the items, as
 described by Boris here:
 groups.google.com/forum/#!topic/memcached/YyzonP9HUi0

 1) I loop through my collection of hosts
 2) Do the telnet process against that host
 3) Collect all the info.

 It seems to me, that I am getting the same keys listed on all 3
 servers. ?
 *I did not expect this, and I am hoping someone can explain.*

 To clarify:
 This is how I do a GET:


  And this is how I am trying to get the list of keys; there is a bit
  of debug code buried in there, but it should still be clear:
 (TelNetConn = A simple telnet helper)

  List<string> ret = new List<string>();

  string memCacheEndPointAddress =
      Config.GetValueWithDefault("MemCached.Endpoint", "localhost:11211");

  string[] points = memCacheEndPointAddress.Split(new[] { ',' },
      StringSplitOptions.RemoveEmptyEntries);

  foreach (string h in points)
  {
      string[] hParts = h.Split(new[] { ':' },
          StringSplitOptions.RemoveEmptyEntries);

      string cacheHost = hParts[0];
      TelNetConn tc = new TelNetConn(cacheHost,
          Convert.ToInt32(hParts[1]));

      if (tc.IsConnected)
      {
          ret.Add("HOST: " + cacheHost);

          tc.WriteLine("stats items");
          String s = tc.Read();
          String[] sLines = s.Split(
              new string[] { Environment.NewLine },
              StringSplitOptions.RemoveEmptyEntries);

          foreach (string sl in sLines)
          {
              if (sl == "END") continue;

              String[] slParts = sl.Split(new[] { ':'
                  }, StringSplitOptions.RemoveEmptyEntries);

              int slabID = Convert.ToInt32(slParts[1]);
              string slabType = slParts[2];

              if (slabType.StartsWith("number") ||
                  slabType.StartsWith("age

Re: Stats in a clustered environment - Getting keys

2014-05-17 Thread Ryan McElroy
memcached itself knows nothing about other nodes in a system. How the keys
are distributed is entirely dependent upon your client implementation. I'm
not familiar with BeIT client, but from reading through the wiki page, I
would expect it to be splitting the keys among your memcache servers
approximately equally. I say "approximately" because hashing functions are
probabilistic. With only 27 keys, I wouldn't be surprised to see
significant deviation here. At large numbers of keys, I would expect pretty
even distribution though.
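The "approximately even" distribution is easy to demonstrate with a quick simulation; the MD5-based hash below is purely illustrative and is not necessarily the function the BeIT client uses:

```python
# Sketch of client-side sharding: hash each key to pick one server. With few
# keys the counts are visibly skewed; with many keys they even out. The hash
# (MD5 mod server count) is illustrative, not a specific client's algorithm.
import hashlib
from collections import Counter

SERVERS = ["server1:11211", "server2:11211", "server3:11211"]

def server_for(key: str) -> str:
    digest = hashlib.md5(key.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

few = Counter(server_for(f"key{i}") for i in range(27))
many = Counter(server_for(f"key{i}") for i in range(27000))
print(dict(few))   # often visibly uneven with only 27 keys
print(dict(many))  # close to 9000 per server
```

Note that naive modulo sharding reshuffles most keys when a server is added or removed, which is why many clients use consistent hashing (e.g. ketama) instead.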

I think more important than how you are fetching keys from each server is
how you're using the BeIT client -- which you don't show here. Do you set
it up to do replication or sharding? If replication, what you're seeing is
expected. If sharding, I'd say it's unexpected.

You can figure out what it is doing by using a packet sniffer (eg, ngrep,
wireshark) and seeing when the client sets keys to which boxes.

~Ryan


On Thu, May 15, 2014 at 1:39 PM, Jonathan Minond jmin...@gmail.com wrote:

 I am seeing something that struck me as a little odd.

 From my reading, as I understand, in a memcached environment, each
 memcache node contains a portion of the objects in the cluster.

 So, I would expect something like if I have 27 keys and 3 nodes.

 Each node is holding ~9 keys/objects is that correct to assume?

 So, to test out...
 <add key="MemCached.Endpoint"
 value="server1:11211,server2:11211,server3:11211" />

 As a client, I am using the BeIT Memcached Client for .NET (
 code.google.com/p/beitmemcached/)

 To get the keys, I am using Telnet, to get slabs, and then the items, as
 described by Boris here:
 groups.google.com/forum/#!topic/memcached/YyzonP9HUi0

 1) I loop through my collection of hosts
 2) Do the telnet process against that host
 3) Collect all the info.

 It seems to me, that I am getting the same keys listed on all 3
 servers. ?
 *I did not expect this, and I am hoping someone can explain.*

 To clarify:
 This is how I do a GET:


 And this is how I am trying to get the list of keys; there is a bit of
 debug code buried in there, but it should still be clear:
 (TelNetConn = A simple telnet helper)

 List<string> ret = new List<string>();

 string memCacheEndPointAddress =
     Config.GetValueWithDefault("MemCached.Endpoint", "localhost:11211");

 string[] points = memCacheEndPointAddress.Split(new[] { ',' },
     StringSplitOptions.RemoveEmptyEntries);

 foreach (string h in points)
 {
     string[] hParts = h.Split(new[] { ':' },
         StringSplitOptions.RemoveEmptyEntries);

     string cacheHost = hParts[0];
     TelNetConn tc = new TelNetConn(cacheHost,
         Convert.ToInt32(hParts[1]));

     if (tc.IsConnected)
     {
         ret.Add("HOST: " + cacheHost);

         tc.WriteLine("stats items");
         String s = tc.Read();
         String[] sLines = s.Split(
             new string[] { Environment.NewLine },
             StringSplitOptions.RemoveEmptyEntries);

         foreach (string sl in sLines)
         {
             if (sl == "END") continue;

             String[] slParts = sl.Split(new[] { ':' },
                 StringSplitOptions.RemoveEmptyEntries);

             int slabID = Convert.ToInt32(slParts[1]);
             string slabType = slParts[2];

             if (slabType.StartsWith("number") ||
                 slabType.StartsWith("age"))
             {
                 tc.WriteLine("stats cachedump " + slabID + " 100");

                 s = tc.Read();

                 if (!String.IsNullOrEmpty(s))
                 {
                     if (s != "END")
                     {
                         // ret.Add("FULL: " + s);

                         if (s.StartsWith("ITEM "))
                         {
                             string[] itemparts =
                                 s.Split(new[] { ' ' }, StringSplitOptions.None);
                             string key = itemparts[1];
                             ret.Add("ITEM: " + key);
                         }
                     }
                 }
             }
         }
     }
     else
     {
         ret.Add("HOST: " + cacheHost + " NOT CONNECTED");
     }

     tc.Dispose();
 }


Re: Multi-get implementation in binary protocol

2014-05-07 Thread Ryan McElroy
At least in my experience at Facebook, 1 request != 1 packet. That is, if
you send several/many requests to the same memcached box quickly, they will
tend to go out in the same packet or group of packets, so you still get the
benefits of fewer packets (and in fact, we take advantage of this because
it is very important at very high request rates -- eg, over 1M gets per
second). The same thing happens on reply -- the results tend to come back
in just one packet (or more, if the replies are larger than a packet). At
Facebook, our main way of talking to memcached (mcrouter) doesn't even
support multi-gets on the client side, and it *doesn't matter* because the
batching happens anyway.
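The batching effect can be shown at the byte level: single gets written back-to-back form one contiguous buffer much like a multi-get, and the kernel tends to send a quickly written buffer in shared packets. A small illustrative sketch (Python; no real socket involved):

```python
# Sketch contrasting an ascii-protocol multi-get with pipelined single gets.
# The wire bytes differ, but when the single gets are written back-to-back
# they typically coalesce into the same packet(s), which is the batching
# effect described above.

def multi_get(keys):
    # One request line carrying every key: "get k1 k2 k3\r\n"
    return ("get " + " ".join(keys) + "\r\n").encode()

def pipelined_gets(keys):
    # One request line per key, concatenated into a single send buffer
    return b"".join(f"get {k}\r\n".encode() for k in keys)

keys = ["user:1", "user:2", "user:3"]
print(multi_get(keys))       # b'get user:1 user:2 user:3\r\n'
print(pipelined_gets(keys))  # b'get user:1\r\nget user:2\r\nget user:3\r\n'
```

Either buffer, written in one send() call, usually goes out in a single packet for small key sets.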

I don't have any experience with the memcached-defined binary protocol, but
I think there's probably something similar going on here. You can verify by
using a tool like tcpdump or ngrep to see what goes into each packet when
you do a series of gets of the same box over the binary protocol. My bet is
that you'll see them going in the same packet (as long as there aren't any
delays in sending them out from your client application). That being said,
I'd love to see what you learn if you do this experiment.

Cheers,

~Ryan


On Wed, May 7, 2014 at 1:24 AM, Byung-chul Hong byungchul.h...@gmail.comwrote:

 Hello,

 For now, I'm trying to evaluate the performance of memcached server by
 using several client workloads.
 I have a question about multi-get implementation in binary protocol.
 As I know, in ascii protocol, we can send multiple keys in a single
 request packet to implement multi-get.

 But, in a binary protocol, it seems that we should send multiple request
 packets (one request packet per key) to implement multi-get.
 Even though we send multiple getQs and then a get for the last key, we
 can only save on the number of response packets, and only for cache misses.
 If I understand correctly, multi-get in binary protocol cannot reduce the
 number of request packets, and
 it also cannot reduce the number of response packets if hit-ratio is very
 high (like 99% get hit).

 If the performance bottleneck is on the network side not on the CPU, I
 think reducing the number of packets is still very important,
 but I don't understand why the binary protocol doesn't care about this.
 Did I miss something?

 Thanks in advance,
 Byungchul.



Re: Add a feature 'strong cas', developed from 'lease' that mentioned in Facebook's paper

2014-04-22 Thread Ryan McElroy
I'll take a look at the API and behavior and provide feedback. I'll also
ping our former server guys to see if they can take a look at the
implementation.

Cheers,

~Ryan


On Sun, Apr 20, 2014 at 12:23 AM, dormando dorma...@rydia.net wrote:

 Well I haven't read the lease paper yet. Ryan, can folks more familiar
 with the actual implementation have a look through it maybe?

 On Thu, 17 Apr 2014, Zhiwei Chan wrote:

 
   I am working on a trading system, and getting stale data is
  unacceptable most of the time. But the high throughput makes it
   impossible to get all data from MySQL, so I want to make memcache more
  reliable as a cache. Facebook's paper "Scaling Memcache at
   Facebook" mentions methods called 'lease' and 'mcsqueal', but
  mcsqueal is difficult for my case, because it is hard to get the key for
  MySQL.
 
   Adding the 'strong cas' feature is intended to solve the following
  typical problem: client A and client B want to update the same key, and
  A (set key=1) updates the database before B (set key=2):
   key does not exist in cache: (A get-miss)-(B get-miss)-(B set key=2)-(A set key=1);
   or key exists in cache: (A delete key)-(B delete key)-(B set key=2)-(A set key=1);
   Something is wrong! key=2 is in the database but key=1 is in the cache.
 
   This can happen in a highly concurrent system, and I can't find a
  way to solve it with the current cas method. So I added two commands, 'getss'
   and 'deletess'; they will create a lease and return a cas-unique, or
  tell the client that a lease already exists on the server. The client can then
   do something to prevent stale data, such as wait, or invalidate the
  pre-lease.
   I also think of the lease as a kind of 'dirty lock', because anything trying
  to update the item will replace its own expiration with the lease's expiration (the
   lease's expiration time should be very short), so in the worst case (low
  probability), the stale data only exists in the cache for a short time. That is
   acceptable for most apps in my case.

   For more detailed information, please read doc/strongcas.txt. I'm hoping
  for your suggestions ~_~
 
   I have created a pull request on GitHub:
  https://github.com/memcached/memcached/pull/65
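The lease idea behind this proposal can be sketched as a toy, single-process model (the actual getss/deletess semantics in the pull request differ; the point is just that the cache issues a token on a miss, and only accepts a set that still carries the newest token):

```python
# Toy model of lease-based sets: a miss hands out a lease token, a later
# miss (or invalidation) supersedes it, and a set is accepted only if its
# token is still the outstanding one -- so a slow writer can't install a
# stale value over a newer one.
import itertools

class LeaseCache:
    def __init__(self):
        self._data = {}
        self._leases = {}                 # key -> outstanding lease token
        self._tokens = itertools.count(1)

    def get(self, key):
        if key in self._data:
            return self._data[key], None  # hit: no lease needed
        token = next(self._tokens)        # miss: issue a fresh lease
        self._leases[key] = token
        return None, token

    def set_with_lease(self, key, value, token):
        if self._leases.get(key) != token:
            return False                  # stale lease: a newer miss superseded it
        del self._leases[key]
        self._data[key] = value
        return True

cache = LeaseCache()
_, token_a = cache.get("k")   # client A misses and gets a lease
_, token_b = cache.get("k")   # client B misses later; B's lease supersedes A's
print(cache.set_with_lease("k", "from A", token_a))  # False: A's lease is stale
print(cache.set_with_lease("k", "from B", token_b))  # True
print(cache.get("k")[0])                             # from B
```

The real proposal additionally gives leases a short expiration, bounding how long a stale value can survive in the worst case.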
 


Re: Setting slab class number.

2014-04-02 Thread Ryan McElroy
memcached has a fixed per-item overhead that's more than 10 bytes, so 10-byte
slabs don't make a lot of sense to me.

Currently, 200 different slab sizes is a constant in the code. See the
lines around here:
https://github.com/memcached/memcached/blob/master/memcached.h#L71

That could be changed and you could recompile, but I don't know all the
repercussions it might cause.
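For reference, the slab class sizes are derived from a minimum chunk size grown by the factor given with -f (default 1.25) up to the item size limit. A sketch of the computation (the constants approximate memcached's and are not copied from any particular version):

```python
# Sketch of how memcached derives its slab class sizes: start from a minimum
# chunk, round to an 8-byte boundary, and multiply by the growth factor until
# the item size limit. With defaults this yields far fewer classes than the
# 200-class ceiling (MAX_NUMBER_OF_SLAB_CLASSES) in memcached.h.
def slab_class_sizes(min_chunk=96, factor=1.25, max_item=1024 * 1024, limit=200):
    sizes = []
    size = min_chunk
    while len(sizes) < limit - 1:
        size = (size + 7) // 8 * 8   # chunks align to 8 bytes
        if size >= max_item:
            break
        sizes.append(size)
        size = int(size * factor)
    sizes.append(max_item)           # last class holds maximum-size items
    return sizes

sizes = slab_class_sizes()
print(len(sizes))       # a few dozen classes, well under the 200 ceiling
print(sizes[:5])        # [96, 120, 152, 192, 240]
```

This also shows why a smaller -f gives finer-grained (but more numerous) classes, up to the compiled-in limit.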

This might be interesting to you:
http://dom.as/2008/12/25/memcached-for-small-objects/

~Ryan


On Wed, Apr 2, 2014 at 6:32 AM, Slawomir Pryczek slawek1...@gmail.comwrote:

 Hi guys, I noticed there's some limit on the number of slab classes.

 http://screencast.com/t/RqUovWXVLS
 Is it possible to have eg. 2000 slab classes?

 Alternatively, can I just set the limits of all of these individually, by
 typing e.g. 200 tab-separated numbers?

 What I want to achieve is a better distribution of slabs, optimized
 for storing very small values:
 1. 10bytes
 2. 11bytes
 [..]
 100. 10kb
 101. 15kb
 [..]
 189. 100kb
 190. 150kb
 etc.

 It seems that it isn't possible with current formula and -f attribute.

 Thanks,
 Slawomir.



Re: Configuration

2014-03-18 Thread Ryan McElroy
We need more information to help you. Is that the entire startup script? I
don't think it is. What OS/distribution are you running? How did you
install memcached?

~Ryan


On Tue, Mar 18, 2014 at 1:15 PM, Filippe Costa Spolti 
filippespo...@gmail.com wrote:

  Hi guys.

 I'm trying to change the memcached memory size, but it always stays at 64m:

 memcached startup script:

 PORT=11211
 USER=memcached
 MAXCONN=1024
 CACHESIZE=128
 OPTIONS=


 Process:
 498795  0.0  0.0 330848   820 ?Ssl  17:13   0:00 memcached
 -d -p 11211 -u memcached *-m 64* -c 1024 -P
 /var/run/memcached/memcached.pid



 Why does this happen?

 --
 Regards,
 __
 Filippe Costa Spolti
 Linux User n°515639 - http://counter.li.org/
 filippespo...@gmail.com
 Be yourself
  http://www.linkedin.com/pub/filippe-costa-spolti/67/985/575


Re: race condition might happen when use memcached in such scenario

2014-03-03 Thread Ryan McElroy
This email does not appear to be related to memcached. If you intended to
send this here, please clarify what you trying to accomplish an what
questions you have.

Cheers,

~Ryan


On Mon, Mar 3, 2014 at 6:56 AM, charles merto charlesme...@gmail.comwrote:

 *Suppose that we have a message passing system using mailboxes. When
 sending to a full mail box or trying to receive from an empty one, a
 process does not block, instead, it gets an error code back. The process
 responds to the error code by just trying again, over and over, until it
 succeeds. Does this lead to RACE CONDITION? *


 can someone help me pls :(



Re: memcached persistent connection

2014-03-03 Thread Ryan McElroy
What do you mean by invoking the script in this case? Are you invoking it
from the command line, or from a web server?

I just did some testing and it looks like PHP Memcached objects with
persistence IDs are stored per-PHP-thread.

So if you have 40 PHP threads in your web server, you will have 40 PHP
objects that you need to assign servers to before you're guaranteed that
the persistence will grab you one that already has the servers assigned.
If, on the other hand, you are running from the command line, when the
script finishes, the PHP thread will terminate and the Memcached object
will be destroyed.

Otherwise, your code looks to me like it will work, but you can just drop
the count() call to make it look cleaner, since an empty array evaluates to
false in an if statement in PHP.

~Ryan


On Mon, Feb 24, 2014 at 7:01 PM, siva sivakumar.buddhar...@gmail.comwrote:

 The following is the code snippet i am using to create a memcahed object
 with persistantId.
 First time when I invoke the script it is going to *if* block adds a
 server, but for all requests thereafter it should go to *else* block.
 but it is going to *if *block as the getServerList() is returning zero
 for subsequent requests.

 can someone correct if I am doing something wrong

 $memcache = new Memcached('mempool');
 if (!count($memcache->getServerList()))
 {
   //echo("new pool has been created");
   $memcache->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
   return $memcache->addServer('127.0.0.1', 11211);
 }
 else {
   return true;
 }



Re: mem_requested is larger than total_chunks * chunk_size

2014-03-03 Thread Ryan McElroy
I just did some quick testing and it looks like this happens when memcached
starts evicting items from the slab.


On Mon, Mar 3, 2014 at 7:20 PM, Tony Huo tunghai@gmail.com wrote:

 How can this happen?

 mem_requested is larger than total_chunks * chunk_size!

 Does that mean memcached can't request more pages?


 Or something else?




Re: Memcached cluster

2014-03-03 Thread Ryan McElroy
There is no server-side concept of master-slave. memcached never talks to
any other servers; it only responds to requests from clients. Any
clustering or failover you do will need to be driven from the client. What
you could do is set up a client to write to two different memcached boxes
and read from one randomly; if that request fails, then read from the
other. This way, if one goes down, you still have all the data in cache.
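A minimal sketch of that client-driven pattern (plain dicts stand in for the two memcached servers; a real implementation would also handle timeouts and partial write failures):

```python
# Sketch of client-driven redundancy: write every key to two independent
# caches, read from one at random, and fall back to the other if the first
# misses or the box is down. No server-side coordination is involved.
import random

class ReplicatedClient:
    def __init__(self, primary, secondary):
        self.pools = [primary, secondary]

    def set(self, key, value):
        for pool in self.pools:          # write-through to both boxes
            pool[key] = value

    def get(self, key, rng=random.random):
        first = int(rng() < 0.5)         # pick a box at random
        for pool in (self.pools[first], self.pools[1 - first]):
            if key in pool:              # miss or dead box -> try the other
                return pool[key]
        return None

a, b = {}, {}
client = ReplicatedClient(a, b)
client.set("greeting", "hello")
a.clear()                                # simulate one box going down
print(client.get("greeting"))            # hello -- served from the replica
```

The cost is doubled write traffic and doubled memory; the benefit is that losing one box doesn't empty the cache.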

Here's an article I found (but didn't read yet):
http://www.linuxjournal.com/article/7451

~Ryan


On Mon, Mar 3, 2014 at 9:36 PM, Ranjit D'Souza bernard@gmail.comwrote:

 Thank you

 Is there any concept of master-slave configuration (like in Redis), and
 promoting the slave to master?

 Can you point me to a document or wiki link that gives more information on
 how to set up a memcached cluster?



Re: Memcached increasing the item size

2014-02-12 Thread Ryan McElroy
That's all you have to do on the memcached side. I don't know how well clients 
support larger items though; you'll want to verify that the client you are 
using will support larger items as well.

~Ryan

 On Feb 11, 2014, at 11:52 PM, Ratnesh Kumar Gupta ratnesh10.in...@gmail.com 
 wrote:
 
 Hi Ryan ,
 
 I would like to thanks for your reply.
 
 
 Finally, what do you mean by not showing anything in progress? What 
 progress do you expect? memcached doesn't generally print anything out in 
 normal operation. On an ubuntu box I have, when I run 'memcached -I 2m' and then 
 from another terminal run 'echo -ne "set foo 0 0 200\r\n$(printf 
 "%0.s--" {1..20})\r\n" | nc localhost 11211' I get the expected 
 output: STORED. So it works!
 
 By the Warning I presume you mean this: http://pastebin.com/jtdjda2D; yes, 
 the same warning I used to get. Actually I am new to this, so I have a bit of 
 confusion.
 
 After running $ memcached -I 2m, will it set the item size limit to 2MB so 
 that we can then store the data? Is that all, or do I have to make some 
 other changes as well?
 
 
 
 On Wed, Feb 12, 2014 at 2:27 AM, Ryan McElroy ryan...@gmail.com wrote:
 According to the man page, this is the correct method to increase the item 
 size limit.
 
 By the Warning I presume you mean this: http://pastebin.com/jtdjda2D
 First of all, it's there for a reason: generally, if you think you need 
 items larger than 1MB, you're probably framing the problem suboptimally for 
 memcached. If possible, consider reframing the problem you're tackling so 
 you can store more smaller items instead of one or a few larger items.
 
 Finally, what do you mean by not showing anything in progress? What 
 progress do you expect? memcached doesn't generally print anything out in 
 normal operation. On an ubuntu box I have, when I run 'memcached -I 2m' and then 
 from another terminal run 'echo -ne "set foo 0 0 200\r\n$(printf 
 "%0.s--" {1..20})\r\n" | nc localhost 11211' I get the expected 
 output: STORED. So it works!
 
 ~Ryan
 
 On Feb 11, 2014, at 9:43 AM, Ratnesh Kumar Gupta 
 ratnesh10.in...@gmail.com wrote:
 
 hi ,
 
 I am working with memcached 1.4.14 on an Ubuntu server. I need to 
 increase the item size limit for storing data in memcache. 
 
 I am using the following command to update the size: 
 
 $ memcached -I 2m 
 
 but it just shows me the Warning and does not show anything in 
 progress. Is this the correct method? 
 
 Please help me with this, as I am in urgent need of it.
 
 Thanks in advance 
 Ratnesh
 
 
 
 
 -- 
 --
Thanks & Regards,
Ratnesh Kumar Gupta
(+91 - 9403983944)
 
 



Re: Certain items do not appear to expire after their expiry time

2014-01-26 Thread Ryan McElroy
I agree with your overall assessment -- memcached is supposed to not return 
items whose expiry has passed.

I just built memcached 1.4.15 on my Ubuntu box and was able to verify that 
the keys are, as expected, not returned after the expected expiration time 
(in fact, the key is purged from memory on a subsequent get, also as 
expected).
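That purge-on-read behavior is memcached's lazy expiration: an expired item stays in memory until something next asks for it. As a toy illustration of the idea (a sketch only, not memcached's actual code):

```python
import time


class LazyCache:
    """Toy cache: expired items are only removed when next requested."""

    def __init__(self):
        self._items = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires_at = time.time() + ttl if ttl else None
        self._items[key] = (value, expires_at)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.time() >= expires_at:
            del self._items[key]  # purge lazily, like memcached's fetch path
            return None
        return value


c = LazyCache()
c.set("foo", "hello", ttl=0.05)
assert c.get("foo") == "hello"
time.sleep(0.1)
assert c.get("foo") is None   # expired: not returned...
assert "foo" not in c._items  # ...and purged by that very get
```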

It would help to have a way to reproduce the problem. Is there anything 
you're doing to the key after setting it? You mentioned incremented keys -- 
are you calling incr? Anything else you might be doing?

This is the test I ran:

date +%s; echo -ne "set foo 0 1 5\r\nhello\r\n" | nc localhost 11211; echo 
-ne "get foo\r\n" | nc localhost 11211; echo -ne "stats cachedump 1 0\r\n" 
| nc localhost 11211; sleep 1; echo -ne "get foo\r\n" | nc localhost 11211; 
echo -ne "stats cachedump 1 0\r\n" | nc localhost 11211; date +%s;

Output was as expected:

1390759755
STORED
VALUE foo 0 5
hello
END
ITEM foo [5 b; 1390759755 s]
END
END
END
1390759756

If you could modify the test to fail on your setup, that would be a good 
way to figure out what's going wrong.
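One gotcha worth ruling out (an assumption on my part about the cause, not something established in this thread): the protocol treats an exptime larger than 30 days (2592000 seconds) as an absolute unix timestamp rather than a relative offset. Accidentally passing a large number -- say, a millisecond-based value, or a stale absolute time -- therefore yields an expiry far in the future or already in the past. A sketch of that normalization:

```python
import time

THIRTY_DAYS = 60 * 60 * 24 * 30  # 2592000 s, the protocol's cutoff


def absolute_expiry(exptime, now=None):
    """Mirror how memcached interprets the exptime field.

    0 means "never expires"; values up to 30 days are relative offsets;
    anything larger is taken as an absolute unix timestamp.
    """
    now = time.time() if now is None else now
    if exptime == 0:
        return None
    if exptime <= THIRTY_DAYS:
        return now + exptime
    return exptime  # absolute timestamp: may already be in the past!


now = 1390759755
assert absolute_expiry(300, now) == now + 300          # normal 5-minute TTL
assert absolute_expiry(1389767076, now) == 1389767076  # absolute, ~1 week ago
assert absolute_expiry(1389767076, now) < now          # -> expires immediately
```

So if the client side ever computes its own expiry timestamp and it drifts or overflows, you'd see exactly the kind of past-dated expiry in cachedump that you reported.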

~Ryan

On Wednesday, January 22, 2014 6:32:24 PM UTC-8, James Pearson wrote:

 We're using memcache to store short-lived, incremented keys for the 
 purposes of rate-limiting.  Right now, we're trying to debug an issue where 
 these keys stick around much longer than we expect.

 While investigating the issue, I found an example key that we believe 
 we've set with an expiry time of 300 seconds (five minutes).  Over the 
 course of several hours, I was able to GET it, with the value maintained.

 The only way I could figure out to find a key's expiry time (as actually 
 recorded inside memcache) is through the 'stats cachedump' command, like so:

 [$] (sleep 1; echo "stats cachedump 1 0"; sleep 1; echo "quit";) | 
 telnet localhost 11211 | grep 'my_key'
 ITEM my_key [2 b; 1389767076 s]

 As I understand it[0], that last integer value is the unix timestamp 
 representing the item's expiry time.  That time, however, is roughly one 
 week ago[1].  I double-checked, and the system date is set correctly.
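For reference, a small parser for that cachedump line format (the pattern is inferred from the single example above, so treat it as an assumption):

```python
import re

# matches lines like: ITEM my_key [2 b; 1389767076 s]
ITEM_RE = re.compile(r"^ITEM (\S+) \[(\d+) b; (\d+) s\]$")


def parse_cachedump_line(line):
    """Return (key, size_bytes, expiry_unixtime), or None if no match."""
    m = ITEM_RE.match(line.strip())
    if not m:
        return None
    key, size, expiry = m.groups()
    return key, int(size), int(expiry)


assert parse_cachedump_line("ITEM my_key [2 b; 1389767076 s]") == \
    ("my_key", 2, 1389767076)
```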

 Now, I know that memcached doesn't delete items when they expire, but the 
 protocol explicitly states that I should not be able to get a key if its 
 expiry time has passed[2].  Yet, that appears to be happening.

 Given this is a core feature of memcached, my assumption is that this is 
 not, in fact, a bug in memcached, but rather a misunderstanding on my part. 
  As such, I'd be grateful if you can point out any incorrect assumptions 
 I've made, or provide any advice on where I should be looking.

 We're using memcached 1.4.15.

 Thanks.
  - P

 [0]: http://stackoverflow.com/q/1645764/120999
 [1]: https://www.wolframalpha.com/input/?i=1389767076+unixtime
 [2]: 
 https://github.com/memcached/memcached/blob/master/doc/protocol.txt#L165-L168

