Oh, so this is Amazon ElastiCache?

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> We use AWS for deployment and don't have that information. What in particular looks odd in the settings?
>
> On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote:
>       what're your start arguments? the settings look a little odd, i.e. the
>       full command line (censoring anything important) that you used to start
>       memcached.
>
>       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>       > Sorry. Here it is.
>       >
>       > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando wrote:
>       >       'stats settings' file is empty
>       >
>       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>       >
>       >       > Hi Dormando,
>       >       > Got the stats for production. Please find attached files for "stats settings", "stats items", "stats", and "stats slabs", plus a summary for all slabs.
>       >       >
>       >       > Other details that might help:
>       >       >  *  TTL is two days or more. 
>       >       >  *  Key length is in the range of 40-80 bytes.
>       >       > Below are the parameters that we plan to change from the current settings:
>       >       >  1. slab_automove: from 0 to 1
>       >       >  2. hash_algorithm: from jenkins to murmur
>       >       >  3. chunk_size: from 48 to 297 (as we don't have data smaller than that)
>       >       >  4. growth_factor: from 1.25 to 1.20 (Can reducing this further help? Do more slab classes affect performance?)
>       >       >  5. max_item_size: from 4MB to 1MB (as our data will never be larger than 1MB)
>       >       > Please let me know if different values for the above parameters would be more beneficial.
>       >       > Are there any other parameters which we should consider changing or setting?
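>       >       > If I read the option help correctly, those changes would map to a start line roughly like the sketch below (the -m value of 39936 is only illustrative of our current 39GB total, slab_automove needs slab_reassign enabled, and the hash option value is spelled "murmur3"):
>       >       >
>       >       >     memcached -m 39936 -n 297 -f 1.20 -I 1m \
>       >       >         -o slab_reassign,slab_automove=1,hash_algorithm=murmur3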
>       >       >
>       >       > Also, below are the calculations used for the columns in the shared summary. Could you please confirm whether these calculations are correct?
>       >       > 1) Total_Mem = total_pages * page_size  --> total memory
>       >       > 2) Strg_ovrHd = (mem_requested / (used_chunks * chunk_size)) * 100  --> storage overhead
>       >       > 3) Free Memory = free_chunks * chunk_size  --> free memory
>       >       > 4) To Store = mem_requested  --> actual memory requested for storing data
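>       >       > In code form, the same per-class columns (a minimal Python sketch; the field names are the ones "stats slabs" reports, and the fixed 1MB page size is assumed):
>       >       >
>       >       >     # Summarize one slab class from parsed "stats slabs" fields.
>       >       >     PAGE_SIZE = 1024 * 1024  # slab pages are fixed at 1MB
>       >       >
>       >       >     def summarize(slab):
>       >       >         total_mem = slab["total_pages"] * PAGE_SIZE          # 1) Total_Mem
>       >       >         strg_ovrhd = (slab["mem_requested"] /
>       >       >                       (slab["used_chunks"] * slab["chunk_size"])) * 100  # 2) Strg_ovrHd
>       >       >         free_mem = slab["free_chunks"] * slab["chunk_size"]  # 3) Free Memory
>       >       >         to_store = slab["mem_requested"]                     # 4) To Store
>       >       >         return total_mem, strg_ovrhd, free_mem, to_store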
>       >       >
>       >       > Thank you for your time and efforts in explaining concepts.
>       >       > Shweta
>       >       >
>       >       >             > > the rest is free memory, which should be measured separately.
>       >       >             > Free memory for a class will be: (free_chunks * chunk_size),
>       >       >             > and total memory reserved by a class will be: (total_pages * page_size).
>       >       >             >
>       >       >             > > If you're getting evictions in class A but there's too much free memory in classes C, D, etc.,
>       >       >             > > then you have a balance issue, for example. An efficiency stat which just
>       >       >             > > adds up the total pages doesn't tell you what to do with it.
>       >       >             > I see, got your point. Storage overhead can help in deciding the chunk_size and growth_factor. Let me add storage overhead and free memory to the calculations as well.
>       >       >
>       >       >             Most people don't have to worry about growth_factor very much, especially
>       >       >             since the large item code was added, though it has its own caveats. Growth
>       >       >             factor is typically only useful if you have _very_ statically sized
>       >       >             objects.
>       >       >
>       >       >             > One curious question: if we have an item of 500 bytes and there is free memory only in class A (chunk_size: 100 bytes), does the cache evict items from a class with a larger chunk_size, or use multiple chunks from class A?
>       >       >
>       >       >             No, it will evict an item matching the 500 byte chunk size, and not touch
>       >       >             A. This is where the memory balancer comes in; it will move pages of
>       >       >             memory between slab classes to keep the tail age roughly the same between
>       >       >             classes. It does this slowly.
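>       >       >             (If you want to watch that happen, the per-class tail age is reported as
>       >       >             the "age" field in "stats items" output.)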
>       >       >
>       >       >             > Example:
>       >       >             > In the scenario below, when we try to store a 3MB item, it evicts items from the 512K class even though there was memory in classes with smaller chunk_size, and that other memory stays blocked by the smaller slabs.
>       >       >
>       >       >             Large (> 512KB) items are an exception. It will try to evict from the
>       >       >             "large item" bucket, which is 512kb. It will try to do this up to a few
>       >       >             times, trying to free up enough memory to make space for the large item.
>       >       >
>       >       >             So to make space for a 3MB item, whether the tail item is 5MB or 1MB in
>       >       >             size, it will still be evicted. If the tail age is low compared to all
>       >       >             other classes, the memory balancer will eventually move more pages into
>       >       >             the 512K slab class.
>       >       >
>       >       >             If you tend to store a lot of very large items, it works better if the
>       >       >             instances are larger.
>       >       >
>       >       >             Memcached is more optimized for performance with small items. If you try
>       >       >             to store a small item, it will evict exactly one item to make space.
>       >       >             However, for very large items (1MB+), the time it takes to read the data
>       >       >             from the network is so large that we can afford to do extra processing.
>       >       >
>       >       >             > 3Mb_items_eviction.png
>       >       >             >
>       >       >             >
>       >       >             > Thank you,
>       >       >             > Shweta
>       >       >             >
>       >       >             >
>       >       >             > On Sunday, July 5, 2020 at 1:13:19 AM UTC+5:30, Dormando wrote:
>       >       >             >       (memory_requested / (chunk_size * chunk_used)) * 100
>       >       >             >
>       >       >             >       is roughly the storage overhead of memory used in the system. The rest is
>       >       >             >       free memory, which should be measured separately. If you're getting
>       >       >             >       evictions in class A but there's too much free memory in classes C, D, etc.,
>       >       >             >       then you have a balance issue, for example. An efficiency stat which just
>       >       >             >       adds up the total pages doesn't tell you what to do with it.
>       >       >             >
>       >       >             >       On Sat, 4 Jul 2020, Shweta Agrawal wrote:
>       >       >             >
>       >       >             >       > > I'll need the raw output from "stats items" and "stats slabs". I don't
>       >       >             >       > > think that efficiency column is very helpful.
>       >       >             >       > Okay, no worries. I can get it by Tuesday and will share.
>       >       >             >       >
>       >       >             >       > Efficiency for each slab is calculated as
>       >       >             >       >  (("stats slabs" -> memory_requested) / (("stats slabs" -> total_pages) * page_size)) * 100
>       >       >             >       >
>       >       >             >       >
>       >       >             >       > Attaching a script which has the calculations for the same. The script is from the memcached repo, with an additional calculation for efficiency.
>       >       >             >       > Will it be possible for you to verify whether the efficiency calculation is correct?
>       >       >             >       >
>       >       >             >       > Thank you,
>       >       >             >       > Shweta
>       >       >             >       >
>       >       >             >       > On Saturday, July 4, 2020 at 1:08:23 PM 
> UTC+5:30, Dormando wrote:
>       >       >             >       >       ah okay.
>       >       >             >       >
>       >       >             >       >       I'll need the raw output from "stats items" and "stats slabs". I don't
>       >       >             >       >       think that efficiency column is very helpful.
>       >       >             >       >
>       >       >             >       >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>       >       >             >       >
>       >       >             >       >       >
>       >       >             >       >       >
>       >       >             >       >       > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>       >       >             >       >       >       No attachment
>       >       >             >       >       >
>       >       >             >       >       >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>       >       >             >       >       >
>       >       >             >       >       >       >
>       >       >             >       >       >       > Wooo...so quick. :):)
>       >       >             >       >       >       > > Correct, close. It actually uses more like 3 512k chunks and then one
>       >       >             >       >       >       > > smaller chunk from a different class to fit exactly 1.6MB.
>       >       >             >       >       >       > I see, got it.
>       >       >             >       >       >       >
>       >       >             >       >       >       > > Can you share snapshots from "stats items" and "stats slabs" for one of
>       >       >             >       >       >       > > these instances?
>       >       >             >       >       >       >
>       >       >             >       >       >       > Currently I have a summary of it, shared below. I can get a snapshot by Tuesday, as I need to request it.
>       >       >             >       >       >       >
>       >       >             >       >       >       > "pages" holds the total_pages value from "stats slabs" for each slab.
>       >       >             >       >       >       > "item_size" holds the chunk_size value from "stats slabs" for each slab.
>       >       >             >       >       >       > Used memory is calculated as pages * page_size ---> this has to be corrected now.
>       >       >             >       >       >       >
>       >       >             >       >       >       >
>       >       >             >       >       >       > prod_stats.png
>       >       >             >       >       >       >
>       >       >             >       >       >       >
>       >       >             >       >       >       > > 90%+ are perfectly doable. You probably need to look a bit more closely
>       >       >             >       >       >       > > into why you're not getting the efficiency you expect. The detailed stats
>       >       >             >       >       >       > > output should point to why. I can help with that if it's confusing.
>       >       >             >       >       >       >
>       >       >             >       >       >       > Great. I will surely ask for your input whenever I have a question. It is really kind of you to offer help.
>       >       >             >       >       >       >
>       >       >             >       >       >       > > Either the slab rebalancer isn't keeping up or you actually do have 39GB
>       >       >             >       >       >       > > of data and your expectations are a bit off. This will also depend on
>       >       >             >       >       >       > > the TTLs you're setting and how often/quickly your items change size.
>       >       >             >       >       >       > > Also things like your serialization method / compression / key length vs
>       >       >             >       >       >       > > data length / etc.
>       >       >             >       >       >       >
>       >       >             >       >       >       > We have much less data than 39 GB; after facing evictions, the cluster size has always been kept higher than the expected data size.
>       >       >             >       >       >       > TTL is two days or more.
>       >       >             >       >       >       > From my observation, item size (data length) is in the range of 300 bytes to 500K after compression.
>       >       >             >       >       >       > Key length is in the range of 40-80 bytes.
>       >       >             >       >       >       >
>       >       >             >       >       >       > Thank you,
>       >       >             >       >       >       > Shweta
>       >       >             >       >       >       >  
>       >       >             >       >       >       > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>       >       >             >       >       >       >       Hey,
>       >       >             >       >       >       >
>       >       >             >       >       >       >       > Putting down my understanding to re-confirm:
>       >       >             >       >       >       >       > 1) Page size will always be 1MB and we cannot change it. Moreover, it doesn't need to be changed.
>       >       >             >       >       >       >
>       >       >             >       >       >       >       Correct.
>       >       >             >       >       >       >
>       >       >             >       >       >       >       > 2) We can store items larger than 1MB, and it is done by combining chunks together (example: let's say item size ~1.6MB --> 4 slab chunks (512k slab) from 2 pages will be used).
>       >       >             >       >       >       >
>       >       >             >       >       >       >       Correct, close. It actually uses more like 3 512k chunks and then one
>       >       >             >       >       >       >       smaller chunk from a different class to fit exactly 1.6MB.
>       >       >             >       >       >       >
>       >       >             >       >       >       >       > We use memcached in production, and in the past we saw evictions even when free memory was present. Also, we currently use a cluster with 39GB RAM in total to cache data, even though the data size we expect is ~15GB, to avoid eviction of active items.
>       >       >             >       >       >       >
>       >       >             >       >       >       >       Can you share snapshots from "stats items" and "stats slabs" for one of
>       >       >             >       >       >       >       these instances?
>       >       >             >       >       >       >
>       >       >             >       >       >       >       > But as our data varies in size, it should be possible to avoid evictions by tuning the parameters chunk_size, growth_factor, and slab_automove. Also, I believe memcached is efficient and we can reduce cost by reducing the memory size for the cluster.
>       >       >             >       >       >       >       > So I am trying to find the best possible memory size and parameters we can have, and I want to be clear on my understanding and calculations.
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       > So while trying different parameters and putting all the calculations together, I observed that total_pages * item_size_max > physical memory for a machine, which did not match my understanding from all the blogs and docs. But it's clear now, thanks to you.
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       > One last question: from my trials I find that we can achieve ~90% storage efficiency with memcached (i.e. we need 10MB of physical memory to store 9MB of data). Do you recommend any ideal memory size in terms of a percentage of the expected data size?
>       >       >             >       >       >       >
>       >       >             >       >       >       >       90%+ are perfectly doable. You probably need to look a bit more closely
>       >       >             >       >       >       >       into why you're not getting the efficiency you expect. The detailed stats
>       >       >             >       >       >       >       output should point to why. I can help with that if it's confusing.
>       >       >             >       >       >       >
>       >       >             >       >       >       >       Either the slab rebalancer isn't keeping up or you actually do have 39GB
>       >       >             >       >       >       >       of data and your expectations are a bit off. This will also depend on
>       >       >             >       >       >       >       the TTLs you're setting and how often/quickly your items change size.
>       >       >             >       >       >       >       Also things like your serialization method / compression / key length vs
>       >       >             >       >       >       >       data length / etc.
>       >       >             >       >       >       >
>       >       >             >       >       >       >       -Dormando
>       >       >             >       >       >       >
>       >       >             >       >       >       >       > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>       >       >             >       >       >       >       >       Hey,
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       Looks like I never updated the manpage. In the past the item size max was
>       >       >             >       >       >       >       >       achieved by changing the slab page size, but that hasn't been true for a
>       >       >             >       >       >       >       >       long time.
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       From ./memcached -h:
>       >       >             >       >       >       >       >       -m, --memory-limit=<num>  item memory in megabytes (default: 64)
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       ... -m just means the memory limit in megabytes, abstract from the page
>       >       >             >       >       >       >       >       size. I think that was always true.
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       In any recentish version, any item larger than half a page size (512k) is
>       >       >             >       >       >       >       >       created by stitching page chunks together. This prevents waste when an
>       >       >             >       >       >       >       >       item would be more than half a page size.
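>       >       >             >       >       >       >       >       (That also explains the page math below: with 1MB pages, a ~3MB item is
>       >       >             >       >       >       >       >       stitched from roughly six 512KB chunks, about 3 pages' worth, so 8 such
>       >       >             >       >       >       >       >       items come to roughly 24-25 pages rather than 8.)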
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       Is there a problem you're trying to track down?
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       I'll update the manpage.
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >       >       > Hi,
>       >       >             >       >       >       >       >       > Sorry if I am repeating the question; I searched the list but could not find a definite answer, so I am posting it.
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > Memcached version: 1.5.10
>       >       >             >       >       >       >       >       > I have started memcached with option -I 4m (setting the maximum item size to 4MB). I verified it is set via the command "stats settings"; I can see STAT item_size_max 4194304.
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > The documentation from the git repository states:
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > -I, --max-item-size=<size>
>       >       >             >       >       >       >       >       > Override the default size of each slab page. The default size is 1mb. Default
>       >       >             >       >       >       >       >       > value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 * 1024).
>       >       >             >       >       >       >       >       > Adjusting this value changes the item size limit.
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > My understanding from the documentation is that this option allows saving items with sizes up to 4MB, and that the page size for each slab will be 4MB (as I set it with -I 4m).
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > I am able to save items up to 4MB, but the page size is still 1MB.
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > -m memory size is the default 64MB.
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > Calculation:
>       >       >             >       >       >       >       >       > -> Calculated total pages used from the "stats slabs" output parameter total_pages = 64. (If the page size were 4MB, then total pages should be no more than 16. Also, when I store 8 items of ~3MB it uses 25 pages, but if the page size were 4MB, it should use 8 pages, right?)
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > Can you please help me understand this behaviour?
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > Attached are files with the output of the commands "stats settings" and "stats slabs".
>       >       >             >       >       >       >       >       > Below is a summarized view of the distribution.
>       >       >             >       >       >       >       >       > First I added items with variable sizes, then added items of 3MB and above.
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > 
> data_distribution.png
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >       > Please let me know in case more details are required or the question is not clear.
>       >       >             >       >       >       >       >       >  
>       >       >             >       >       >       >       >       > Thank You,
>       >       >             >       >       >       >       >       >  Shweta
>       >       >             >       >       >       >       >       >
>       >       >             >       >       >       >       >
>       >       >             >       >       >       >
>       >       >             >       >       >
>       >       >             >       >
>       >       >             >
>       >       >
>       >
>

