re-introducing the memcached proxy

2024-03-27 Thread dormando
Hey,

https://memcached.org/blog/proxy-intro/

- if you're using mcrouter or twemproxy to access memcached, or would like
to but found those projects long abandoned, please give this a look and a try.

I'm not really sure if this mailing list is functional anymore.. it was
overtaken by spammers and I had to disable signups. If you're still around
or have any idea on where or how to spread the word of this thing, please
do. It's been a lot of work and I'd like to get some more folks using it.

Thanks!
-Dormando



Re: Memcache Connection Rejections Despite Available Space

2023-09-04 Thread dormando
Looks like it isn't memcached rejecting your connections. You should try a drupal support community instead. Please include the _exact_ error you are seeing, copy/paste or a screenshot, so they can help you properly.

On Sep 4, 2023, at 2:07 PM, Ahmet Faruk Dereli wrote:





Here is the output of stats, and max connections is the default 1024.

STAT pid 4864
STAT uptime 2998936
STAT time 1693861446
STAT version 1.6.14
STAT libevent 2.1.12-stable
STAT pointer_size 64
STAT rusage_user 13987.394262
STAT rusage_system 82532.203530
STAT max_connections 1024
STAT curr_connections 17
STAT total_connections 20596143
STAT rejected_connections 0
STAT connection_structures 396
STAT response_obj_oom 0
STAT response_obj_count 1
STAT response_obj_bytes 65536
STAT read_buf_count 404
STAT read_buf_bytes 6619136
STAT read_buf_bytes_free 6537216
STAT read_buf_oom 0
STAT reserved_fds 20
STAT cmd_get 631330051
STAT cmd_set 8023749
STAT cmd_flush 0
STAT cmd_touch 0
STAT cmd_meta 0
STAT get_hits 611286014
STAT get_misses 20044037
STAT get_expired 242694
STAT get_flushed 0
STAT delete_misses 3121717
STAT delete_hits 497387
STAT incr_misses 0
STAT incr_hits 0
STAT decr_misses 0
STAT decr_hits 0
STAT cas_misses 0
STAT cas_hits 0
STAT cas_badval 0
STAT touch_hits 0
STAT touch_misses 0
STAT store_too_large 117
STAT store_no_memory 0
STAT auth_cmds 0
STAT auth_errors 0
STAT bytes_read 76408019517
STAT bytes_written 2220920094426
STAT limit_maxbytes 13958643712
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT time_in_listen_disabled_us 0
STAT threads 4
STAT conn_yields 0
STAT hash_power_level 20
STAT hash_bytes 8388608
STAT hash_is_expanding 0
STAT slab_reassign_rescues 6085
STAT slab_reassign_chunk_rescues 0
STAT slab_reassign_evictions_nomem 0
STAT slab_reassign_inline_reclaim 5
STAT slab_reassign_busy_items 0
STAT slab_reassign_busy_deletes 0
STAT slab_reassign_running 0
STAT slabs_moved 45
STAT lru_crawler_running 0
STAT lru_crawler_starts 659029
STAT lru_maintainer_juggles 122729387
STAT malloc_fails 0
STAT log_worker_dropped 0
STAT log_worker_written 0
STAT log_watcher_skipped 0
STAT log_watcher_sent 0
STAT log_watchers 0
STAT unexpected_napi_ids 0
STAT round_robin_fallback 0
STAT bytes 3098827518
STAT curr_items 1027764
STAT total_items 8026776
STAT slab_global_page_pool 0
STAT expired_unfetched 589680
STAT evicted_unfetched 0
STAT evicted_active 0
STAT evictions 0
STAT reclaimed 81428
STAT crawler_reclaimed 1162226
STAT crawler_items_checked 5028977899
STAT lrutail_reflocked 8635
STAT moves_to_cold 5373497
STAT moves_to_warm 4364867
STAT moves_within_lru 3173096
STAT direct_reclaims 0
STAT lru_bumps_dropped 0
END

On Monday, September 4, 2023 at 11:53:10 PM UTC+3 dormando wrote:

Hey,

Can you include the output from "stats"?

Connections have nothing to do with CPU/memory/disk space(??). There's a connection limit (-c) you're running into. The stats output will list the connection limit and if connections have been rejected because of it.

On Sep 4, 2023, at 1:14 PM, Ahmet Faruk Dereli <ahmet@skvare.com> wrote:

Hello,

We've been using Memcached for our Drupal/CiviCRM site setups, and recently, I started encountering Memcache connection errors on the PHP side. Upon checking, Memcache seems to be running fine, and there are no issues with Memory, CPU, or Disk space. Additionally, Memcached is not reaching its memory limits.

However, when I run in debug mode, I observe connection rejections. There appears to be sufficient space available.

I'm looking for guidance on what else I can check to identify and resolve this issue. Any insights or suggestions would be greatly appreciated.

Thank you in advance.

Best regards,
Ahmet





Re: Memcache Connection Rejections Despite Available Space

2023-09-04 Thread dormando
Hey,

Can you include the output from "stats"?

Connections have nothing to do with CPU/memory/disk space(??). There's a connection limit (-c) you're running into. The stats output will list the connection limit and if connections have been rejected because of it.

On Sep 4, 2023, at 1:14 PM, Ahmet Faruk Dereli wrote:

Hello,

We've been using Memcached for our Drupal/CiviCRM site setups, and recently, I started encountering Memcache connection errors on the PHP side. Upon checking, Memcache seems to be running fine, and there are no issues with Memory, CPU, or Disk space. Additionally, Memcached is not reaching its memory limits.

However, when I run in debug mode, I observe connection rejections. There appears to be sufficient space available.

I'm looking for guidance on what else I can check to identify and resolve this issue. Any insights or suggestions would be greatly appreciated.

Thank you in advance.

Best regards,
Ahmet
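
(For anyone who wants to script this check rather than eyeball the full stats dump: below is a minimal sketch in C that sends "stats" and prints only the connection-related counters. It assumes a memcached answering the plain ASCII protocol on 127.0.0.1:11211; adjust host/port to taste.)

/* conncheck.c -- sketch: send "stats" to memcached and print the
 * connection-related counters. Assumes 127.0.0.1:11211, ASCII protocol.
 * Build: cc -o conncheck conncheck.c */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(11211);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        perror("connect");
        return 1;
    }

    const char *cmd = "stats\r\n";
    write(fd, cmd, strlen(cmd));

    /* Read the whole response; it ends with "END\r\n". */
    char buf[65536];
    size_t used = 0;
    buf[0] = '\0';
    for (;;) {
        ssize_t n = read(fd, buf + used, sizeof(buf) - used - 1);
        if (n <= 0) break;
        used += (size_t)n;
        buf[used] = '\0';
        if (strstr(buf, "END\r\n")) break;
    }
    close(fd);

    /* Only print the counters that matter for the connection limit. */
    const char *want[] = { "max_connections", "curr_connections",
                           "rejected_connections", "listen_disabled_num", NULL };
    for (char *line = strtok(buf, "\r\n"); line; line = strtok(NULL, "\r\n")) {
        for (int i = 0; want[i]; i++) {
            if (strstr(line, want[i])) { printf("%s\n", line); break; }
        }
    }
    return 0;
}

In the stats posted in this thread, rejected_connections is 0 and curr_connections is 17 against a max_connections of 1024, which is what shows the rejections are not coming from memcached itself.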





Re: Large slab class

2023-06-06 Thread dormando
Look for cases in the code for 'it_flags & ITEM_CHUNKED'. There are a few cases where the data is looped through (the append/prepend code). A rough sketch of that loop is at the end of this message.

On Jun 6, 2023, at 9:46 PM, boaz shavit wrote:

Thanks a lot Dormando. We have made some customizations to the code and we are using the ITEM_data macro in order to retrieve the value of the key. This works for all slabs except the last one. Is there any way to programmatically retrieve the data?
Thanks,
Bob.

On Tuesday, June 6, 2023 at 11:22:44 PM UTC+3 dormando wrote:

Hey,

Items larger than the slab class max are "chunked" across multiple slab
chunks. See: https://github.com/memcached/memcached/wiki/ReleaseNotes1429

Since that release a "cap" chunk mode was added, so if chunk max is set to
16k and you store a 17k item, it will split into:

1) tiny chunk for key and header
2) 16k main chunk
3) attempt to "cap" with a 1k chunk

if it cannot allocate the 1k cap memory for some reason it will allocate
an extra 16k chunk instead.

The theory is that at larger item sizes we can spend a little CPU to
improve the memory efficiency. Making the "max slab class" smaller means
we can make better use of the slab classes. At some point I will be
reducing the default setting from 512k to 256k or lower, but I need to
revisit it and add some stats counters first.

-Dormando

On Tue, 6 Jun 2023, boaz shavit wrote:

> Hello, I'm trying to understand how data is saved in memcached for items with size > 0.5M.
> When I check the slabclass structure array, I see it only has values for classes up to .5 MB and another entry in place zero which looks like this:
> (gdb) p slabclass[0]
> $87 = {size = 0, perslab = 0, slots = 0x0, sl_curr = 0, slabs = 386, slab_list = 0x7f15b4015ef0, list_size = 2048}
>
> when inserting a value which is very big (the key is 100 bytes, but the value is 800k) I see that the key goes to class 2 (which is 196 bytes) but I do
> not see where the value is stored. 
>
> Can someone explain how this works for big values?
>
> Thanks in advance,
> Bob.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to memcached+...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/memcached/dedecb91-b3d2-4a83-8253-0ad965cecd68n%40googlegroups.com.
>
>
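
(Following up on the pointer above: the append/prepend code walks a linked list of item_chunk structures instead of using ITEM_data(). Below is a hedged sketch of that pattern. The field names (next, used, data) follow item_chunk in memcached.h, but treat this as a starting point to diff against the real source rather than a supported API; get_first_chunk() is a hypothetical placeholder for however your customization reaches the first chunk of a chunked item.)

/* Sketch: copying out the value of a chunked item (it_flags & ITEM_CHUNKED).
 * Field names mirror item_chunk in memcached.h; verify against your tree.
 * get_first_chunk() is a hypothetical placeholder, not a real memcached API. */
#include <string.h>
#include "memcached.h"

extern item_chunk *get_first_chunk(item *it); /* however your code locates chunk 0 */

size_t copy_chunked_value(item *it, char *out, size_t outlen) {
    if ((it->it_flags & ITEM_CHUNKED) == 0) {
        /* Non-chunked items are contiguous; ITEM_data() keeps working. */
        size_t n = (size_t)it->nbytes < outlen ? (size_t)it->nbytes : outlen;
        memcpy(out, ITEM_data(it), n);
        return n;
    }
    size_t copied = 0;
    for (item_chunk *ch = get_first_chunk(it); ch != NULL; ch = ch->next) {
        size_t n = (size_t)ch->used;
        if (copied + n > outlen)
            n = outlen - copied;
        memcpy(out + copied, ch->data, n);
        copied += n;
        if (copied == outlen)
            break;
    }
    return copied;
}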





Re: Large slab class

2023-06-06 Thread dormando
Hey,

Items larger than the slab class max are "chunked" across multiple slab
chunks. See: https://github.com/memcached/memcached/wiki/ReleaseNotes1429

Since that release a "cap" chunk mode was added, so if chunk max is set to
16k and you store a 17k item, it will split into:

1) tiny chunk for key and header
2) 16k main chunk
3) attempt to "cap" with a 1k chunk

if it cannot allocate the 1k cap memory for some reason it will allocate
an extra 16k chunk instead.

The theory is that at larger item sizes we can spend a little CPU to
improve the memory efficiency. Making the "max slab class" smaller means
we can make better use of the slab classes. At some point I will be
reducing the default setting from 512k to 256k or lower, but I need to
revisit it and add some stats counters first.

-Dormando

On Tue, 6 Jun 2023, boaz shavit wrote:

> Hello, I'm trying to understand how data is saved in memcached for items with 
> size > 0.5M.
> When I check the slabclass structure array, I see it only has values for 
> classes up to .5 MB and another entry in place zero which looks like this:
> (gdb) p slabclass[0]
> $87 = {size = 0, perslab = 0, slots = 0x0, sl_curr = 0, slabs = 386, 
> slab_list = 0x7f15b4015ef0, list_size = 2048}
>
> when inserting a value which is very big (the key is 100 bytes, but the value 
> is 800k) I see that the key goes to class 2 (which is 196 bytes) but I do
> not see where the value is stored. 
>
> Can someone explain how this works for big values?
>
> Thanks in advance,
> Bob.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/dedecb91-b3d2-4a83-8253-0ad965cecd68n%40googlegroups.com.
>
>



Re: Extstore revival after crash

2023-04-24 Thread dormando
Hey,

Aside: I'm actually busy trying to parse the datafile with a small Go program to try and replay all the data. Solving this warming will give us a lot of confidence to roll this out in a big way across our infra. What're your thoughts on this and the above?

It would be really bad for both of us if you created a mission critical backup solution based off of an undocumented, unsupported data format which potentially changes with version updates. I think you may have also misunderstood me; the data is actually partially in RAM.

Is there any chance I could get you into the MC discord to chat a bit further about your use case? (linked from https://memcached.org/) - easier to play 20 questions there. If that's not possible I'll list a bunch of questions in the mailing list here instead :)

@Javier, thanks for your thoughts here too. Replication is not an option for us at this scale; that said, your solution is pretty cool!

One of many questions; is this due to cost? (ie; don't want to double the cache storage) or some other reason?

On Monday, April 24, 2023 at 1:05:23 PM UTC+2 Javier Arias Losada wrote:

Hi there,

one thing we've done to mitigate this kind of risk is having two copies of every shard in different availability zones in our cloud provider. Also, we run in kubernetes so for us nodes leaving the cluster is a relatively frequent issue... we are playing with a small process that does the warmup of new nodes quicker.

Since we have more than one copy of the data, we do a warmup process. Our cache nodes are MUCH MUCH smaller... so this approach might not be reasonable for your use-case.

This is how our process works. When a new node is restarted, or in any other situation that involves an empty memcached process starting, our warmup process:
- locates the warmer node for the shard
- gets all the keys and TTLs from the warmer node with: lru_crawler metadump all
- traverses the list of keys in reverse (lru_crawler goes from the least recently used; for this it's better to go from most recent)
- for each key: gets the value from the warmer node and adds (not sets) it to the cold node, including the TTL.

This process might lead to some small data inconsistencies; it will depend on your use case how important that is.

Since our access patterns are very skewed (a small % of keys gets the bigger % of traffic, at least during some time), going in reverse in the LRU dump helps being much more effective.

Best
Javier Arias

On Sunday, April 23, 2023 at 7:24:28 PM UTC+2 dormando wrote:

Hey,

Thanks for reaching out!

There is no crash safety in memcached or extstore; it does look like the
data is on disk but it is actually spread across memory and disk, with
recent or heavily accessed data staying in RAM. Best case you only recover
your cold data. Further, keys can appear multiple times in the extstore
datafile and we rely on the RAM index to know which one is current.

I've never heard of anyone losing an entire cluster, but people do try to
mitigate this by replicating cache across availability zones/regions.
This can be done with a few methods, like our new proxy code. I'd be happy
to go over a few scenarios if you'd like.

-Dormando

On Sun, 23 Apr 2023, 'Danny Kopping' via memcached wrote:

> First off, thanks for the amazing work @dormando & others!
> Context:
> I work at Grafana Labs, and we are very interested in trying out extstore for some very large (>50TB) caches. We plan to split this 50TB cache into about
> 35 different nodes, each with 1.5TB of NVMe & a small memcached instance. Losing any given node will result in losing ~3% of the overall cache which is
> acceptable, however if we lose all nodes at once somehow, losing all of our cache will be pretty bad and will put severe pressure on our backend.
>
> Ask:
> Having looked at the file that extstore writes on disk, it looks like it has both keys & values contained in it. Would it be possible to "re-warm" the
> cache on startup by scanning this data and resubmitting it to itself? We could then have add some condition to our readiness check in k8s to wait until
> the data is all re-warmed and then allow traffic to flow to those instances. Is this feature planned for anytime soon?
>
> Thanks!
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to memcached+...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/memcached/cc45382b-eee7-4e37-a841-d210bf18ff4bn%40googlegroups.com.
>
>





Re: Extstore revival after crash

2023-04-23 Thread dormando
Hey,

Thanks for reaching out!

There is no crash safety in memcached or extstore; it does look like the
data is on disk but it is actually spread across memory and disk, with
recent or heavily accessed data staying in RAM. Best case you only recover
your cold data. Further, keys can appear multiple times in the extstore
datafile and we rely on the RAM index to know which one is current.

I've never heard of anyone losing an entire cluster, but people do try to
mitigate this by replicating cache across availability zones/regions.
This can be done with a few methods, like our new proxy code. I'd be happy
to go over a few scenarios if you'd like.

-Dormando

On Sun, 23 Apr 2023, 'Danny Kopping' via memcached wrote:

> First off, thanks for the amazing work @dormando & others!
> Context:
> I work at Grafana Labs, and we are very interested in trying out extstore for 
> some very large (>50TB) caches. We plan to split this 50TB cache into about
> 35 different nodes, each with 1.5TB of NVMe & a small memcached instance. 
> Losing any given node will result in losing ~3% of the overall cache which is
> acceptable, however if we lose all nodes at once somehow, losing all of our 
> cache will be pretty bad and will put severe pressure on our backend.
>
> Ask:
> Having looked at the file that extstore writes on disk, it looks like it has 
> both keys & values contained in it. Would it be possible to "re-warm" the
> cache on startup by scanning this data and resubmitting it to itself? We 
> could then have add some condition to our readiness check in k8s to wait until
> the data is all re-warmed and then allow traffic to flow to those instances. 
> Is this feature planned for anytime soon?
>
> Thanks!
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/cc45382b-eee7-4e37-a841-d210bf18ff4bn%40googlegroups.com.
>
>



Re: memcached origins

2023-03-10 Thread dormando
Hey,

Uhh well I can say I'm from the USA. I'm pretty sure Brad is too. Probably the rest is accurate.

Probably worth noting the other three haven't contributed in over ten years.

On Mar 10, 2023, at 1:52 PM, Jonathan Louie wrote:

Hello,

Software being used by my organization is being reviewed, memcached being one of them. They're asking about Trade Agreements Act (TAA) compliance, basically restrictions on software/hardware from certain countries. I know this is FOSS and they're still working out what they actually need, but is it possible for you to confirm the top 4 contributors' countries of citizenship? I tried to do the research myself:

Dormando: USA
Dustin Sallings: USA
Brad Fitzpatrick: USA
Trond Norbye: Norway

I appreciate any help in advance. Thank you!





Re: Source code, lru_lock vs former cache_lock

2023-02-27 Thread dormando
Hey,

That old "item_cachedump" command is deprecated. The locking is fine; it's
actually only looking at the COLD_LRU instead of walking all of them like
the lru_crawler.

I'd rather remove the command entirely than do any further work on it; it
has a hard limit on how many keys it can dump, it locks up the whole
worker thread while it fills the buffer, etc. The lru_crawler is superior
in all ways.

On Mon, 27 Feb 2023, Slawomir Pryczek wrote:

> Hi, I was reading about LRU lock a bit and have a question regarding 
> item_cachedump
> unsigned int id = slabs_clsid;
> id |= COLD_LRU;
> pthread_mutex_lock(&lru_locks[id]);
>
> 1. Why in this code we're binary adding COLD_LRU, while for example in 
> lru_crawler's code we're just using slab class IDs. This way other threads are
> able to access locked resources, is that correct?
>
> 2. 
> pthread_mutex_lock(&lru_locks[slab_class_id]);
> uint32_t hv = hash(ITEM_key(it), it->nkey);
> void *hold_lock = NULL;
> if ((hold_lock = item_trylock(hv)) == NULL) {
>      continue;
> }
> if (refcount_incr(it) == 2) {
> // LOCKED
>
> Is it still correct way to lock an item so it can be safely read?
>
> 3. for the item_cachedump I found this comment
>
> /* This is walking the line of violating lock order, but I think it's safe.
>  * If the LRU lock is held, an item in the LRU cannot be wiped and freed.
>  * The data could possibly be overwritten, but this is only accessing the
>  * headers.
>  * It may not be the best idea to leave it like this, but for now it's safe.
>  */
>
> Why it may violate lock order when we have only single lock acquired in this 
> function? IS there some doc about correct (updated) lock order, the one in
> sources seems outdated...
>
> Thanks,
> Slawomir.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/e19afc89-ccc1-4410-b6cf-3b005529df4an%40googlegroups.com.
>
>



Re: Building from sources failed

2023-02-15 Thread dormando
Hey,

Sometimes newer compilers are more strict.

What version are you trying to build? Did any previous version work or
does everything fail on Fedora 37?

I don't know about that evutil_socket_t error though.
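
(Aside, since the first error has an obvious local experiment: the compiler is complaining that logger.h uses va_list before <stdarg.h> is visible, and its own fix-it hint amounts to adding that include near the top of logger.h. This is only a local workaround to try while debugging the build; the proper fix may land differently upstream.)

/* logger.h -- local experiment only, mirroring the compiler's fix-it hint:
 * make va_list visible before it is used below. */
#include <stdarg.h>
#include "bipbuffer.h"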

On Tue, 14 Feb 2023, Slawomir Pryczek wrote:

> Hi Guys, any idea why building is failing for me under Fedora 37?
> Im following this script. For other OSes i was always able to build now it 
> makes issues.
> --
> In file included from memcached.h:51,
>                  from memcached.c:16:
> logger.h:59:86: error: unknown type name ‘va_list’
>    59 | cb)(logentry *e, const entry_details *d, const void *entry, va_list 
> ap);
>       |                                                             ^~~
>
> logger.h:6:1: note: ‘va_list’ is defined in header ‘<stdarg.h>’; did you
> forget to ‘#include <stdarg.h>’?
>     5 | #include "bipbuffer.h"
>   +++ |+#include <stdarg.h>
>     6 |
> logger.h:65:5: error: unknown type name ‘entry_log_cb’
>    65 |     entry_log_cb log_cb;
>       |     ^~~~
> memcached.c:97:33: error: unknown type name ‘evutil_socket_t’
>    97 | static void event_handler(const evutil_socket_t fd, const short which, void *arg);
>       |                                 ^~~
> memcached.c:158:36: error: unknown type name ‘evutil_socket_t’
>   158 | static void maxconns_handler(const evutil_socket_t fd, const short which, void *arg) {
>       |                                    ^~~
> memcached.c:3392:26: error: unknown type name ‘evutil_socket_t’
>  3392 | void event_handler(const evutil_socket_t fd, const short which, void *arg) {
>       |                          ^~~
> memcached.c:3923:33: error: unknown type name ‘evutil_socket_t’
>  3923 | static void clock_handler(const evutil_socket_t fd, const short which, void *arg) {
>       |                                 ^~~
> --
>
> Thanks,
> Slawomir.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/14f1652f-2ca8-4f6b-aaad-2bb2d924f94an%40googlegroups.com.
>
>



Re: Evictions, OOM and other troubleshooting suggestion request

2023-02-13 Thread dormando
Hey,

Just store more data and that ratio will rise. I don't know why that stat
is named "memory efficiency". You have a lot of RAM free.

On Mon, 13 Feb 2023, Артём Яшков wrote:

> Hello again, Dormando!
> Haven't heard you for a while :)
>
> I have started updating dashboard for our brand-new memcached server, so I've 
> checked some stats so far. They look ok, 91% hits, so we will increase
> amount of routes to be cached and the RAM amount also.
>
> On the other hand, I still don't get if it is an issue that 
> memcached_current_bytes / memcached_malloced_bytes * 100% has such a small 
> ratio (~2% of
> memory is being used efficiently).
>
> As we have a lot more routes\pages to include to caching by memcached, I 
> would like to know, if these metrics somehow upgradable, so we would not waste
> RAM.
>
> STAT limit_maxbytes ~32gb
> htop shows memory is used ~ 11gb
> so it less then 1 gb of real bytes used...
>
> Looking forward to hearing from you.
>
> Best wishes, 
> Artem Iashkov
>
> P.S.
> I use https://github.com/prometheus/memcached_exporter to get memcached stats 
> to prometheus\grafanaпонедельник, 31 октября 2022 г. в 09:03:53 UTC+2,
> Артём Яшков:
>   Hello, Dormando!
>
>   Happy to show you actual stats after being a while on the new and 
> updated memcached server we've done. It is many times better, than it was
>   before.
>   The only thing is, there are thousands of errors 'store_too_large' by 
> now, around 8 times. Should I increase -I even more (it's 2m now)?
>
>   stats
>   STAT pid 50132
>   STAT uptime 2673148
>   STAT time 1667199498
>   STAT version 1.6.17
>   STAT libevent 2.1.8-stable
>   STAT pointer_size 64
>   STAT rusage_user 56027.845797
>   STAT rusage_system 202864.587583
>   STAT max_connections 4000
>   STAT curr_connections 39
>   STAT total_connections 120820
>   STAT rejected_connections 0
>   STAT connection_structures 47
>   STAT response_obj_oom 0
>   STAT response_obj_count 1
>   STAT response_obj_bytes 65536
>   STAT read_buf_count 26
>   STAT read_buf_bytes 425984
>   STAT read_buf_bytes_free 344064
>   STAT read_buf_oom 0
>   STAT reserved_fds 20
>   STAT cmd_get 657453008
>   STAT cmd_set 60148720
>   STAT cmd_flush 719
>   STAT cmd_touch 0
>   STAT cmd_meta 0
>   STAT get_hits 582449437
>   STAT get_misses 75003571
>   STAT get_expired 2587689
>   STAT get_flushed 9532
>   STAT delete_misses 0
>   STAT delete_hits 0
>   STAT incr_misses 0
>   STAT incr_hits 0
>   STAT decr_misses 0
>   STAT decr_hits 0
>   STAT cas_misses 0
>   STAT cas_hits 0
>   STAT cas_badval 0
>   STAT touch_hits 0
>   STAT touch_misses 0
>   STAT store_too_large 75998
>   STAT store_no_memory 0
>   STAT auth_cmds 0
>   STAT auth_errors 0
>   STAT bytes_read 6035480065990
>   STAT bytes_written 11467585205846
>   STAT limit_maxbytes 32212254720
>   STAT accepting_conns 1
>   STAT listen_disabled_num 0
>   STAT time_in_listen_disabled_us 0
>   STAT threads 4
>   STAT conn_yields 11876
>   STAT hash_power_level 16
>   STAT hash_bytes 524288
>   STAT hash_is_expanding 0
>   STAT slab_reassign_rescues 1607820
>   STAT slab_reassign_chunk_rescues 1529479
>   STAT slab_reassign_evictions_nomem 0
>   STAT slab_reassign_inline_reclaim 36326
>   STAT slab_reassign_busy_items 2711
>   STAT slab_reassign_busy_deletes 0
>   STAT slab_reassign_running 0
>   STAT slabs_moved 1171688
>   STAT lru_crawler_running 0
>   STAT lru_crawler_starts 1978777
>   STAT lru_maintainer_juggles 425476885
>   STAT malloc_fails 0
>   STAT log_worker_dropped 0
>   STAT log_worker_written 0
>   STAT log_watcher_skipped 0
>   STAT log_watcher_sent 0
>   STAT log_watchers 0
>   STAT unexpected_napi_ids 0
>   STAT round_robin_fallback 0
>   STAT bytes 1687793908
>   STAT curr_items 21466
>   STAT total_items 61756540
>   STAT slab_global_page_pool 8411
>   STAT expired_unfetched 34136549
>   STAT evicted_unfetched 0
>   STAT evicted_active 0
>   STAT evictions 0
>   STAT reclaimed 29198074
>   STAT crawler_reclaimed 27677584
>   STAT crawler_items_checked 3573821082
>   STAT lrutail_reflocked 12569
>   STAT moves_to_cold 65950430
>   STAT moves_to_warm 34051328
>   STAT moves_within_lru 35745382
>   STAT direct_reclaims 0
>   STAT lru_bumps_

Re: Add date/timestamp information to memcached log

2022-12-29 Thread dormando
Hey,

Thanks! We've had a few PR's/questions like this before and unfortunately I 
don't take them.

1) STDOUT/STDERR logs don't typically have timestamps, as they're usually put 
through syslog or similar systems which adds their own timestamp. Without 
options this would make memcached logs have double timestamps. if you're not 
using syslog/etc there are plenty of cli tools which add timestamps via piping.
2) I've been moving logging to the "watch" system, which does have timestamps 
when you're accessing it directly. It is supposed to get a stdout/stderr/syslog 
mode which would drop the timestamp from the output. I've not done this yet 
since the watch logs don't have parity with the original debug logs.
3) most of these logs are for debugging and don't make a ton of sense to be 
timed.

-Dormando
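
(For readers curious what such a wrapper looks like in general terms: below is a generic sketch of a timestamp-prefixing fprintf wrapper in the spirit of the time_fprintf mentioned in the quoted PR, producing the "2022-12-29_16:08:17,932431" style shown in the example output below. It is illustration only, not the code from PR 971, and per the points above you probably don't want it if syslog is already stamping your logs.)

/* Generic sketch of a timestamp-prefixing fprintf wrapper, similar in spirit
 * to the time_fprintf mentioned in the PR; not the actual patch. */
#include <stdio.h>
#include <stdarg.h>
#include <time.h>
#include <sys/time.h>

static void time_fprintf_sketch(FILE *f, const char *fmt, ...) {
    struct timeval tv;
    struct tm tm;
    char stamp[32];

    gettimeofday(&tv, NULL);
    localtime_r(&tv.tv_sec, &tm);
    strftime(stamp, sizeof(stamp), "%Y-%m-%d_%H:%M:%S", &tm);
    fprintf(f, "%s,%06ld ", stamp, (long)tv.tv_usec);  /* e.g. 2022-12-29_16:08:17,932431 */

    va_list ap;
    va_start(ap, fmt);
    vfprintf(f, fmt, ap);
    va_end(ap);
}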

> On Dec 29, 2022, at 12:24 AM, Xuesen Liang  wrote:
> 
> 
> Hello,
> 
> In this PR https://github.com/memcached/memcached/pull/971, date/timestamp 
> information is added to memcached log.
> Before:
> 
> $ ./memcached -vv
>  ... ... 
> <17 server listening (auto-negotiate)
> <18 server listening (auto-negotiate)
> ^CSignal handled: Interrupt: 2.
> stopped assoc
> asking workers to stop
> asking background threads to stop
> stopped lru crawler
> stopped maintainer
> stopped slab mover
> stopped logger thread
> stopped idle timeout thread
> closing connections
> <17 connection closed.
> <18 connection closed.
> reaping worker threads
> all background threads stopped
> 
> After:
> 
> $ ./memcached -vv
>  ... ... 
> 2022-12-29_16:08:17,932431 <17 server listening (auto-negotiate)
> 2022-12-29_16:08:17,932629 <18 server listening (auto-negotiate)
> 2022-12-29_16:08:22,078384 <19 new auto-negotiating client connection
> 2022-12-29_16:08:32,304416 <19 connection closed.
> ^CSignal handled: Interrupt: 2.
> 2022-12-29_16:08:43,015118 stopped assoc
> 2022-12-29_16:08:43,015134 asking workers to stop
> 2022-12-29_16:08:43,015183 asking background threads to stop
> 2022-12-29_16:08:43,015209 stopped lru crawler
> 2022-12-29_16:08:43,407434 stopped maintainer
> 2022-12-29_16:08:43,407467 stopped slab mover
> 2022-12-29_16:08:43,407498 stopped logger thread
> 2022-12-29_16:08:43,407511 stopped idle timeout thread
> 2022-12-29_16:08:43,407519 closing connections
> 2022-12-29_16:08:43,407523 <17 connection closed.
> 2022-12-29_16:08:43,407543 <18 connection closed.
> 2022-12-29_16:08:43,407554 reaping worker threads
> 2022-12-29_16:08:43,407630 all background threads stopped
> 
> If this patch is ok, I will continue to replace other fprintf with 
> time_fprintf.
> Thanks~
> 
> -- 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/805a10c5-8dfe-4c81-baa8-93be366a9824n%40googlegroups.com.



Re: memcached Logo Usage - License? Policy?

2022-09-26 Thread dormando
Hey,

This e-mail response serves as permission.

Policy is stated at the bottom of https://memcached.org/ and probably some
other place I forget.

Thanks for asking first!

On Mon, 26 Sep 2022, Jim St Leger wrote:

> Can someone point me to the memcached logo usage policy?
> A colleague wants to use the logo in a public talk on Wed. Trying to see if 
> it is allowed by license or permission.
>
> Thanks,
> Jim
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/cb77ac0f-7a53-4d52-a2a1-387bef18dd5bn%40googlegroups.com.
>
>



Re: Evictions, OOM and other troubleshooting suggestion request

2022-09-21 Thread dormando
> STAT items:36:number 107
> STAT items:36:number_hot 20
> STAT items:36:number_warm 31
> STAT items:36:number_cold 56
> STAT items:36:age_hot 109
> STAT items:36:age_warm 575
> STAT items:36:age 570
> STAT items:36:evicted 30291
> STAT items:36:evicted_nonzero 30291
> STAT items:36:evicted_time 173
> STAT items:36:outofmemory 1640
> STAT items:36:tailrepairs 0
> STAT items:36:reclaimed 11730
> STAT items:36:expired_unfetched 9433
> STAT items:36:evicted_unfetched 20386
> STAT items:36:evicted_active 2
> STAT items:36:crawler_reclaimed 15338
> STAT items:36:crawler_items_checked 1107546
> STAT items:36:lrutail_reflocked 34
> STAT items:36:moves_to_cold 57328
> STAT items:36:moves_to_warm 32153
> STAT items:36:moves_within_lru 68228
> STAT items:36:direct_reclaims 65140
> STAT items:36:hits_to_hot 80622
> STAT items:36:hits_to_warm 570017
> STAT items:36:hits_to_cold 27734
> STAT items:36:hits_to_temp 0
> STAT items:37:number 76
> STAT items:37:number_hot 7
> STAT items:37:number_warm 40
> STAT items:37:number_cold 29
> STAT items:37:age_hot 112
> STAT items:37:age_warm 33
> STAT items:37:age 582
> STAT items:37:evicted 19065
> STAT items:37:evicted_nonzero 19065
> STAT items:37:evicted_time 97
> STAT items:37:outofmemory 4
> STAT items:37:tailrepairs 0
> STAT items:37:reclaimed 8212
> STAT items:37:expired_unfetched 4890
> STAT items:37:evicted_unfetched 9859
> STAT items:37:evicted_active 3
> STAT items:37:crawler_reclaimed 15988
> STAT items:37:crawler_items_checked 1322257
> STAT items:37:lrutail_reflocked 57
> STAT items:37:moves_to_cold 67418
> STAT items:37:moves_to_warm 64078
> STAT items:37:moves_within_lru 144299
> STAT items:37:direct_reclaims 19200
> STAT items:37:hits_to_hot 137400
> STAT items:37:hits_to_warm 1335663
> STAT items:37:hits_to_cold 51130
> STAT items:37:hits_to_temp 0
> STAT items:38:number 18
> STAT items:38:number_hot 5
> STAT items:38:number_warm 0
> STAT items:38:number_cold 13
> STAT items:38:age_hot 106
> STAT items:38:age_warm 0
> STAT items:38:age 549
> STAT items:38:evicted 4868
> STAT items:38:evicted_nonzero 4868
> STAT items:38:evicted_time 569
> STAT items:38:outofmemory 1134
> STAT items:38:tailrepairs 0
> STAT items:38:reclaimed 5099
> STAT items:38:expired_unfetched 5436
> STAT items:38:evicted_unfetched 4589
> STAT items:38:evicted_active 0
> STAT items:38:crawler_reclaimed 1028
> STAT items:38:crawler_items_checked 92201
> STAT items:38:lrutail_reflocked 2
> STAT items:38:moves_to_cold 11009
> STAT items:38:moves_to_warm 646
> STAT items:38:moves_within_lru 2198
> STAT items:38:direct_reclaims 44292
> STAT items:38:hits_to_hot 877
> STAT items:38:hits_to_warm 2402
> STAT items:38:hits_to_cold 1014
> STAT items:38:hits_to_temp 0
> STAT items:39:number 1122
> STAT items:39:number_hot 189
> STAT items:39:number_warm 28
> STAT items:39:number_cold 905
> STAT items:39:age_hot 115
> STAT items:39:age_warm 303
> STAT items:39:age 600
> STAT items:39:evicted 250857
> STAT items:39:evicted_nonzero 250857
> STAT items:39:evicted_time 599
> STAT items:39:outofmemory 0
> STAT items:39:tailrepairs 0
> STAT items:39:reclaimed 470571
> STAT items:39:expired_unfetched 520676
> STAT items:39:evicted_unfetched 243109
> STAT items:39:evicted_active 0
> STAT items:39:crawler_reclaimed 85379
> STAT items:39:crawler_items_checked 47759171
> STAT items:39:lrutail_reflocked 293
> STAT items:39:moves_to_cold 810927
> STAT items:39:moves_to_warm 25856
> STAT items:39:moves_within_lru 36238
> STAT items:39:direct_reclaims 250876
> STAT items:39:hits_to_hot 244110
> STAT items:39:hits_to_warm 1224055
> STAT items:39:hits_to_cold 44157
> STAT items:39:hits_to_temp 0
> On Tuesday, September 20, 2022 at 22:21:55 UTC+5, Dormando wrote:
>   > Hello, Dormando,
>   >
>   > I'm glad you've answered! 
>   >
>   > The goal is simple - it is to make memcached work properly as the 
> server caching service in our project, as it seems to me that it's not
>   working so by
>   > now.
>
>   What are you specifically seeing that you disagree with? I don't really
>   want to comb 1,000 lines of stats output :) Though I think you have 
> stats
>   slabs but not stats items output.
>
>   > My decision now is to create a new virtual server having much larger 
> RAM available and adjusting configs that way:
>   > -u memcached -p 11211 -m (*max available*) -c 4000 -R 100 (as I see 
> conn_yields errors in stats) -I 2m (as I get errors when it is 1m or
>   lower)
>
>   What errors are you getting specifically? "item too large"?
>
>   conn_yields i

Re: Evictions, OOM and other troubleshooting suggestion request

2022-09-20 Thread dormando
> Hello, Dormando,
>
> I'm glad you've answered! 
>
> The goal is simple - it is to make memcached work properly as the server 
> caching service in our project, as it seems to me that it's not working so by
> now.

What are you specifically seeing that you disagree with? I don't really
want to comb 1,000 lines of stats output :) Though I think you have stats
slabs but not stats items output.

> My decision now is to create a new virtual server having much larger RAM 
> available and adjusting configs that way:
> -u memcached -p 11211 -m (*max available*) -c 4000 -R 100 (as I see 
> conn_yields errors in stats) -I 2m (as I get errors when it is 1m or lower)

What errors are you getting specifically? "item too large"?

conn_yields isn't an error; if you raise -R you might cause excess
latency.

more RAM probably can't hurt.

> Would you suggest increasing -n or lowering -f in addiction to this? Why?
> Also, I am looking forward to reestimate TTL to give actual htmls, it seems 
> low to me now.

No, probably not. If your items are large the overhead is small enough
that it's not worth changing settings (and you will have to re-tune if
your workload changes over time). The upgrade to 1.6.17 should fix any
issues with large items.

>   On Tuesday, September 20, 2022 at 11:27:02 UTC+5, Dormando wrote:
>   Hey,
>
>   I'm not sure if you have a specific question or not, since it sounds 
> like
>   you're just putting up stats and asking about what you can do better? Do
>   you have a goal in mind?
>
>   Since your objects are fairly large (350-450kb) you said, and you seem 
> to
>   be seeing OOM errors, can you try upgrading to 1.6.17? A fix just went 
> in
>   fixing OOM's and excess evictions for caches with mostly large objects.
>
>   -Dormando
>
>   On Mon, 19 Sep 2022, Артём Яшков wrote:
>
>   > Hello there, I am new to using memcached, but I would like to improve 
> the performance of our project by adjusting some of the settings and
>   improving stats of
>   > our memcached usage.
>   > Could you please help me understand what should I change (if 
> anything) ?
>   >
>   > It's used mostly for containing compressed html-pages of a huge 
> web-app (350-450kb each) and smaller data. TTL is set to 10minutes.
>   >
>   > STAT pid 1213
>   > STAT uptime 18491072
>   > STAT time 1663584971
>   > STAT version 1.5.16
>   > STAT libevent 2.0.21-stable
>   > STAT pointer_size 64
>   > STAT rusage_user 168879.888699
>   > STAT rusage_system 628808.673825
>   > STAT max_connections 4000
>   > STAT curr_connections 60
>   > STAT total_connections 33757
>   > STAT rejected_connections 0
>   > STAT connection_structures 175
>   > STAT reserved_fds 20
>   > STAT cmd_get 81749230
>   > STAT cmd_set 7246273
>   > STAT cmd_flush 54
>   > STAT cmd_touch 0
>   > STAT get_hits 73088366
>   > STAT get_misses 8660864
>   > STAT get_expired 282887
>   > STAT get_flushed 536
>   > STAT delete_misses 0
>   > STAT delete_hits 0
>   > STAT incr_misses 0
>   > STAT incr_hits 0
>   > STAT decr_misses 0
>   > STAT decr_hits 0
>   > STAT cas_misses 0
>   > STAT cas_hits 0
>   > STAT cas_badval 0
>   > STAT touch_hits 0
>   > STAT touch_misses 0
>   > STAT auth_cmds 0
>   > STAT auth_errors 0
>   > STAT bytes_read 693399361640
>   > STAT bytes_written 1452329058754
>   > STAT limit_maxbytes 2147483648
>   > STAT accepting_conns 1
>   > STAT listen_disabled_num 0
>   > STAT time_in_listen_disabled_us 0
>   > STAT threads 4
>   > STAT conn_yields 1545
>   > STAT hash_power_level 16
>   > STAT hash_bytes 524288
>   > STAT hash_is_expanding 0
>   > STAT slab_reassign_rescues 213233
>   > STAT slab_reassign_chunk_rescues 157770
>   > STAT slab_reassign_evictions_nomem 182616
>   > STAT slab_reassign_inline_reclaim 182196
>   > STAT slab_reassign_busy_items 184670
>   > STAT slab_reassign_busy_deletes 0
>   > STAT slab_reassign_running 0
>   > STAT slabs_moved 118905
>   > STAT lru_crawler_running 0
>   > STAT lru_crawler_starts 2157221
>   > STAT lru_maintainer_juggles 48468923
>   > STAT malloc_fails 0
>   > STAT log_worker_dropped 0
>   > STAT log_worker_written 0
>   > STAT log_watcher_skipped 0
>   > STAT log_watcher_sent 0
>

Re: meta protocol Java client

2022-09-20 Thread dormando
Hey,

I'm not aware of one existing yet. What java client do you currently use?
Are there any java clients with active maintainers we can contact?

On Fri, 16 Sep 2022, Javier Arias Losada wrote:

> Hi there!I love the meta protocol for memcached, and it would help 
> significantly with some of the use cases I have currently...
>
> My applications are Java, but haven't found any meta protocol Java client... 
> is there one? 
> Thank you.
> Javi
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/93c4f170-aa5f-49a5-a856-e5c857bd9609n%40googlegroups.com.
>
>



Re: Evictions, OOM and other troubleshooting suggestion request

2022-09-20 Thread dormando
Hey,

I'm not sure if you have a specific question or not, since it sounds like
you're just putting up stats and asking about what you can do better? Do
you have a goal in mind?

Since your objects are fairly large (350-450kb) you said, and you seem to
be seeing OOM errors, can you try upgrading to 1.6.17? A fix just went in
fixing OOM's and excess evictions for caches with mostly large objects.

-Dormando

On Mon, 19 Sep 2022, Артём Яшков wrote:

> Hello there, I am new to using memcached, but I would like to improve the 
> performance of our project by adjusting some of the settings and improving stats 
> of
> our memcached usage.
> Could you please help me understand what should I change (if anything) ?
>
> It's used mostly for containing compressed html-pages of a huge web-app 
> (350-450kb each) and smaller data. TTL is set to 10minutes.
>
> STAT pid 1213
> STAT uptime 18491072
> STAT time 1663584971
> STAT version 1.5.16
> STAT libevent 2.0.21-stable
> STAT pointer_size 64
> STAT rusage_user 168879.888699
> STAT rusage_system 628808.673825
> STAT max_connections 4000
> STAT curr_connections 60
> STAT total_connections 33757
> STAT rejected_connections 0
> STAT connection_structures 175
> STAT reserved_fds 20
> STAT cmd_get 81749230
> STAT cmd_set 7246273
> STAT cmd_flush 54
> STAT cmd_touch 0
> STAT get_hits 73088366
> STAT get_misses 8660864
> STAT get_expired 282887
> STAT get_flushed 536
> STAT delete_misses 0
> STAT delete_hits 0
> STAT incr_misses 0
> STAT incr_hits 0
> STAT decr_misses 0
> STAT decr_hits 0
> STAT cas_misses 0
> STAT cas_hits 0
> STAT cas_badval 0
> STAT touch_hits 0
> STAT touch_misses 0
> STAT auth_cmds 0
> STAT auth_errors 0
> STAT bytes_read 693399361640
> STAT bytes_written 1452329058754
> STAT limit_maxbytes 2147483648
> STAT accepting_conns 1
> STAT listen_disabled_num 0
> STAT time_in_listen_disabled_us 0
> STAT threads 4
> STAT conn_yields 1545
> STAT hash_power_level 16
> STAT hash_bytes 524288
> STAT hash_is_expanding 0
> STAT slab_reassign_rescues 213233
> STAT slab_reassign_chunk_rescues 157770
> STAT slab_reassign_evictions_nomem 182616
> STAT slab_reassign_inline_reclaim 182196
> STAT slab_reassign_busy_items 184670
> STAT slab_reassign_busy_deletes 0
> STAT slab_reassign_running 0
> STAT slabs_moved 118905
> STAT lru_crawler_running 0
> STAT lru_crawler_starts 2157221
> STAT lru_maintainer_juggles 48468923
> STAT malloc_fails 0
> STAT log_worker_dropped 0
> STAT log_worker_written 0
> STAT log_watcher_skipped 0
> STAT log_watcher_sent 0
> STAT bytes 1974566295
> STAT curr_items 1
> STAT total_items 7459506
> STAT slab_global_page_pool 0
> STAT expired_unfetched 3257723
> STAT evicted_unfetched 465989
> STAT evicted_active 3
> STAT evictions 515146
> STAT reclaimed 2711477
> STAT crawler_reclaimed 3485764
> STAT crawler_items_checked 433929115
> STAT lrutail_reflocked 2197
> STAT moves_to_cold 7634486
> STAT moves_to_warm 4325739
> STAT moves_within_lru 4284211
> STAT direct_reclaims 557604
> STAT lru_bumps_dropped 0
>
> stats settings
> STAT maxbytes 2147483648
> STAT maxconns 4000
> STAT tcpport 11211
> STAT udpport 0
> STAT inter NULL
> STAT verbosity 0
> STAT oldest 18482584
> STAT evictions on
> STAT domain_socket NULL
> STAT umask 700
> STAT growth_factor 1.25
> STAT chunk_size 48
> STAT num_threads 4
> STAT num_threads_per_udp 4
> STAT stat_key_prefix :
> STAT detail_enabled no
> STAT reqs_per_event 20
> STAT cas_enabled yes
> STAT tcp_backlog 1024
> STAT binding_protocol auto-negotiate
> STAT auth_enabled_sasl no
> STAT auth_enabled_ascii no
> STAT item_size_max 134217728
> STAT maxconns_fast yes
> STAT hashpower_init 0
> STAT slab_reassign yes
> STAT slab_automove 1
> STAT slab_automove_ratio 0.80
> STAT slab_automove_window 30
> STAT slab_chunk_max 524288
> STAT lru_crawler yes
> STAT lru_crawler_sleep 100
> STAT lru_crawler_tocrawl 0
> STAT tail_repair_time 0
> STAT flush_enabled yes
> STAT dump_enabled yes
> STAT hash_algorithm murmur3
> STAT lru_maintainer_thread yes
> STAT lru_segmented yes
> STAT hot_lru_pct 20
> STAT warm_lru_pct 40
> STAT hot_max_factor 0.20
> STAT warm_max_factor 2.00
> STAT temp_lru no
> STAT temporary_ttl 61
> STAT idle_timeout 0
> STAT watcher_logbuf_size 262144
> STAT worker_logbuf_size 65536
> STAT track_sizes no
> STAT inline_ascii_response no
>
> STAT 1:chunk_size 96
> STAT 1:chunks_per_page 10922
> STAT 1:total_pages 1
> STAT 1:total_chunks 10922
> STAT 1:used_chunks 352
> STAT 1:free_chunks 10570
> STAT 1:free_chunks_end 0
> STAT 1:mem_requested 31855
> STAT 1:get_hits 129

Re: "Out of memory during read" errors instead of key eviction

2022-08-27 Thread dormando
Thanks for taking the time to evaluate! It helps my confidence level with
the fix.

You caught me at a good time :) Been really behind with fixes for quite a
while and only catching up this week. I've looked at this a few times and
didn't see the easy fix before...

I think earlier versions of the item chunking code were more fragile and I
didn't revisit it after the cleanup work. In this case each chunk
remembers its original slab class, so having the final chunk be from an
unintended class doesn't break anything. Otherwise freeing the chunks
would be impossible if I had to recalculate their original slab class from
the chunk size.

So now it'll use too much memory in some cases, and lowering slab chunk
max would ease that a bit... so maybe soon will finally be a good time to
lower the default chunk max a little to at least 128k or 256k.

-Dormando

On Fri, 26 Aug 2022, Hayden wrote:

> I didn't see the docker files in the repo that could build the docker image, 
> and when I tried cloning the git repo and doing a docker build I encountered
> errors that I think were related to the web proxy on my work network. I was 
> able to grab the release tarball and the bitnami docker file, do a little
> surgery to work around my proxy issue, and build a 1.6.17 docker image though.
> I ran my application against the new version and it ran for ~2hr without any 
> errors (it previously wouldn't run more than 30s or so before encountering
> blocks of the OOM during read errors). I also made a little test loop that 
> just hammered the instance with similar sized writes (1-2MB) as fast as it
> could and let it run a few hours, and it didn't have a single blip. That 
> encompassed a couple million evictions. I'm pretty comfortable saying the 
> issue
> is fixed, at least for the kind of use I had in mind.
>
> I added a comment to the issue on GitHub to the same effect.
>
> I'm impressed by the quick turnaround, BTW. ;-)
>
> H
>
> On Friday, August 26, 2022 at 5:54:26 PM UTC-7 Dormando wrote:
>   So I tested this a bit more and released it in 1.6.17; I think bitnami
>   should pick it up soonish. if not I'll try to figure out docker this
>   weekend if you still need it.
>
>   I'm not 100% sure it'll fix your use case but it does fix some things I
>   can test and it didn't seem like a regression. would be nice to validate
>   still.
>
>   On Fri, 26 Aug 2022, dormando wrote:
>
>   > You can't build docker images or compile binaries? there's a
>   > docker-compose.yml in the repo already if that helps.
>   >
>   > If not I can try but I don't spend a lot of time with docker directly.
>   >
>   > On Fri, 26 Aug 2022, Hayden wrote:
>   >
>   > > I'd be happy to help validate the fix, but I can't do it until the 
> weekend, and I don't have a ready way to build an updated image. Any
>   chance you could
>   > > create a docker image with the fix that I could grab from somewhere?
>   > >
>   > > On Friday, August 26, 2022 at 10:38:54 AM UTC-7 Dormando wrote:
>   > > I have an opportunity to put this fix into a release today if 
> anyone wants
>   > > to help validate :)
>   > >
>   > > On Thu, 25 Aug 2022, dormando wrote:
>   > >
>   > > > Took another quick look...
>   > > >
>   > > > Think there's an easy patch that might work:
>   > > > https://github.com/memcached/memcached/pull/924
>   > > >
>   > > > If you wouldn't mind helping validate? An external validator 
> would help me
>   > > > get it in time for the next release :)
>   > > >
>   > > > Thanks,
>   > > > -Dormando
>   > > >
>   > > > On Wed, 24 Aug 2022, dormando wrote:
>   > > >
>   > > > > Hey,
>   > > > >
>   > > > > Thanks for the info. Yes; this generally confirms the issue. I 
> see some of
>   > > > > your higher slab classes with "free_chunks 0", so if you're 
> setting data
>   > > > > that requires these chunks it could error out. The "stats 
> items" confirms
>   > > > > this since there are no actual items in those lower slab 
> classes.
>   > > > >
>   > > > > You're certainly right a workaround of making your items < 512k 
> would also
>   > > > > work; but in general if I have features it'd be nice if they 
> worked well
>   > > > > :) Please open an issue so we can improve things!
>
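
(Footnote for readers with the same workload: the other workaround mentioned above, keeping every stored item under the 512k chunk max, usually means splitting a large value into fixed-size segments under derived keys plus a small manifest, and reassembling on read. A rough sketch of the write side is below; the set_fn callback stands in for whatever memcached client you use, and the 500KB segment size is just an assumption chosen to stay under the default slab_chunk_max.)

/* Sketch: splitting a large value into sub-512KB segments so each stored
 * item stays below the default slab chunk max. set_fn is a stand-in for
 * your memcached client's set call; keys become "<key>:0", "<key>:1", ... */
#include <stdio.h>
#include <string.h>

#define SEGMENT_SIZE (500 * 1024)   /* comfortably under the 512KB default */

typedef int (*set_fn)(const char *key, const void *val, size_t len, int ttl);

int set_large(set_fn set, const char *key, const char *val, size_t len, int ttl) {
    size_t nparts = (len + SEGMENT_SIZE - 1) / SEGMENT_SIZE;
    char subkey[256], manifest[64];

    /* Store each segment under "<key>:<n>". */
    for (size_t i = 0; i < nparts; i++) {
        size_t off = i * SEGMENT_SIZE;
        size_t n = (len - off < SEGMENT_SIZE) ? (len - off) : SEGMENT_SIZE;
        snprintf(subkey, sizeof(subkey), "%s:%zu", key, i);
        if (set(subkey, val + off, n, ttl) != 0)
            return -1;
    }

    /* A tiny manifest under the original key tells readers how many
     * segments to fetch (in order) to reassemble the value. */
    snprintf(manifest, sizeof(manifest), "parts=%zu len=%zu", nparts, len);
    return set(key, manifest, strlen(manifest), ttl);
}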

Re: "Out of memory during read" errors instead of key eviction

2022-08-26 Thread dormando
So I tested this a bit more and released it in 1.6.17; I think bitnami
should pick it up soonish. if not I'll try to figure out docker this
weekend if you still need it.

I'm not 100% sure it'll fix your use case but it does fix some things I
can test and it didn't seem like a regression. would be nice to validate
still.

On Fri, 26 Aug 2022, dormando wrote:

> You can't build docker images or compile binaries? there's a
> docker-compose.yml in the repo already if that helps.
>
> If not I can try but I don't spend a lot of time with docker directly.
>
> On Fri, 26 Aug 2022, Hayden wrote:
>
> > I'd be happy to help validate the fix, but I can't do it until the weekend, 
> > and I don't have a ready way to build an updated image. Any chance you could
> > create a docker image with the fix that I could grab from somewhere?
> >
> > On Friday, August 26, 2022 at 10:38:54 AM UTC-7 Dormando wrote:
> >   I have an opportunity to put this fix into a release today if anyone 
> > wants
> >   to help validate :)
> >
> >   On Thu, 25 Aug 2022, dormando wrote:
> >
> >   > Took another quick look...
> >   >
> >   > Think there's an easy patch that might work:
> >   > https://github.com/memcached/memcached/pull/924
> >   >
> >   > If you wouldn't mind helping validate? An external validator would 
> > help me
> >   > get it in time for the next release :)
> >   >
> >   > Thanks,
> >   > -Dormando
> >   >
> >   > On Wed, 24 Aug 2022, dormando wrote:
> >   >
> >   > > Hey,
> >   > >
> >   > > Thanks for the info. Yes; this generally confirms the issue. I 
> > see some of
> >   > > your higher slab classes with "free_chunks 0", so if you're 
> > setting data
> >   > > that requires these chunks it could error out. The "stats items" 
> > confirms
> >   > > this since there are no actual items in those lower slab classes.
> >   > >
> >   > > You're certainly right a workaround of making your items < 512k 
> > would also
> >   > > work; but in general if I have features it'd be nice if they 
> > worked well
> >   > > :) Please open an issue so we can improve things!
> >   > >
> >   > > I intended to lower the slab_chunk_max default from 512k to much 
> > lower, as
> >   > > that actually raises the memory efficiency by a bit (less gap at 
> > the
> >   > > higher classes). That may help here. The system should also try 
> > ejecting
> >   > > items from the highest LRU... I need to double check that it 
> > wasn't
> >   > > already intending to do that and failing.
> >   > >
> >   > > Might also be able to adjust the page mover but not sure. The 
> > page mover
> >   > > can probably be adjusted to attempt to keep one page in reserve, 
> > but I
> >   > > think the algorithm isn't expecting slabs with no items in it so 
> > I'd have
> >   > > to audit that too.
> >   > >
> >   > > If you're up for experiments it'd be interesting to know if 
> > setting
> >   > > "-o slab_chunk_max=32768" or 16k (probably not more than 64) 
> > makes things
> >   > > better or worse.
> >   > >
> >   > > Also, crud.. it's documented as kilobytes but that's not working 
> > somehow?
> >   > > aaahahah. I guess the big EXPERIMENTAL tag scared people off 
> > since that
> >   > > never got reported.
> >   > >
> >   > > I'm guessing most people have a mix of small to large items, but 
> > you only
> >   > > have large items and a relatively low memory limit, so this is 
> > why you're
> >   > > seeing it so easily. I think most people setting large items have 
> > like
> >   > > 30G+ of memory so you end up with more spread around.
> >   > >
> >   > > Thanks,
> >   > > -Dormando
> >   > >
> >   > > On Wed, 24 Aug 2022, Hayden wrote:
> >   > >
> >   > > > What you're saying makes sense, and I'm pretty sure it won't be 
> > too hard to add some functionality to my writing code to break my large
> >   items up into
> >   > > > smaller parts that ca

Re: "Out of memory during read" errors instead of key eviction

2022-08-26 Thread dormando
You can't build docker images or compile binaries? there's a
docker-compose.yml in the repo already if that helps.

If not I can try but I don't spend a lot of time with docker directly.

On Fri, 26 Aug 2022, Hayden wrote:

> I'd be happy to help validate the fix, but I can't do it until the weekend, 
> and I don't have a ready way to build an updated image. Any chance you could
> create a docker image with the fix that I could grab from somewhere?
>
> On Friday, August 26, 2022 at 10:38:54 AM UTC-7 Dormando wrote:
>   I have an opportunity to put this fix into a release today if anyone 
> wants
>   to help validate :)
>
>   On Thu, 25 Aug 2022, dormando wrote:
>
>   > Took another quick look...
>   >
>   > Think there's an easy patch that might work:
>   > https://github.com/memcached/memcached/pull/924
>   >
>   > If you wouldn't mind helping validate? An external validator would 
> help me
>   > get it in time for the next release :)
>   >
>   > Thanks,
>   > -Dormando
>   >
>   > On Wed, 24 Aug 2022, dormando wrote:
>   >
>   > > Hey,
>   > >
>   > > Thanks for the info. Yes; this generally confirms the issue. I see 
> some of
>   > > your higher slab classes with "free_chunks 0", so if you're setting 
> data
>   > > that requires these chunks it could error out. The "stats items" 
> confirms
>   > > this since there are no actual items in those lower slab classes.
>   > >
>   > > You're certainly right a workaround of making your items < 512k 
> would also
>   > > work; but in general if I have features it'd be nice if they worked 
> well
>   > > :) Please open an issue so we can improve things!
>   > >
>   > > I intended to lower the slab_chunk_max default from 512k to much 
> lower, as
>   > > that actually raises the memory efficiency by a bit (less gap at the
>   > > higher classes). That may help here. The system should also try 
> ejecting
>   > > items from the highest LRU... I need to double check that it wasn't
>   > > already intending to do that and failing.
>   > >
>   > > Might also be able to adjust the page mover but not sure. The page 
> mover
>   > > can probably be adjusted to attempt to keep one page in reserve, 
> but I
>   > > think the algorithm isn't expecting slabs with no items in it so 
> I'd have
>   > > to audit that too.
>   > >
>   > > If you're up for experiments it'd be interesting to know if setting
>   > > "-o slab_chunk_max=32768" or 16k (probably not more than 64) makes 
> things
>   > > better or worse.
>   > >
>   > > Also, crud.. it's documented as kilobytes but that's not working 
> somehow?
>   > > aaahahah. I guess the big EXPERIMENTAL tag scared people off since 
> that
>   > > never got reported.
>   > >
>   > > I'm guessing most people have a mix of small to large items, but 
> you only
>   > > have large items and a relatively low memory limit, so this is why 
> you're
>   > > seeing it so easily. I think most people setting large items have 
> like
>   > > 30G+ of memory so you end up with more spread around.
>   > >
>   > > Thanks,
>   > > -Dormando
>   > >
>   > > On Wed, 24 Aug 2022, Hayden wrote:
>   > >
>   > > > What you're saying makes sense, and I'm pretty sure it won't be 
> too hard to add some functionality to my writing code to break my large
>   items up into
>   > > > smaller parts that can each fit into a single chunk. That has the 
> added benefit that I won't have to bother increasing the max item
>   size.
>   > > > In the meantime, though, I reran my pipeline and captured the 
> output of stats, stats slabs, and stats items both when evicting normally
>   and when getting
>   > > > spammed with the error.
>   > > >
>   > > > First, the output when I'm in the error state:
>   > > >  Output of stats
>   > > > STAT pid 1
>   > > > STAT uptime 11727
>   > > > STAT time 1661406229
>   > > > STAT version b'1.6.14'
>   > > > STAT libevent b'2.1.8-stable'
>   > > > STAT pointer_size 64
>   > > > STAT rusage_user 2.93837

Re: "Out of memory during read" errors instead of key eviction

2022-08-25 Thread dormando
Took another quick look...

Think there's an easy patch that might work:
https://github.com/memcached/memcached/pull/924

If you wouldn't mind helping validate? An external validator would help me
get it in time for the next release :)

Thanks,
-Dormando

On Wed, 24 Aug 2022, dormando wrote:

> Hey,
>
> Thanks for the info. Yes; this generally confirms the issue. I see some of
> your higher slab classes with "free_chunks 0", so if you're setting data
> that requires these chunks it could error out. The "stats items" confirms
> this since there are no actual items in those lower slab classes.
>
> You're certainly right a workaround of making your items < 512k would also
> work; but in general if I have features it'd be nice if they worked well
> :) Please open an issue so we can improve things!
>
> I intended to lower the slab_chunk_max default from 512k to much lower, as
> that actually raises the memory efficiency by a bit (less gap at the
> higher classes). That may help here. The system should also try ejecting
> items from the highest LRU... I need to double check that it wasn't
> already intending to do that and failing.
>
> Might also be able to adjust the page mover but not sure. The page mover
> can probably be adjusted to attempt to keep one page in reserve, but I
> think the algorithm isn't expecting slabs with no items in it so I'd have
> to audit that too.
>
> If you're up for experiments it'd be interesting to know if setting
> "-o slab_chunk_max=32768" or 16k (probably not more than 64) makes things
> better or worse.
>
> Also, crud.. it's documented as kilobytes but that's not working somehow?
> aaahahah. I guess the big EXPERIMENTAL tag scared people off since that
> never got reported.
>
> I'm guessing most people have a mix of small to large items, but you only
> have large items and a relatively low memory limit, so this is why you're
> seeing it so easily. I think most people setting large items have like
> 30G+ of memory so you end up with more spread around.
>
> Thanks,
> -Dormando
>
> On Wed, 24 Aug 2022, Hayden wrote:
>
> > What you're saying makes sense, and I'm pretty sure it won't be too hard to 
> > add some functionality to my writing code to break my large items up into
> > smaller parts that can each fit into a single chunk. That has the added 
> > benefit that I won't have to bother increasing the max item size.
> > In the meantime, though, I reran my pipeline and captured the output of 
> > stats, stats slabs, and stats items both when evicting normally and when 
> > getting
> > spammed with the error.
> >
> > First, the output when I'm in the error state:
> >  Output of stats
> > STAT pid 1
> > STAT uptime 11727
> > STAT time 1661406229
> > STAT version b'1.6.14'
> > STAT libevent b'2.1.8-stable'
> > STAT pointer_size 64
> > STAT rusage_user 2.93837
> > STAT rusage_system 6.339015
> > STAT max_connections 1024
> > STAT curr_connections 2
> > STAT total_connections 8230
> > STAT rejected_connections 0
> > STAT connection_structures 6
> > STAT response_obj_oom 0
> > STAT response_obj_count 1
> > STAT response_obj_bytes 65536
> > STAT read_buf_count 8
> > STAT read_buf_bytes 131072
> > STAT read_buf_bytes_free 49152
> > STAT read_buf_oom 0
> > STAT reserved_fds 20
> > STAT cmd_get 0
> > STAT cmd_set 12640
> > STAT cmd_flush 0
> > STAT cmd_touch 0
> > STAT cmd_meta 0
> > STAT get_hits 0
> > STAT get_misses 0
> > STAT get_expired 0
> > STAT get_flushed 0
> > STAT delete_misses 0
> > STAT delete_hits 0
> > STAT incr_misses 0
> > STAT incr_hits 0
> > STAT decr_misses 0
> > STAT decr_hits 0
> > STAT cas_misses 0
> > STAT cas_hits 0
> > STAT cas_badval 0
> > STAT touch_hits 0
> > STAT touch_misses 0
> > STAT store_too_large 0
> > STAT store_no_memory 0
> > STAT auth_cmds 0
> > STAT auth_errors 0
> > STAT bytes_read 21755739959
> > STAT bytes_written 330909
> > STAT limit_maxbytes 5368709120
> > STAT accepting_conns 1
> > STAT listen_disabled_num 0
> > STAT time_in_listen_disabled_us 0
> > STAT threads 4
> > STAT conn_yields 0
> > STAT hash_power_level 16
> > STAT hash_bytes 524288
> > STAT hash_is_expanding False
> > STAT slab_reassign_rescues 0
> > STAT slab_reassign_chunk_rescues 0
> > STAT slab_reassign_evictions_nomem 0
> > STAT slab_reassign_inline_reclaim 0
> > STAT slab_reassign_busy_items 0
> > STAT slab_reassign_busy_deletes 0
> > STAT slab_reassign_running False

Re: "Out of memory during read" errors instead of key eviction

2022-08-25 Thread dormando
Hey,

Thanks for the info. Yes; this generally confirms the issue. I see some of
your higher slab classes with "free_chunks 0", so if you're setting data
that requires these chunks it could error out. The "stats items" confirms
this since there are no actual items in those lower slab classes.

You're certainly right a workaround of making your items < 512k would also
work; but in general if I have features it'd be nice if they worked well
:) Please open an issue so we can improve things!

I intended to lower the slab_chunk_max default from 512k to much lower, as
that actually raises the memory efficiency by a bit (less gap at the
higher classes). That may help here. The system should also try ejecting
items from the highest LRU... I need to double check that it wasn't
already intending to do that and failing.

Might also be able to adjust the page mover but not sure. The page mover
can probably be adjusted to attempt to keep one page in reserve, but I
think the algorithm isn't expecting slabs with no items in it so I'd have
to audit that too.

If you're up for experiments it'd be interesting to know if setting
"-o slab_chunk_max=32768" or 16k (probably not more than 64) makes things
better or worse.

Also, crud.. it's documented as kilobytes but that's not working somehow?
aaahahah. I guess the big EXPERIMENTAL tag scared people off since that
never got reported.

I'm guessing most people have a mix of small to large items, but you only
have large items and a relatively low memory limit, so this is why you're
seeing it so easily. I think most people setting large items have like
30G+ of memory so you end up with more spread around.

Thanks,
-Dormando

On Wed, 24 Aug 2022, Hayden wrote:

> What you're saying makes sense, and I'm pretty sure it won't be too hard to 
> add some functionality to my writing code to break my large items up into
> smaller parts that can each fit into a single chunk. That has the added 
> benefit that I won't have to bother increasing the max item size.
> In the meantime, though, I reran my pipeline and captured the output of 
> stats, stats slabs, and stats items both when evicting normally and when 
> getting
> spammed with the error.
>
> First, the output when I'm in the error state:
>  Output of stats
> STAT pid 1
> STAT uptime 11727
> STAT time 1661406229
> STAT version b'1.6.14'
> STAT libevent b'2.1.8-stable'
> STAT pointer_size 64
> STAT rusage_user 2.93837
> STAT rusage_system 6.339015
> STAT max_connections 1024
> STAT curr_connections 2
> STAT total_connections 8230
> STAT rejected_connections 0
> STAT connection_structures 6
> STAT response_obj_oom 0
> STAT response_obj_count 1
> STAT response_obj_bytes 65536
> STAT read_buf_count 8
> STAT read_buf_bytes 131072
> STAT read_buf_bytes_free 49152
> STAT read_buf_oom 0
> STAT reserved_fds 20
> STAT cmd_get 0
> STAT cmd_set 12640
> STAT cmd_flush 0
> STAT cmd_touch 0
> STAT cmd_meta 0
> STAT get_hits 0
> STAT get_misses 0
> STAT get_expired 0
> STAT get_flushed 0
> STAT delete_misses 0
> STAT delete_hits 0
> STAT incr_misses 0
> STAT incr_hits 0
> STAT decr_misses 0
> STAT decr_hits 0
> STAT cas_misses 0
> STAT cas_hits 0
> STAT cas_badval 0
> STAT touch_hits 0
> STAT touch_misses 0
> STAT store_too_large 0
> STAT store_no_memory 0
> STAT auth_cmds 0
> STAT auth_errors 0
> STAT bytes_read 21755739959
> STAT bytes_written 330909
> STAT limit_maxbytes 5368709120
> STAT accepting_conns 1
> STAT listen_disabled_num 0
> STAT time_in_listen_disabled_us 0
> STAT threads 4
> STAT conn_yields 0
> STAT hash_power_level 16
> STAT hash_bytes 524288
> STAT hash_is_expanding False
> STAT slab_reassign_rescues 0
> STAT slab_reassign_chunk_rescues 0
> STAT slab_reassign_evictions_nomem 0
> STAT slab_reassign_inline_reclaim 0
> STAT slab_reassign_busy_items 0
> STAT slab_reassign_busy_deletes 0
> STAT slab_reassign_running False
> STAT slabs_moved 0
> STAT lru_crawler_running 0
> STAT lru_crawler_starts 20
> STAT lru_maintainer_juggles 71777
> STAT malloc_fails 0
> STAT log_worker_dropped 0
> STAT log_worker_written 0
> STAT log_watcher_skipped 0
> STAT log_watcher_sent 0
> STAT log_watchers 0
> STAT unexpected_napi_ids 0
> STAT round_robin_fallback 0
> STAT bytes 5241499325
> STAT curr_items 4211
> STAT total_items 12640
> STAT slab_global_page_pool 0
> STAT expired_unfetched 0
> STAT evicted_unfetched 8429
> STAT evicted_active 0
> STAT evictions 8429
> STAT reclaimed 0
> STAT crawler_reclaimed 0
> STAT crawler_items_checked 4212
> STAT lrutail_reflocked 0
> STAT moves_to_cold 11872
> STAT moves_to_warm 0
> STAT moves_within_lru 0
> STAT direct_reclaims 9
> STAT lru_bumps_dropped 0
> END

Re: "Out of memory during read" errors instead of key eviction

2022-08-24 Thread dormando
To put a little more internal detail on this:

- As a SET is being processed item chunks must be made available
- If it is chunked memory, it will be fetching these data chunks from
across different slab classes (ie: 512k + 512k + sized enough for
whatever's left over)
- That full chunked item gets put in the largest slab class
- If another SET comes along and it needs 512k + 512k + an 8k, it has to
look into the 8k slab class for an item to evict.
- Except there's no memory in the 8k class: it's all actually in the
largest class.
- So there's nothing to evict to free up memory
- So you get an error.
- The slab page mover can make this worse by not leaving enough reserved
memory in the lower slab classes.

I wasn't sure how often this would happen in practice and fixed a few edge
cases in the past. Though I always figured I would've revisited it years
ago, so sorry about the trouble.

There are a few tuning options:
1) more memory, lol.
2) you can override slab_chunk_max to be much lower (like 8k or 16k),
which will make a lot more chunks but you won't realistically notice a
performance difference. This can reduce the number of total slab classes,
making it easier for more "end cap" memory to be found.
3) delete items as you use them so it doesn't have to evict. not the best
option.

There're code fixes I can try but I need to see what the exact symptom is
first, which is why I ask for the stats stuff.
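
As a rough illustration, the client-side workaround discussed in this thread
(splitting a large value yourself so each piece fits in a single chunk) could
look something like the sketch below with pymemcache. The chunk size, the
"key:part:N" naming scheme and the helper names are invented for this example
and are not part of any library:

# Rough sketch only: keep each stored piece under slab_chunk_max.
# CHUNK, set_large/get_large and the key scheme are assumptions, not an API.
from pymemcache.client.base import Client

CHUNK = 400 * 1024  # stay comfortably below the 512k default slab_chunk_max
client = Client(("127.0.0.1", 11211))

def set_large(key, value, expire=300):
    parts = [value[i:i + CHUNK] for i in range(0, len(value), CHUNK)]
    client.set(key, str(len(parts)), expire=expire)  # part count under the base key
    for n, part in enumerate(parts):
        client.set("%s:part:%d" % (key, n), part, expire=expire)

def get_large(key):
    count = client.get(key)
    if count is None:
        return None
    parts = [client.get("%s:part:%d" % (key, n)) for n in range(int(count))]
    if any(p is None for p in parts):
        return None  # one piece was already evicted; treat the value as a miss
    return b"".join(parts)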

On Wed, 24 Aug 2022, dormando wrote:

> Hey,
>
> You're probably hitting an edge case in the "large item support".
>
> Basically to store values > 512k memcached internally splits them up into
> chunks. When storing items memcached first allocates the item storage,
> then reads data from the client socket directly into the data storage.
>
> For chunked items it will be allocating chunks of memory as it reads from
> the socket, which can lead to that (thankfully very specific) "during
> read" error. I've long suspected some edge cases but haven't revisited
> that code in ... a very long time.
>
> If you can grab snapshots of "stats items" and "stats slabs" when it's
> both evicting normally and when it's giving you errors, I might be able to
> figure out what's causing it to bottom out and see if there's some tuning
> to do. Normal "stats" output is also helpful.
>
> It kind of smells like some slab classes are running low on memory
> sometimes, and the items in them are being read for a long time... but we
> have to see the data to be sure.
>
> If you're feeling brave you can try building the current "next" branch
> from github and try it out, as some fixes to the page mover went in there.
> Those fixes may have caused too much memory to be moved away from a slab
> class sometimes.
>
> Feel free to open an issue on github to track this if you'd like.
>
> have fun,
> -Dormando
>
> On Wed, 24 Aug 2022, Hayden wrote:
>
> > Hello,
> > I'm trying to use memcached for a use case I don't think is outlandish, but 
> > it's not behaving the way I expect. I
> > wanted to sanity check what I'm doing to see if it should be working but 
> > there's maybe something I've done wrong
> > with my configuration, or if my idea of how it's supposed to work is wrong, 
> > or if there's a problem with
> > memcached itself.
> >
> > I'm using memcached as a temporary shared image store in a distributed 
> > video processing application. At the front
> > of the pipeline is a process (actually all these processes are pods in a 
> > kubernetes cluster, if it matters, and
> > memcached is running in the cluster as well) that consumes a video stream 
> > over RTSP, saves each frame to
> > memcached, and outputs events to a message bus (kafka) with metadata about 
> > each frame. At the end of the pipeline
> > is another process that consumes these metadata events, and when it sees 
> > events it thinks are interesting it
> > retrieves the corresponding frame from memcached and adds the frame to a 
> > web UI. The video is typically 30fps, so
> > there are about 30 set() operations each second, and since each value is 
> > effectively an image the values are a
> > bit big (around 1MB... I upped the maximum value size in memcached to 2MB 
> > to make sure they'd fit, and I haven't
> > had any problems with my writes being rejected because of size).
> >
> > The video stream is processed in real-time, and effectively infinite, but 
> > the memory available to memcached
> > obviously isn't (I've configured it to use 5GB, FWIW). That's OK, because 
> > the cache is only supposed to be
> > temporary storage. My expectation is that once the available memory is 
> > fille

Re: "Out of memory during read" errors instead of key eviction

2022-08-24 Thread dormando
Hey,

You're probably hitting an edge case in the "large item support".

Basically to store values > 512k memcached internally splits them up into
chunks. When storing items memcached first allocates the item storage,
then reads data from the client socket directly into the data storage.

For chunked items it will be allocating chunks of memory as it reads from
the socket, which can lead to that (thankfully very specific) "during
read" error. I've long suspected some edge cases but haven't revisited
that code in ... a very long time.

If you can grab snapshots of "stats items" and "stats slabs" when it's
both evicting normally and when it's giving you errors, I might be able to
figure out what's causing it to bottom out and see if there's some tuning
to do. Normal "stats" output is also helpful.

It kind of smells like some slab classes are running low on memory
sometimes, and the items in them are being read for a long time... but we
have to see the data to be sure.

If you're feeling brave you can try building the current "next" branch
from github and try it out, as some fixes to the page mover went in there.
Those fixes may have caused too much memory to be moved away from a slab
class sometimes.

Feel free to open an issue on github to track this if you'd like.

have fun,
-Dormando

On Wed, 24 Aug 2022, Hayden wrote:

> Hello,
> I'm trying to use memcached for a use case I don't think is outlandish, but 
> it's not behaving the way I expect. I
> wanted to sanity check what I'm doing to see if it should be working but 
> there's maybe something I've done wrong
> with my configuration, or if my idea of how it's supposed to work is wrong, 
> or if there's a problem with
> memcached itself.
>
> I'm using memcached as a temporary shared image store in a distributed video 
> processing application. At the front
> of the pipeline is a process (actually all these processes are pods in a 
> kubernetes cluster, if it matters, and
> memcached is running in the cluster as well) that consumes a video stream 
> over RTSP, saves each frame to
> memcached, and outputs events to a message bus (kafka) with metadata about 
> each frame. At the end of the pipeline
> is another process that consumes these metadata events, and when it sees 
> events it thinks are interesting it
> retrieves the corresponding frame from memcached and adds the frame to a web 
> UI. The video is typically 30fps, so
> there are about 30 set() operations each second, and since each value is 
> effectively an image the values are a
> bit big (around 1MB... I upped the maximum value size in memcached to 2MB to 
> make sure they'd fit, and I haven't
> had any problems with my writes being rejected because of size).
>
> The video stream is processed in real-time, and effectively infinite, but the 
> memory available to memcached
> obviously isn't (I've configured it to use 5GB, FWIW). That's OK, because the 
> cache is only supposed to be
> temporary storage. My expectation is that once the available memory is filled 
> up (which takes a few minutes),
> then roughly speaking for every new frame added to memcached another entry 
> (ostensibly the oldest one) will be
> evicted. If the consuming process at the end of the pipeline doesn't get to a 
> frame it wants before it gets
> evicted that's OK.
>
> That's not what I'm seeing, though, or at least that's not all that I'm 
> seeing. There are lots of evictions
> happening, but the process that's writing to memcached also goes through 
> periods where every set() operation is
> rejected with an "Out of memory during read" error. It seems to happen in 
> bursts where for several seconds every
> write encounters the error, then for several seconds the set() calls work 
> just fine (and presumably other keys
> are being evicted), then the cycle repeats. It goes on this way for as long 
> as I let the process run.
>
> I'm using memcached v1.6.14, installed into my k8s cluster using the bitnami 
> helm chart v6.0.5. My reading and
> writing applications are both using pymemcache v3.5.2 for their access.
>
> Can anyone tell me if it seems like what I'm doing should work the way I 
> described, and where I should try
> investigating to see what's going wrong? Or alternatively, why what I'm 
> trying to do shouldn't work the way I
> expected it to, so I can figure out how to make my applications behave 
> differently?
>
> Thanks,
> Hayden
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the 

any mcrouter/twemproxy users up for a chat?

2022-08-16 Thread dormando
Hey,

I'm working on polishing the user experience for the new proxy mode. See:
https://github.com/memcached/memcached/issues/827
and:
https://github.com/memcached/memcached/wiki/Proxy

The proxy has an extremely flexible configuration system, so it can be
bent into the needs of most organizations. However it's too verbose by
default and most people don't need to know the underpinnings. So to that
end I am including a "simple" configuration library to get people off the
ground:
https://github.com/memcached/memcached-proxylibs/blob/main/examples/multizone.lua

I'm currently expanding the options for the simple library before I make
another pass at documentation and an intro blog post.

Unfortunately I'm sitting here guessing as to what people's needs are and
what common configurations look like, so if you have a few minutes it'd be
nice to hear so I can try to make 'simple' work for more folks :)

Thanks,
-Dormando

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/5da51948-206a-d349-bf69-ddc5d6489553%40rydia.net.


Re: Question regarding possibly LAN card bottlenecks when using memcached

2022-04-11 Thread dormando
Hey,

Definitely not enough information from what you provide here. Typically
"CPU usage goes up" isn't correlated to "memcached is slow". You usually
see CPU usage go down, because the servers are waiting on data over the
network and are thus idle.

This can change in a few ways, ie; if your PHP servers have low timeouts
and are spin-looping/etc. Or some change in traffic is making lots of
calls to memcached, which translates to lots of syscalls, which translates
to CPU usage on the PHP servers.

Sorry, unfortunately this is too general of an issue. It's common to
wonder if an issue is memcached or not, and we have this wiki page to help
troubleshoot that: https://github.com/memcached/memcached/wiki/Timeouts -
don't skip steps! use the conn tester, read it carefully, etc.

On Mon, 11 Apr 2022, Scaler wrote:

> Hello,
> In a production environment I have seen CPU usage go up in PHP servers, with 
> the PHP profiler showing possibly memcached (or the DB or something else) as the culprit.
> The profiler does show memcached being slow, but other things too, so it is 
> not definitive.
>
> The memcached server hardware (and the hardware connecting to it) is an
> Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
> with 10GBASE-T, with a short (less than 5 m) Cat 7 cable hooked up to a Cisco Nexus 
> switch as a layer 2 network.
>
> I noticed that at about 400,000 memcached requests per second and about 3~5 
> Gbps of traffic the above problems occur (many small requests per second).
> Benchmarking the LAN card with plain iperf/iperf3 I can easily get a 9.x Gbps 
> flow.
>
> Do you think the LAN card could be the limiting factor in such a case (many 
> small packets)?
> Or should such LAN cards be sufficient for such traffic, meaning I should look 
> elsewhere for the performance issues?
>
> Thank you.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/c2c68e33-3683-46e3-8f42-5b7f0040c768n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/a919ef1d-8f75-6b3e-204e-85fb6aff7f2%40rydia.net.


Re: warm restart to avoid cold memcached nodes?

2022-03-12 Thread dormando
Hey,

RE: your rationale, if you're actually fine with the stale cache then just
save the restart file + the .meta file. You can't do a live snapshot, you
have to capture the files once the server has completely stopped. That
works fine though, I've brought up caches on completely different
machines.
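
As a rough sketch of that capture sequence (assuming memcached was started with
-e pointing at a memory file, and that SIGUSR1 is used for the graceful stop;
the pid, paths and wait loop below are placeholders, not a tested procedure):

# Sketch: graceful stop, then copy the restart file and its .meta file.
# MEMCACHED_PID, RESTART_FILE and DEST are assumptions for this example.
import os, shutil, signal, time

MEMCACHED_PID = 12345                  # pid of the memcached started with -e
RESTART_FILE = "/tmpfs/memory_file"    # the path that was given to -e
DEST = "/var/backups/memcached"

os.kill(MEMCACHED_PID, signal.SIGUSR1)            # graceful stop writes the metadata
while os.path.exists("/proc/%d" % MEMCACHED_PID):
    time.sleep(0.5)                               # wait until it has fully exited
for path in (RESTART_FILE, RESTART_FILE + ".meta"):
    shutil.copy2(path, DEST)                      # capture both files after the stop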

As a curiosity on read performance: that is pretty unusual to run
memcached out of CPU unless the instances are very slow. Are you running
out of network bandwidth or CPU? Or perhaps you have millions of rps per
instance? :) It'd be operationally a lot simpler to have a small static
number of read replicas and memcached should excel at read heavy
workloads.

but if your instances are odd, I guess you do what you gotta do.

On Thu, 10 Mar 2022, Javier Arias Losada wrote:

> Thank you for your response.
> I think I'd better share some more details of our use case so that the 
> rationale of my question is more clear.
>
> Our use case is CPU bound for memcached.
>
> I mean, a big enough part of the dataset can be fit into memory. On the other 
> side the number of requests grows
> and decreases by some orders of magnitude throughout the day organically with 
> user's traffic. 
> So, what we do is have N memcached pods with a relatively small number of 
> cores, each of which can fit a
> significant part of the dataset in memory... when load increases we 
> (Kubernetes) start a new, empty node. Our
> clients replicate all write operations to all memcached nodes, and do load 
> balancing for read operations. This is
> OK in our case because we are very read heavy.
>
> When scaling up, the node is empty and we see an increase in the number of 
> misses... but nothing very bad... for
> us this is more convenient than having a huge amount of servers sitting 
> almost idle for over 16 hours.
>
> So we were thinking of leveraging WarmRestarts to warm up newly created 
> memcached nodes faster. It's true that there would still
> be some inconsistencies between new and old nodes, but much less 
> than with our current setup.
>
> This is why I was asking about the option for creating some kind of snapshot 
> from a live node... or trying to
> leverage the WarmRestarts to increase our efficiency.
>
> Not sure if this would bring more ideas... but I hope our use case is now 
> more clear.
>
> Again, thank you.
> On Wednesday, March 9, 2022 at 11:58:11 PM UTC+1 Dormando wrote:
>   Hey,
>
>   Unfortunately I don't think it works that way. Warm restart is useful 
> for
>   upgrading or slightly changing the configuration of an independent cache
>   node without losing the data.
>
>   However since you're expanding and contracting a cluster, keys get
>   remapped inbetween hosts. If you're saving the data of a downed machine
>   and bringing it back up later, you will have stale cache and still cause
>   some extra cache misses.
>
>   As an aside you should be autoscaling based on cache hit/miss rate and 
> not
>   the number of requests, unless your rate is huge memcached will scale
>   traffic very well. Hopefully you're already doing that :)
>
>   On Wed, 9 Mar 2022, Javier Arias Losada wrote:
>
>   > Hi all,
>   > recently discovered about the Warm restart feature... simply awesome!
>   > we use memcached as a look-aside cache and we run it in kubernetes, 
> also have autoscaling based on
>   cpu... so when
>   > the number of requests increase enough, a new memcached node is 
> started... we can tolerate a
>   temporary decrease
>   > in hit/miss ratio... but I think we can improve the situation by 
> warming up new memcached nodes.
>   >
>   > Wondering if the warm restart could be used for that regards. Is it 
> possible to dump the files
>   before stopping a
>   > running node? I was thinking about maintaining periodic dumps that 
> are used by the new node(s)
>   started. Not sure
>   > if this is an option.
>   >
>   > Anyone has solved a similar problem? I'd appreciate hearing others' 
> experiences.
>   >
>   > Thank you
>   > Javier
>   >
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to
>   > memcached+...@googlegroups.com.
>   > To view this discussion on the web visit
>   >
>   
> https://groups.google.com/d/msgid/memcached/d0f09c1c-760f-44bb-95e9-95afa7dd9c43n%40googlegroups.com.
>   >
> 

Re: warm restart to avoid cold memcached nodes?

2022-03-09 Thread dormando
Hey,

Unfortunately I don't think it works that way. Warm restart is useful for
upgrading or slightly changing the configuration of an independent cache
node without losing the data.

However since you're expanding and contracting a cluster, keys get
remapped inbetween hosts. If you're saving the data of a downed machine
and bringing it back up later, you will have stale cache and still cause
some extra cache misses.

As an aside you should be autoscaling based on cache hit/miss rate and not
the number of requests, unless your rate is huge memcached will scale
traffic very well. Hopefully you're already doing that :)
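
A hedged sketch of what that autoscaling signal could look like, sampling
hit/miss deltas from "stats" (the interval and the 0.8 threshold are
placeholders for illustration, not recommendations):

# Sketch: derive a hit ratio over an interval from the "stats" counters.
import time
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11211))
INTERVAL = 60  # seconds; placeholder

def counters():
    stats = client.stats()
    def val(name):  # stats keys may be bytes depending on the client version
        return int(stats.get(name, stats.get(name.encode(), 0)))
    return val("get_hits"), val("get_misses")

h0, m0 = counters()
time.sleep(INTERVAL)
h1, m1 = counters()
hits, misses = h1 - h0, m1 - m0
ratio = hits / float(hits + misses) if (hits + misses) else 1.0
print("hit ratio over the last %ds: %.3f" % (INTERVAL, ratio))
if ratio < 0.8:  # placeholder threshold
    print("consider scaling out")  # feed this in instead of raw request counts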

On Wed, 9 Mar 2022, Javier Arias Losada wrote:

> Hi all,
> recently discovered the Warm restart feature... simply awesome!
> we use memcached as a look-aside cache and we run it in kubernetes, also have 
> autoscaling based on cpu... so when
> the number of requests increases enough, a new memcached node is started... we 
> can tolerate a temporary decrease
> in hit/miss ratio... but I think we can improve the situation by warming up 
> new memcached nodes.
>
> Wondering if the warm restart could be used in that regard. Is it possible 
> to dump the files before stopping a
> running node? I was thinking about maintaining periodic dumps that are used 
> by the new node(s) started. Not sure
> if this is an option.
>
> Has anyone solved a similar problem? I'd appreciate hearing others' 
> experiences.
>
> Thank you
> Javier
>
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/d0f09c1c-760f-44bb-95e9-95afa7dd9c43n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/de928623-bddb-53be-8598-b4b0b0dd75a0%40rydia.net.


Memcached builtin proxy

2022-02-21 Thread dormando
Hey,

Sorry, I rarely post to the ML anymore. Unclear how many people still read
it :)

Just in case, the proxy is in early release now:
https://github.com/memcached/memcached/wiki/Proxy - if you're using
twemproxy or mcrouter both are basically abandonware. This should replace
that and still be trivial to build and use and perform well.

Plus fancy future features as development continues.

have fun,
-Dormando

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/5c62b5b7-3a93-8564-bdf8-beb187316f75%40rydia.net.


Re: Cannot test Warm Restart on Windows

2021-12-17 Thread dormando
Hey,

Sorry for the late response; I have no idea who this person is or how it's
being built and there doesn't seem to be a code patch?

So you should file an issue with them if you haven't already.

On Wed, 8 Dec 2021, Damian Chapman wrote:

> Hi Dormando,
> Thanks for responding to me.
>
> I got the Windows build from https://github.com/nono303/memcached
>
> Kind regards,
> Damian.
>
> On Wednesday, 8 December 2021 at 19:14:11 UTC Dormando wrote:
>   Where did you get a windows build of 1.6.12?
> We don't officially support windows, and I hadn't heard of anyone even making 
> recent builds of a windows
> fork. You're best off asking whomever's doing that build.
>
>   On Dec 8, 2021, at 8:17 AM, Damian Chapman  wrote:
>
>   It looks like Ctrl-C is incorrect here to stop Memcached gracefully.
>
>
> To stop gracefully it needs 
>
> kill -SIGUSR1 <pid>  where <pid> is the process id 
> of Memcached
>
> but SIGUSR1 is a Unix/Linux signal for inter process communication and it is 
> not used in Windows.
>
> This is why my testing does not work in Windows.
>
> On Tuesday, 7 December 2021 at 11:35:00 UTC Damian Chapman wrote:
>   Hi all,
> I am trying to test Memcached warm restart on Windows.
> I am using v1.6.12.
>
> I used ImDisk to create a RAM disk on Windows on the D: drive (1G)
>
> I start Memcached with the -e option
>
> C:\Users\chapmand\memcached\1.6.12\libevent-2.1\x64>memcached.exe -e D:\backup
> [restart] no metadata save file, starting with a clean cache
>
> I check the file system and I can see that the backup file is created:
>
> D:\>dir
>  Volume in drive D has no label.
>  Volume Serial Number is 5881-2000
>
>  Directory of D:\
>
> 07/12/2021  11:17        67,108,864 backup
>                1 File(s)     67,108,864 bytes
>                0 Dir(s)     988,749,824 bytes free
>    
> I start a telnet session and set and get mykey
>
> telnet localhost 11211
>
> set mykey 0 300 4
> data
> STORED
> get mykey
> VALUE mykey 0 4
> END    
>
> I stop Memcached using Ctrl-C and restart
>
> C:\Users\chapmand\memcached\1.6.12\libevent-2.1\x64>memcached.exe -e D:\backup
> [restart] no metadata save file, starting with a clean cache
> Signal handled: Interrupt.
>
> C:\Users\chapmand\memcached\1.6.12\libevent-2.1\x64>memcached.exe -e D:\backup
> [restart] no metadata save file, starting with a clean cache
>
> I think the issue may be here as I would not expect to restart with a clean 
> cache. I see no
> metadata file on the D: drive.
>
> When I run the telnet session again and type the get mykey command it does 
> not find the data
> value.
>
> Any help would be really appreciated.
> Thank you in advance.
>
> Kind regards,
> Damian.
>
>   --
>
>   ---
>   You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   To unsubscribe from this group and stop receiving emails from it, send 
> an email to
>   memcached+...@googlegroups.com.
>   To view this discussion on the web visit
>   
> https://groups.google.com/d/msgid/memcached/de3cd914-52c6-4fe2-b5eb-422407247d0an%40googlegroups.com.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/59845faf-dc91-4944-afc0-7643356eb68an%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/c4f89c86-7720-38a0-3f8b-5822b81d77e7%40rydia.net.


Re: Cannot test Warm Restart on Windows

2021-12-08 Thread dormando
Where did you get a windows build of 1.6.12?

We don't officially support windows, and I hadn't heard of anyone even making 
recent builds of a windows fork. You're best off asking whomever's doing that 
build.

> On Dec 8, 2021, at 8:17 AM, Damian Chapman  wrote:
> 
> It looks like Ctrl-C is incorrect here to stop Memcached gracefully.
> 
> To stop gracefully it needs 
> 
> kill -SIGUSR1 <pid>  where <pid> is the process id 
> of Memcached
> 
> but SIGUSR1 is a Unix/Linux signal for inter process communication and it is 
> not used in Windows.
> 
> This is why my testing does not work in Windows.
> 
>> On Tuesday, 7 December 2021 at 11:35:00 UTC Damian Chapman wrote:
>> Hi all,
>> 
>> I am trying to test Memcached warm restart on Windows.
>> I am using v1.6.12.
>> 
>> I used ImDisk to create a RAM disk on Windows on the D: drive (1G)
>> 
>> I start Memcached with the -e option
>> 
>> C:\Users\chapmand\memcached\1.6.12\libevent-2.1\x64>memcached.exe -e 
>> D:\backup
>> [restart] no metadata save file, starting with a clean cache
>> 
>> I check the file system and I can see that the backup file is created:
>> 
>> D:\>dir
>>  Volume in drive D has no label.
>>  Volume Serial Number is 5881-2000
>> 
>>  Directory of D:\
>> 
>> 07/12/2021  11:17        67,108,864 backup
>>                1 File(s)     67,108,864 bytes
>>                0 Dir(s)     988,749,824 bytes free
>> 
>> I start a telnet session and set and get mykey
>> 
>> telnet localhost 11211
>> 
>> set mykey 0 300 4
>> data
>> STORED
>> get mykey
>> VALUE mykey 0 4
>> END 
>> 
>> I stop Memcached using Ctrl-C and restart
>> 
>> C:\Users\chapmand\memcached\1.6.12\libevent-2.1\x64>memcached.exe -e 
>> D:\backup
>> [restart] no metadata save file, starting with a clean cache
>> Signal handled: Interrupt.
>> 
>> C:\Users\chapmand\memcached\1.6.12\libevent-2.1\x64>memcached.exe -e 
>> D:\backup
>> [restart] no metadata save file, starting with a clean cache
>> 
>> I think the issue may be here as I would not expect to restart with a clean 
>> cache. I see no
>> metadata file on the D: drive.
>> 
>> When I run the telnet session again and type the get mykey command it does 
>> not find the data value.
>> 
>> Any help would be really appreciated.
>> Thank you in advance.
>> 
>> Kind regards,
>> Damian.
>> 
> 
> -- 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/de3cd914-52c6-4fe2-b5eb-422407247d0an%40googlegroups.com.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/300EEF98-4C1A-4D1D-8C5A-691DD74DD4D4%40rydia.net.


Memcached proxy API help

2021-08-05 Thread dormando
https://github.com/memcached/memcached/issues/796

Currently using a memcached proxy? (mcrouter/twemproxy/etc). Want to use a
for-real OSS community oriented and actively supported proxy instead? -
please help me work out an API. We're now weeks away from first
production-stable release.

I don't personally maintain any clusters right now, so I want to be sure
to hear from folks who develop for and support clusters. Get your devs
involved as well, these features go well beyond operational support!

Thanks,
-Dormando

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/b4e61bba-da1c-a2ef-b5c6-98945c5ad94%40rydia.net.


Re: Proposal discussion: Invalidating multiple keys in a set or with a dependency relation

2021-07-23 Thread dormando
Hey,

Thought I had a specific FAQ page for this but maybe not?

The reason why memcached can't delete groups of keys atomically, is
because it's a distributed system by default and the servers don't
communicate. Keys are spread across many servers. You can use namespacing
instead:

https://github.com/memcached/memcached/wiki/ProgrammingTricks#namespacing

which uses a tertiary key as an index.
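
A rough Python illustration of that trick follows; the "ns_ver:" key layout and
the helper names are invented for this example, while the pattern itself is the
one described on the wiki page above:

# Sketch of namespacing: a version counter key acts as the "index"; bumping it
# invalidates everything built from it, and old entries simply age out of the LRU.
import time
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11211))

def ns_version(ns):
    version = client.get("ns_ver:" + ns)
    if version is None:
        version = str(int(time.time()))
        client.set("ns_ver:" + ns, version)
    return version.decode() if isinstance(version, bytes) else str(version)

def ns_key(ns, key):
    return "%s:%s:%s" % (ns, ns_version(ns), key)

client.set(ns_key("user123", "profile"), "cached blob")
client.get(ns_key("user123", "profile"))

# "Invalidate" the whole user123 group by moving to a new namespace version.
client.incr("ns_ver:user123", 1)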

Large objects are stored internally by splitting them into small objects.
Set the item size limit (-I) as high as you need, reasonably.

RE: sets in general. I have ideas, but not sure when I'd get to them.

On Thu, 22 Jul 2021, Mihai D wrote:

> I think I should provide some common usage patterns related to the first idea 
> in the previous mail (delete sets): It is common to store a set
> of objects using setm in a single command and retrieve all of them together 
> in a single getm command. Set aliases would spare the user having
> to care about creating and storing or recomputing the keys for each 
> individual object. I think this would not add much complexity since I'm
> not proposing set operations like union or intersection.
>
> On a different note, it is also common to split a big object (>1MB) into small 
> individual 1MB objects to store. There could be a command that
> would allow storing a big object and let memcached do the splitting as the 
> data arrives. I wonder if this would prompt users to start using
> memcached in use cases it is not designed for, though. I also wonder how common 
> this use case is.
>
> Regarding the DAG keys, maybe it is too general and adds too much complexity.
> On Thursday, 22 July 2021 at 17:34:53 UTC+2, Mihai D wrote:
>   I wonder if memcached implements a mechanism for tagging keys with a 
> piece of metadata such that in a single operation you can
>   invalidate all keys with the same metadata tag, basically supporting 
> sets of keys for the delete operation.
> Alternatively, and more generally, I wonder if memcached can maintain a 
> tree/DAG of relationships or dependencies between keys, like
> this:
>
> k1 -> k2
>      |-> k3 -> k4
>
> such that when you delete k1,   k2, k3, and k4 are also deleted but if you 
> delete k3, only k3 and k4 are deleted.
>
> I'm not aware if these mechanisms exist already, since I'm not familiar with 
> all of memcached. 
>
> If they don't, I'd like to start a discussion on implementing one or both of 
> these mechanisms or similar ones. I understand that
> memcached tries to be minimal and with predictable amortized constant time 
> performance, so *maybe* these features are overkill.
> Nevertheless, I'd like to hear the devs opinion.
>
> Currently, if you need such a feature you would implement it in the client, 
> keeping the metadata there and performing multiple delete
> commands when needed.
>
> Users who do not need such a feature should not suffer performance overhead 
> if they don't use them.
>
>
> Regards,
> Mihai (gh:hMihaiDavid)
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/da094a37-f637-43d5-9306-4cc058c9d3c6n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/40908cc2-c9a2-cb99-beb2-a6cb8c1ba34f%40rydia.net.


Re: bug: undefined reference to `cache_error' when running make after CFLAGS='-DNDEBUG' ./configure

2021-06-29 Thread dormando
Hey,

I'm not going to fix this one I think; testapp needs to be built without
NDEBUG. You shouldn't be passing that flag in; the build system makes the
main memcached binary with NDEBUG already.

On Wed, 23 Jun 2021, 張俊芝 wrote:

>
> Version: 1.6.9
>
> Steps to reproduce the bug:
>
>   Run the following commands:
>     CFLAGS='-DNDEBUG' ./configure
>     make
>
>   When compiling testapp.c, an "undefined reference to `cache_error'" error 
> occurs.
>
> Patch that fixes the bug:
> @@ -1,5 +1,8 @@
>  /* -*- Mode: C; tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*- */
> +#ifdef NDEBUG
>  #undef NDEBUG
> +#define PRODUCT_BUILT_WITH_NDEBUG
> +#endif
>  #include 
>  #include 
>  #include 
> @@ -236,12 +239,16 @@
>  char old = *(p - 1);
>  *(p - 1) = 0;
>  cache_free(cache, p);
> +    #ifndef PRODUCT_BUILT_WITH_NDEBUG
>  assert(cache_error == -1);
> +    #endif
>  *(p - 1) = old;
>  
>  p[sizeof(uint32_t)] = 0;
>  cache_free(cache, p);
> +    #ifndef PRODUCT_BUILT_WITH_NDEBUG
>  assert(cache_error == 1);
> +    #endif
>  
>  /* restore signal handler */
>  sigaction(SIGABRT, _action, NULL);
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/cd5e8bd8-af2d-429a-af97-1e66a964ee91n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/c7bd5228-bf41-465a-145-4e3a7beeeda1%40rydia.net.


Embedded memcached proxy API discussion

2021-06-10 Thread dormando
github.com/memcached/memcached/issues/796 - Been working on an embedded
memcached proxy. An OSS/community replacement for mcrouter/similar. If
this is interesting to you, please take an early look and help me work out
a clean route API for it!

-Dormando

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/13f77157-4ebf-6958-9739-3d770683fe7%40rydia.net.


Re: SegFault in Crawler Part

2021-06-01 Thread dormando
You can't evict memory that's being used to load data from the network.
So if you have a low amount of memory and run a benchmark doing a bunch of
parallel writes you're going to be sad.

On Tue, 1 Jun 2021, Qingchen Dang wrote:

> Thank you very much! Yes your guess is correct, I forgot the possibility of 
> evicting a crawler item :(
> Furthermore, I have a similar problem to the one in this post: 
> https://github.com/memcached/memcached/issues/467
> I gave Memcached a very limited memory limit to test eviction and it does 
> cause a similar error.
> When I use Memtier_Benchmark, the error looks like:
>
> [RUN #1] Preparing benchmark client...
>
> [RUN #1] Launching threads now...
>
> error: response parsing failed.
>
> error: response parsing failed.
>
> server 127.0.0.1:11211 handle error response: SERVER_ERROR out of memory 
> storing object
>
> error: response parsing failed.
>
> server 127.0.0.1:11211 handle error response: SERVER_ERROR out of memory 
> storing object
>
> error: response parsing failed.
>
> [RUN #1 17%,   0 secs]  1 threads:       87137 ops,   87213 (avg:   87213) 
> ops/sec, 65.66MB/sec (avg: 65.66MB/sec
>
> [RUN #1 36%,   1 secs]  1 threads:      179012 ops,   91864 (avg:   89540) 
> ops/sec, 69.87MB/sec (avg: 67.76MB/sec
>
> [RUN #1 56%,   2 secs]  1 threads:      279971 ops,  100947 (avg:   93343) 
> ops/sec, 76.76MB/sec (avg: 70.76MB/sec
>
> [RUN #1 75%,   3 secs]  1 threads:      375715 ops,   95732 (avg:   93941) 
> ops/sec, 72.87MB/sec (avg: 71.29MB/sec
>
> [RUN #1 92%,   4 secs]  1 threads:      462054 ops,   93910 (avg:   93935) 
> ops/sec, 71.41MB/sec (avg: 71.31MB/sec
>
> [RUN #1 92%,   4 secs]  1 threads:      462054 ops,       0 (avg:   92431) 
> ops/sec, 0.00KB/sec (avg: 70.17MB/sec)
>
> [RUN #1 92%,   5 secs]  1 threads:      462054 ops,       0 (avg:   90975) 
> ops/sec, 0.00KB/sec (avg: 69.06MB/sec)
>
> [RUN #1 92%,   5 secs]  1 threads:      462054 ops,       0 (avg:   89564) 
> ops/sec, 0.00KB/sec (avg: 67.99MB/sec)
>
> When I use Memaslap, it looks like 
>
> set proportion: set_prop=0.10
>
> get proportion: get_prop=0.90
>
> <12 SERVER_ERROR out of memory storing object
>
> <10 SERVER_ERROR out of memory storing object
>
> <12 SERVER_ERROR out of memory storing object
>
> <7 SERVER_ERROR out of memory storing object
>
> The unmodified Memcached gives errors less frequently than Memcached with my 
> eviction framework (especially using Memtier_Benchmark), so I wonder why. I read your post in the above link, but I am still confused 
> about why the memory limitation affects Memcached's usage. Could you give a more
> detailed explanation? If I have to give it limited memory, is there a way to 
> detailed explanation? If I have to give limited memory, is there a way to 
> avoid this issue?
> Thank you very much for helping!
>
> Best,
> Qingchen
> On Tuesday, June 1, 2021 at 2:36:09 AM UTC-4 Dormando wrote:
>   try '-o no_lru_crawler' ? That definitely works.
>
>   I don't know what you're doing since no code has been provided. The 
> locks
>   around managing LRU tails is pretty strict; so make sure you are 
> actually
>   using them correctly.
>
>   The LRU crawler works by injecting a fake item into the LRU, then using
>   that to keep its position and walk. If I had to guess I bet you've
>   "evicted" the LRU crawler, which then immediately dies when it tries to
>   continue crawling.
>
>   On Mon, 31 May 2021, Qingchen Dang wrote:
>
>   > Furthermore, I tried to disable the crawler with the '- 
> no_lru_crawler' command parameter, and it gives the same error. I wonder why 
> it
>   does not disable
>   > the crawler lru as it supposes to do.
>   >
>   > On Monday, May 31, 2021 at 1:02:38 AM UTC-4 Qingchen Dang wrote:
>   > Hi,
>   > I am implementing a framework based on Memcached. There's a problem 
> that confused me a lot. The framework basically change the eviction
>   policy, so
>   > when it calls to evict an item, it might not evict the tail item at 
> COLD LRU, instead it will look for a "more suitable" item to evict and
>   it will
>   > reinsert the tail items to the head of COLD queue.
>   >
>   > It mostly works fine, but sometimes it causes a SegFault when 
> reinsertion happens very frequently (like in almost each eviction). The
>   SegFault is
>   > triggered in the crawler part. As attached, it seems when the crawler 
> loops through the item queue, it reaches an invalid memory address.
>   The bug
>   > happens after around 5000~1000 GET/SET (9:1) operations. I 
> used Memaslap for testi

Re: SegFault in Crawler Part

2021-06-01 Thread dormando
try '-o no_lru_crawler' ? That definitely works.

I don't know what you're doing since no code has been provided. The locks
around managing LRU tails are pretty strict; so make sure you are actually
using them correctly.

The LRU crawler works by injecting a fake item into the LRU, then using
that to keep its position and walk. If I had to guess I bet you've
"evicted" the LRU crawler, which then immediately dies when it tries to
continue crawling.

On Mon, 31 May 2021, Qingchen Dang wrote:

> Furthermore, I tried to disable the crawler with the '- no_lru_crawler' 
> command parameter, and it gives the same error. I wonder why it does not 
> disable the LRU crawler as it is supposed to do.
>
> On Monday, May 31, 2021 at 1:02:38 AM UTC-4 Qingchen Dang wrote:
>   Hi,
> I am implementing a framework based on Memcached. There's a problem that 
> has confused me a lot. The framework basically changes the eviction policy, so 
> when it needs to evict an item, it might not evict the tail item of the COLD LRU; 
> instead it will look for a "more suitable" item to evict and it will 
> reinsert the tail items at the head of the COLD queue.
>
> It mostly works fine, but sometimes it causes a SegFault when reinsertion 
> happens very frequently (like on almost every eviction). The SegFault is 
> triggered in the crawler part. As attached, it seems that when the crawler loops 
> through the item queue, it reaches an invalid memory address. The bug 
> happens after around 5000~1000 GET/SET (9:1) operations. I used 
> Memaslap for testing.
>
> Could anyone give me some suggestions of the reasons which cause such error?
>
> Here is the gdb messages:
>
> Thread 8 "memcached" received signal SIGSEGV, Segmentation fault.
>
> [Switching to Thread 0x74d6c700 (LWP 36414)]
>
> do_item_crawl_q (it=it@entry=0x5579e7e0 )
>
>     at items.c:2015
>
> 2015             it->prev->next = it->next;
>
> (gdb) print it->prev
>
> $5 = (struct _stritem *) 0x4f4d6355616d5471
>
> (gdb) print it->prev->next
>
> Cannot access memory at address 0x4f4d6355616d5479
>
> (gdb) print it->next
>
> $6 = (struct _stritem *) 0x7a59324376753351
>
> (gdb) print it->next->prev
>
> Cannot access memory at address 0x7a59324376753361
>
> (gdb) print it->nkey
>
> $7 = 0 '\000'
>
> (gdb) 
>
> Here is the part that triggers the error:
>
> 2012         assert(it->next != it);
>
> 2013         if (it->next) {
>
> 2014             assert(it->prev->next == it);
>
> 2015             it->prev->next = it->next;
>
> 2016             it->next->prev = it->prev;
>
> 2017         } else {
>
> 2018             /* Tail. Move this above? */
>
> 2019             it->prev->next = 0;
>
> 2020         }
>
> (I'm also confused why the assert function on line 2014 does not give an error?)
>
> Thank you very much for helping!
>
> Best,
>
> Qingchen
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/1398d377-06b8-4a43-8811-f299d044d055n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/1f184a63-c220-c949-91f9-9aeca3ff1d85%40rydia.net.


Re: Plan for WarmStart with Extstore support,and what is the progress now

2021-05-08 Thread dormando
Hey,

Probably not. I'm still working toward driving the first usable
version of the proxy code. Sorry :(

On Fri, 7 May 2021, Eric Zhang wrote:

> Hello. 
> Will warmstart with extstore support be released this month?
> On Wednesday, April 7, 2021 at 4:11:22 PM UTC+8 wrote:
>   I don't remember what needed to be done at this point. I'll check 
> possibly
>   next week; I have a large change that I'm in the middle of working on 
> and
>   need to finish first.
>
>   fwiw 'next' branch is for basing development work on. My personal repo 
> is
>   full of recent development branches. I don't typically update 
> master/next
>   on there.
>
>   If your co is able to sponsor development that can speed things up too.
>
>   -Dormando
>
>   On Tue, 6 Apr 2021, Eric Zhang wrote:
>
>   > This month or next month is OK for me, and I will be the first one to 
> test it. I can make some patches to make it
>   > work, based on https://github.com/memcached/memcached/tree/master or 
> you have your own branch for it. I note that
>   > your own repo is not updated for a long time 
> https://github.com/dormando/memcached.
>   >
>   > On Wednesday, April 7, 2021 at 4:06:26 AM UTC+8 wrote:
>   > Hey,
>   >
>   > Sorry about that not being done yet. Mostly due to lack of demand for 
> it I
>   > was working on other things. I can and should get to that soon (this 
> month
>   > or next month?).
>   >
>   > It's not too hard but there are a lot of places that need to be 
> patched.
>   > Please note that warm restart with extstore does not make it crash or
>   > reboot safe. There is still a memory segment that needs to be saved to
>   > survive reboots.
>   >
>   > -Dormando
>   >
>   > On Tue, 6 Apr 2021, Shihu Zhang wrote:
>   >
>   > >         Our program have test the feature of WarmStart which is 
> useful. Data cached in Memcached is
>   > nearly never
>   > > modified and is 10TB large (value is 100kb+). So WarmStart with 
> Exstore support  is a better
>   > feature.         So
>   > > I want to know the plan for it, and what is the complex mentioned
>   > > here https://github.com/memcached/memcached/issues/531.
>   > >
>   > > --
>   > >
>   > > ---
>   > > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > > To unsubscribe from this group and stop receiving emails from it, 
> send an email to
>   > > memcached+...@googlegroups.com.
>   > > To view this discussion on the web visit
>   > >
>   > 
> https://groups.google.com/d/msgid/memcached/17c8b54e-76c7-41e9-8797-47bb66457892n%40googlegroups.com.
>   > >
>   > >
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to
>   > memcached+...@googlegroups.com.
>   > To view this discussion on the web visit
>   > 
> https://groups.google.com/d/msgid/memcached/50b76cfc-1892-4f34-94a4-b14e214c2e27n%40googlegroups.com.
>   >
>   >
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/1c41f7dd-c5a8-49d5-bf64-0e6655dc2377n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/5173db6f-025-af17-af85-dc28d9a358f0%40rydia.net.


Re: Questions about slabs rebalance thread

2021-05-07 Thread dormando
Hey,

There are user commands which can optionally control the slab rebalancer,
so the lock is mostly for that interaction from worker threads. The
restart system also needs to stop the thread gracefully.
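
As a generic sketch of that hand-off (made-up names; the real variables live
in slabs.c): a worker parsing "slabs reassign <src> <dst>" takes the lock,
records the request and signals the background thread, and the same
lock/condition pair is reused to stop the thread cleanly:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t rebalance_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  rebalance_cond = PTHREAD_COND_INITIALIZER;
    static int have_work = 0;   /* set by workers while holding the lock */
    static int stopping  = 0;   /* set during graceful shutdown/restart */
    static int src_class, dst_class;

    static void *rebalance_thread(void *arg) {
        (void)arg;
        pthread_mutex_lock(&rebalance_lock);
        for (;;) {
            while (!have_work && !stopping)
                pthread_cond_wait(&rebalance_cond, &rebalance_lock);
            if (have_work) {
                printf("moving a page from class %d to class %d\n",
                       src_class, dst_class);
                have_work = 0;
            } else {
                break;          /* asked to stop and nothing is queued */
            }
        }
        pthread_mutex_unlock(&rebalance_lock);
        return NULL;
    }

    /* roughly what a worker does for "slabs reassign <src> <dst>" */
    static void request_move(int src, int dst) {
        pthread_mutex_lock(&rebalance_lock);
        src_class = src;
        dst_class = dst;
        have_work = 1;
        pthread_cond_signal(&rebalance_cond);
        pthread_mutex_unlock(&rebalance_lock);
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, rebalance_thread, NULL);
        request_move(5, 9);
        pthread_mutex_lock(&rebalance_lock);
        stopping = 1;           /* what a graceful stop looks like */
        pthread_cond_signal(&rebalance_cond);
        pthread_mutex_unlock(&rebalance_lock);
        pthread_join(tid, NULL);
        return 0;
    }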

On Fri, 7 May 2021, Wenxin Zheng wrote:

> It seems that in 'slabs.c', slab_rebalance_thread will be created only once. 
> Which variables are required to be protected by lock `slabs_rebalance_lock` ?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/636d45a7-76a5-47c4-b90f-433a13d73c61n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/193f9972-1c60-153-cf4f-d7d1b1dd8231%40rydia.net.


Re: Memcached Testing Evictions

2021-04-29 Thread dormando
Hey,

I don't know what you're doing but I can say there's nothing buffering
prints. Maybe you're starting memcached as a daemon? Prints won't work in
that case.
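
If you want debug output of your own, the simplest approach is to print to
stderr and run memcached in the foreground (for example with -vv, which also
shows the server's own verbose output). A tiny helper along these lines (my
own convention, nothing from the tree):

    #include <stdio.h>

    /* print to stderr (not fully buffered by default) with file/line context */
    #define DBG(fmt, ...) \
        fprintf(stderr, "DBG %s:%d " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__)

    int main(void) {
        DBG("stored item, nbytes=%d", 42);
        return 0;
    }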

On Thu, 29 Apr 2021, Qingchen Dang wrote:

> Thank you for helping! Yes I actually figured it out that I used wrong 
> parameters to load data which causes the overwrite.
>
> But I do have one more question: how do I add print statements to the memcached 
> source code? When I use ‘make test’, printf works as usual. But in the actual 
> run, it seems the print output is buffered or redirected somewhere else, which 
> makes it very hard to debug. Is there a way to print to stdout?
>
> Thank you very much!
> QD
>
> Sent from my iPhone
>
> > On Apr 28, 2021, at 10:33 PM, dormando  wrote:
> >
> > How're you loading the data?
> >
> > From the stats it looks like you're probably overwriting the same values
> > over and over (high total_items but low curr_items and no get_expired)
> >
> >> On Wed, 28 Apr 2021, Qingchen Dang wrote:
> >>
> >> Hi,
> >> I am trying to test my optimization of Memcached eviction, but it seems 
> >> even though I got a lot of misses, the evictions stats is always 0.
> >> Here is the stats for the original Memcached without any change. I wonder 
> >> what memcached config/benchmark config can make eviction happen?
> >>
> >> STAT pointer_size 64
> >>
> >> STAT rusage_user 110.668826
> >>
> >> STAT rusage_system 769.767632
> >>
> >> STAT max_connections 1024
> >>
> >> STAT curr_connections 2
> >>
> >> STAT total_connections 8004
> >>
> >> STAT rejected_connections 0
> >>
> >> STAT connection_structures 402
> >>
> >> STAT response_obj_oom 0
> >>
> >> STAT response_obj_count 1
> >>
> >> STAT response_obj_bytes 131072
> >>
> >> STAT read_buf_count 16
> >>
> >> STAT read_buf_bytes 262144
> >>
> >> STAT read_buf_bytes_free 114688
> >>
> >> STAT read_buf_oom 0
> >>
> >> STAT reserved_fds 40
> >>
> >> STAT cmd_get 7272
> >>
> >> STAT cmd_set 728
> >>
> >> STAT cmd_flush 0
> >>
> >> STAT get_hits 626512
> >>
> >> STAT get_misses 72093488
> >>
> >> STAT get_expired 0
> >>
> >> STAT get_flushed 0
> >>
> >> STAT delete_misses 0
> >>
> >> STAT delete_hits 0
> >>
> >> STAT incr_misses 0
> >>
> >> STAT incr_hits 0
> >>
> >> STAT decr_misses 0
> >>
> >> STAT decr_hits 0
> >>
> >> STAT bytes_read 1809568092
> >>
> >> STAT bytes_written 459401348
> >>
> >> STAT limit_maxbytes 2097152
> >>
> >> STAT accepting_conns 1
> >>
> >> STAT listen_disabled_num 0
> >>
> >> STAT time_in_listen_disabled_us 0
> >>
> >> STAT threads 8
> >>
> >> STAT conn_yields 0
> >>
> >> STAT hash_power_level 16
> >>
> >> STAT hash_bytes 524288
> >>
> >> STAT hash_is_expanding 0
> >>
> >> STAT slab_reassign_rescues 0
> >>
> >> STAT slab_reassign_chunk_rescues 0
> >>
> >> STAT slab_reassign_evictions_nomem 0
> >>
> >> STAT slab_reassign_inline_reclaim 0
> >>
> >> STAT slab_reassign_busy_items 0
> >>
> >> STAT slab_reassign_busy_deletes 0
> >>
> >> STAT slab_reassign_running 0
> >>
> >> STAT slabs_moved 0
> >>
> >> STAT lru_crawler_running 0
> >>
> >> STAT lru_crawler_starts 4
> >>
> >> STAT lru_maintainer_juggles 101778
> >>
> >> STAT malloc_fails 0
> >>
> >> STAT bytes 94437
> >>
> >> STAT curr_items 909
> >>
> >> STAT total_items 728
> >>
> >> STAT slab_global_page_pool 0
> >>
> >> STAT expired_unfetched 0
> >>
> >> STAT evicted_unfetched 0
> >>
> >> STAT evicted_active 0
> >>
> >> STAT evictions 0
> >>
> >> STAT reclaimed 0
> >>
> >> STAT crawler_reclaimed 0
> >>
> >> STAT crawler_items_checked 2732
> >>
> >> STAT lrutail_reflocked 12
> >>
> >> STAT moves_to_cold 116840
> >>
> >> STAT moves_to_warm 3185
> >>
> >> STAT moves_within_lru 538
> >>

Re: Memcached Testing Evictions

2021-04-28 Thread dormando
How're you loading the data?

From the stats it looks like you're probably overwriting the same values
over and over (high total_items but low curr_items and no get_expired)

On Wed, 28 Apr 2021, Qingchen Dang wrote:

> Hi,
> I am trying to test my optimization of Memcached eviction, but it seems even 
> though I got a lot of misses, the evictions stats is always 0. 
> Here is the stats for the original Memcached without any change. I wonder 
> what memcached config/benchmark config can make eviction happen?
>
> STAT pointer_size 64
>
> STAT rusage_user 110.668826
>
> STAT rusage_system 769.767632
>
> STAT max_connections 1024
>
> STAT curr_connections 2
>
> STAT total_connections 8004
>
> STAT rejected_connections 0
>
> STAT connection_structures 402
>
> STAT response_obj_oom 0
>
> STAT response_obj_count 1
>
> STAT response_obj_bytes 131072
>
> STAT read_buf_count 16
>
> STAT read_buf_bytes 262144
>
> STAT read_buf_bytes_free 114688
>
> STAT read_buf_oom 0
>
> STAT reserved_fds 40
>
> STAT cmd_get 7272
>
> STAT cmd_set 728
>
> STAT cmd_flush 0
>
> STAT get_hits 626512
>
> STAT get_misses 72093488
>
> STAT get_expired 0
>
> STAT get_flushed 0
>
> STAT delete_misses 0
>
> STAT delete_hits 0
>
> STAT incr_misses 0
>
> STAT incr_hits 0
>
> STAT decr_misses 0
>
> STAT decr_hits 0
>
> STAT bytes_read 1809568092
>
> STAT bytes_written 459401348
>
> STAT limit_maxbytes 2097152
>
> STAT accepting_conns 1
>
> STAT listen_disabled_num 0
>
> STAT time_in_listen_disabled_us 0
>
> STAT threads 8
>
> STAT conn_yields 0
>
> STAT hash_power_level 16
>
> STAT hash_bytes 524288
>
> STAT hash_is_expanding 0
>
> STAT slab_reassign_rescues 0
>
> STAT slab_reassign_chunk_rescues 0
>
> STAT slab_reassign_evictions_nomem 0
>
> STAT slab_reassign_inline_reclaim 0
>
> STAT slab_reassign_busy_items 0
>
> STAT slab_reassign_busy_deletes 0
>
> STAT slab_reassign_running 0
>
> STAT slabs_moved 0
>
> STAT lru_crawler_running 0
>
> STAT lru_crawler_starts 4
>
> STAT lru_maintainer_juggles 101778
>
> STAT malloc_fails 0
>
> STAT bytes 94437
>
> STAT curr_items 909
>
> STAT total_items 728
>
> STAT slab_global_page_pool 0
>
> STAT expired_unfetched 0
>
> STAT evicted_unfetched 0
>
> STAT evicted_active 0
>
> STAT evictions 0
>
> STAT reclaimed 0
>
> STAT crawler_reclaimed 0
>
> STAT crawler_items_checked 2732
>
> STAT lrutail_reflocked 12
>
> STAT moves_to_cold 116840
>
> STAT moves_to_warm 3185
>
> STAT moves_within_lru 538
>
> STAT direct_reclaims 0
>
> STAT lru_bumps_dropped 0
>
>
> Thanks!
>
> QD
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/95e1a261-7ad5-49c2-a917-0635b7e3cbbfn%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/e274ce6a-63ab-2896-d7c0-befa57222a%40rydia.net.


Re: Plan for WarmStart with Extstore support,and what is the progress now

2021-04-07 Thread dormando
I don't remember what needed to be done at this point. I'll check possibly
next week; I have a large change that I'm in the middle of working on and
need to finish first.

fwiw 'next' branch is for basing development work on. My personal repo is
full of recent development branches. I don't typically update master/next
on there.

If your co is able to sponsor development that can speed things up too.

-Dormando

On Tue, 6 Apr 2021, Eric Zhang wrote:

> This month or next month is OK for me, and I will be the first one to test 
> it. I can make some patches to make it
> work, based on https://github.com/memcached/memcached/tree/master, or do you have 
> your own branch for it? I note that 
> your own repo has not been updated for a long time: 
> https://github.com/dormando/memcached.
>
> On Wednesday, April 7, 2021 at 4:06:26 AM UTC+8 wrote:
>   Hey,
>
>   Sorry about that not being done yet. Mostly due to lack of demand for 
> it I
>   was working on other things. I can and should get to that soon (this 
> month
>   or next month?).
>
>   It's not too hard but there are a lot of places that need to be patched.
>   Please note that warm restart with extstore does not make it crash or
>   reboot safe. There is still a memory segment that needs to be saved to
>   survive reboots.
>
>   -Dormando
>
>   On Tue, 6 Apr 2021, Shihu Zhang wrote:
>
>   >         Our program have test the feature of WarmStart which is 
> useful. Data cached in Memcached is
>   nearly never
>   > modified and is 10TB large (value is 100kb+). So WarmStart with 
> Exstore support  is a better
>   feature.         So
>   > I want to know the plan for it, and what is the complex mentioned
>   > here https://github.com/memcached/memcached/issues/531.
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to
>   > memcached+...@googlegroups.com.
>   > To view this discussion on the web visit
>   >
>   
> https://groups.google.com/d/msgid/memcached/17c8b54e-76c7-41e9-8797-47bb66457892n%40googlegroups.com.
>   >
>   >
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/50b76cfc-1892-4f34-94a4-b14e214c2e27n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/ea8e2a5d-7636-f49-a03d-68c5e4a23a1b%40rydia.net.


Re: Plan for WarmStart with Extstore support,and what is the progress now

2021-04-06 Thread dormando
Hey,

Sorry about that not being done yet. Mostly due to lack of demand for it I
was working on other things. I can and should get to that soon (this month
or next month?).

It's not too hard but there are a lot of places that need to be patched.
Please note that warm restart with extstore does not make it crash or
reboot safe. There is still a memory segment that needs to be saved to
survive reboots.

-Dormando

On Tue, 6 Apr 2021, Shihu Zhang wrote:

>         Our program has tested the WarmStart feature, which is useful. Data 
> cached in Memcached is nearly never 
> modified and is 10TB in size (values are 100kb+), so WarmStart with extstore 
> support is a better feature. So 
> I want to know the plan for it, and what is the complexity mentioned 
> here https://github.com/memcached/memcached/issues/531.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/17c8b54e-76c7-41e9-8797-47bb66457892n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/496c1e95-e819-bcba-cfe9-fbe59e3037d7%40rydia.net.


Re: benchmarking issues

2021-03-26 Thread dormando
Hey,

> This worked! However it seems like TCP and UDP latency now is about the same 
> with my code as well as with a real
> benchmarking tool (memaslap).

I don't use memaslap so I can't speak to it. I use mc-crusher for the
"official" testing, though admittedly it's harder to configure.

> Not sure I understand the scalability point. From my observations, if I do a 
> multiget, I get separate packet
> sequences for each response. So each get value could be about 2^16 * 1400 
> bytes big and still be ok via UDP
> (assuming everything arrives)? One thing that seemed hard is each separate 
> sequence has the same requestId, which
> makes deciding what to do difficult in out-of-order arrival scenarios. 

mostly RE: kernel/syscall stuff. Especially after the TCP optimizations in
1.6, UDP mode will just be slower at high request rates. It will end up
running a lot more syscalls.
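
For reference, the wrapper is an 8-byte frame header on every datagram (per
protocol.txt; double check against your version): request id, sequence
number, total number of datagrams, and a reserved field, all 16-bit values in
network byte order. Multi-datagram responses reuse the same request id with
increasing sequence numbers, which is why reassembling concurrent responses
gets confusing. A minimal parse of it:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    struct udp_frame_header {
        uint16_t request_id;       /* chosen by the client, echoed by the server */
        uint16_t seq_num;          /* 0 .. total_datagrams - 1 */
        uint16_t total_datagrams;  /* datagrams making up this message */
        uint16_t reserved;         /* must be 0 */
    };

    /* pull the header off the front of a received datagram */
    static struct udp_frame_header parse_frame(const unsigned char *pkt) {
        struct udp_frame_header h;
        memcpy(&h, pkt, sizeof(h));
        h.request_id = ntohs(h.request_id);
        h.seq_num = ntohs(h.seq_num);
        h.total_datagrams = ntohs(h.total_datagrams);
        h.reserved = ntohs(h.reserved);
        return h;
    }

    int main(void) {
        unsigned char pkt[8] = {0x12, 0x34, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00};
        struct udp_frame_header h = parse_frame(pkt);
        return (h.request_id == 0x1234 && h.total_datagrams == 2) ? 0 : 1;
    }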

> SO_REUSEPORT seems to be supported in the linux kernel in 3.9. But I 
> definitely understand the decision to not
> spend much time optimizing the UDP protocol. I did see higher rusage_user and 
> much higher rusage_system when
> using UDP, which maybe corresponds to what you are saying. I tried with 
> memaslap and observed the same thing.

Yeah, see above.

> No pressing issue really.  We saw this (admittedly old) paper discussing how 
> Facebook was able to reduce get
> latency by 20% by switching to UDP. Memcached get latency is a key factor in 
> our overall system latency so we
> thought it would be worth a try, and it would ease some pressure on our 
> network infrastructure as well. Do you
> know if Facebook's changes ever made it back into the main memcached 
> distribution?

I wish there was some way I could make that paper stop existing. Those
changes went into memcached 1.2, 13+ years ago. I'm reasonably certain
facebook doesn't use UDP for memcached and hasn't in a long time. None of
their more recent papers (which also stop around 2014) mention UDP at all.

The best performance you can get is by ensuring multiple requests are
pipelined at once, and there are a reasonable number of worker threads
(not more than one per CPU). If you see anything odd or have questions
please bring up specifics, share server settings, etc.
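
To make "pipelined" concrete: the win comes from putting many requests on the
wire before waiting for any replies, so the server can batch its work. A rough
sketch against a local server on 127.0.0.1:11211 (ascii protocol, most error
handling omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(11211);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("connect");
            return 1;
        }

        /* build ten get requests and send them with a single write() */
        char req[1024];
        int len = 0;
        for (int i = 0; i < 10; i++)
            len += snprintf(req + len, sizeof(req) - (size_t)len, "get key%d\r\n", i);
        if (write(fd, req, (size_t)len) < 0) {
            perror("write");
            return 1;
        }

        /* read back what has arrived; a real client keeps reading until it
         * has seen an "END\r\n" for every request it sent */
        char buf[8192];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(fd);
        return 0;
    }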

> Thanks
> Kireet
>  
>
>   -Dormando
>
>   On Fri, 26 Mar 2021, kmr wrote:
>
>   > We are trying to experiment with using UDP vs TCP for gets to see 
> what kind of speedup we can
>   achieve. I wrote a
>   > very simple benchmark that just uses a single thread to set a key 
> once and do gets to retrieve the
>   key over and
>   > over. We didn't notice any speedup using UDP. If anything we saw a 
> slight slowdown which seemed
>   strange. 
>   > When checking the stats delta, I noticed a really high value for 
> lrutail_reflocked. For a test
>   doing 100K gets,
>   > this value increased by 76K. In our production system, memcached 
> processes that have been running
>   for weeks have
>   > a very low value for this stat, less than 100. Also the latency 
> measured by the benchmark seems to
>   correlate to
>   > the rate at which that value increases. 
>   >
>   > I tried to reproduce using the spy java client and I see the same 
> behavior, so I think it must be
>   something wrong
>   > with my benchmark design rather than a protocol issue. We are using 
> 1.6.9. Here is a list of all
>   the stats values
>   > that changed during a recent run using TCP:
>   >
>   > stats diff:
>   >   * bytes_read: 10,706,007
>   >   * bytes_written: 426,323,216
>   >   * cmd_get: 101,000
>   >   * get_hits: 101,000
>   >   * lru_maintainer_juggles: 8,826
>   >   * lrutail_reflocked: 76,685
>   >   * moves_to_cold: 76,877
>   >   * moves_to_warm: 76,917
>   >   * moves_within_lru: 450
>   >   * rusage_system: 0.95
>   >   * rusage_user: 0.37
>   >   * time: 6
>   >   * total_connections: 2
>   >   * uptime: 6
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to
>   > memcached+...@googlegroups.com.
>   > To view this discussion on the web visit
>   >
>   
> https://groups.google.com/d/msgid/memcached/8efbc45d-1d6c-4563-a533-fdbd95457223n%40googlegroups.com.
>   >
>   >
>
> --
>
> ---
> You received this message because you are 

Re: benchmarking issues

2021-03-26 Thread dormando
Hey,

Usually it's good to include the benchmark code, but I think I can answer
this off the top of my head:

1) set at least 1,000 keys and fetch them randomly. all of memcached's
internal scale-up is based around... not just fetching a single key. I
typically test with a million or more. There are internal threads which
poke at the LRU, and since you're always accessing the one key, that key
is in use, and those internal threads report on that (lrutail_reflocked)

2) UDP mode has not had any love in a long time. It's not very popular and
has caused some strife on the internet as it doesn't have any
authentication. The UDP protocol wrapper is also not scalable. :( I wish
it were done like DNS with a redirect for too-large values.

3) Since UDP mode isn't using SO_REUSEPORT, recvmmsg, sendmmsg, or any
other modern linux API it's going to be a lot slower than the TCP mode.

4) TCP mode actually scales pretty well. Linearly for reads vs the number
of worker threads at tens of millions of requests per second on large
machines. What problems are you running into?

-Dormando

On Fri, 26 Mar 2021, kmr wrote:

> We are trying to experiment with using UDP vs TCP for gets to see what kind 
> of speedup we can achieve. I wrote a
> very simple benchmark that just uses a single thread to set a key once and do 
> gets to retrieve the key over and
> over. We didn't notice any speedup using UDP. If anything we saw a slight 
> slowdown which seemed strange. 
> When checking the stats delta, I noticed a really high value for 
> lrutail_reflocked. For a test doing 100K gets,
> this value increased by 76K. In our production system, memcached processes 
> that have been running for weeks have
> a very low value for this stat, less than 100. Also the latency measured by 
> the benchmark seems to correlate to
> the rate at which that value increases. 
>
> I tried to reproduce using the spy java client and I see the same behavior, 
> so I think it must be something wrong
> with my benchmark design rather than a protocol issue. We are using 1.6.9. 
> Here is a list of all the stats values
> that changed during a recent run using TCP:
>
> stats diff:
>   * bytes_read: 10,706,007
>   * bytes_written: 426,323,216
>   * cmd_get: 101,000
>   * get_hits: 101,000
>   * lru_maintainer_juggles: 8,826
>   * lrutail_reflocked: 76,685
>   * moves_to_cold: 76,877
>   * moves_to_warm: 76,917
>   * moves_within_lru: 450
>   * rusage_system: 0.95
>   * rusage_user: 0.37
>   * time: 6
>   * total_connections: 2
>   * uptime: 6
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/8efbc45d-1d6c-4563-a533-fdbd95457223n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/8f9f9e71-50dd-c7d-d357-ce2df4c2162d%40rydia.net.


Re: Request permission to use memcached logo

2021-03-25 Thread dormando
Hey,

Sorry for the delay: I grant permission for you to use the logo in your
blog post. Please link the blog post here when posted!

Thanks,
-Dormando

On Tue, 23 Mar 2021, Atsushi Sato wrote:

> To whom it may concern,
>
> Let me check again.
> We would like to use the memcached logo in our blog, could you give us 
> permission to do so?
>
> We would like to use the M-shaped logo in the upper left part of the 
> following site.
> https://memcached.org/
>
> If the method of obtaining permission is incorrect, could you tell me how to 
> confirm the usage permission?
>
> Best regards,
> Atsushi Sato
>
> On Tuesday, March 16, 2021 at 18:13:41 UTC+9, Atsushi Sato wrote:
>   To whom it may concern,
>   We plan to publish an engineer blog new article in the near future 
> (within about a month).
>   We would like to use the memcached logo on our engineer's blog.
>   Therefore, we would like to request permission to use the memcached 
> logo.
>
>   reference) Our engineer's blog
>   Japanese only, sorry.
>   https://developers.gnavi.co.jp/
>
> Best regards,
> Atsushi Sato
>
> --
> //
> Gurunavi, Inc.
> Planning and Development Office
>
> Atsushi Sato (sat...@gnavi.co.jp)
> --
> Atsushi Sato (sat...@gnavi.co.jp)
>
> Senior Leader,
> Planning and Development Office
> Gurunavi, Inc.
> //
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/1af73287-306b-4a15-967e-7eb1794b38e0n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/dfbfcb4f-a931-2573-4f25-cc98e0cbcd10%40rydia.net.


Re: How to get all apended objects from a single key.

2021-03-09 Thread dormando
Hey,

Memcached only "thinks" in binary blobs. Append is just stacking the
new object at the end of the existing one. If whatever you use to serialize
and de-serialize your objects doesn't understand this it won't return all
of the objects.

You'll need to modify the java client or otherwise provide it with a
custom deserializer.
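
To make the problem concrete: after an append, the stored value is just the
old bytes immediately followed by the new bytes, with no separator. One common
approach is to length-prefix every record you append so a custom deserializer
can walk the blob. A sketch of the idea (in C for brevity; the same idea
applies to a custom Transcoder on the java side):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* write one record as [4-byte length][payload]; a real implementation
     * would pick a fixed byte order so different clients agree */
    static size_t frame_record(char *dst, const char *payload, uint32_t len) {
        memcpy(dst, &len, sizeof(len));
        memcpy(dst + sizeof(len), payload, len);
        return sizeof(len) + len;
    }

    int main(void) {
        char blob[256];     /* stands in for the value memcached hands back */
        size_t used = 0;
        used += frame_record(blob + used, "first user", 10);   /* the original set */
        used += frame_record(blob + used, "second user", 11);  /* what append adds */

        /* deserializer side: walk the concatenated blob record by record */
        for (size_t off = 0; off + sizeof(uint32_t) <= used; ) {
            uint32_t len;
            memcpy(&len, blob + off, sizeof(len));
            off += sizeof(len);
            printf("record: %.*s\n", (int)len, blob + off);
            off += len;
        }
        return 0;
    }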

On Tue, 9 Mar 2021, Pritam kumar wrote:

>
> Hello Team,
>
> I am using the Memcached server (version 1.5.22) & the spymemcached java client 
> (version 2.12.3). In a scenario I 
> have appended multiple objects to a single existing key, as 
> mentioned below:
>
> User user = new User();
> user.setName("Test");
> user.setEmail("newu...@gmail.com");
>
> Object data = get(Key);
>
> if (data != null) {
>     append(key, user));
> } else {
>     set(key, 0, user));
> }
>
> I am able to find all the appended objects from the key using the terminal 
> (telnet 127.0.0.1 11211 & fetch data using 
> get key). But when I try to get all the appended objects through the java 
> client, it returns only the 1st object 
> which was set the 1st time.
>
> I am trying this like below :-
>
> Object userData = get(key);
>
> Please help me understand how to get all the appended objects from the key through 
> the java client?
>
> Any suggestions or help would be appreciated.
>
> Thanks 
>  
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/f4b2f833-45e7-4125-a048-daeb5111ea53n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/8ede1771-31ef-8ff-12f7-fcdc7afb4f22%40rydia.net.


Re: is there official memcached certification?

2021-01-01 Thread dormando
nope! sorry

On Fri, 1 Jan 2021, Erjan G. wrote:

> do u have it available or plan to do it?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/06be520f-6764-4bea-afd1-aa4e16c5d52en%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.22.394.2101011440240.3957535%40dskull.


Re: Question Regarding Slab Imbalance

2020-11-12 Thread dormando
what's the default? you probably don't need to change it from the default
at all, which should have a good spread. default is 1.25 I think.

On Thu, 12 Nov 2020, Tony Wu wrote:

> Hi,
>
> The engine version we have running right now is 1.5.16. I just saw an
> announcement that 1.6.6 is available, so we’ll probably plan to migrate.
>
> Adjusting growth factor makes sense. I did a quick google and don’t see
> any definitive formula to calculate an appropriate number. I’ll probably
> start with 2 or 3 and adjust it from there.
>
> Thanks,
>
> Tony Wu
>
> > On Nov 12, 2020, at 3:50 PM, dormando  wrote:
> >
> > 1.5.what?
> >
> > also yes, chunk_size_growth_factor. don't set it to 1.02. set it to
> > something so your slab classes are more evenly distributed. the max is 63.
> >
> > On Thu, 12 Nov 2020, Tony Wu wrote:
> >
> >> Hi Dormando,
> >>
> >> Thanks for the reply, I believe the engine version is 1.5. By multiplier
> >> I assumed you meant “chunk_size_growth_factor”, which is currently set
> >> at 1.02.
> >>
> >> Best,
> >>
> >> Tony Wu
> >>
> >>> On Nov 12, 2020, at 2:26 PM, dormando  wrote:
> >>>
> >>> What version are they running now? That stat output looks pretty sparse.
> >>>
> >>> Unfortunately when it comes to elasticache the answer is probably to just
> >>> beg them to upgrade to a newer version. they tend to run really old and
> >>> newer versions do a lot better at balancing memory.
> >>>
> >>> This is also suspect:
> >>>> STAT 62:chunk_size 712
> >>>> STAT 62:total_chunks 4416
> >>>> STAT 62:free_chunks 631
> >>>> STAT 62:free_chunks_end 0
> >>>> STAT 63:chunk_size 524288
> >>>
> >>> anything you store that's larger than 712 bytes will use half a megabyte
> >>> of memory. If you're setting the slab multiplier (-f) to something really
> >>> aggressive, you should stop that :)
> >>>
> >>> On Thu, 12 Nov 2020, Tony Wu wrote:
> >>>
> >>>> We are currently using memcached provided by AWS ElastiCache service in 
> >>>> modern mode to store web session keys
> >>>> among other things. We are observing session keys being evicted before 
> >>>> TTL even though the cluster seems to have
> >>>> ample free memory, which hints at slab imbalance. I've included a print 
> >>>> out of slab stats below in terms of
> >>>> sizing and total / free pages.
> >>>> What would be the best way to smooth out this imbalance? We already 
> >>>> enabled slab reassign / auto-move under
> >>>> modern mode.
> >>>>
> >>>> Thanks.
> >>>>
> >>>> STAT 1:chunk_size 96
> >>>> STAT 1:total_chunks 131064
> >>>> STAT 1:free_chunks 1664
> >>>> STAT 1:free_chunks_end 0
> >>>> STAT 2:chunk_size 104
> >>>> STAT 2:total_chunks 60492
> >>>> STAT 2:free_chunks 5327
> >>>> STAT 2:free_chunks_end 0
> >>>> STAT 3:chunk_size 112
> >>>> STAT 3:total_chunks 46810
> >>>> STAT 3:free_chunks 79
> >>>> STAT 3:free_chunks_end 0
> >>>> STAT 4:chunk_size 120
> >>>> STAT 4:total_chunks 26214
> >>>> STAT 4:free_chunks 48
> >>>> STAT 4:free_chunks_end 0
> >>>> STAT 5:chunk_size 128
> >>>> STAT 5:total_chunks 32768
> >>>> STAT 5:free_chunks 6193
> >>>> STAT 5:free_chunks_end 0
> >>>> STAT 6:chunk_size 136
> >>>> STAT 6:total_chunks 23130
> >>>> STAT 6:free_chunks 4594
> >>>> STAT 6:free_chunks_end 0
> >>>> STAT 7:chunk_size 144
> >>>> STAT 7:total_chunks 7281
> >>>> STAT 7:free_chunks 5894
> >>>> STAT 7:free_chunks_end 0
> >>>> STAT 8:chunk_size 152
> >>>> STAT 8:total_chunks 13796
> >>>> STAT 8:free_chunks 12008
> >>>> STAT 8:free_chunks_end 0
> >>>> STAT 9:chunk_size 160
> >>>> STAT 9:total_chunks 6553
> >>>> STAT 9:free_chunks 6316
> >>>> STAT 9:free_chunks_end 0
> >>>> STAT 10:chunk_size 168
> >>>> STAT 10:total_chunks 6241
> >>>> STAT 10:free_chunks 6186
> >>>> STAT 10:free_chunks_end 0
> >>>> STAT 11:chunk_size 176
&

Re: Question Regarding Slab Imbalance

2020-11-12 Thread dormando
1.5.what?

also yes, chunk_size_growth_factor. don't set it to 1.02. set it to
something so your slab classes are more evenly distributed. the max is 63.
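
To see why: class sizes start near the smallest item and get multiplied by the
growth factor until they reach the largest chunk. A rough re-creation of that
loop (approximated from slabs_init(); the starting size, alignment and max
chunk below are assumptions, not the exact values from your build):

    #include <stdio.h>

    int main(void) {
        double factor = 1.25;                 /* the -f value; try 1.02 to see the problem */
        double size = 96;                     /* assumed: roughly item header + minimum data */
        const double max_chunk = 512 * 1024;  /* assumed max chunk size (half a 1MB page) */
        int cls = 1;

        while (cls < 63 && size < max_chunk) {
            size_t chunk = ((size_t)size + 7) & ~(size_t)7; /* 8-byte align, like slabs.c */
            printf("class %2d: chunk_size %zu\n", cls, chunk);
            size *= factor;
            cls++;
        }
        return 0;
    }

With 1.25 the classes spread from roughly 96 bytes up to the largest chunk;
with 1.02 you use up all 63 classes while the sizes are still tiny, which is
the jump from a 712-byte class straight to the 512KB class visible in the
stats quoted below.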

On Thu, 12 Nov 2020, Tony Wu wrote:

> Hi Dormando,
>
> Thanks for the reply, I believe the engine version is 1.5. By multiplier
> I assumed you meant “chunk_size_growth_factor”, which is currently set
> at 1.02.
>
> Best,
>
> Tony Wu
>
> > On Nov 12, 2020, at 2:26 PM, dormando  wrote:
> >
> > What version are they running now? That stat output looks pretty sparse.
> >
> > Unfortunately when it comes to elasticache the answer is probably to just
> > beg them to upgrade to a newer version. they tend to run really old and
> > newer versions do a lot better at balancing memory.
> >
> > This is also suspect:
> >> STAT 62:chunk_size 712
> >> STAT 62:total_chunks 4416
> >> STAT 62:free_chunks 631
> >> STAT 62:free_chunks_end 0
> >> STAT 63:chunk_size 524288
> >
> > anything you store that's larger than 712 bytes will use half a megabyte
> > of memory. If you're setting the slab multiplier (-f) to something really
> > aggressive, you should stop that :)
> >
> > On Thu, 12 Nov 2020, Tony Wu wrote:
> >
> >> We are currently using memcached provided by AWS ElastiCache service in 
> >> modern mode to store web session keys
> >> among other things. We are observing session keys being evicted before TTL 
> >> even though the cluster seems to have
> >> ample free memory, which hints at slab imbalance. I've included a print 
> >> out of slab stats below in terms of
> >> sizing and total / free pages.
> >> What would be the best way to smooth out this imbalance? We already 
> >> enabled slab reassign / auto-move under
> >> modern mode.
> >>
> >> Thanks.
> >>
> >> STAT 1:chunk_size 96
> >> STAT 1:total_chunks 131064
> >> STAT 1:free_chunks 1664
> >> STAT 1:free_chunks_end 0
> >> STAT 2:chunk_size 104
> >> STAT 2:total_chunks 60492
> >> STAT 2:free_chunks 5327
> >> STAT 2:free_chunks_end 0
> >> STAT 3:chunk_size 112
> >> STAT 3:total_chunks 46810
> >> STAT 3:free_chunks 79
> >> STAT 3:free_chunks_end 0
> >> STAT 4:chunk_size 120
> >> STAT 4:total_chunks 26214
> >> STAT 4:free_chunks 48
> >> STAT 4:free_chunks_end 0
> >> STAT 5:chunk_size 128
> >> STAT 5:total_chunks 32768
> >> STAT 5:free_chunks 6193
> >> STAT 5:free_chunks_end 0
> >> STAT 6:chunk_size 136
> >> STAT 6:total_chunks 23130
> >> STAT 6:free_chunks 4594
> >> STAT 6:free_chunks_end 0
> >> STAT 7:chunk_size 144
> >> STAT 7:total_chunks 7281
> >> STAT 7:free_chunks 5894
> >> STAT 7:free_chunks_end 0
> >> STAT 8:chunk_size 152
> >> STAT 8:total_chunks 13796
> >> STAT 8:free_chunks 12008
> >> STAT 8:free_chunks_end 0
> >> STAT 9:chunk_size 160
> >> STAT 9:total_chunks 6553
> >> STAT 9:free_chunks 6316
> >> STAT 9:free_chunks_end 0
> >> STAT 10:chunk_size 168
> >> STAT 10:total_chunks 6241
> >> STAT 10:free_chunks 6186
> >> STAT 10:free_chunks_end 0
> >> STAT 11:chunk_size 176
> >> STAT 11:total_chunks 5957
> >> STAT 11:free_chunks 5946
> >> STAT 11:free_chunks_end 0
> >> STAT 12:chunk_size 184
> >> STAT 12:total_chunks 5698
> >> STAT 12:free_chunks 5664
> >> STAT 12:free_chunks_end 0
> >> STAT 13:chunk_size 192
> >> STAT 13:total_chunks 5461
> >> STAT 13:free_chunks 5380
> >> STAT 13:free_chunks_end 0
> >> STAT 14:chunk_size 200
> >> STAT 14:total_chunks 5242
> >> STAT 14:free_chunks 5135
> >> STAT 14:free_chunks_end 0
> >> STAT 15:chunk_size 208
> >> STAT 15:total_chunks 5041
> >> STAT 15:free_chunks 4892
> >> STAT 15:free_chunks_end 0
> >> STAT 16:chunk_size 216
> >> STAT 16:total_chunks 4854
> >> STAT 16:free_chunks 4770
> >> STAT 16:free_chunks_end 0
> >> STAT 17:chunk_size 224
> >> STAT 17:total_chunks 4681
> >> STAT 17:free_chunks 4655
> >> STAT 17:free_chunks_end 0
> >> STAT 18:chunk_size 232
> >> STAT 18:total_chunks 4519
> >> STAT 18:free_chunks 4489
> >> STAT 18:free_chunks_end 0
> >> STAT 19:chunk_size 240
> >> STAT 19:total_chunks 4369
> >> STAT 19:free_chunks 4345
> >> STAT 19:free_chunks_end 0
> >> STAT 20:chunk_size 248
> >

Re: Question Regarding Slab Imbalance

2020-11-12 Thread dormando
What version are they running now? That stat output looks pretty sparse.

Unfortunately when it comes to elasticache the answer is probably to just
beg them to upgrade to a newer version. they tend to run really old and
newer versions do a lot better at balancing memory.

This is also suspect:
> STAT 62:chunk_size 712
> STAT 62:total_chunks 4416
> STAT 62:free_chunks 631
> STAT 62:free_chunks_end 0
> STAT 63:chunk_size 524288

anything you store that's larger than 712 bytes will use half a megabyte
of memory. If you're setting the slab multiplier (-f) to something really
aggressive, you should stop that :)

On Thu, 12 Nov 2020, Tony Wu wrote:

> We are currently using memcached provided by AWS ElastiCache service in 
> modern mode to store web session keys
> among other things. We are observing session keys being evicted before TTL 
> even though the cluster seems to have
> ample free memory, which hints at slab imbalance. I've included a print out 
> of slab stats below in terms of
> sizing and total / free pages.
> What would be the best way to smooth out this imbalance? We already enabled 
> slab reassign / auto-move under
> modern mode.
>
> Thanks.
>
> STAT 1:chunk_size 96
> STAT 1:total_chunks 131064
> STAT 1:free_chunks 1664
> STAT 1:free_chunks_end 0
> STAT 2:chunk_size 104
> STAT 2:total_chunks 60492
> STAT 2:free_chunks 5327
> STAT 2:free_chunks_end 0
> STAT 3:chunk_size 112
> STAT 3:total_chunks 46810
> STAT 3:free_chunks 79
> STAT 3:free_chunks_end 0
> STAT 4:chunk_size 120
> STAT 4:total_chunks 26214
> STAT 4:free_chunks 48
> STAT 4:free_chunks_end 0
> STAT 5:chunk_size 128
> STAT 5:total_chunks 32768
> STAT 5:free_chunks 6193
> STAT 5:free_chunks_end 0
> STAT 6:chunk_size 136
> STAT 6:total_chunks 23130
> STAT 6:free_chunks 4594
> STAT 6:free_chunks_end 0
> STAT 7:chunk_size 144
> STAT 7:total_chunks 7281
> STAT 7:free_chunks 5894
> STAT 7:free_chunks_end 0
> STAT 8:chunk_size 152
> STAT 8:total_chunks 13796
> STAT 8:free_chunks 12008
> STAT 8:free_chunks_end 0
> STAT 9:chunk_size 160
> STAT 9:total_chunks 6553
> STAT 9:free_chunks 6316
> STAT 9:free_chunks_end 0
> STAT 10:chunk_size 168
> STAT 10:total_chunks 6241
> STAT 10:free_chunks 6186
> STAT 10:free_chunks_end 0
> STAT 11:chunk_size 176
> STAT 11:total_chunks 5957
> STAT 11:free_chunks 5946
> STAT 11:free_chunks_end 0
> STAT 12:chunk_size 184
> STAT 12:total_chunks 5698
> STAT 12:free_chunks 5664
> STAT 12:free_chunks_end 0
> STAT 13:chunk_size 192
> STAT 13:total_chunks 5461
> STAT 13:free_chunks 5380
> STAT 13:free_chunks_end 0
> STAT 14:chunk_size 200
> STAT 14:total_chunks 5242
> STAT 14:free_chunks 5135
> STAT 14:free_chunks_end 0
> STAT 15:chunk_size 208
> STAT 15:total_chunks 5041
> STAT 15:free_chunks 4892
> STAT 15:free_chunks_end 0
> STAT 16:chunk_size 216
> STAT 16:total_chunks 4854
> STAT 16:free_chunks 4770
> STAT 16:free_chunks_end 0
> STAT 17:chunk_size 224
> STAT 17:total_chunks 4681
> STAT 17:free_chunks 4655
> STAT 17:free_chunks_end 0
> STAT 18:chunk_size 232
> STAT 18:total_chunks 4519
> STAT 18:free_chunks 4489
> STAT 18:free_chunks_end 0
> STAT 19:chunk_size 240
> STAT 19:total_chunks 4369
> STAT 19:free_chunks 4345
> STAT 19:free_chunks_end 0
> STAT 20:chunk_size 248
> STAT 20:total_chunks 4228
> STAT 20:free_chunks 4217
> STAT 20:free_chunks_end 0
> STAT 21:chunk_size 256
> STAT 21:total_chunks 4096
> STAT 21:free_chunks 4091
> STAT 21:free_chunks_end 0
> STAT 22:chunk_size 264
> STAT 22:total_chunks 3971
> STAT 22:free_chunks 3966
> STAT 22:free_chunks_end 0
> STAT 23:chunk_size 272
> STAT 23:total_chunks 3855
> STAT 23:free_chunks 3851
> STAT 23:free_chunks_end 0
> STAT 24:chunk_size 280
> STAT 24:total_chunks 3744
> STAT 24:free_chunks 3738
> STAT 24:free_chunks_end 0
> STAT 25:chunk_size 288
> STAT 25:total_chunks 3640
> STAT 25:free_chunks 3638
> STAT 25:free_chunks_end 0
> STAT 26:chunk_size 296
> STAT 26:total_chunks 3542
> STAT 26:free_chunks 3533
> STAT 26:free_chunks_end 0
> STAT 27:chunk_size 304
> STAT 27:total_chunks 3449
> STAT 27:free_chunks 3448
> STAT 27:free_chunks_end 0
> STAT 28:chunk_size 312
> STAT 28:total_chunks 3360
> STAT 28:free_chunks 3350
> STAT 28:free_chunks_end 0
> STAT 29:chunk_size 320
> STAT 29:total_chunks 3276
> STAT 29:free_chunks 3273
> STAT 29:free_chunks_end 0
> STAT 30:chunk_size 328
> STAT 30:total_chunks 3196
> STAT 30:free_chunks 3174
> STAT 30:free_chunks_end 0
> STAT 31:chunk_size 336
> STAT 31:total_chunks 3120
> STAT 31:free_chunks 3118
> STAT 31:free_chunks_end 0
> STAT 32:chunk_size 344
> STAT 32:total_chunks 3048
> STAT 32:free_chunks 3043
> STAT 32:free_chunks_end 0
> STAT 33:chunk_size 352
> STAT 33:total_chunks 2978
> STAT 33:free_chunks 2954
> STAT 33:free_chunks_end 0
> STAT 34:chunk_size 360
> STAT 34:total_chunks 2912
> STAT 34:free_chunks 2903
> STAT 34:free_chunks_end 0
> STAT 35:chunk_size 368
> STAT 35:total_chunks 2849
> STAT 35:free_chunks 2828
> STAT 35:free_chunks_end 0
> STAT 36:chunk_size 376
> STAT 36:total_chunks 2788
> STAT 36:free_chunks 

Re: Questions related to slabs

2020-10-17 Thread dormando
Hey,

I'll answer these inline, but up front: Are you having a specific problem
you're tracking down, or is this just out of curiosity?

None of these are things you should waste time thinking about. Memcached
handles it internally.

>  1.
>
> I set up memcached with 55gb; however, total_malloced from stats slabs says 
> only 41657393216. Will that grow as the data grows?

Yes. Memory is lazily allocated, one megabyte at a time.

>  2.
>
> How many pages are allocated per slab? Is that dynamic or is there a 
> limit?

It's dynamic. If you have a new enough version (> 1.5) they are also
rebalanced automatically as necessary.

>  3.
>
> We use no more than 5-6 slab classes, and our largest slab is 300 bytes. Are 
> there best practices to limit the object size to 301 so that the slab allocation 
> logic is simplified?

You can probably ignore this, unless you feel like there's a significant
memory waste going on. You can change the slab growth factor (-f) to
create more classes in the lower numbers than higher numbers but again I
wouldn't bother unless you really need to.

It doesn't "simplify" the logic either way.

-Dormando

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.22.394.2010171220270.1853123%40dskull.


Re: Packet Logo Update on Memcached

2020-09-16 Thread dormando
Hey,

I'm the right person to talk to here. You can e-mail me privately
(dormando [at] rydia dot net).

Just need the new logo + when to update it. Easy enough.

have fun,
-Dormando

On Wed, 16 Sep 2020, Kyle Gannon wrote:

> Hello,
>
> Hope all is well!
>
> My name is Kyle and I'm a Digital Marketing Coordinator at Packet. As I'm 
> sure you know Packet got acquired by Equinix back in March. We will
> officially rebrand into Equinix October 6 as one of their products "Equinix 
> Metal."
>
> This means our product will live through one of Equinix subdomains and it 
> will be metal.equinix.com. Can someone point me to the right person to help
> update the link and logo on your website once we officially rebrand?
> Let me know if you have any questions or you would like to discuss further 
> and I can set up a Zoom call.
>
> Thanks again,
> Kyle
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/6bccd9fd-97b9-4529-bc3d-b892fcebf4a4n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.22.394.2009161610510.703476%40dskull.


Re: slab_reassign_evictions_nomem

2020-09-15 Thread dormando
Hey,

Sorry for the super late reply.

I'm tracking a fix for this here:
https://github.com/memcached/memcached/issues/541 - but not entirely sure
when I'll get to it. It's not easy to avoid but should be relatively rare.
It happens when the system thinks it has enough memory free to move a
page, but by the time it does the memory's been assigned elsewhere.
Restructuring would fix it.

If it's a huge problem, you can either disable/pause the page mover or
change the minimum pages it reserves from a source slab class... which I
think might be hard coded :(

The stat not counting in general evictions is another long standing issue.
(apparently reported in 2017?)

I'll see if I can do the restructuring sooner than later. We really
shouldn't lose items at random when moving pages, they should at least be
pulled from the LRU tail.

On Thu, 27 Aug 2020, 'theonajim' via memcached wrote:

> We are using version 1.6.6 of memcached with extstore enabled and we are 
> observing records being evicted but evictions stat reports 0.
> slab_reassign_evictions_nomem stat does show items being evicted. How do we 
> avoid losing records due to slab_reassign_evictions_nomem ? Also, is there a
> reason it is not counted as a regular eviction?
>  'stats' 'stats settings' 'stats items' 'stats slabs' output are attached.
>
> Thanks,
> --Theo
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/4e63aa27-d0e9-4887-a299-e7888d655864n%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.22.394.2009152055570.587674%40dskull.


Re: Request permission to use memcached logo

2020-07-14 Thread dormando
Thanks,

I'll allow this for a one-time use.

Thanks,
-Dormando

On Tue, 14 Jul 2020, Kiran Patil wrote:

>
>
> Hi Dormondo.
>
>
> Sorry I was not able to spend time on that patch due to an internal priority 
> change, and my focus shifted in the meantime.
>
>
> Now we have assigned a resource (my co-worker: Sridhar Samudrala) to work on 
> this patch and address those outstanding comments.
>
>
> Hence can we use the logo meanwhile as one time exception?
>
>
> Thanks,
>
> -- Kiran P.
>
>
>
> On Thursday, July 9, 2020 at 2:20:22 PM UTC-7, Dormando wrote:
>   Hey,
>
>   I think we never got that patch merged? I'd love to allow this but I'm
>   worried people might get confused since it's not something you can do 
> with
>   the released version of memcached?
>
>   Thanks,
>   -Dormando
>
>   On Thu, 9 Jul 2020, Kiran Patil wrote:
>
>   >
>   > Hello Dormando,
>   >
>   >  
>   >
>   > I’ll be discussing support for memcached with Application Devices 
> Queues (ADQ) in a presentation at the Netdev virtual conference in August.
>   >
>   >  
>   >
>   > I would like to request permission to use the memcached logo for the 
> Netdev presentation, on the intel.com website, other collateral and
>   presentations.
>   >
>   >
>   > If you are OK giving us the permission to use the memcached logo for 
> NetDev Presentation, on intel.com, other related collateral ,
>   >
>   > other collateral and presentation, can you please reply back to this 
> request with your permission?
>   >
>   >
>   > Thank you!
>   >
>   >  
>   >
>   > Kiran Patil
>   >
>   > Recommended Email: kiran...@intel.com
>   >
>   > Intel Corp.
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memc...@googlegroups.com.
>   > To view this discussion on the web visit
>   
> https://groups.google.com/d/msgid/memcached/23488c3a-7215-40fb-a6a9-69848c0ca8d4o%40googlegroups.com.
>   >
>   >
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/07a7402a-844a-4ac1-b6ec-c69f34257e23o%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2007141424260.29509%40dskull.


Re: Request permission to use memcached logo

2020-07-09 Thread dormando
Hey,

I think we never got that patch merged? I'd love to allow this but I'm
worried people might get confused since it's not something you can do with
the released version of memcached?

Thanks,
-Dormando

On Thu, 9 Jul 2020, Kiran Patil wrote:

>
> Hello Dormando,
>
>  
>
> I’ll be discussing support for memcached with Application Device Queues 
> (ADQ) in a presentation at the Netdev virtual conference in August.
>
>  
>
> I would like to request permission to use the memcached logo for the Netdev 
> presentation, on the intel.com website, other collateral and presentations.
>
>
> If you are OK giving us the permission to use the memcached logo for NetDev 
> Presentation, on intel.com, other related collateral ,
>
> other collateral and presentation, can you please reply back to this request 
> with your permission?
>
>
> Thank you!
>
>  
>
> Kiran Patil
>
> Recommended Email: kiran.pa...@intel.com
>
> Intel Corp.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/23488c3a-7215-40fb-a6a9-69848c0ca8d4o%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2007091419230.30709%40dskull.


Re: End of life policy

2020-07-08 Thread dormando
I guess that's fair.

On Wed, 8 Jul 2020, Nicolas Motte wrote:

> Thx Dormando!
> I'll then use this rule for the time being:
>
> - 1.4 is dead
> - 1.5 is still supported (in the sense that a major security issue could be 
> fixed)
> - 1.6 is the preferred version 
>
> Cheers
> Nico
>
>
> On Wed, Jul 8, 2020 at 9:51 AM dormando  wrote:
>   Hey,
>
>   In extreme cases we would provide patches, and there's nothing stopping 
> me
>   from releasing a new 1.5 version. Most distro's just patch what versions
>   they maintain, which is a wide swath of them.
>
>   The only difference between later 1.4 and early 1.5 versions were the
>   defaults enabled, so releasing more 1.4's had no point.
>
>   I think the one CVE that's happened since 1.6 came out only affected 
> 1.6+.
>   1.6 is also not a huge jump.
>
>   In short, nobody's asked for one, so I haven't done one, I guess. The
>   project moves pretty slowly and conservatively so I don't personally 
> view
>   the dot versions to be something people should hold onto dearly. It just
>   makes things harder when something goes go wrong since they have less
>   observability and miss out on bug fixes.
>
>   On Wed, 8 Jul 2020, Nicolas Motte wrote:
>
>   > Hi everyone, 
>   > I d like to know what is the end of life policy for major memcached 
> versions.
>   >
>   > At the moment we re using 1.4 and 1.5. Looking at the release notes, 
> it feels like only the latest major version (1.6) has new releases,
>   which makes me
>   > think in case of a security issue found on a previous major version, 
> it would not be fixed and we would have to migrate to 1.6.
>   >
>   > It would mean the policy is simple (but a bit drastic) : "every time 
> a new major version is released, the previous one is dead."
>   >
>   > Is my understanding correct?
>   >
>   > Cheers
>   > Nico
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memcached+unsubscr...@googlegroups.com.
>   > To view this discussion on the web visit
>   > 
> https://groups.google.com/d/msgid/memcached/CAB7O_Y_BN%2BewH-z%3DqCe2LGMU_Qj7nuZ9fHp9K-jWsDrbUhfTLQ%40mail.gmail.com.
>   >
>   >
>
>   --
>
>   ---
>   You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   To unsubscribe from this group and stop receiving emails from it, send 
> an email to memcached+unsubscr...@googlegroups.com.
>   To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2007080043570.18887%40dskull.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/CAB7O_Y-XUJhyL-gVmCGHfRDTqqHvSvpD6A4xzPjmA_B%2BF6B-6Q%40mail.gmail.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2007080144460.18887%40dskull.


Re: End of life policy

2020-07-08 Thread dormando
Hey,

In extreme cases we would provide patches, and there's nothing stopping me
from releasing a new 1.5 version. Most distros just patch the versions
they maintain, which is a wide swath of them.

The only difference between later 1.4 and early 1.5 versions was the
defaults enabled, so releasing more 1.4s had no point.

I think the one CVE that's happened since 1.6 came out only affected 1.6+.
1.6 is also not a huge jump.

In short, nobody's asked for one, so I haven't done one, I guess. The
project moves pretty slowly and conservatively so I don't personally view
the dot versions as something people should hold onto dearly. It just
makes things harder when something goes wrong since they have less
observability and miss out on bug fixes.

On Wed, 8 Jul 2020, Nicolas Motte wrote:

> Hi everyone, 
> I'd like to know what the end of life policy is for major memcached versions.
>
> At the moment we're using 1.4 and 1.5. Looking at the release notes, it feels 
> like only the latest major version (1.6) has new releases, which makes me
> think in case of a security issue found on a previous major version, it would 
> not be fixed and we would have to migrate to 1.6.
>
> It would mean the policy is simple (but a bit drastic) : "every time a new 
> major version is released, the previous one is dead."
>
> Is my understanding correct?
>
> Cheers
> Nico
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/CAB7O_Y_BN%2BewH-z%3DqCe2LGMU_Qj7nuZ9fHp9K-jWsDrbUhfTLQ%40mail.gmail.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2007080043570.18887%40dskull.


Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
> >Also your instance hasn't even malloc'ed half of its memory limit. You 
> have over 6 gigabytes unused. There aren't any evictions despite the 
> uptime being over two months. 
> Was eviction of active items expected as well? We have eviction of unused and 
> unfetched items. 

Your evictions are literally zero, in these stats. You saw them before,
when the instances were smaller?

> >Otherwise: 
> 1. is the default in 1.5 anyway 
> 2. is the default in 1.5. 
> 3. don't bother changing this; it'll change the way the slabs scale. 
> 4. 1.20 is probably fine. reducing it only helps if you have very little 
> memory. 
> 5. also fine. 

> Does increasing slab classes by reducing growth factor affect
> performance? I understand that having more slab classes can improve storage
> efficiency, since we are more likely to find a chunk size close to the item
> size and waste less memory.

There's a maximum of 63 classes, so making the growth factor smaller has a
limited effect. The more slab classes you have, the harder the automove
balancer has to work to keep things even. I don't really recommend
adjusting the value much if at all.
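
For intuition, here's a rough Python sketch (not memcached's exact sizing
code, which also rounds chunk sizes for alignment) of how chunk_size and
growth_factor turn into a ladder of slab classes:

    # rough illustration of the chunk_size/growth_factor ladder; the real
    # allocator also aligns sizes and treats the largest classes specially
    def slab_classes(chunk_size=48, growth_factor=1.25, page_size=1024 * 1024):
        sizes, size = [], float(chunk_size)
        while size <= page_size / 2 and len(sizes) < 63:   # 63-class cap
            sizes.append(int(size))
            size *= growth_factor
        return sizes

    for gf in (1.25, 1.20):
        ladder = slab_classes(growth_factor=gf)
        print("growth_factor=%.2f -> %d classes, largest %d bytes"
              % (gf, len(ladder), ladder[-1]))

Either factor lands well under the 63-class cap here, which is part of why
shrinking it further buys very little.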

All you probably had to do was turn on automove, but I don't have your
stats from when you did have evictions so I can't say for sure.

> >If it were full and automove was off like it is now, you would see 
> problems over time. Noted. Thank you for the input. :)
>
> Thank you,
> Shweta
>
> On Wednesday, July 8, 2020 at 10:00:30 AM UTC+5:30, Dormando wrote:
>   you said you were seeing evictions? Was this on a different instance?
>
>   I don't really have any control or influence over what amazon deploys 
> for
>   elasticache. They've also changed the daemon. Some of your settings are
>   different from the defaults that 1.5.10 has (automove should default to 
> 1
>   and hash_Algo should default to murmur).
>
>   Also your instance hasn't even malloc'ed half of its memory limit. You
>   have over 6 gigabytes unused. There aren't any evictions despite the
>   uptime being over two months.
>
>   So far as I can see you don't have to do anything? Unless a different
>   instance was giving you trouble.
>
>   Otherwise:
>   1. is the default in 1.5 anyway
>   2. is the default in 1.5.
>   3. don't bother changing this; it'll change the way the slabs scale.
>   4. 1.20 is probably fine. reducing it only helps if you have very little
>   memory.
>   5. also fine.
>
>   but mainly 1) I can't really guarantee anything I say has relevance 
> since
>   I don't know what code is in elasticache and 2) your instance isn't even
>   remotely full so I don't have any recommendations.
>
>   If it were full and automove was off like it is now, you would see
>   problems over time.
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > yes
>   >
>   > On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando wrote:
>   >       Oh, so this is amazon elasticache?
>   >
>   >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       > We use aws for deployment and don't have that information. 
> What particularly looks odd in settings? 
>   >       >
>   >       > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando 
> wrote:
>   >       >       what're your start arguments? the settings look a 
> little odd. ie; the full
>   >       >       commandline (censoring anything important) that you 
> used to start
>   >       >       memcached
>   >       >
>   >       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >       >
>   >       >       > Sorry. Here it is.
>   >       >       >
>   >       >       > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, 
> Dormando wrote:
>   >       >       >       'stats settings' file is empty
>   >       >       >
>   >       >       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >       >       >
>   >       >       >       > Hi Dormando,
>   >       >       >       > Got the stats for production. Please find 
> attached files for stats settings. stats items, stats, stats slabs.
>   Summary for
>   >       all slabs.
>   >       >       >       >
>   >       >       >       > Other details that might help:
>   >       >       >       >  *  TTL is two days or more. 
>   >       >       >       >  *  Key length is in the range of 40-80 bytes.
> 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
you said you were seeing evictions? Was this on a different instance?

I don't really have any control or influence over what amazon deploys for
elasticache. They've also changed the daemon. Some of your settings are
different from the defaults that 1.5.10 has (automove should default to 1
and hash_Algo should default to murmur).

Also your instance hasn't even malloc'ed half of its memory limit. You
have over 6 gigabytes unused. There aren't any evictions despite the
uptime being over two months.

So far as I can see you don't have to do anything? Unless a different
instance was giving you trouble.

Otherwise:
1. is the default in 1.5 anyway
2. is the default in 1.5.
3. don't bother changing this; it'll change the way the slabs scale.
4. 1.20 is probably fine. reducing it only helps if you have very little
memory.
5. also fine.

but mainly 1) I can't really guarantee anything I say has relevance since
I don't know what code is in elasticache and 2) your instance isn't even
remotely full so I don't have any recommendations.

If it were full and automove was off like it is now, you would see
problems over time.

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> yes
>
> On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando wrote:
>   Oh, so this is amazon elasticache?
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > We use aws for deployment and don't have that information. What 
> particularly looks odd in settings? 
>   >
>   > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote:
>   >       what're your start arguments? the settings look a little odd. 
> ie; the full
>   >       commandline (censoring anything important) that you used to 
> start
>   >       memcached
>   >
>   >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       > Sorry. Here it is.
>   >       >
>   >       > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando 
> wrote:
>   >       >       'stats settings' file is empty
>   >       >
>   >       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >       >
>   >       >       > Hi Dormando,
>   >       >       > Got the stats for production. Please find attached 
> files for stats settings. stats items, stats, stats slabs. Summary for
>   all slabs.
>   >       >       >
>   >       >       > Other details that might help:
>   >       >       >  *  TTL is two days or more. 
>   >       >       >  *  Key length is in the range of 40-80 bytes.
>   >       >       > Below are the parameters that we plan to change from 
> the current settings:
>   >       >       >  1. slab_automove : from 0 to 1
>   >       >       >  2. hash_algorithm: from jenkins to murmur
>   >       >       >  3. chunk_size: from 48 to 297 (as we don't have data 
> of size less than that)
>   >       >       >  4. growth_factor: 1.25 to 1.20 ( Can reducing this 
> more help? Do more slab classes affect performance?)
>   >       >       >  5. max_item_size : from 4MB to 1MB (as our data will 
> never be more than 1MB large)
>   >       >       > Please let me know if different values for above 
> paramters can be more beneficial.
>   >       >       > Are there any other parameters which we should 
> consider to change or set?
>   >       >       >
>   >       >       > Also below are the calculations used for columns in 
> the summary shared. Can you please confirm if calculations are fine.
>   >       >       > 1) Total_Mem = total_pages*page_size  --> total 
> memory 
>   >       >       > 2) Strg_ovrHd = 
> (mem_requested/(used_chunks*chunk_size)) * 100 --> storage overhead
>   >       >       > 3) Free Memory = free_chunks * chunk_size   ---> free 
> memory
>   >       >       > 4) To Store = mem_requested      -->   actual memory 
> requested for storing data
>   >       >       >
>   >       >       > Thank you for your time and efforts in explaining 
> concepts.
>   >       >       > Shweta
>   >       >       >
>   >       >       >             > > the rest is free memory, which should 
> be measured separately.
>   >       >       >             > free memory for a class will be : 
> (free_chunks * chunk_size) 
>   >       >       >             > And total memory reserved by a class 
> will be : (total_pages*page_size)
>

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
Oh, so this is amazon elasticache?

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> We use aws for deployment and don't have that information. What particularly 
> looks odd in settings? 
>
> On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote:
>   what're your start arguments? the settings look a little odd. ie; the 
> full
>   commandline (censoring anything important) that you used to start
>   memcached
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > Sorry. Here it is.
>   >
>   > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando wrote:
>   >       'stats settings' file is empty
>   >
>   >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       > Hi Dormando,
>   >       > Got the stats for production. Please find attached files for 
> stats settings. stats items, stats, stats slabs. Summary for all slabs.
>   >       >
>   >       > Other details that might help:
>   >       >  *  TTL is two days or more. 
>   >       >  *  Key length is in the range of 40-80 bytes.
>   >       > Below are the parameters that we plan to change from the 
> current settings:
>   >       >  1. slab_automove : from 0 to 1
>   >       >  2. hash_algorithm: from jenkins to murmur
>   >       >  3. chunk_size: from 48 to 297 (as we don't have data of size 
> less than that)
>   >       >  4. growth_factor: 1.25 to 1.20 ( Can reducing this more 
> help? Do more slab classes affect performance?)
>   >       >  5. max_item_size : from 4MB to 1MB (as our data will never 
> be more than 1MB large)
>   >       > Please let me know if different values for above paramters 
> can be more beneficial.
>   >       > Are there any other parameters which we should consider to 
> change or set?
>   >       >
>   >       > Also below are the calculations used for columns in the 
> summary shared. Can you please confirm if calculations are fine.
>   >       > 1) Total_Mem = total_pages*page_size  --> total memory 
>   >       > 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 
> 100 --> storage overhead
>   >       > 3) Free Memory = free_chunks * chunk_size   ---> free memory
>   >       > 4) To Store = mem_requested      -->   actual memory 
> requested for storing data
>   >       >
>   >       > Thank you for your time and efforts in explaining concepts.
>   >       > Shweta
>   >       >
>   >       >             > > the rest is free memory, which should be 
> measured separately.
>   >       >             > free memory for a class will be : (free_chunks 
> * chunk_size) 
>   >       >             > And total memory reserved by a class will be : 
> (total_pages*page_size)
>   >       >             >
>   >       >             > > If you're getting evictions in class A but 
> there's too much free memory in classes C, D, etc 
>   >       >             > > then you have a balance issue. for example. 
> An efficiency stat which just 
>   >       >             > > adds up the total pages doesn't tell you what 
> to do with it. 
>   >       >             > I see. Got your point.Storage overhead can help 
> in deciding the chunk_size and growth_factor. Let me add
>   storage-overhead and
>   >       >             free memory as well for
>   >       >             > calculation.
>   >       >
>   >       >             Most people don't have to worry about 
> growth_factor very much. Especially
>   >       >             since the large item code was added, but it has 
> its own caveats. Growth
>   >       >             factor is only typically useful if you have 
> _very_ statically sized
>   >       >             objects.
>   >       >
>   >       >             > One curious question: If we have an item of 
> 500Bytes and there is free memory only in class A(chunk_size: 100Bytes).
>   Do cache
>   >       >             evict items from class with
>   >       >             > largeer chunk_size or use multiple chunks from 
> class A?
>   >       >
>   >       >             No, it will evict an item matching the 500 byte 
> chunk size, and not touch
>   >       >             A. This is where the memory balancer comes in; it 
> will move pages of
>

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
What are your start arguments? The settings look a little odd; i.e., the full
command line (censoring anything important) that you used to start
memcached.

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> Sorry. Here it is.
>
> On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando wrote:
>   'stats settings' file is empty
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > Hi Dormando,
>   > Got the stats for production. Please find attached files for stats 
> settings. stats items, stats, stats slabs. Summary for all slabs.
>   >
>   > Other details that might help:
>   >  *  TTL is two days or more. 
>   >  *  Key length is in the range of 40-80 bytes.
>   > Below are the parameters that we plan to change from the current 
> settings:
>   >  1. slab_automove : from 0 to 1
>   >  2. hash_algorithm: from jenkins to murmur
>   >  3. chunk_size: from 48 to 297 (as we don't have data of size less 
> than that)
>   >  4. growth_factor: 1.25 to 1.20 ( Can reducing this more help? Do 
> more slab classes affect performance?)
>   >  5. max_item_size : from 4MB to 1MB (as our data will never be more 
> than 1MB large)
>   > Please let me know if different values for above paramters can be 
> more beneficial.
>   > Are there any other parameters which we should consider to change or 
> set?
>   >
>   > Also below are the calculations used for columns in the summary 
> shared. Can you please confirm if calculations are fine.
>   > 1) Total_Mem = total_pages*page_size  --> total memory 
>   > 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 100 --> 
> storage overhead
>   > 3) Free Memory = free_chunks * chunk_size   ---> free memory
>   > 4) To Store = mem_requested      -->   actual memory requested for 
> storing data
>   >
>   > Thank you for your time and efforts in explaining concepts.
>   > Shweta
>   >
>   >             > > the rest is free memory, which should be measured 
> separately.
>   >             > free memory for a class will be : (free_chunks * 
> chunk_size) 
>   >             > And total memory reserved by a class will be : 
> (total_pages*page_size)
>   >             >
>   >             > > If you're getting evictions in class A but there's 
> too much free memory in classes C, D, etc 
>   >             > > then you have a balance issue. for example. An 
> efficiency stat which just 
>   >             > > adds up the total pages doesn't tell you what to do 
> with it. 
>   >             > I see. Got your point.Storage overhead can help in 
> deciding the chunk_size and growth_factor. Let me add storage-overhead and
>   >             free memory as well for
>   >             > calculation.
>   >
>   >             Most people don't have to worry about growth_factor very 
> much. Especially
>   >             since the large item code was added, but it has its own 
> caveats. Growth
>   >             factor is only typically useful if you have _very_ 
> statically sized
>   >             objects.
>   >
>   >             > One curious question: If we have an item of 500Bytes 
> and there is free memory only in class A(chunk_size: 100Bytes). Do cache
>   >             evict items from class with
>   >             > largeer chunk_size or use multiple chunks from class A?
>   >
>   >             No, it will evict an item matching the 500 byte chunk 
> size, and not touch
>   >             A. This is where the memory balancer comes in; it will 
> move pages of
>   >             memory between slab classes to keep the tail age roughly 
> the same between
>   >             classes. It does this slowly.
>   >
>   >             > Example:
>   >             > In below scenario, when we try to store item with 3MB, 
> even when there was memory in class with smaller chunk_size, it evicts
>   >             items from 512K class and
>   >             > other memory is blocked by smaller slabs.
>   >
>   >             Large (> 512KB) items are an exception. It will try to 
> evict from the
>   >             "large item" bucket, which is 512kb. It will try to do 
> this up to a few
>   >             times, trying to free up enough memory to make space for 
> the large item.
>   >
>   >             So to make space for a 3MB item, if

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
'stats settings' file is empty

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> Hi Dormando,
> Got the stats for production. Please find attached files for stats settings. 
> stats items, stats, stats slabs. Summary for all slabs.
>
> Other details that might help:
>  *  TTL is two days or more. 
>  *  Key length is in the range of 40-80 bytes.
> Below are the parameters that we plan to change from the current settings:
>  1. slab_automove : from 0 to 1
>  2. hash_algorithm: from jenkins to murmur
>  3. chunk_size: from 48 to 297 (as we don't have data of size less than that)
>  4. growth_factor: 1.25 to 1.20 ( Can reducing this more help? Do more slab 
> classes affect performance?)
>  5. max_item_size : from 4MB to 1MB (as our data will never be more than 1MB 
> large)
> Please let me know if different values for above paramters can be more 
> beneficial.
> Are there any other parameters which we should consider to change or set?
>
> Also below are the calculations used for columns in the summary shared. Can 
> you please confirm if calculations are fine.
> 1) Total_Mem = total_pages*page_size  --> total memory 
> 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 100 --> storage 
> overhead
> 3) Free Memory = free_chunks * chunk_size   ---> free memory
> 4) To Store = mem_requested      -->   actual memory requested for storing 
> data
>
> Thank you for your time and efforts in explaining concepts.
> Shweta
>
> > > the rest is free memory, which should be measured separately.
> > free memory for a class will be : (free_chunks * chunk_size) 
> > And total memory reserved by a class will be : 
> (total_pages*page_size)
> >
> > > If you're getting evictions in class A but there's too much 
> free memory in classes C, D, etc 
> > > then you have a balance issue. for example. An efficiency 
> stat which just 
> > > adds up the total pages doesn't tell you what to do with it. 
> > I see. Got your point.Storage overhead can help in deciding the 
> chunk_size and growth_factor. Let me add storage-overhead and
> free memory as well for
> > calculation.
>
> Most people don't have to worry about growth_factor very much. 
> Especially
> since the large item code was added, but it has its own caveats. 
> Growth
> factor is only typically useful if you have _very_ statically 
> sized
> objects.
>
> > One curious question: If we have an item of 500Bytes and there 
> is free memory only in class A(chunk_size: 100Bytes). Do cache
> evict items from class with
> > largeer chunk_size or use multiple chunks from class A?
>
> No, it will evict an item matching the 500 byte chunk size, and 
> not touch
> A. This is where the memory balancer comes in; it will move pages 
> of
> memory between slab classes to keep the tail age roughly the same 
> between
> classes. It does this slowly.
>
> > Example:
> > In below scenario, when we try to store item with 3MB, even 
> when there was memory in class with smaller chunk_size, it evicts
> items from 512K class and
> > other memory is blocked by smaller slabs.
>
> Large (> 512KB) items are an exception. It will try to evict from 
> the
> "large item" bucket, which is 512kb. It will try to do this up to 
> a few
> times, trying to free up enough memory to make space for the 
> large item.
>
> So to make space for a 3MB item, if the tail item is 5MB in size 
> or 1MB in
> size, they will still be evicted. If the tail age is low compared 
> to all
> other classes, the memory balancer will eventually move more 
> pages into
> the 512K slab class.
>
> If you tend to store a lot of very large items, it works better 
> if the
> instances are larger.
>
> Memcached is more optimized for performance with small items. if 
> you try
> to store a small item, it will evict exactly one item to make 
> space.
> However, for very large items (1MB+), the time it takes to read 
> the data
> from the network is so large that we can afford to do extra 
> processing.
>
> > 3Mb_items_eviction.png
> >
> >
> > Thank you,
> > Shweta
> >
> >
>  

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread dormando
On Sat, 4 Jul 2020, Shweta Agrawal wrote:

> > the rest is free memory, which should be measured separately.
> free memory for a class will be : (free_chunks * chunk_size) 
> And total memory reserved by a class will be : (total_pages*page_size)
>
> > If you're getting evictions in class A but there's too much free memory in 
> > classes C, D, etc 
> > then you have a balance issue. for example. An efficiency stat which just 
> > adds up the total pages doesn't tell you what to do with it. 
> I see. Got your point.Storage overhead can help in deciding the chunk_size 
> and growth_factor. Let me add storage-overhead and free memory as well for
> calculation.

Most people don't have to worry about growth_factor very much, especially
since the large item code was added (though it has its own caveats). Growth
factor is typically only useful if you have _very_ statically sized
objects.

> One curious question: If we have an item of 500Bytes and there is free memory 
> only in class A(chunk_size: 100Bytes). Do cache evict items from class with
> largeer chunk_size or use multiple chunks from class A?

No, it will evict an item matching the 500 byte chunk size, and not touch
A. This is where the memory balancer comes in; it will move pages of
memory between slab classes to keep the tail age roughly the same between
classes. It does this slowly.

> Example:
> In below scenario, when we try to store item with 3MB, even when there was 
> memory in class with smaller chunk_size, it evicts items from 512K class and
> other memory is blocked by smaller slabs.

Large (> 512KB) items are an exception. It will try to evict from the
"large item" bucket, which is 512kb. It will try to do this up to a few
times, trying to free up enough memory to make space for the large item.

So to make space for a 3MB item, if the tail item is 5MB in size or 1MB in
size, they will still be evicted. If the tail age is low compared to all
other classes, the memory balancer will eventually move more pages into
the 512K slab class.

If you tend to store a lot of very large items, it works better if the
instances are larger.

Memcached is more optimized for performance with small items. If you try
to store a small item, it will evict exactly one item to make space.
However, for very large items (1MB+), the time it takes to read the data
from the network is so large that we can afford to do extra processing.

> 3Mb_items_eviction.png
>
>
> Thank you,
> Shweta
>
>
> On Sunday, July 5, 2020 at 1:13:19 AM UTC+5:30, Dormando wrote:
>   (memory_requested / (chunk_size * chunk_used)) * 100
>
>   is roughly the storage overhead of memory used in the system. the rest 
> is
>   free memory, which should be measured separately. If you're getting
>   evictions in class A but there's too much free memory in classes C, D, 
> etc
>   then you have a balance issue. for example. An efficiency stat which 
> just
>   adds up the total pages doesn't tell you what to do with it.
>
>   On Sat, 4 Jul 2020, Shweta Agrawal wrote:
>
>   > > I'll need the raw output from "stats items" and "stats slabs". I 
> don't 
>   > > think that efficiency column is very helpful. ohkay no worries. I 
> can get by Tuesday and will share. 
>   >
>   > Efficiency for each slab is calculated as 
>   >  (("stats slabs" -> memory_requested) / (("stats slabs" -> 
> total_pages) * page_size)) * 100
>   >
>   >
>   > Attaching script which has calculations for the same. The script is 
> from the memcached repo with an additional calculation for efficiency. 
>   > Will it be possible for you to verify if the efficiency calculation 
> is correct?
>   >
>   > Thank you,
>   > Shweta
>   >
>   > On Saturday, July 4, 2020 at 1:08:23 PM UTC+5:30, Dormando wrote:
>   >       ah okay.
>   >
>   >       I'll need the raw output from "stats items" and "stats slabs". 
> I don't
>   >       think that efficiency column is very helpful.
>   >
>   >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       >
>   >       >
>   >       > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando 
> wrote:
>   >       >       No attachment
>   >       >
>   >       >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>   >       >
>   >       >       >
>   >       >       > Wooo...so quick. :):)
>   >       >       > > Correct, close. It actually uses more like 3 512k 
> chunks and then

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread dormando
(memory_requested / (chunk_size * chunk_used)) * 100

is roughly the storage overhead of memory used in the system. the rest is
free memory, which should be measured separately. If you're getting
evictions in class A but there's too much free memory in classes C, D, etc
then you have a balance issue. for example. An efficiency stat which just
adds up the total pages doesn't tell you what to do with it.
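
As a minimal sketch (assuming a plain TCP instance on 127.0.0.1:11211), the
same per-class math can be pulled straight out of "stats slabs":

    # sketch: compute per-class storage overhead and free memory from
    # "stats slabs"; assumes a local memcached reachable over plain TCP
    import socket
    from collections import defaultdict

    def stats_slabs(host="127.0.0.1", port=11211):
        s = socket.create_connection((host, port))
        s.sendall(b"stats slabs\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            data += s.recv(4096)
        s.close()
        slabs = defaultdict(dict)
        for line in data.decode().splitlines():
            parts = line.split()
            # per-class lines look like: STAT <class>:<name> <value>
            if len(parts) == 3 and parts[0] == "STAT" and ":" in parts[1]:
                clsid, name = parts[1].split(":", 1)
                slabs[int(clsid)][name] = int(parts[2])
        return slabs

    for clsid, st in sorted(stats_slabs().items()):
        used = st["used_chunks"] * st["chunk_size"]
        overhead = 100.0 * st["mem_requested"] / used if used else 0.0
        free = st["free_chunks"] * st["chunk_size"]
        print("class %d: overhead %.1f%%, free %d bytes" % (clsid, overhead, free))

Tracking free memory per class this way is what tells you whether evictions
in one class coexist with idle pages in another.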

On Sat, 4 Jul 2020, Shweta Agrawal wrote:

> > I'll need the raw output from "stats items" and "stats slabs". I don't 
> > think that efficiency column is very helpful. ohkay no worries. I can get 
> > by Tuesday and will share. 
>
> Efficiency for each slab is calculated as 
>  (("stats slabs" -> memory_requested) / (("stats slabs" -> total_pages) * 
> page_size)) * 100
>
>
> Attaching script which has calculations for the same. The script is from 
> the memcached repo with an additional calculation for efficiency. 
> Will it be possible for you to verify if the efficiency calculation is 
> correct?
>
> Thank you,
> Shweta
>
> On Saturday, July 4, 2020 at 1:08:23 PM UTC+5:30, Dormando wrote:
>   ah okay.
>
>   I'll need the raw output from "stats items" and "stats slabs". I don't
>   think that efficiency column is very helpful.
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   >
>   >
>   > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>   >       No attachment
>   >
>   >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       >
>   >       > Wooo...so quick. :):)
>   >       > > Correct, close. It actually uses more like 3 512k chunks 
> and then one 
>   >       > > smaller chunk from a different class to fit exactly 1.6MB. 
>   >       > I see.Got it.
>   >       >
>   >       > >Can you share snapshots from "stats items" and "stats slabs" 
> for one of 
>   >       > these instances? 
>   >       >
>   >       > Currently I have summary of it, sharing the same below. I can 
> get snapshot by Tuesday as need to request for it.
>   >       >
>   >       > pages have value from total_pages from stats slab for each 
> slab
>   >       > item_size have value from chunk_size from stats slab for each 
> slab
>   >       > Used memory is calculated as pages*page size ---> This has to 
> corrected now.
>   >       >
>   >       >
>   >       > prod_stats.png
>   >       >
>   >       >
>   >       > > 90%+ are perfectly doable. You probably need to look a bit 
> more closely
>   >       > > into why you're not getting the efficiency you expect. The 
> detailed stats
>   >       > > output should point to why. I can help with that if it's 
> confusing.
>   >       >
>   >       > Great. Will surely ask for your input whenever I have 
> question. It is really kind of you to offer help. 
>   >       >
>   >       > > Either the slab rebalancer isn't keeping up or you actually 
> do have 39GB
>   >       > > of data and your expecations are a bit off. This will also 
> depending on
>   >       > > the TTL's you're setting and how often/quickly your items 
> change size.
>   >       > > Also things like your serialization method / compression / 
> key length vs
>   >       > > data length / etc.
>   >       >
>   >       > We have much less data than 39 GB. As after facing evictions, 
> it has been always kept higher than expected data-size.
>   >       > TTL is two days or more. 
>   >       > From my observation items size(data-length) is in the range 
> of 300Bytes to 500K after compression.
>   >       > Key length is in the range of 40-80 bytes.
>   >       >
>   >       > Thank you,
>   >       > Shweta
>   >       >  
>   >       > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando 
> wrote:
>   >       >       Hey,
>   >       >
>   >       >       > Putting my understanding to re-confirm:
>   >       >       > 1) Page size will always be 1MB and we cannot change 
> it.Moreover, it's not required to be changed.
>   >       >
>   >       >       Correct.
>   >       >
>   >       >       > 2) We can store 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread dormando
ah okay.

I'll need the raw output from "stats items" and "stats slabs". I don't
think that efficiency column is very helpful.

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

>
>
> On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>   No attachment
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   >
>   > Wooo...so quick. :):)
>   > > Correct, close. It actually uses more like 3 512k chunks and then 
> one 
>   > > smaller chunk from a different class to fit exactly 1.6MB. 
>   > I see.Got it.
>   >
>   > >Can you share snapshots from "stats items" and "stats slabs" for one 
> of 
>   > these instances? 
>   >
>   > Currently I have summary of it, sharing the same below. I can get 
> snapshot by Tuesday as need to request for it.
>   >
>   > pages have value from total_pages from stats slab for each slab
>   > item_size have value from chunk_size from stats slab for each slab
>   > Used memory is calculated as pages*page size ---> This has to 
> be corrected now.
>   >
>   >
>   > prod_stats.png
>   >
>   >
>   > > 90%+ are perfectly doable. You probably need to look a bit more 
> closely
>   > > into why you're not getting the efficiency you expect. The detailed 
> stats
>   > > output should point to why. I can help with that if it's confusing.
>   >
>   > Great. Will surely ask for your input whenever I have question. It is 
> really kind of you to offer help. 
>   >
>   > > Either the slab rebalancer isn't keeping up or you actually do have 
> 39GB
>   > > of data and your expecations are a bit off. This will also 
> depending on
>   > > the TTL's you're setting and how often/quickly your items change 
> size.
>   > > Also things like your serialization method / compression / key 
> length vs
>   > > data length / etc.
>   >
>   > We have much less data than 39 GB. As after facing evictions, it has 
> been always kept higher than expected data-size.
>   > TTL is two days or more. 
>   > From my observation items size(data-length) is in the range of 
> 300Bytes to 500K after compression.
>   > Key length is in the range of 40-80 bytes.
>   >
>   > Thank you,
>   > Shweta
>   >  
>   > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       > Putting my understanding to re-confirm:
>   >       > 1) Page size will always be 1MB and we cannot change 
> it.Moreover, it's not required to be changed.
>   >
>   >       Correct.
>   >
>   >       > 2) We can store items larger than 1MB and it is done by 
> combining chunks together. (example: let's say item size: ~1.6MB --> 4 slab
>   >       chunks(512k slab) from
>   >       > 2 pages will be used)
>   >
>   >       Correct, close. It actually uses more like 3 512k chunks and 
> then one
>   >       smaller chunk from a different class to fit exactly 1.6MB.
>   >
>   >       > We use memcache in production and in past we saw evictions 
> even when free memory was present. Also currently we use cluster with
>   39GB RAM in
>   >       total to
>   >       > cache data even when data size we expect is ~15GB to avoid 
> eviction of active items.
>   >
>   >       Can you share snapshots from "stats items" and "stats slabs" 
> for one of
>   >       these instances?
>   >
>   >       > But as our data varies in size, it is possible to avoid 
> evictions by tuning parameters: chunk_size, growth_factor, slab_automove.
>   Also I
>   >       believe memcache
>   >       > is efficient and we can reduce cost by reducing memory size 
> for cluster. 
>   >       > So I am trying to find the best possible memory size and 
> parameters we can have.So want to be clear with my understanding and
>   calculations.
>   >       >
>   >       > So while trying different parameters and putting all 
> calculations, I observed that total_pages * item_size_max > physical memory 
> for
>   a
>   >       machine. And from
>   >       > all blogs,and docs it didnot match my understanding. But it's 
> clear now. Thanks to you.
>   >       >
>   >       

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
No attachment

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

>
> Wooo...so quick. :):)
> > Correct, close. It actually uses more like 3 512k chunks and then one 
> > smaller chunk from a different class to fit exactly 1.6MB. 
> I see.Got it.
>
> >Can you share snapshots from "stats items" and "stats slabs" for one of 
> these instances? 
>
> Currently I have summary of it, sharing the same below. I can get snapshot by 
> Tuesday as need to request for it.
>
> pages have value from total_pages from stats slab for each slab
> item_size have value from chunk_size from stats slab for each slab
> Used memory is calculated as pages*page size ---> This has to be corrected now.
>
>
> prod_stats.png
>
>
> > 90%+ are perfectly doable. You probably need to look a bit more closely
> > into why you're not getting the efficiency you expect. The detailed stats
> > output should point to why. I can help with that if it's confusing.
>
> Great. Will surely ask for your input whenever I have a question. It is really 
> kind of you to offer help. 
>
> > Either the slab rebalancer isn't keeping up or you actually do have 39GB
> > of data and your expectations are a bit off. This will also depend on
> > the TTLs you're setting and how often/quickly your items change size.
> > Also things like your serialization method / compression / key length vs
> > data length / etc.
>
> We have much less data than 39 GB; after facing evictions in the past, the 
> memory size has always been kept higher than the expected data size.
> TTL is two days or more. 
> From my observation items size(data-length) is in the range of 300Bytes to 
> 500K after compression.
> Key length is in the range of 40-80 bytes.
>
> Thank you,
> Shweta
>  
> On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>   Hey,
>
>   > Putting my understanding to re-confirm:
>   > 1) Page size will always be 1MB and we cannot change it.Moreover, 
> it's not required to be changed.
>
>   Correct.
>
>   > 2) We can store items larger than 1MB and it is done by combining 
> chunks together. (example: let's say item size: ~1.6MB --> 4 slab
>   chunks(512k slab) from
>   > 2 pages will be used)
>
>   Correct, close. It actually uses more like 3 512k chunks and then one
>   smaller chunk from a different class to fit exactly 1.6MB.
>
>   > We use memcache in production and in past we saw evictions even when 
> free memory was present. Also currently we use cluster with 39GB RAM in
>   total to
>   > cache data even when data size we expect is ~15GB to avoid eviction 
> of active items.
>
>   Can you share snapshots from "stats items" and "stats slabs" for one of
>   these instances?
>
>   > But as our data varies in size, it is possible to avoid evictions by 
> tuning parameters: chunk_size, growth_factor, slab_automove. Also I
>   believe memcache
>   > is efficient and we can reduce cost by reducing memory size for 
> cluster. 
>   > So I am trying to find the best possible memory size and parameters 
> we can have.So want to be clear with my understanding and calculations.
>   >
>   > So while trying different parameters and putting all calculations, I 
> observed that total_pages * item_size_max > physical memory for a
>   machine. And from
> all blogs and docs it did not match my understanding. But it's clear 
> now. Thanks to you.
>   >
>   > One last question: From my trials I find that we can achieve ~90% 
> storage efficiency with memcache. (i.e we need 10MB of physical memory to
>   store 9MB of
>   > data. Do you recommend any idle memory-size interms of percentage of 
> expected data-size? 
>
>   90%+ are perfectly doable. You probably need to look a bit more closely
>   into why you're not getting the efficiency you expect. The detailed 
> stats
>   output should point to why. I can help with that if it's confusing.
>
>   Either the slab rebalancer isn't keeping up or you actually do have 39GB
>   of data and your expectations are a bit off. This will also depend on
>   the TTLs you're setting and how often/quickly your items change size.
>   Also things like your serialization method / compression / key length vs
>   data length / etc.
>
>   -Dormando
>
>   > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       Looks like I never updated the manpage. In the past the item 
> size max was
>   >       achieved by

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
Hey,

> Putting my understanding to re-confirm:
> 1) Page size will always be 1MB and we cannot change it.Moreover, it's not 
> required to be changed.

Correct.

> 2) We can store items larger than 1MB and it is done by combining chunks 
> together. (example: let's say item size: ~1.6MB --> 4 slab chunks(512k slab) 
> from
> 2 pages will be used)

Correct, close. It actually uses more like 3 512k chunks and then one
smaller chunk from a different class to fit exactly 1.6MB.
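
To put rough numbers on that (illustrative only; item header overhead is
ignored here):

    # a ~1.6MB value is stored as full 512KB page chunks plus one smaller
    # chunk from whichever class fits the remainder
    CHUNK = 512 * 1024

    item = int(1.6 * 1024 * 1024)
    full_chunks, remainder = divmod(item, CHUNK)
    print("%d bytes -> %d x 512KB chunks + one ~%dKB chunk"
          % (item, full_chunks, remainder // 1024))
    # -> 1677721 bytes -> 3 x 512KB chunks + one ~102KB chunk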

> We use memcache in production and in past we saw evictions even when free 
> memory was present. Also currently we use cluster with 39GB RAM in total to
> cache data even when data size we expect is ~15GB to avoid eviction of active 
> items.

Can you share snapshots from "stats items" and "stats slabs" for one of
these instances?

> But as our data varies in size, it is possible to avoid evictions by tuning 
> parameters: chunk_size, growth_factor, slab_automove. Also I believe memcache
> is efficient and we can reduce cost by reducing memory size for cluster. 
> So I am trying to find the best possible memory size and parameters we can 
> have.So want to be clear with my understanding and calculations.
>
> So while trying different parameters and putting all calculations, I observed 
> that total_pages * item_size_max > physical memory for a machine. And from
> all blogs and docs it did not match my understanding. But it's clear now. 
> Thanks to you.
>
> One last question: From my trials I find that we can achieve ~90% storage 
> efficiency with memcached (i.e. we need 10MB of physical memory to store 9MB of
> data). Do you recommend any ideal memory size, in terms of a percentage of the
> expected data size?

90%+ are perfectly doable. You probably need to look a bit more closely
into why you're not getting the efficiency you expect. The detailed stats
output should point to why. I can help with that if it's confusing.

Either the slab rebalancer isn't keeping up or you actually do have 39GB
of data and your expectations are a bit off. This will also depend on
the TTLs you're setting and how often/quickly your items change size.
Also things like your serialization method / compression / key length vs
data length / etc.

-Dormando

> On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>   Hey,
>
>   Looks like I never updated the manpage. In the past the item size max 
> was
>   achieved by changing the slab page size, but that hasn't been true for a
>   long time.
>
>   From ./memcached -h:
>   -m, --memory-limit=  item memory in megabytes (default: 64)
>
>   ... -m just means the memory limit in megabytes, abstract from the page
>   size. I think that was always true.
>
>   In any recentish version, any item larger than half a page size (512k) 
> is
>   created by stitching page chunks together. This prevents waste when an
>   item would be more than half a page size.
>
>   Is there a problem you're trying to track down?
>
>   I'll update the manpage.
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   > Hi,
>   > Sorry if I am repeating the question, I searched the list but could 
> not find definite answer. So posting it.
>   >
>   > Memcache version: 1.5.10 
>   > I have started memcached with option: -I 4m (setting maximum item size 
> to 4MB). Verified it is set by command stats settings, I can see STAT
>   item_size_max
>   > 4194304.
>   >
>   > Documentation from git repository here states that:
>   >
>   > -I, --max-item-size=
>   > Override the default size of each slab page. The default size is 1mb. 
> Default
>   > value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 
> * 1024).
>   > Adjusting this value changes the item size limit.
>   > My understanding from documentation is this option will allow to save 
> items with size till 4MB and the page size for each slab will be 4MB
>   (as I set it as
>   > -I 4m).
>   >
>   > I am able to save items till 4MB but the page-size is still 1MB.
>   >
>   > -m memory size is default 64MB.
>   >
>   > Calculation:
>   > -> Calculated total pages used from stats slabs output parameter 
> total_pages = 64 (If page size is 4MB then total pages should not be more
>   than 16. Also
>   > when I store 8 items of ~3MB it uses 25 pages but if page size is 
> 4MB, it should use 8 pages right.)
>   >
>   > Can you please help me in understanding the behaviour?
>   >
>   > Attached files with details 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
Hey,

Looks like I never updated the manpage. In the past the item size max was
achieved by changing the slab page size, but that hasn't been true for a
long time.

From ./memcached -h:
-m, --memory-limit=  item memory in megabytes (default: 64)

... -m just means the memory limit in megabytes, abstract from the page
size. I think that was always true.

In any recentish version, any item larger than half a page size (512k) is
created by stitching page chunks together. This prevents waste when an
item would be more than half a page size.

Is there a problem you're trying to track down?

I'll update the manpage.

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

> Hi,
> Sorry if I am repeating the question, I searched the list but could not find 
> a definite answer. So posting it.
>
> Memcache version: 1.5.10 
> I have started memcached with option: -I 4m (setting maximum item size to 
> 4MB). Verified it is set by command stats settings, I can see STAT 
> item_size_max
> 4194304.
>
> Documentation from git repository here states that:
>
> -I, --max-item-size=
> Override the default size of each slab page. The default size is 1mb. Default
> value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 * 1024).
> Adjusting this value changes the item size limit.
> My understanding from the documentation is that this option will allow saving 
> items up to 4MB in size and the page size for each slab will be 4MB (as I set it as
> -I 4m).
>
> I am able to save items till 4MB but the page-size is still 1MB.
>
> -m memory size is default 64MB.
>
> Calculation:
> -> Calculated total pages used from stats slabs output parameter total_pages 
> = 64 (If page size is 4MB then total pages should not be more than 16. Also
> when I store 8 items of ~3MB it uses 25 pages but if page size is 4MB, it 
> should use 8 pages right.)
>
> Can you please help me in understanding the behaviour?
>
> Attached files with details for output of command stats settings and stats 
> slabs.
> Below is the summarized view of the distribution. 
> First added items with variable sizes, then added items with 3MB and 
> above.
>
> data_distribution.png
>
>
>
> Please let me know in case more details are required or question is not clear.
>  
> Thank You,
>  Shweta
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/2b640e1f-9f59-4432-a930-d830cbe8566do%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2007031149160.18887%40dskull.


Re: Add client IP to the watch command response

2020-06-21 Thread dormando
Hey,

The "cfd" is the client FD, which you can resolve via "stats conns".

We could add more information. I need to rethink this a bit, maybe.
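
A hedged sketch of that lookup, assuming a local instance on 127.0.0.1:11211
and the "STAT <fd>:addr <address>" lines that "stats conns" returns:

    # map connection fds (the "cfd" in watch output) to client addresses
    import socket

    def conn_addrs(host="127.0.0.1", port=11211):
        s = socket.create_connection((host, port))
        s.sendall(b"stats conns\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            data += s.recv(4096)
        s.close()
        addrs = {}
        for line in data.decode().splitlines():
            parts = line.split()
            # per-connection lines look like: STAT <fd>:addr tcp:<ip>:<port>
            if len(parts) == 3 and parts[1].endswith(":addr"):
                fd = parts[1].split(":", 1)[0]
                if fd.isdigit():
                    addrs[int(fd)] = parts[2]
        return addrs

    print(conn_addrs())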

On Sun, 21 Jun 2020, chinmay gupta wrote:

> Hey
>
> Would it be possible to add client IP information to the `watch ...` command 
> log response with its current implementation?
>
> I am willing to contribute if it can be done.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/9841ee74-2535-426a-a8fd-cf5f8ccf8c15o%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2006211527230.18901%40dskull.


new C client in development

2020-06-19 Thread dormando
Hey,

Guess I don't mail here much anymore :) Kind of unsure what audience is
left to be honest. In case there is:

Prototype partially written allocation-free memcached C client:
https://github.com/dormando/mcmc (and dev issue:
https://github.com/dormando/mcmc/issues/1) - working to nail down an API
3rd party language client authors will love :P

have fun,
-Dormando

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2006191703370.32653%40dskull.


Re: Warm restart setup for dummies

2020-06-11 Thread dormando
Absolutely. That's exactly the workflow it's designed for; we just haven't
updated any of the systemd scripts to be more friendly for it.

Also, a caveat: there _was_ a bug fixed relatively recently with the
restart code. I don't know if ubuntu backports these. If you use large
objects (> 512k) there's a chance restart won't work sometimes. Worst case
you can probably file a bug with them to backport the patch or upgrade
memcached.

Good luck!

On Wed, 10 Jun 2020, Even Onsager wrote:

> That's extremely helpful, thank you so much for this! I will look into it and 
> test on my staging server. I don't think systemd has ever killed or restarted 
> the process apart from once before I upgraded the RAM, so I'm not too worried 
> about the daily usage. But even systemd supports custom kill signals, so it 
> should be possible to set this up?
>
> Anyway, it's the reboots I'm trying to get to work. I never upgrade apt 
> packages or reboot directly, only with Ansible after kernel upgrades or 
> similar, so I should be able to disable the systemd services (should probably 
> temporarily disable the puma webserver service too) and automate a copy to 
> disk task before the reboot takes place. A good thing with Ansible is that it 
> can automate reboots and continue with more tasks after reboot is complete, 
> so it should be ideal for this scenario. I will post back if I can get it to 
> work, should be interesting for more than me. :)
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/5ec346ab-6977-4996-b573-9d07dd0d4084o%40googlegroups.com.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2006102350440.27659%40dskull.


Re: Warm restart setup for dummies

2020-06-10 Thread dormando
Hey,

I might have to look at how ubuntu's install works; it might not be set
up for this.

These are the basic steps for a restart:

1) set up memcached as you did, tmpfs/etc.
2) when you want to stop gracefully, issue a `kill -SIGUSR1 $(pidof
memcached)`
(kill is the command to send signals to a process)
3) start memcached again with the same options, and it will recover its
data.

This will _not_ survive reboots. This will survive software upgrades,
which ubuntu isn't going to do anyway :)

To survive reboots you need a few more steps:

1) once memcached has stopped, copy the files created in
/tmpfs_mount_memcached/ to an actual harddrive somewhere.
2) reboot.
3) copy the datafiles back in place.
4) start memcached again.
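
Put together, a rough Python sketch of steps 1-4 (the save path below is only
an example; any persistent directory works, assuming the tmpfs setup above):

    # stop memcached gracefully, preserve the tmpfs state files across a
    # reboot, then restore them before starting memcached again
    import os, shutil, signal, subprocess, time

    TMPFS = "/tmpfs_mount_memcached"
    SAVE = "/var/lib/memcached-save"   # example persistent location

    pid = int(subprocess.check_output(["pidof", "memcached"]).split()[0])
    os.kill(pid, signal.SIGUSR1)       # graceful stop; state is written to TMPFS
    while os.path.exists("/proc/%d" % pid):
        time.sleep(0.5)                # wait for the process to exit

    shutil.copytree(TMPFS, SAVE, dirs_exist_ok=True)  # needs Python 3.8+
    # ... reboot here ...
    # afterwards: copy SAVE back into TMPFS, then start memcached with the
    # exact same options and it will recover its data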

For this to be reliable you probably don't want ubuntu to automatically
manage the process (disable auto start/restart in systemd).

You might want to engage some community/chat/something for some systems
administration help to get this going. Sounds like you're a bit over your
head :(

good luck,
-Dormando

On Wed, 10 Jun 2020, Even Onsager wrote:

> My site runs on one webserver and we rely heavily on memcached to make it 
> snappy, to the extent that a reboot will make the site unresponsive for hours.
> So imagine my joy when I saw the warm restart addition, and the fact that 
> Ubuntu Server 20.04 LTS has a new enough version in its repo.
>  
> But the wiki left me scratching my head. This is what I have:
>  
> - The standard apt package for Ubuntu 20.04 (version 1.5.22)
> - `-e /tmpfs_mount_memcached/memory_file` in memcached.conf
> - `-m 920` in memcached.conf
> - `tmpfs /tmpfs_mount_memcached tmpfs nodev,nosuid,size=930M 0` in /etc/fstab 
> (generated by Ansible's mount module)
>  
> No type of restart (neither of the systemd service nor the server itself) 
> seems to work. After restarting the size of the cache store is 0 and all pages
> take forever to load. But is it supposed to work like this with tmpfs mounts? 
> I thought tmpfs wasn't meant to survive reboots? Am I misreading the wiki?
>  
> I'm obviously in way over my head here (I don't even really know what a 
> SIGUSR1 is), so I'd really appreciate some help as to what I'm missing!
>  
> And thanks for memcached - it's served my site well for years now, and after 
> upgrading to Rails 5 with better caching, we're using it more and more. :)
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/afb47849-1915-4183-9765-457e7e4bc153o%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2006101506060.27659%40dskull.


Re: Query regarding the max_connections output from STAT command

2020-05-29 Thread dormando
It's per node: the value applies to the particular node you ran the stats
command on, not to the entire cluster.
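
If you want to double-check or raise it on a node, something like this
works (the host is a placeholder; -c sets the per-node limit at startup):

# what the node is configured with
printf 'stats settings\r\nquit\r\n' | nc 10.0.0.1 11211 | grep maxconns

# start memcached with a higher per-node connection limit
memcached -m 1024 -c 4096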

On Fri, 29 May 2020, Gautam Worah wrote:

> Is the max_connections variable representative of the maximum number of 
> client connections possible per node or for the entire cluster?
>
> Command used: STAT max_connections
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/7d401361-043c-4321-8a77-416078cd2271%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2005291529230.19114%40dskull.


Re: memcached advice UK

2020-05-26 Thread dormando
Hey,

I can probably help if you contact me privately. If it's something you can
get help with publicly go ahead and detail what's going on :)

have fun,
-Dormando

On Tue, 26 May 2020, 'Dan' via memcached wrote:

> Hi, 
> We are a UK-based travel site looking for some help/checking of our memcached 
> setup. We rely on
> memcached quite a bit and make a large number of calls, so small improvements 
> could help us a fair
> bit.
>
> Appreciate any pointers.
>
> Thanks
> Dan
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to
> memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web 
> visithttps://groups.google.com/d/msgid/memcached/e6ecb33b-871f-4455-b469-4d79335e6cde%40googlegroups.com
> .
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2005261755380.5050%40dskull.


Re: memcached running, but refusing connections

2020-05-11 Thread dormando
It says it can't write the pid file, so it's probably failing to start
there.

Try starting it manually with those options and fix it until it works?
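
For example, something like this might shake it out (rough sketch; the
paths and user come from your config, adjust as needed, run as root):

# make sure the pidfile directory exists and is writable by the runtime user
mkdir -p /var/run/memcached
chown www-data:www-data /var/run/memcached

# then run it in the foreground with verbose output to see what it complains about
/usr/bin/memcached -m 64 -p 11211 -u www-data -l 127.0.0.1 -vv \
    -P /var/run/memcached/memcached.pid

Keep in mind /var/run is usually a tmpfs, so that directory has to be
recreated on every boot (the init script or a tmpfiles.d entry normally
handles that).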

On Mon, 11 May 2020, Alexander wrote:

> I tried with and without UFW (disabled/enabled), I ran /etc/init.d/memcached 
> status: 
> ● memcached.service - memcached daemon
>    Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor 
> preset: enabled)
>    Active: active (running) since Mon 2020-05-11 22:14:15 UTC; 11min ago
>      Docs: man:memcached(1)
>  Main PID: 845 (memcached)
>     Tasks: 10 (limit: 2361)
>    CGroup: /system.slice/memcached.service
>            └─845 /usr/bin/memcached -m 64 -p 11211 -u www-data -l 127.0.0.1 
> -P /var/run/memcached/memcached.pid -s /v…id
>
> May 11 22:14:15 atlantsecurity systemd[1]: Started memcached daemon.
> May 11 22:14:16 atlantsecurity systemd-memcached-wrapper[845]: Could not open 
> the pid file /var/run/memcached/memc…enied
> Hint: Some lines were ellipsized, use -l to show in full.
>
> telnet localhost 11211: returns: 
> telnet: Unable to connect to remote host: Connection refused
>
> After a lot of troubleshooting, I figured that if I commented these lines  in 
> /etc/memcached.conf, it starts just fine on reboot:
>
> # Use a pidfile
> #-P /var/run/memcached/memcached.pid
> #-s /var/www/memcached.sock
> #-a 0770
> #-p /tmp/memcached.pid
>
> do you have an idea why these options prevent it from starting? 
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/2eefdf04-8893-41ad-a9e5-7b1afe577911%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2005111546220.22839%40dskull.


Re: Memcached config issue

2020-05-09 Thread dormando
Hey,

I hear you, but I have no idea what's generating that message. It's not
part of memcached. What is generating it? How did you install memcached
on your system?

On Sat, 9 May 2020, Pablo C wrote:

> I am asking because I don't know much about memcached.
> I checked and it seem to be running fine, but I wonder if it is due to 
> CACHESIZE or MAXCONN values.
>
>
>
>
> El sábado, 9 de mayo de 2020, 20:32:15 (UTC-3), Pablo C escribió:
>
>   We installed memcached a few days ago, and we keep getting 
> messages like this every hour.
>
> Time: Fri May  8 22:16:11 2020 +
> Account:  memcached
> Resource: Process Time
> Exceeded: 77389 > 1800 (seconds)
> Executable:   /usr/bin/memcached
> Command Line: /usr/bin/memcached -u memcached -p 11211 -m 2GB -c 1024 -l 
> 127.0.0.1 -U 0
> PID:  9582 (Parent PID:9582)
> Killed:   No
>
>   Could anyone be so kind as to advise if we need to change the config 
> and how?
>
>
>   Memcached config:
>
> PORT="11211"
> USER="memcached"
> MAXCONN="1024"
> CACHESIZE="2GB"
> OPTIONS="-l 127.0.0.1 -U 0"
>
>   The server has CentOS 7.8 | Apache 2.4 | PHP 7.4
>
> sudo netstat -tulpn | grep :11211
> tcp0  0 127.0.0.1:11211 0.0.0.0:*LISTEN   
> 9582/memcached
>
>   status:
>
> Redirecting to /bin/systemctl status memcached.service
> ● memcached.service - Memcached
> Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor pr
> eset: disabled)
> Active: active (running) since Fri 2020-05-08 00:46:22 UTC; 21h ago
> Main PID: 9582 (memcached)
> CGroup: /system.slice/memcached.service
>└─9582 /usr/bin/memcached -u memcached -p 11211 -m 2GB -c 1024 -l ...`
>
>   Thanks
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/9049e413-6eeb-4602-b452-6c85c0ae5b69%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2005091950370.14499%40dskull.


Re: Memcached config issue

2020-05-09 Thread dormando
What is generating that message? I have no idea what that means :(

It sounds like it's using too much CPU? But it's not killing it, so you'll
just keep getting this warning every hour?

How did you install memcached?

On Sat, 9 May 2020, Pablo C wrote:

>
> We installed memcached a few days ago, and we keep getting messages 
> like this every hour.
>
> Time: Fri May  8 22:16:11 2020 +
> Account:  memcached
> Resource: Process Time
> Exceeded: 77389 > 1800 (seconds)
> Executable:   /usr/bin/memcached
> Command Line: /usr/bin/memcached -u memcached -p 11211 -m 2GB -c 1024 -l 
> 127.0.0.1 -U 0
> PID:  9582 (Parent PID:9582)
> Killed:   No
>
> Could anyone be so kind as to advise if we need to change the config and how?
>
>
> Memcached config:
>
> PORT="11211"
> USER="memcached"
> MAXCONN="1024"
> CACHESIZE="2GB"
> OPTIONS="-l 127.0.0.1 -U 0"
>
> The server has CentOS 7.8 | Apache 2.4 | PHP 7.4
>
> sudo netstat -tulpn | grep :11211
> tcp0  0 127.0.0.1:11211 0.0.0.0:*LISTEN   
> 9582/memcached
>
> status:
>
> Redirecting to /bin/systemctl status memcached.service
> ● memcached.service - Memcached
> Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor pr
> eset: disabled)
> Active: active (running) since Fri 2020-05-08 00:46:22 UTC; 21h ago
> Main PID: 9582 (memcached)
> CGroup: /system.slice/memcached.service
>└─9582 /usr/bin/memcached -u memcached -p 11211 -m 2GB -c 1024 -l ...`
>
> Thanks
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/9ce648fe-7ac6-4d85-84fe-d738d5d6ea4d%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2005091704210.14499%40dskull.


Re: Load testing with mc-crusher

2020-05-01 Thread dormando
Hey,

I didn't try your script, but it's not quite clear how it's gathering
statistics. Just once at each end of the test?

I'd left some test suites from my blog posts in the test-suites/ directory
of mc-crusher, but they weren't very clean or easy to modify...

Found another one I'd done later with a refactoring that's a lot better:
https://github.com/memcached/mc-crusher/blob/master/test-suites/test-suite-example

Just cleaned up some bits and wrote a few notes into it. I don't have the
tooling I was using to process the data into json blobs (maybe I'll add
it if you want it :P)

There are a bunch of parameters to tune, or tests you can outright delete.
Any mc-crusher configuration file can be represented in there (see the
"get_*_conf" subs).

This will grab a bunch of data _while_ the perf test runs, which you can
then plot or examine for consistency. It also runs a scaled set of tests,
so you can find the performance cliffs. I don't believe any benchmark is
valid unless you run the test to the failure point and show what latency
looks like on your way there.

So this will run a bunch of tests and scale up the mc-crusher config. You
get:

- stats sample output (see sample_args()). You get the rate every N
seconds for X periods, and at the end it'll have an average from the full
run. You can look into this for spikes/dips/consistency.

- the stats sampling starts _after_ the benchmark starts, so you don't end
up weighting ramp up time into the average.

- latency sampled output (along with some summary information). You can
examine these files to see when the benchmark throughput has exceeded your
latency budget.

- Some other crap I'm probably forgetting.

Anyway this is how I test stuff for those blog posts with all the graphs.
You mostly tweak it a bit and ignore it for a while, then tweak and try
again.
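
If you just want a quick-and-dirty version of the same idea, you can
sample the counters yourself while mc-crusher runs; a sketch (host/port
are placeholders, it just leans on nc and the plain `stats` command):

mcstat() { printf 'stats\r\nquit\r\n' | nc "$1" "$2" | tr -d '\r' | awk -v k="$3" '$2==k {print $3}'; }

HOST=127.0.0.1 PORT=11211
prev=$(mcstat $HOST $PORT cmd_get)
while sleep 1; do
  cur=$(mcstat $HOST $PORT cmd_get)
  echo "gets/sec: $((cur - prev))"
  prev=$cur
done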

On Wed, 8 Apr 2020, Martin Grigorov wrote:

> Hi,
>
> Does it make sense to have a command 'flush_stats' ?
> 'flush_all' resets the caches but the stats stay, e.g. cmd_get value is not 
> reset.
>
> I'd find it useful for performance measurements:
> 1) run random warm-up
> 2) reset_stats
> 3) run perf test
> 4) collect stats
>
> Right now I do:
> 1) run random warm-up
> 2) collect stats
> 3) run perf test
> 4) collect stats
> 5) substract 2) from 4) and store the result
>
> Regards,
> Martin
>
> On Tue, Apr 7, 2020 at 3:29 PM Martin Grigorov  
> wrote:
>   Hi Dormando,
>
>   This is a continuation of the mail thread with subject "Is ARM64 
> officially supported ?" [1]
>
> I've refreshed my Perl coding skills and came up with this wrapper around 
> mc-crusher:
>
> =
> #!/usr/bin/env perl
>
> # sudo cpan Cache::Memcached
>
> use strict;
> use warnings;
>
> use Cache::Memcached;
> use Data::Dumper;
> use Time::Piece;
>
> my $TESTBED_HOME = $ENV{'TESTBED_HOME'};
> die "Env variable 'TESTBED_HOME' is not set!" if ! defined($TESTBED_HOME);
> my $MC_CRUSHER_HOME = $ENV{'MC_CRUSHER_HOME'};
> die "Env variable 'MC_CRUSHER_HOME' is not set!" if ! 
> defined($MC_CRUSHER_HOME);
>
> if (scalar(@ARGV) < 2) {
>         die "Usage: mc-crusher.pl <config> <host> [<port>] [<duration>]. For 
> example: mc-crusher.pl cmd_get a.b.c.d 11211 60"
> }
>
> my $config = shift @ARGV;
> my $host = shift @ARGV;
> my $port = shift @ARGV || 8080;
> my $duration = shift @ARGV || 5 * 60; # 5 mins
>
> system("$MC_CRUSHER_HOME/mc-crusher --conf 
> $TESTBED_HOME/etc/memcached/mc-crusher/conf/$config.conf --ip $host --port 
> $port --timeout $duration");
>
> print "MC Crusher status: $!\n" if $!;
>
> my $serverAddr = "$host:$port";
> print "Server: $serverAddr\n";
> my @servers = ($serverAddr);
>
> my $memcached = new Cache::Memcached;
> $memcached->set_servers(\@servers);
>
> my $stats = $memcached->stats(['misc'])->{'hosts'}->{$serverAddr}->{'misc'};
> warn Dumper($stats);
>
> my $timestamp = localtime->datetime();
> my $cmd_per_sec = int($stats->{$config}) / $duration;
> my $bytes_written = $stats->{'bytes_written'};
> my $bytes_read = $stats->{'bytes_read'};
> my $time_system = $stats->{'rusage_system'};
> my $time_user = $stats->{'rusage_user'};
>
> my $today = localtime->ymd();
> my $folder = "$TESTBED_HOME/etc/memcached/$today";
> system("mkdir -p $folder");
> print "Cannot create '$folder': $!\n" if $!;
> my $filename = "$folder/memcached-mc-crusher-report-$host-$config.csv";
>
> my $headerPrefix = "${host}_${config}";
> open(my $fh, '>', $filename) or die "Could not open file '$filename': $!";
> print $fh 
> "timeStamp,${headerPrefix}_per_sec,$

Re: Is ARM64 officially supported ?

2020-05-01 Thread dormando
Hey,

Sorry I missed this (almost two months ago?)

If I were to do a serious performance comparison for ARM right now it
would have to be a sponsored project; I can't justify the time out of
personal curiosity right now :)

If you want some detailed analysis contact me privately and we can
discuss.

-Dormando

On Mon, 9 Mar 2020, Martin Grigorov wrote:

> Hi Dormando,
>
> On Mon, Mar 9, 2020 at 9:19 AM Martin Grigorov  
> wrote:
>   Hi Dormando,
>
> On Fri, Mar 6, 2020 at 10:15 PM dormando  wrote:
>   Yo,
>
>   Just to add in: yes we support ARM64. Though my build test platform is a
>   raspberry pi 3 and I haven't done any serious performance work. 
> packet.net
>   had an arm test platform program but I wasn't able to get time to do any
>   work.
>
>   From what I hear it does seem to perform fine on high end ARM64 
> platforms,
>   I just can't do any specific perf work unless someone donates hardware.
>
>
> I will talk with my managers!
> I think it should not be a problem to give you a SSH access to one of our 
> machines.
> What specs do you prefer ? CPU, disks, RAM, network, ...
> VM or bare metal ? 
> Preferred Linux flavor ?
>
> It would be good to compare it against whatever AMD64 instance you have. Or I 
> can also ask for two similar VMs - ARM64 and AMD64.
>
>
> My manager confirmed that we can give you access to an ARM64 machine. VM 
> would be easier to setup but bare metal is also possible.
> Please tell me the specs you prefer.
> We can give you access only temporarily though, i.e. we will have to shut it 
> down after you finish the testing, so it doesn't stay idle and waste budget.
> Later if you need it we can allocate it again.
> Would this work for you ?
>
> Martin 
>  
>
>
> Martin
>  
>
>   -Dormando
>
>   On Fri, 6 Mar 2020, Martin Grigorov wrote:
>
>   > Hi Emilio,
>   >
>   > On Fri, Mar 6, 2020 at 9:14 AM Emilio Fernandes 
>  wrote:
>   >       Thank you for sharing your experience, Martin!
>   > I've played for few days with Memcached on our ARM64 test servers and 
> so far I also didn't face any issues.
>   >
>   > Do you know of any performance benchmarks of Memcached on AMD64 and 
> ARM64 ? Or at least of a performance test suite that I can run
>   myself ?
>   >
>   >
>   > I am not aware of any public benchmark results for Memcached on AMD64 
> vs ARM64.
>   > But quick search in Google returned these promising results:
>   > 1) https://github.com/memcached/mc-crusher
>   > 2) https://github.com/scylladb/seastar/wiki/Memcached-Benchmark
>   > 3) https://github.com/RedisLabs/memtier_benchmark
>   > 4) http://www.lmdb.tech/bench/memcache/
>   >  
>   > I will try some of them next week and report back!
>   >
>   > Martin
>   >
>   >
>   > Gracias!
>   > Emilio
>   >
>   > сряда, 4 март 2020 г., 16:30:37 UTC+2, Martin Grigorov написа:
>   >       Hello Emilio!
>   > Welcome to this community!
>   >
>   > I am a regular user of Memcached and I can say that it works just 
> fine for us on ARM64!
>   > We are still at early testing stage but so far so good!
>   >
>   > I like the idea to have this mentioned on the website!
>   > It will bring confidence to more users!
>   >
>   > Regards,
>   > Martin
>   >
>   > On Wed, Mar 4, 2020 at 4:09 PM Emilio Fernandes 
>  wrote:
>   >       Hello Memcached community!
>   > I'd like to know whether ARM64 architecture is officially supported ?
>   > I've seen that Memcached is being tested on ARM64 at Travis but I do 
> not see anything on the website or in GitHub Wiki explicitly
>   saying
>   > whether it is officially supported or not.
>   >
>   > Gracias!
>   > Emilio
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memc...@googlegroups.com.
>   > To view this discussion on the web visit
>   > 
> https://groups.google.com/d/msgid/memcached/bb39d899-643b-4901-8188-a11138c37b82%40googlegroups.com.
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
> 

Re: Memcached repcached

2020-04-15 Thread dormando
Sorry, there's no support for repcached.

On Wed, 15 Apr 2020, pratibha sharma Jagnere wrote:

> Hi,
> has anyone used repcached package recently.
> I am trying to setup but when I run the memcached service, I am getting 
> segmentation fault.
>
> Is there any other alternative?
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/4f6bd405-fa9a-4e27-ba19-689dbe2b2040%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2004151422060.23694%40dskull.


Re: Test for DOS fix

2020-04-11 Thread dormando
Bit late in responding:

That wiki page is about the older DDoS. Not about a DoS. They're
completely different. This bug is more simply a server crash, not
convincing the server to do stupid shit.

I'm not going to write tests for it, but someone else is free to.

On Thu, 2 Apr 2020, Victor Rodriguez wrote:

> Hi memcached community
>
> I was very happy to see fixes for remote DoS (segfault) in parsing of the 
> binary protocol, introduced in 1.6.0.
>
> https://github.com/memcached/memcached/commit/02c6a2b62ddcb6fa4569a591d3461a156a636305
>
> I was wondering if we have a test case for this scenario. I was able to get 
> instructions on how the DOS attack works here
> https://github.com/memcached/memcached/wiki/DDOS
>
> however, I was wondering if we a coded test case, if not happy to write one
>
> Regards
>
> Victor Rodriguez
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/b895233f-c39c-4810-b702-3027804c5864%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2004112011340.23694%40dskull.


Re: memcached inservice authentication

2020-04-10 Thread dormando
Hey,

https://github.com/memcached/memcached/blob/master/doc/protocol.txt#L176

there is a no-dependency authentication system for the text protocol.
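
Rough shape of it from memory (double-check protocol.txt and --help for
the exact syntax on your version): you start the server pointed at an
auth file, and a client "logs in" by sending a set whose payload is the
credentials.

# auth file, one user:password per line (assumed format, see the docs)
printf 'samba:sekrit\n' > /etc/memcached-auth

memcached -p 11211 --auth-file=/etc/memcached-auth

# client side over the text protocol; key/flags/exptime on the auth set
# are ignored, only the "user password" payload matters
printf 'set auth 0 0 12\r\nsamba sekrit\r\nget somekey\r\nquit\r\n' | nc 127.0.0.1 11211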

On Thu, 9 Apr 2020, Sambasivarao Gajula wrote:

> Hello Memcached community,
> Do we have an in-service authentication facility in memcached instead of 
> using other rpms like SASL.
>
> If it is available, please share the details.
>
> Thanks,
> Samba.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/0bac9205-ca7c-4bc1-b4cf-59317588%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2004101837590.23694%40dskull.


Re: memcached for Windows (Win32 and Win64)

2020-04-04 Thread dormando
Hey,

Thanks for this! I haven't had time yet to look closely. I'll throw out a
few sentences to maybe start a convo:

I think this is the third or fourth windows port. Usually they come out,
run for a few versions, then the maintainer gets distracted and it stops.

Often the patches are huge/unwieldy or simply replace code so it won't run
on anything _but_ windows.

In that spirit, do you have any interest in finding what code can be
upstreamed to either minimize the size of the fork to something manageable
longer term, or at least fiddle in that direction?

My thought (though again I haven't looked closely at what you've done)
would be to break down the changes into small chunks that can be
individually reviewed and upstreamed so that the fork simply shrinks with
time. There should be some changes that're easier than others to upstream.

thanks,
-Dormando

On Sat, 4 Apr 2020, Jefty Negapatan wrote:

> Hello Memcached community!
> Just an update:
>
> Again, the unsupported options/features (may support in the future):
>
>  1. sasl (Upstream support since v1.4.3)
>  2. extstore (Upstream support since v1.5.4) Now supported! All tests passed!
>  3. -u/user (Better use Windows runas command, Windows explorer's Run as 
> different user context
> menu, or other Windows built-in tools)
>  4. -s/unix-socket
>  5. -k/lock-memory
>  6. -r/coredumps
>  7. seccomp
>
> Build/Tests/Artifacts: https://ci.appveyor.com/project/jefty/memcached-windows
>
> Alternative Downloads: 
> https://bintray.com/jefty/generic/memcached-windows/_latestVersion
>
> Regards,
> Jefty
>
> On Tuesday, March 31, 2020 at 9:35:47 PM UTC+2, Jefty Negapatan wrote:
>
>   Hello Memcached community!
>
>
>   Just an update (v1.6.3):
>
>
>   Unsupported/Disabled options/features (may support in the future):
>
>   tls (Upstream support since v1.5.13) Now supported! Built and tested 
> with latest OpenSSL
>   (1.1.1d). Built and tested with latest BoringSSL (chromium-stable). 
> BoringSSL reduced
>   the statically-linked executable size by ~34% (Win64 2.94MB -> 1.94MB). 
> BoringSSL
>   already rejects peer renegotiations by default so the unsupported 
> OpenSSL-only
>   SSL_in_before API used in memcached is no longer necessary. I just 
> disabled it using
>   OPENSSL_IS_BORINGSSL macro.
>
>   Testing:
>
>   NOTE: Since Perl-based test suite is not executed, test is lacking on 
> some areas. Use at
>   your own risk! Perl-based test suite is now ported and it was able to 
> detect issues and
>   I already fixed! Majority of the test suite change is only disabling 
> the unsupported
>   unix socket connection but no changes in test cases/scenarios. All 
> tests PASSED! This
>   will now give Windows users the confidence to use the native Windows 
> port.
>
>   GitHub Release: 
> https://github.com/jefyt/memcached-windows/releases/tag/1.6.3_mingw
>
>   Build/Test/Artifacts:
>   https://ci.appveyor.com/project/jefty/memcached-windows/builds/31859470
>
>   Again, the unsupported options/features (may support in the future):
>
>1. sasl (Upstream support since v1.4.3)
>2. extstore (Upstream support since v1.5.4)
>3. -u/user (Can use Windows runas command or Windows explorer's Run as 
> different user
>   context menu)
>4. -s/unix-socket
>5. -k/lock-memory
>6. -r/coredumps
>7. seccomp
>
>
>   Regards,
> Jefty
>
> On Friday, March 27, 2020 at 10:25:05 PM UTC+1, Jefty Negapatan wrote:
>   Hello Memcached community!
> Just an update:
>
> Unsupported/Disabled options/features (may support in the future):
>  1. tls (Upstream support since v1.5.13) Now supported! Built and tested with 
> latest
> OpenSSL (1.1.1d).
> Just like cURL and Google Chrome for Windows, TLS library is statically 
> linked.
> Statically-linked exe size: 327KB -> 3,008KB
>
> Regards,
> Jefty
>
> On Tuesday, March 24, 2020 at 11:26:13 PM UTC+1, Jefty Negapatan wrote:
>   Hello Memcached community!
>
> Just wanna share that I've ported the latest memcached (1.6.2) to Windows. 
> Based
> on my search if I'm not mistaken, the last native Windows build (not via
> Cygwin/WSL) is already outdated (1.4.5).
>
> Unsupported/Disabled options/features (may support in the future):
>  1. sasl (Upstream support since v1.4.3)
>  2. extstore (Upstream support since v1.5.4)
>  3. tls (Upstream support since v1.5.13)
>  4. -u/user (Can use Windows runas command or Windows explorer's Run as 
> different
> user context menu)
>  5. -s/unix-socket (Windows does not currently support Unix domain socket)
>  

Re: Is ARM64 officially supported ?

2020-03-22 Thread dormando
If you're still stuck I'll write more of a guide, just let me know.

On Sun, 22 Mar 2020, dormando wrote:

> Hey,
>
> I thought I wrote this in the rest of the e-mail + the README: it doesn't
> print stats at the end. you run the benchmark and then pull stats via
> other utilities. Take a close look at what I wrote and the files in the
> repo.
>
> On Sun, 22 Mar 2020, Martin Grigorov wrote:
>
> > Hi,
> >
> > On Thu, Mar 19, 2020 at 9:06 PM dormando  wrote:
> >   memtier is trash. Check the README for mc-crusher, I just updated it 
> > a bit
> >   a day or two ago. Those numbers are incredibly low, I'd have to dig a
> >   laptop out of the 90's to get something to perform that badly.
> >
> >   mc-crusher runs blindly and you use the other utilities that come 
> > with it
> >   to find command rates and sample the latency while the benchmark runs.
> >   Almost all 3rd party memcached benchmarks end up benchmarking the
> >   benchmark tool, not the server. I know mc-crusher doesn't make it very
> >   obvious how to use though, sorry.
> >
> >
> > What I miss to find so far is how to get the statistics after a run.
> > For example, I run 
> > ./mc-crusher --conf ./conf/asciiconf --ip 192.168.1.43 --port 12345 
> > --timeout 10
> >  
> > and the output is:
> >
> > --
> > ip address default: 192.168.1.43
> > port default: 12345
> > id 0 for key send value ascii_get
> > id 1 for key recv value blind_read
> > id 5 for key conns value 50
> > id 8 for key key_prefix value foobar
> > id 26 for key key_prealloc value 0
> > id 24 for key pipelines value 8
> > id 0 for key send value ascii_set
> > id 1 for key recv value blind_read
> > id 5 for key conns value 10
> > id 8 for key key_prefix value foobar
> > id 26 for key key_prealloc value 0
> > id 24 for key pipelines value 4
> > id 19 for key stop_after value 20
> > id 3 for key usleep value 1000
> > id 12 for key value_size value 10
> > setting a timeout
> > done initializing
> > timed run complete
> > --
> >
> > And I see that the server is busy at that time.
> > How to find out how many sets/gets/... were made ?
> >
> > Martin
> >  
> >
> >   A really quick untuned test against my raspberry pi 3 nets 92,000
> >   gets/sec. (mc-crusher running on a different machine). On a xeon 
> > machine
> >   I can get tens of millions of ops/sec depending on the read/write 
> > ratio.
> >
> >   On Thu, 19 Mar 2020, Martin Grigorov wrote:
> >
> >   > Hi
> >   >
> >   > I've made some local performance testing
> >   >
> >   > First I tried with https://github.com/memcached/mc-crusher but it 
> > seems it doesn't calculate any statistics after the load runs.
> >   >
> >   > The results below are from 
> > https://github.com/RedisLabs/memtier_benchmark
> >   >
> >   > 1) Text
> >   > ./memtier_benchmark --server XYZ --port 12345 -P memcache_text
> >   >
> >   > ARM64 text
> >   > 
> > =
> >   > Type         Ops/sec     Hits/sec   Misses/sec      Latency       
> > KB/sec
> >   > 
> > -
> >   > Sets          985.28          ---          ---     20.02700        
> > 67.22
> >   > Gets         9842.00         0.00      9842.00     20.01900       
> > 248.83
> >   > Waits           0.00          ---          ---      0.0         
> >  ---
> >   > Totals      10827.28         0.00      9842.00     20.02000       
> > 316.05
> >   >
> >   >
> >   > X86 text
> >   > 
> > =
> >   > Type         Ops/sec     Hits/sec   Misses/sec      Latency       
> > KB/sec
> >   > 
> > -
> >   > Sets          931.04          ---          ---     20.06800        
> > 63.52
> >   > Gets         9300.21         0.00      9300.21     20.32600       
> > 235.13
> >   > Waits           0.00          ---          ---    

Re: Is ARM64 officially supported ?

2020-03-22 Thread dormando
Hey,

I thought I wrote this in the rest of the e-mail + the README: it doesn't
print stats at the end. You run the benchmark and then pull stats via
other utilities. Take a close look at what I wrote and the files in the
repo.
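
The short version of "how many gets/sets did it do": snapshot the
counters before and after the run and diff them. Something like this
(sketch; host/port taken from your example run):

mcstat() { printf 'stats\r\nquit\r\n' | nc "$HOST" "$PORT" | tr -d '\r' | awk -v k="$1" '$2==k {print $3}'; }

HOST=192.168.1.43 PORT=12345
g0=$(mcstat cmd_get); s0=$(mcstat cmd_set)
./mc-crusher --conf ./conf/asciiconf --ip $HOST --port $PORT --timeout 10
g1=$(mcstat cmd_get); s1=$(mcstat cmd_set)
echo "gets: $((g1 - g0))  sets: $((s1 - s0))"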

On Sun, 22 Mar 2020, Martin Grigorov wrote:

> Hi,
>
> On Thu, Mar 19, 2020 at 9:06 PM dormando  wrote:
>   memtier is trash. Check the README for mc-crusher, I just updated it a 
> bit
>   a day or two ago. Those numbers are incredibly low, I'd have to dig a
>   laptop out of the 90's to get something to perform that badly.
>
>   mc-crusher runs blindly and you use the other utilities that come with 
> it
>   to find command rates and sample the latency while the benchmark runs.
>   Almost all 3rd party memcached benchmarks end up benchmarking the
>   benchmark tool, not the server. I know mc-crusher doesn't make it very
>   obvious how to use though, sorry.
>
>
> What I miss to find so far is how to get the statistics after a run.
> For example, I run 
> ./mc-crusher --conf ./conf/asciiconf --ip 192.168.1.43 --port 12345 --timeout 
> 10
>  
> and the output is:
>
> --
> ip address default: 192.168.1.43
> port default: 12345
> id 0 for key send value ascii_get
> id 1 for key recv value blind_read
> id 5 for key conns value 50
> id 8 for key key_prefix value foobar
> id 26 for key key_prealloc value 0
> id 24 for key pipelines value 8
> id 0 for key send value ascii_set
> id 1 for key recv value blind_read
> id 5 for key conns value 10
> id 8 for key key_prefix value foobar
> id 26 for key key_prealloc value 0
> id 24 for key pipelines value 4
> id 19 for key stop_after value 20
> id 3 for key usleep value 1000
> id 12 for key value_size value 10
> setting a timeout
> done initializing
> timed run complete
> --
>
> And I see that the server is busy at that time.
> How to find out how many sets/gets/... were made ?
>
> Martin
>  
>
>   A really quick untuned test against my raspberry pi 3 nets 92,000
>   gets/sec. (mc-crusher running on a different machine). On a xeon machine
>   I can get tens of millions of ops/sec depending on the read/write ratio.
>
>   On Thu, 19 Mar 2020, Martin Grigorov wrote:
>
>   > Hi
>   >
>   > I've made some local performance testing
>   >
>   > First I tried with https://github.com/memcached/mc-crusher but it 
> seems it doesn't calculate any statistics after the load runs.
>   >
>   > The results below are from 
> https://github.com/RedisLabs/memtier_benchmark
>   >
>   > 1) Text
>   > ./memtier_benchmark --server XYZ --port 12345 -P memcache_text
>   >
>   > ARM64 text
>   > 
> =
>   > Type         Ops/sec     Hits/sec   Misses/sec      Latency       
> KB/sec
>   > 
> -
>   > Sets          985.28          ---          ---     20.02700        
> 67.22
>   > Gets         9842.00         0.00      9842.00     20.01900       
> 248.83
>   > Waits           0.00          ---          ---      0.0          
> ---
>   > Totals      10827.28         0.00      9842.00     20.02000       
> 316.05
>   >
>   >
>   > X86 text
>   > 
> =
>   > Type         Ops/sec     Hits/sec   Misses/sec      Latency       
> KB/sec
>   > 
> -
>   > Sets          931.04          ---          ---     20.06800        
> 63.52
>   > Gets         9300.21         0.00      9300.21     20.32600       
> 235.13
>   > Waits           0.00          ---          ---      0.0          
> ---
>   > Totals      10231.26         0.00      9300.21     20.30200       
> 298.66
>   >
>   >
>   >
>   > 2) Binary
>   > ./memtier_benchmark --server XYZ --port 12345 -P memcache_binary
>   >
>   > ARM64 binary
>   > 
> =
>   > Type         Ops/sec     Hits/sec   Misses/sec      Latency       
> KB/sec
>   > 
> -
>   > Sets          829.68          ---          ---    

Re: Is ARM64 officially supported ?

2020-03-19 Thread dormando
      6000.00
> Hypervisor vendor:   KVM
> Virtualization type: full
> L1d cache:           32K
> L1i cache:           32K
> L2 cache:            1024K
> L3 cache:            30976K
> NUMA node0 CPU(s):   0-3
> Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
> cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm
> constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni 
> pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt
> tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 
> 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase 
> tsc_adjust
> bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap 
> clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat
> avx512_vnni md_clear flush_l1d arch_capabilities
>
> Both with 16GB RAM.
>
>
> Regards,
> Martin
>
> On Mon, Mar 9, 2020 at 11:23 AM Martin Grigorov  
> wrote:
>   Hi Dormando,
>
> On Mon, Mar 9, 2020 at 9:19 AM Martin Grigorov  
> wrote:
>   Hi Dormando,
>
> On Fri, Mar 6, 2020 at 10:15 PM dormando  wrote:
>   Yo,
>
>   Just to add in: yes we support ARM64. Though my build test platform is a
>   raspberry pi 3 and I haven't done any serious performance work. 
> packet.net
>   had an arm test platform program but I wasn't able to get time to do any
>   work.
>
>   From what I hear it does seem to perform fine on high end ARM64 
> platforms,
>   I just can't do any specific perf work unless someone donates hardware.
>
>
> I will talk with my managers!
> I think it should not be a problem to give you a SSH access to one of our 
> machines.
> What specs do you prefer ? CPU, disks, RAM, network, ...
> VM or bare metal ? 
> Preferred Linux flavor ?
>
> It would be good to compare it against whatever AMD64 instance you have. Or I 
> can also ask for two similar VMs - ARM64 and AMD64.
>
>
> My manager confirmed that we can give you access to an ARM64 machine. VM 
> would be easier to setup but bare metal is also possible.
> Please tell me the specs you prefer.
> We can give you access only temporarily though, i.e. we will have to shut it 
> down after you finish the testing, so it doesn't stay idle and waste
> budget. Later if you need it we can allocate it again.
> Would this work for you ?
>
> Martin 
>  
>
>
> Martin
>  
>
>   -Dormando
>
>   On Fri, 6 Mar 2020, Martin Grigorov wrote:
>
>   > Hi Emilio,
>   >
>   > On Fri, Mar 6, 2020 at 9:14 AM Emilio Fernandes 
>  wrote:
>   >       Thank you for sharing your experience, Martin!
>   > I've played for few days with Memcached on our ARM64 test servers and 
> so far I also didn't face any issues.
>   >
>   > Do you know of any performance benchmarks of Memcached on AMD64 and 
> ARM64 ? Or at least of a performance test suite that I can
>   run myself ?
>   >
>   >
>   > I am not aware of any public benchmark results for Memcached on AMD64 
> vs ARM64.
>   > But quick search in Google returned these promising results:
>   > 1) https://github.com/memcached/mc-crusher
>   > 2) https://github.com/scylladb/seastar/wiki/Memcached-Benchmark
>   > 3) https://github.com/RedisLabs/memtier_benchmark
>   > 4) http://www.lmdb.tech/bench/memcache/
>   >  
>   > I will try some of them next week and report back!
>   >
>   > Martin
>   >
>   >
>   > Gracias!
>   > Emilio
>   >
>   > сряда, 4 март 2020 г., 16:30:37 UTC+2, Martin Grigorov написа:
>   >       Hello Emilio!
>   > Welcome to this community!
>   >
>   > I am a regular user of Memcached and I can say that it works just 
> fine for us on ARM64!
>   > We are still at early testing stage but so far so good!
>   >
>   > I like the idea to have this mentioned on the website!
>   > It will bring confidence to more users!
>   >
>   > Regards,
>   > Martin
>   >
>   > On Wed, Mar 4, 2020 at 4:09 PM Emilio Fernandes 
>  wrote:
>   >       Hello Memcached community!
>   > I'd like to know whether ARM64 architecture is officially supported ?
>   > I've seen that Memcached is being tested on ARM64 at Travis but I do 
> not see anything on the website or in GitHub Wiki
>   explicitly saying
>   > whether it is officially supported or not.
>   >
>   > Gracias!
>   > Emilio
>

Re: Session ID monitoring

2020-03-11 Thread dormando
Hey,

Sorry I don't know anything about tomcat so I'm not sure what you're
asking. Can you talk to a Tomcat community?

On Mon, 9 Mar 2020, 김상철 wrote:

> Thank you for your answer.
> I am going to use Tomcat and Memcached to do Session Clustering. The 
> configuration is complete  but I don't know how to check SessionID in 
> Memcached.
> Do you have any related commands?
>
> 2020년 3월 10일 화요일 오전 4시 8분 13초 UTC+9, Dormando 님의 말:
>   Hey,
>
>   I'm not completely sure on what you're trying to do, but there's the
>   `watch` command (see doc/protocol.txt). It's missing a log of log points
>   still but acts similar to redis monitor. Clients are identified by their
>   file descriptor, not any sort of unique session id.
>
>   On Mon, 9 Mar 2020, 김상철 wrote:
>
>   > I am going to use memcached.
>   > In memcached, I want to check the SessionID that is accessed from 
> was. Is there a command?
>   > For example, when you use Redis, you see the session ID coming in 
> from redis-cli to monitor.
>   > Is there a command that functions the same in memcached?
>   >
>   > Best regards
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memc...@googlegroups.com.
>   > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/c7409640-0851-4ab5-a807-be870deffea8%40googlegroups.com.
>   >
>   >
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/a011b090-3184-472e-8a38-3e6c879557b8%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2003110036340.31672%40dskull.


Re: Session ID monitoring

2020-03-09 Thread dormando
Hey,

I'm not completely sure on what you're trying to do, but there's the
`watch` command (see doc/protocol.txt). It's missing a lot of log points
still but acts similar to redis monitor. Clients are identified by their
file descriptor, not any sort of unique session id.
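
For example (flags from memory, check doc/protocol.txt for the full
list):

# stream a line for every fetch and store, including the key and the client fd
printf 'watch fetchers mutations\r\n' | nc 127.0.0.1 11211

There's no session concept inside memcached itself, so if your session
manager stores sessions under keys derived from the session id, watching
the keys is about as close as you'll get.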

On Mon, 9 Mar 2020, 김상철 wrote:

> I am going to use memcached.
> In memcached, I want to check the SessionID that is accessed from was. Is 
> there a command?
> For example, when you use Redis, you see the session ID coming in from 
> redis-cli to monitor.
> Is there a command that functions the same in memcached?
>
> Best regards
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/c7409640-0851-4ab5-a807-be870deffea8%40googlegroups.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2003091207080.31672%40dskull.


Re: Proposal for an open standard for memcached auto-discovery

2020-03-08 Thread dormando
Hey,

So first part: https://github.com/memcached/memcached/wiki/ReleaseNotes160

We won't be adding further features to the binary protocol, instead
extending off of the new text based meta protocol. I'll be looking at
your proposal more closely in a week or so now that 1.6.0 is out.

-Dormando

On Thu, 27 Feb 2020, dormando wrote:

> Thanks for getting this started!
>
> It may take a while for me to review/think it over. I've been planning on
> tackling this but have a lot of research to do.
>
> On Thu, 27 Feb 2020, 'Iqram Mahmud' via memcached wrote:
>
> > Hi Dormando and memcached community, 
> >
> > I work in Google Cloud Platform and we want to propose an open standard for 
> > memcached node auto-discovery protocol that's independent of Cloud providers
> > and can work with on-premise infrastructure as well. Auto-discovery 
> > protocol will allow engineering teams to scale up or down the node count of 
> > a
> > memcached cluster without doing any client-side deployment. The design doc 
> > is here.
> > If we don't have an open standard on this, we might end up with the 
> > situation where every Cloud provider will have their own protocol, causing 
> > a lot of
> > migration pain to everyone else.
> >
> > We look forward to your comments. Once the design is final and approved by 
> > you, we'll start sending patches for memcached server and clients. 
> >
> > Thanks,
> > Iqram
> >
> > --
> >
> > ---
> > You received this message because you are subscribed to the Google Groups 
> > "memcached" group.
> > To unsubscribe from this group and stop receiving emails from it, send an 
> > email to memcached+unsubscr...@googlegroups.com.
> > To view this discussion on the web visit
> > https://groups.google.com/d/msgid/memcached/CA%2Bd7wM6HZ-kQa-wySB4m9aiqckBbnAUfUQVCKOVTktyuGHN92g%40mail.gmail.com.
> >
> >
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2002271445440.25120%40dskull.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2003082131570.807%40dskull.


1.6.0

2020-03-08 Thread dormando
https://github.com/memcached/memcached/wiki/ReleaseNotes160

enjoy

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2003081648091.807%40dskull.


Re: Is ARM64 officially supported ?

2020-03-08 Thread dormando
Added a blurb on the hardware page:
https://github.com/memcached/memcached/wiki/Hardware

On Sun, 8 Mar 2020, Emilio Fernandes wrote:

> Hola Dormando!
> Thank you for confirming that ARM64 is officially supported!
> Do you think it would be a good idea to mention the list of the supported 
> platforms somewhere on the website or at least in GitHub Wiki ?
>
> I don't think my employer could donate ARM64 hardware :-/ Sorry!
>
> Gracias!
> Emilio
>
>
>   Yo,
>
>   Just to add in: yes we support ARM64. Though my build test platform is a
>   raspberry pi 3 and I haven't done any serious performance work. 
> packet.net
>   had an arm test platform program but I wasn't able to get time to do any
>   work.
>
>   From what I hear it does seem to perform fine on high end ARM64 
> platforms,
>   I just can't do any specific perf work unless someone donates hardware.
>
>   -Dormando
>
>   On Fri, 6 Mar 2020, Martin Grigorov wrote:
>
>   > Hi Emilio,
>   >
>   > On Fri, Mar 6, 2020 at 9:14 AM Emilio Fernandes 
>  wrote:
>   >       Thank you for sharing your experience, Martin!
>   > I've played for few days with Memcached on our ARM64 test servers and 
> so far I also didn't face any issues.
>   >
>   > Do you know of any performance benchmarks of Memcached on AMD64 and 
> ARM64 ? Or at least of a performance test suite that I can run myself ?
>   >
>   >
>   > I am not aware of any public benchmark results for Memcached on AMD64 
> vs ARM64.
>   > But quick search in Google returned these promising results:
>   > 1) https://github.com/memcached/mc-crusher
>   > 2) https://github.com/scylladb/seastar/wiki/Memcached-Benchmark
>   > 3) https://github.com/RedisLabs/memtier_benchmark
>   > 4) http://www.lmdb.tech/bench/memcache/
>   >  
>   > I will try some of them next week and report back!
>   >
>   > Martin
>   >
>   >
>   > Gracias!
>   > Emilio
>   >
>   > сряда, 4 март 2020 г., 16:30:37 UTC+2, Martin Grigorov написа:
>   >       Hello Emilio!
>   > Welcome to this community!
>   >
>   > I am a regular user of Memcached and I can say that it works just 
> fine for us on ARM64!
>   > We are still at early testing stage but so far so good!
>   >
>   > I like the idea to have this mentioned on the website!
>   > It will bring confidence to more users!
>   >
>   > Regards,
>   > Martin
>   >
>   > On Wed, Mar 4, 2020 at 4:09 PM Emilio Fernandes 
>  wrote:
>   >       Hello Memcached community!
>   > I'd like to know whether ARM64 architecture is officially supported ?
>   > I've seen that Memcached is being tested on ARM64 at Travis but I do 
> not see anything on the website or in GitHub Wiki explicitly saying
>   > whether it is officially supported or not.
>   >
>   > Gracias!
>   > Emilio
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memc...@googlegroups.com.
>   > To view this discussion on the web visit
>   > 
> https://groups.google.com/d/msgid/memcached/bb39d899-643b-4901-8188-a11138c37b82%40googlegroups.com.
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memc...@googlegroups.com.
>   > To view this discussion on the web visit
>   
> https://groups.google.com/d/msgid/memcached/568921e6-0e29-4830-94be-355d1dbdab26%40googlegroups.com.
>   >
>   > --
>   >
>   > ---
>   > You received this message because you are subscribed to the Google 
> Groups "memcached" group.
>   > To unsubscribe from this group and stop receiving emails from it, 
> send an email to memc...@googlegroups.com.
>   > To view this discussion on the web visit
>   > 
> https://groups.google.com/d/msgid/memcached/CAMomwMpu%2BOcwRBhzn7_PMLe9c6_sau-wNmMTyoBGhrL1L9XTBQ%40mail.gmail.com.
>   >
>   >
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.

Re: Is ARM64 officially supported ?

2020-03-06 Thread dormando
Yo,

Just to add in: yes we support ARM64. Though my build test platform is a
raspberry pi 3 and I haven't done any serious performance work. packet.net
had an arm test platform program but I wasn't able to get time to do any
work.

From what I hear it does seem to perform fine on high end ARM64 platforms,
I just can't do any specific perf work unless someone donates hardware.

-Dormando

On Fri, 6 Mar 2020, Martin Grigorov wrote:

> Hi Emilio,
>
> On Fri, Mar 6, 2020 at 9:14 AM Emilio Fernandes 
>  wrote:
>   Thank you for sharing your experience, Martin!
> I've played for few days with Memcached on our ARM64 test servers and so far 
> I also didn't face any issues.
>
> Do you know of any performance benchmarks of Memcached on AMD64 and ARM64 ? 
> Or at least of a performance test suite that I can run myself ?
>
>
> I am not aware of any public benchmark results for Memcached on AMD64 vs 
> ARM64.
> But quick search in Google returned these promising results:
> 1) https://github.com/memcached/mc-crusher
> 2) https://github.com/scylladb/seastar/wiki/Memcached-Benchmark
> 3) https://github.com/RedisLabs/memtier_benchmark
> 4) http://www.lmdb.tech/bench/memcache/
>  
> I will try some of them next week and report back!
>
> Martin
>
>
> Gracias!
> Emilio
>
> сряда, 4 март 2020 г., 16:30:37 UTC+2, Martin Grigorov написа:
>   Hello Emilio!
> Welcome to this community!
>
> I am a regular user of Memcached and I can say that it works just fine for us 
> on ARM64!
> We are still at early testing stage but so far so good!
>
> I like the idea to have this mentioned on the website!
> It will bring confidence to more users!
>
> Regards,
> Martin
>
> On Wed, Mar 4, 2020 at 4:09 PM Emilio Fernandes  wrote:
>   Hello Memcached community!
> I'd like to know whether ARM64 architecture is officially supported ?
> I've seen that Memcached is being tested on ARM64 at Travis but I do not see 
> anything on the website or in GitHub Wiki explicitly saying
> whether it is officially supported or not.
>
> Gracias!
> Emilio
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memc...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/bb39d899-643b-4901-8188-a11138c37b82%40googlegroups.com.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/memcached/568921e6-0e29-4830-94be-355d1dbdab26%40googlegroups.com.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/CAMomwMpu%2BOcwRBhzn7_PMLe9c6_sau-wNmMTyoBGhrL1L9XTBQ%40mail.gmail.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2003061214140.25120%40dskull.


Re: Proposal for an open standard for memcached auto-discovery

2020-02-27 Thread dormando
Thanks for getting this started!

It may take a while for me to review/think it over. I've been planning on
tackling this but have a lot of research to do.

On Thu, 27 Feb 2020, 'Iqram Mahmud' via memcached wrote:

> Hi Dormando and memcached community, 
>
> I work in Google Cloud Platform and we want to propose an open standard for 
> memcached node auto-discovery protocol that's independent of Cloud providers
> and can work with on-premise infrastructure as well. Auto-discovery protocol 
> will allow engineering teams to scale up or down the node count of a
> memcached cluster without doing any client-side deployment. The design doc is 
> here.
> If we don't have an open standard on this, we might end up with the situation 
> where every Cloud provider will have their own protocol, causing a lot of
> migration pain to everyone else.
>
> We look forward to your comments. Once the design is final and approved by 
> you, we'll start sending patches for memcached server and clients. 
>
> Thanks,
> Iqram
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups 
> "memcached" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to memcached+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/memcached/CA%2Bd7wM6HZ-kQa-wySB4m9aiqckBbnAUfUQVCKOVTktyuGHN92g%40mail.gmail.com.
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/alpine.DEB.2.21.2002271445440.25120%40dskull.


OS X DTrace bug fix testers?

2020-02-14 Thread dormando
https://github.com/memcached/memcached/pull/592

I don't have a Mac. Looking for someone to give it a look over so I can
finally merge it :)



Re: Extstore

2020-02-10 Thread dormando
Hey,

If you want to play it safe you can allocate up to 95% of the drive. The
real limiter on how much storage you can use is how much RAM you have
relative to it. I.e., if key+metadata takes 200 bytes per object, and values are
500 bytes, you need 200 bytes of RAM for every 500 bytes of disk space.
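
As a rough back-of-the-envelope illustration of that ratio (a hedged sketch
using the 200/500-byte figures above and a hypothetical 500 GB drive, not
measured defaults), the RAM needed for item metadata works out like this:

# Hedged sketch (Python): estimate the RAM needed to index a given extstore
# allocation. The byte counts are the illustrative numbers from this thread.
key_meta_bytes = 200            # RAM kept per item (key + metadata)
value_bytes = 500               # average value size written to disk
disk_bytes = int(500e9 * 0.95)  # ~95% of a hypothetical 500 GB drive

items = disk_bytes // value_bytes
ram_gb = items * key_meta_bytes / 1e9
print(f"~{items:,} items on disk -> ~{ram_gb:.0f} GB of RAM for item metadata")

In other words, at a 200:500 metadata-to-value ratio the RAM requirement is
roughly 40% of the disk space you intend to fill, so RAM (not disk capacity)
is what usually runs out first.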

Compaction doesn't use extra space outside of the file; it uses pages
within the existing file.

I don't think there's a counter for when it's running, but there are
counters for how often it's run. The actual compaction is extremely fast;
at the default page size of 64 megabytes I doubt it'll take more than a
hundred milliseconds or so per page.

-Dormando

On Mon, 10 Feb 2020, 'theonajim' via memcached wrote:

> For the extstore feature, are there guidelines on the size to allocate for the 
> extstore file? For example, if the drive has 500 GB capacity, do we allocate all 
> 500 GB, 250 GB (50%), or
> something like 70% to 90% of drive capacity?
> Does extstore compaction require free space to re-write the extstore file? Is 
> there a stats output that shows compaction is running?
>
> Thanks
> --Theo


1.5.22 -> fix regression from 1.5.20

2020-02-01 Thread dormando
https://github.com/memcached/memcached/wiki/ReleaseNotes1522

Fixes a segfault introduced in 1.5.20. In case anyone now or in the future
finds a segfault in 1.5.20 or .21 :)

-Dormando



Re: Non-deterministic number of Memcached child processes other than worker threads

2019-12-15 Thread dormando
Yup :) looks like you left a system-installed version of memcached running
on the other one.

On Mon, 16 Dec 2019, Alireza Sanaee wrote:

> Hi,
> I made a stupid mistake; the big machine's Memcached version seems to be different. I 
> guess that is the problem.
>
> Thanks,
> Alireza
>
> On Mon, Dec 16, 2019 at 2:16 AM Alireza Sanaee  wrote:
>   Hi,
> I'm investigating the Linux load balancer; meanwhile, I'm trying to 
> understand what is happening in Memcached and just noticed a different number
> of Memcached threads on my machines. Linux doesn't always give immediate CPU 
> time to latency-sensitive threads/processes like Memcached worker
> threads, which causes HoL blocking and eventually long-tail latency; not a new 
> thing though. 
>
> But before that, I should know which worker threads I need to consider. You 
> gave me some pointers to the different internal threads of Memcached, though.
> Even intermittent activity of some internal worker threads of 
> Memcached (rebalancer, crawler, or ...) might block some requests (I'm not sure 
> if that's the case or not). The ULE scheduler sounds like it suffers from the 
> same flaw.
>
> The connection dispatcher is there, but I don't create that many connections; 
> I have a good number of clients that send requests at set rates
> until the end of the experiment. I'm using mutilate as my workload generator.
>
> Sure, I can check whether those are idle or not; I'm actually recording 
> everything from `/proc///stat` so the status is also
> available there. It is true that internal threads are idle most of the 
> time, and that is somewhat visible in the plot, but I just want to
> make sure that all the mostly idle ones are internal threads and not 
> workers. As you said, sometimes the workload is not spread evenly, making the
> worker threads more difficult to distinguish. I think this doesn't matter 
> now. 
>
> The main issue here is that I have 6 threads on my big machine and 10 threads 
> on my small machine, while I have the same Memcached configuration for
> both machines. I have attached the numbers for the two machines.
>
> Thanks,
> Alireza
>
> On Sun, Dec 15, 2019 at 3:40 PM dormando  wrote:
>   What're you trying to accomplish?
>
>   Can you include the output of "stats" and "stats settings" on both
>   machines?
>
>   Dumb question but you've looked at the output of `ps auxH`? If just 
> using
>   htop you may not see the threads that're idle.
>
>   TCP connections are pinned to a specific worker thread on connection.
>   Trivial benchmarks may not load the worker threads evenly, as the
>   connections are handed to threads evenly via round robin.
>
>   On Sun, 15 Dec 2019, Alireza Sanaee wrote:
>
>   > Hi,
>   > Thank you for the information,
>   >
>   > Sorry for miss using the word there, yes that's all threads. I'm 
> using the Memcached 1.5.20. I build it myself and then run my
>   experiments($MEMCACHED -u
>   > root -p 11211 -m $MAXMEM -c 1024 -t $MEMCACHED_THREADS). And I'm 
> checking the number of Memcached threads in htop output. It showed me
>   10 threads(workers
>   > included) in one machine and 6 threads(workers included) on the other 
> one.
>   >
>   > To share some more information, I have 200GB of memory for the bigger 
> machine that creates only 6 threads, and we have only 16GB of
>   memory for the machine
>   > that creates 10 threads. I'm just thinking maybe because the smaller 
> machine has less amount of space, and I'm actually filling in up
>   to 15GB then I might
>   > have more work to do and creates more threads.
>   >
>   > According to your information, I should expect at least 5 threads 
> other than the main workers. So 10 threads look OK, but how about
>   the bigger machine
>   > which spawns only 6 threads?  
>   >
>   > I also had difficulties in detecting the worker threads that respond 
> to GET/SET requests on my results, I have attached two pictures,
>   one of them shows
>   > the actual location of each worker on various cores, and the second 
> one is showing userspace time spent for each worker. Apparently
>   worker thread number
>   > 1,2,4 and 5 have spent more time in userspace, so I'm concluding here 
> that 1,2,4 and 5 are my actual worker threads, and worker 3 and
>   6 are just internal
>   > worker threads of Memcached. Does that make sense to you?
>   >
>   > Thanks,
>   > Alireza
>   >
>   >
>   > On Sun, Dec 15, 2019 at 7:19 AM dormando  wrote

Re: Non-deterministic number of Memcached child processes other than worker threads

2019-12-14 Thread dormando
What're you trying to accomplish?

Can you include the output of "stats" and "stats settings" on both
machines?

Dumb question but you've looked at the output of `ps auxH`? If just using
htop you may not see the threads that're idle.

TCP connections are pinned to a specific worker thread on connection.
Trivial benchmarks may not load the worker threads evenly, as the
connections are handed to threads evenly via round robin.
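
If it helps, here is a minimal sketch of the same check without htop. It
assumes a Linux host with a single memcached instance, relies only on pidof
and the /proc filesystem, and the thread names it prints may or may not be
descriptive depending on the memcached version:

import os
import subprocess

# Find the memcached PID (assumes exactly one running instance).
pid = subprocess.check_output(["pidof", "memcached"]).split()[0].decode()

# Each directory under /proc/<pid>/task is one thread; "comm" holds its name.
tids = sorted(os.listdir(f"/proc/{pid}/task"), key=int)
print(f"memcached pid {pid} has {len(tids)} threads:")
for tid in tids:
    with open(f"/proc/{pid}/task/{tid}/comm") as f:
        print(f"  tid {tid}: {f.read().strip()}")

This counts every thread, idle or not, which is the same set `ps auxH` would
show.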

On Sun, 15 Dec 2019, Alireza Sanaee wrote:

> Hi,
> Thank you for the information,
>
> Sorry for misusing the word there; yes, that's all threads. I'm using 
> Memcached 1.5.20. I built it myself and then run my experiments ($MEMCACHED -u
> root -p 11211 -m $MAXMEM -c 1024 -t $MEMCACHED_THREADS). And I'm checking the 
> number of Memcached threads in the htop output. It showed me 10 threads (workers
> included) on one machine and 6 threads (workers included) on the other one.
>
> To share some more information, I have 200GB of memory on the bigger machine 
> that creates only 6 threads, and only 16GB of memory on the machine
> that creates 10 threads. I'm thinking that maybe because the smaller machine 
> has less memory, and I'm actually filling it up to 15GB, there might be
> more work to do, causing it to create more threads.
>
> According to your information, I should expect at least 5 threads other than 
> the main workers. So 10 threads look OK, but how about the bigger machine
> which spawns only 6 threads?  
>
> I also had difficulty identifying the worker threads that respond to 
> GET/SET requests in my results. I have attached two pictures: one of them 
> shows
> the actual location of each worker on the various cores, and the second one 
> shows the userspace time spent by each worker. Apparently worker threads
> 1, 2, 4 and 5 have spent more time in userspace, so I'm concluding that 
> 1, 2, 4 and 5 are my actual worker threads, and workers 3 and 6 are just internal
> threads of Memcached. Does that make sense to you?
>
> Thanks,
> Alireza
>
>
> On Sun, Dec 15, 2019 at 7:19 AM dormando  wrote:
>   What version of memcached is on each machine?
>
>   memcached doesn't use processes, it's multi-threaded. Different versions
>   may have a different number of background threads. In the latest version
>   there should be at least:
>
>   - listener thread (main "process")
>   - N worker threads
>   - hash table maintenance thread
>   - async log thread (for `watch` commands)
>   - LRU maintainer thread
>   - LRU crawler thread
>   - slab rebalancer thread
>
>   they're all idle unless they need to do work. LRU maintenance thread is
>   probably the most active, since it executes LRU maintenance work 
> deferred
>   from the worker threads. Older versions have some of these threads, but
>   they were not enabled by default until 1.5.0.
>
>   -Dormando
>
>   On Sat, 14 Dec 2019, Alireza Sanaee wrote:
>
>   > Hello,
>   > I'm running Memcached on two different machines with different 
> specifications. And I specify the number of worker threads = 4 for both
>   machines. However,
>   > the number of child processes of the Memcached server is different on 
> two machines. On one of them, I have 6 Memcached child processes, and
>   on the other
>   > server, I have 10 Memcached child processes. I'm curious to 
> understand how many children processes Memcached is basically spawning other
>   than the worker
>   > threads, and for what tasks?
>   >
>   > I expect the Memcached to spawn only 4 children processes or a 
> certain number of children processes on two machines, however, it seems not
>   true.
>   >
>   > Thanks,
>   > Alireza
>   >

Re: Non-deterministic number of Memcached child processes other than worker threads

2019-12-14 Thread dormando
What version of memcached is on each machine?

memcached doesn't use processes, it's multi-threaded. Different versions
may have a different number of background threads. In the latest version
there should be at least:

- listener thread (main "process")
- N worker threads
- hash table maintenance thread
- async log thread (for `watch` commands)
- LRU maintainer thread
- LRU crawler thread
- slab rebalancer thread

they're all idle unless they need to do work. LRU maintenance thread is
probably the most active, since it executes LRU maintenance work deferred
from the worker threads. Older versions have some of these threads, but
they were not enabled by default until 1.5.0.

-Dormando

On Sat, 14 Dec 2019, Alireza Sanaee wrote:

> Hello,
> I'm running Memcached on two different machines with different 
> specifications. And I specify the number of worker threads = 4 for both 
> machines. However,
> the number of child processes of the Memcached server is different on two 
> machines. On one of them, I have 6 Memcached child processes, and on the other
> server, I have 10 Memcached child processes. I'm curious to understand how 
> many child processes Memcached is actually spawning other than the worker
> threads, and for what tasks?
>
> I expect Memcached to spawn only 4 child processes, or at least the same number 
> of child processes on both machines; however, that doesn't seem to be the case.
>
> Thanks,
> Alireza

