Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-08 Thread Shweta Agrawal
>Your evictions are literally zero, in these stats. You saw them before, 
when the instances were smaller? 
Yes, we have seen them before, and it impacted the business as well.

>There're a maximum of 63 classes, so making the number smaller has a 
limited effect. The more slab classes you have, the harder the automove 
balancer has to work to keep things even. I don't really recommend 
adjusting the value much if at all. 
Understood. Thank you for the details.

>All you probably had to do was turn on automove, but I don't have your 
stats from when you did have evictions so I can't say for sure. 
I understand. Thanks a lot for your time and valuable inputs. We will be sure 
to capture stats if we face the issue again; that should make it easier to 
analyse.

Grateful to you for your inputs and time. It helped. :) Cheers to the 
Memcached team for a great product. :)

Thank you,
Shweta

On Wednesday, July 8, 2020 at 11:27:09 AM UTC+5:30, Dormando wrote:
>
> > >Also your instance hasn't even malloc'ed half of its memory limit. You  
> > have over 6 gigabytes unused. There aren't any evictions despite the  
> > uptime being over two months.  
> > Was eviction of active items expected as well? We have eviction of unused 
> and unfetched items.  
>
> Your evictions are literally zero, in these stats. You saw them before, 
> when the instances were smaller? 
>
> > >Otherwise:  
> > 1. is the default in 1.5 anyway  
> > 2. is the default in 1.5.  
> > 3. don't bother changing this; it'll change the way the slabs scale.  
> > 4. 1.20 is probably fine. reducing it only helps if you have very 
> little  
> > memory.  
> > 5. also fine.  
>
> > Does increasing slab classes by reducing growth factor affect 
> > performance? I understand if we have more slab classes it can help in 
> > increasing storage overhead as less memory as we may find chunk size 
> closer to item size. 
>
> There're a maximum of 63 classes, so making the number smaller has a 
> limited effect. The more slab classes you have, the harder the automove 
> balancer has to work to keep things even. I don't really recommend 
> adjusting the value much if at all. 
>
> All you probably had to do was turn on automove, but I don't have your 
> stats from when you did have evictions so I can't say for sure. 
>
> > >If it were full and automove was off like it is now, you would see  
> > problems over time. Noted.Thank you for the input. :) 
> > 
> > Thank you, 
> > Shweta 
> > 
> > On Wednesday, July 8, 2020 at 10:00:30 AM UTC+5:30, Dormando wrote: 
> >   you said you were seeing evictions? Was this on a different 
> instance? 
> > 
> >   I don't really have any control or influence over what amazon 
> deploys for 
> >   elasticache. They've also changed the daemon. Some of your 
> settings are 
> >   different from the defaults that 1.5.10 has (automove should 
> default to 1 
> >   and hash_Algo should default to murmur). 
> > 
> >   Also your instance hasn't even malloc'ed half of its memory limit. 
> You 
> >   have over 6 gigabytes unused. There aren't any evictions despite 
> the 
> >   uptime being over two months. 
> > 
> >   So far as I can see you don't have to do anything? Unless a 
> different 
> >   instance was giving you trouble. 
> > 
> >   Otherwise: 
> >   1. is the default in 1.5 anyway 
> >   2. is the default in 1.5. 
> >   3. don't bother changing this; it'll change the way the slabs 
> scale. 
> >   4. 1.20 is probably fine. reducing it only helps if you have very 
> little 
> >   memory. 
> >   5. also fine. 
> > 
> >   but mainly 1) I can't really guarantee anything I say has 
> relevance since 
> >   I don't know what code is in elasticache and 2) your instance 
> isn't even 
> >   remotely full so I don't have any recommendations. 
> > 
> >   If it were full and automove was off like it is now, you would see 
> >   problems over time. 
> > 
> >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> > 
> >   > yes 
> >   > 
> >   > On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando 
> wrote: 
> >   >   Oh, so this is amazon elasticache? 
> >   > 
> >   >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> >   > 
> >   >   > We use aws for deployment and don't have that 
> information. What particularly looks odd in settings?  
> >   >   > 
> >   >   > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, 
> Dormando wrote: 
> >   >   >   what're your start arguments? the settings look a 
> little odd. ie; the full 
> >   >   >   commandline (censoring anything important) that 
> you used to start 
> >   >   >   memcached 
> >   >   > 
> >   >   >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> >   >   > 
> >   >   >   > Sorry. Here it is. 
> >   >   >   > 
> >   >   >   > On Wednesday, July 8, 2020 at 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
> >Also your instance hasn't even malloc'ed half of its memory limit. You 
> have over 6 gigabytes unused. There aren't any evictions despite the 
> uptime being over two months. 
> Was eviction of active items expected as well? We have eviction of unused and 
> unfetched items. 

Your evictions are literally zero, in these stats. You saw them before,
when the instances were smaller?

> >Otherwise: 
> 1. is the default in 1.5 anyway 
> 2. is the default in 1.5. 
> 3. don't bother changing this; it'll change the way the slabs scale. 
> 4. 1.20 is probably fine. reducing it only helps if you have very little 
> memory. 
> 5. also fine. 

> Does increasing the number of slab classes by reducing the growth factor
> affect performance? I understand that having more slab classes can improve
> storage efficiency, since we waste less memory by finding a chunk size closer 
> to the item size.

There're a maximum of 63 classes, so making the number smaller has a
limited effect. The more slab classes you have, the harder the automove
balancer has to work to keep things even. I don't really recommend
adjusting the value much if at all.
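
As a rough illustration of the class count (a sketch, not from the thread: it 
assumes the default chunk_size of 48, a 1MB item size limit, and ignores the 
chunk alignment the real allocator applies):

# Sketch: how slab chunk sizes scale with the growth factor.
# Assumptions: chunk_size=48 and item_size_max=1MB (memcached defaults);
# the real allocator also aligns chunk sizes, so counts are approximate.
def slab_chunk_sizes(chunk_size=48, growth_factor=1.25, item_size_max=1024 * 1024):
    sizes = []
    size = float(chunk_size)
    while size < item_size_max and len(sizes) < 63:  # hard cap of 63 classes
        sizes.append(int(size))
        size *= growth_factor
    return sizes

print(len(slab_chunk_sizes(growth_factor=1.25)))  # roughly 45 classes
print(len(slab_chunk_sizes(growth_factor=1.20)))  # roughly 55 classes

Either way the count stays under the 63-class cap, so shrinking the factor 
only adds a handful of extra classes for the balancer to manage.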

All you probably had to do was turn on automove, but I don't have your
stats from when you did have evictions so I can't say for sure.

> >If it were full and automove was off like it is now, you would see 
> problems over time. Noted.Thank you for the input. :)
>
> Thank you,
> Shweta
>
> On Wednesday, July 8, 2020 at 10:00:30 AM UTC+5:30, Dormando wrote:
>   you said you were seeing evictions? Was this on a different instance?
>
>   I don't really have any control or influence over what amazon deploys 
> for
>   elasticache. They've also changed the daemon. Some of your settings are
>   different from the defaults that 1.5.10 has (automove should default to 
> 1
>   and hash_Algo should default to murmur).
>
>   Also your instance hasn't even malloc'ed half of its memory limit. You
>   have over 6 gigabytes unused. There aren't any evictions despite the
>   uptime being over two months.
>
>   So far as I can see you don't have to do anything? Unless a different
>   instance was giving you trouble.
>
>   Otherwise:
>   1. is the default in 1.5 anyway
>   2. is the default in 1.5.
>   3. don't bother changing this; it'll change the way the slabs scale.
>   4. 1.20 is probably fine. reducing it only helps if you have very little
>   memory.
>   5. also fine.
>
>   but mainly 1) I can't really guarantee anything I say has relevance 
> since
>   I don't know what code is in elasticache and 2) your instance isn't even
>   remotely full so I don't have any recommendations.
>
>   If it were full and automove was off like it is now, you would see
>   problems over time.
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > yes
>   >
>   > On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando wrote:
>   >       Oh, so this is amazon elasticache?
>   >
>   >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       > We use aws for deployment and don't have that information. 
> What particularly looks odd in settings? 
>   >       >
>   >       > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando 
> wrote:
>   >       >       what're your start arguments? the settings look a 
> little odd. ie; the full
>   >       >       commandline (censoring anything important) that you 
> used to start
>   >       >       memcached
>   >       >
>   >       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >       >
>   >       >       > Sorry. Here it is.
>   >       >       >
>   >       >       > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, 
> Dormando wrote:
>   >       >       >       'stats settings' file is empty
>   >       >       >
>   >       >       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >       >       >
>   >       >       >       > Hi Dormando,
>   >       >       >       > Got the stats for production. Please find 
> attached files for stats settings. stats items, stats, stats slabs.
>   Summary for
>   >       all slabs.
>   >       >       >       >
>   >       >       >       > Other details that might help:
>   >       >       >       >  *  TTL is two days or more. 
>   >       >       >       >  *  Key length is in the range of 40-80 bytes.
>   >       >       >       > Below are the parameters that we plan to 
> change from the current settings:
>   >       >       >       >  1. slab_automove : from 0 to 1
>   >       >       >       >  2. hash_algorithm: from jenkins to murmur
>   >       >       >       >  3. chunk_size: from 48 to 297 (as we don't 
> have data of size less than that)
>   >       >       >       >  4. growth_factor: 1.25 to 1.20 ( Can 
> reducing this more help? Do more slab classes affect performance?)
>

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread Shweta Agrawal
>you said you were seeing evictions? Was this on a different instance? 
Yes, we had this issue before, and therefore we provisioned a larger instance 
to fix it temporarily. But now we want to reduce cost by using an instance 
with less memory.

>I don't really have any control or influence over what amazon deploys for 
elasticache. They've also changed the daemon. Some of your settings are 
different from the defaults that 1.5.10 has (automove should default to 1 
and hash_Algo should default to murmur). 
I see. I had assumed it would be the same. Thank you for clarifying. 

>Also your instance hasn't even malloc'ed half of its memory limit. You 
have over 6 gigabytes unused. There aren't any evictions despite the 
uptime being over two months. 
Was eviction of active items expected as well? We have eviction of unused and 
unfetched items. 

>Otherwise: 
1. is the default in 1.5 anyway 
2. is the default in 1.5. 
3. don't bother changing this; it'll change the way the slabs scale. 
4. 1.20 is probably fine. reducing it only helps if you have very little 
memory. 
5. also fine. 
Does increasing the number of slab classes by reducing the growth factor 
affect performance? I understand that having more slab classes can improve 
storage efficiency, since we waste less memory by finding a chunk size closer 
to the item size.

>If it were full and automove was off like it is now, you would see 
problems over time. 
Noted. Thank you for the input. :)

Thank you,
Shweta

On Wednesday, July 8, 2020 at 10:00:30 AM UTC+5:30, Dormando wrote:
>
> you said you were seeing evictions? Was this on a different instance? 
>
> I don't really have any control or influence over what amazon deploys for 
> elasticache. They've also changed the daemon. Some of your settings are 
> different from the defaults that 1.5.10 has (automove should default to 1 
> and hash_Algo should default to murmur). 
>
> Also your instance hasn't even malloc'ed half of its memory limit. You 
> have over 6 gigabytes unused. There aren't any evictions despite the 
> uptime being over two months. 
>
> So far as I can see you don't have to do anything? Unless a different 
> instance was giving you trouble. 
>
> Otherwise: 
> 1. is the default in 1.5 anyway 
> 2. is the default in 1.5. 
> 3. don't bother changing this; it'll change the way the slabs scale. 
> 4. 1.20 is probably fine. reducing it only helps if you have very little 
> memory. 
> 5. also fine. 
>
> but mainly 1) I can't really guarantee anything I say has relevance since 
> I don't know what code is in elasticache and 2) your instance isn't even 
> remotely full so I don't have any recommendations. 
>
> If it were full and automove was off like it is now, you would see 
> problems over time. 
>
> On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
>
> > yes 
> > 
> > On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando wrote: 
> >   Oh, so this is amazon elasticache? 
> > 
> >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> > 
> >   > We use aws for deployment and don't have that information. What 
> particularly looks odd in settings?  
> >   > 
> >   > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando 
> wrote: 
> >   >   what're your start arguments? the settings look a little 
> odd. ie; the full 
> >   >   commandline (censoring anything important) that you used 
> to start 
> >   >   memcached 
> >   > 
> >   >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> >   > 
> >   >   > Sorry. Here it is. 
> >   >   > 
> >   >   > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, 
> Dormando wrote: 
> >   >   >   'stats settings' file is empty 
> >   >   > 
> >   >   >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> >   >   > 
> >   >   >   > Hi Dormando, 
> >   >   >   > Got the stats for production. Please find 
> attached files for stats settings. stats items, stats, stats slabs. Summary 
> for 
> >   all slabs. 
> >   >   >   > 
> >   >   >   > Other details that might help: 
> >   >   >   >  *  TTL is two days or more.  
> >   >   >   >  *  Key length is in the range of 40-80 bytes. 
> >   >   >   > Below are the parameters that we plan to change 
> from the current settings: 
> >   >   >   >  1. slab_automove : from 0 to 1 
> >   >   >   >  2. hash_algorithm: from jenkins to murmur 
> >   >   >   >  3. chunk_size: from 48 to 297 (as we don't have 
> data of size less than that) 
> >   >   >   >  4. growth_factor: 1.25 to 1.20 ( Can reducing 
> this more help? Do more slab classes affect performance?) 
> >   >   >   >  5. max_item_size : from 4MB to 1MB (as our data 
> will never be more than 1MB large) 
> >   >   >   > Please let me know if different values for above 
> paramters can be more beneficial. 
> >   >   > 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
you said you were seeing evictions? Was this on a different instance?

I don't really have any control or influence over what amazon deploys for
elasticache. They've also changed the daemon. Some of your settings are
different from the defaults that 1.5.10 has (automove should default to 1
and hash_Algo should default to murmur).

Also your instance hasn't even malloc'ed half of its memory limit. You
have over 6 gigabytes unused. There aren't any evictions despite the
uptime being over two months.

So far as I can see you don't have to do anything? Unless a different
instance was giving you trouble.

Otherwise:
1. is the default in 1.5 anyway
2. is the default in 1.5.
3. don't bother changing this; it'll change the way the slabs scale.
4. 1.20 is probably fine. reducing it only helps if you have very little
memory.
5. also fine.

but mainly 1) I can't really guarantee anything I say has relevance since
I don't know what code is in elasticache and 2) your instance isn't even
remotely full so I don't have any recommendations.

If it were full and automove was off like it is now, you would see
problems over time.

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> yes
>
> On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando wrote:
>   Oh, so this is amazon elasticache?
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > We use aws for deployment and don't have that information. What 
> particularly looks odd in settings? 
>   >
>   > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote:
>   >       what're your start arguments? the settings look a little odd. 
> ie; the full
>   >       commandline (censoring anything important) that you used to 
> start
>   >       memcached
>   >
>   >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       > Sorry. Here it is.
>   >       >
>   >       > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando 
> wrote:
>   >       >       'stats settings' file is empty
>   >       >
>   >       >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >       >
>   >       >       > Hi Dormando,
>   >       >       > Got the stats for production. Please find attached 
> files for stats settings. stats items, stats, stats slabs. Summary for
>   all slabs.
>   >       >       >
>   >       >       > Other details that might help:
>   >       >       >  *  TTL is two days or more. 
>   >       >       >  *  Key length is in the range of 40-80 bytes.
>   >       >       > Below are the parameters that we plan to change from 
> the current settings:
>   >       >       >  1. slab_automove : from 0 to 1
>   >       >       >  2. hash_algorithm: from jenkins to murmur
>   >       >       >  3. chunk_size: from 48 to 297 (as we don't have data 
> of size less than that)
>   >       >       >  4. growth_factor: 1.25 to 1.20 ( Can reducing this 
> more help? Do more slab classes affect performance?)
>   >       >       >  5. max_item_size : from 4MB to 1MB (as our data will 
> never be more than 1MB large)
>   >       >       > Please let me know if different values for above 
> paramters can be more beneficial.
>   >       >       > Are there any other parameters which we should 
> consider to change or set?
>   >       >       >
>   >       >       > Also below are the calculations used for columns in 
> the summary shared. Can you please confirm if calculations are fine.
>   >       >       > 1) Total_Mem = total_pages*page_size  --> total 
> memory 
>   >       >       > 2) Strg_ovrHd = 
> (mem_requested/(used_chunks*chunk_size)) * 100 --> storage overhead
>   >       >       > 3) Free Memory = free_chunks * chunk_size   ---> free 
> memory
>   >       >       > 4) To Store = mem_requested      -->   actual memory 
> requested for storing data
>   >       >       >
>   >       >       > Thank you for your time and efforts in explaining 
> concepts.
>   >       >       > Shweta
>   >       >       >
>   >       >       >             > > the rest is free memory, which should 
> be measured separately.
>   >       >       >             > free memory for a class will be : 
> (free_chunks * chunk_size) 
>   >       >       >             > And total memory reserved by a class 
> will be : (total_pages*page_size)
>   >       >       >             >
>   >       >       >             > > If you're getting evictions in class 
> A but there's too much free memory in classes C, D, etc 
>   >       >       >             > > then you have a balance issue. for 
> example. An efficiency stat which just 
>   >       >       >             > > adds up the total pages doesn't tell 
> you what to do with it. 
>   >       >       >             > I see. Got your point.Storage overhead 
> can help in deciding the chunk_size and growth_factor. Let me add
>   >       

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread Shweta Agrawal
yes

On Wednesday, July 8, 2020 at 9:35:19 AM UTC+5:30, Dormando wrote:
>
> Oh, so this is amazon elasticache? 
>
> On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
>
> > We use aws for deployment and don't have that information. What 
> particularly looks odd in settings?  
> > 
> > On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote: 
> >   what're your start arguments? the settings look a little odd. ie; 
> the full 
> >   commandline (censoring anything important) that you used to start 
> >   memcached 
> > 
> >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> > 
> >   > Sorry. Here it is. 
> >   > 
> >   > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando 
> wrote: 
> >   >   'stats settings' file is empty 
> >   > 
> >   >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> >   > 
> >   >   > Hi Dormando, 
> >   >   > Got the stats for production. Please find attached files 
> for stats settings. stats items, stats, stats slabs. Summary for all slabs. 
> >   >   > 
> >   >   > Other details that might help: 
> >   >   >  *  TTL is two days or more.  
> >   >   >  *  Key length is in the range of 40-80 bytes. 
> >   >   > Below are the parameters that we plan to change from the 
> current settings: 
> >   >   >  1. slab_automove : from 0 to 1 
> >   >   >  2. hash_algorithm: from jenkins to murmur 
> >   >   >  3. chunk_size: from 48 to 297 (as we don't have data of 
> size less than that) 
> >   >   >  4. growth_factor: 1.25 to 1.20 ( Can reducing this more 
> help? Do more slab classes affect performance?) 
> >   >   >  5. max_item_size : from 4MB to 1MB (as our data will 
> never be more than 1MB large) 
> >   >   > Please let me know if different values for above 
> paramters can be more beneficial. 
> >   >   > Are there any other parameters which we should consider 
> to change or set? 
> >   >   > 
> >   >   > Also below are the calculations used for columns in the 
> summary shared. Can you please confirm if calculations are fine. 
> >   >   > 1) Total_Mem = total_pages*page_size  --> total memory  
> >   >   > 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) 
> * 100 --> storage overhead 
> >   >   > 3) Free Memory = free_chunks * chunk_size   ---> free 
> memory 
> >   >   > 4) To Store = mem_requested  -->   actual memory 
> requested for storing data 
> >   >   > 
> >   >   > Thank you for your time and efforts in explaining 
> concepts. 
> >   >   > Shweta 
> >   >   > 
> >   >   > > > the rest is free memory, which should be 
> measured separately. 
> >   >   > > free memory for a class will be : 
> (free_chunks * chunk_size)  
> >   >   > > And total memory reserved by a class will 
> be : (total_pages*page_size) 
> >   >   > > 
> >   >   > > > If you're getting evictions in class A 
> but there's too much free memory in classes C, D, etc  
> >   >   > > > then you have a balance issue. for 
> example. An efficiency stat which just  
> >   >   > > > adds up the total pages doesn't tell you 
> what to do with it.  
> >   >   > > I see. Got your point.Storage overhead can 
> help in deciding the chunk_size and growth_factor. Let me add 
> >   storage-overhead and 
> >   >   > free memory as well for 
> >   >   > > calculation. 
> >   >   > 
> >   >   > Most people don't have to worry about 
> growth_factor very much. Especially 
> >   >   > since the large item code was added, but it 
> has its own caveats. Growth 
> >   >   > factor is only typically useful if you have 
> _very_ statically sized 
> >   >   > objects. 
> >   >   > 
> >   >   > > One curious question: If we have an item 
> of 500Bytes and there is free memory only in class A(chunk_size: 100Bytes). 
> >   Do cache 
> >   >   > evict items from class with 
> >   >   > > largeer chunk_size or use multiple chunks 
> from class A? 
> >   >   > 
> >   >   > No, it will evict an item matching the 500 
> byte chunk size, and not touch 
> >   >   > A. This is where the memory balancer comes 
> in; it will move pages of 
> >   >   > memory between slab classes to keep the tail 
> age roughly the same between 
> >   >   > classes. It does this slowly. 
> >   >   > 
> >   >   > > Example: 
> >   >   > > In below scenario, when we try to store 
> item with 3MB, even when there was 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
Oh, so this is amazon elasticache?

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> We use aws for deployment and don't have that information. What particularly 
> looks odd in settings? 
>
> On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote:
>   what're your start arguments? the settings look a little odd. ie; the 
> full
>   commandline (censoring anything important) that you used to start
>   memcached
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > Sorry. Here it is.
>   >
>   > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando wrote:
>   >       'stats settings' file is empty
>   >
>   >       On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       > Hi Dormando,
>   >       > Got the stats for production. Please find attached files for 
> stats settings. stats items, stats, stats slabs. Summary for all slabs.
>   >       >
>   >       > Other details that might help:
>   >       >  *  TTL is two days or more. 
>   >       >  *  Key length is in the range of 40-80 bytes.
>   >       > Below are the parameters that we plan to change from the 
> current settings:
>   >       >  1. slab_automove : from 0 to 1
>   >       >  2. hash_algorithm: from jenkins to murmur
>   >       >  3. chunk_size: from 48 to 297 (as we don't have data of size 
> less than that)
>   >       >  4. growth_factor: 1.25 to 1.20 ( Can reducing this more 
> help? Do more slab classes affect performance?)
>   >       >  5. max_item_size : from 4MB to 1MB (as our data will never 
> be more than 1MB large)
>   >       > Please let me know if different values for above paramters 
> can be more beneficial.
>   >       > Are there any other parameters which we should consider to 
> change or set?
>   >       >
>   >       > Also below are the calculations used for columns in the 
> summary shared. Can you please confirm if calculations are fine.
>   >       > 1) Total_Mem = total_pages*page_size  --> total memory 
>   >       > 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 
> 100 --> storage overhead
>   >       > 3) Free Memory = free_chunks * chunk_size   ---> free memory
>   >       > 4) To Store = mem_requested      -->   actual memory 
> requested for storing data
>   >       >
>   >       > Thank you for your time and efforts in explaining concepts.
>   >       > Shweta
>   >       >
>   >       >             > > the rest is free memory, which should be 
> measured separately.
>   >       >             > free memory for a class will be : (free_chunks 
> * chunk_size) 
>   >       >             > And total memory reserved by a class will be : 
> (total_pages*page_size)
>   >       >             >
>   >       >             > > If you're getting evictions in class A but 
> there's too much free memory in classes C, D, etc 
>   >       >             > > then you have a balance issue. for example. 
> An efficiency stat which just 
>   >       >             > > adds up the total pages doesn't tell you what 
> to do with it. 
>   >       >             > I see. Got your point.Storage overhead can help 
> in deciding the chunk_size and growth_factor. Let me add
>   storage-overhead and
>   >       >             free memory as well for
>   >       >             > calculation.
>   >       >
>   >       >             Most people don't have to worry about 
> growth_factor very much. Especially
>   >       >             since the large item code was added, but it has 
> its own caveats. Growth
>   >       >             factor is only typically useful if you have 
> _very_ statically sized
>   >       >             objects.
>   >       >
>   >       >             > One curious question: If we have an item of 
> 500Bytes and there is free memory only in class A(chunk_size: 100Bytes).
>   Do cache
>   >       >             evict items from class with
>   >       >             > largeer chunk_size or use multiple chunks from 
> class A?
>   >       >
>   >       >             No, it will evict an item matching the 500 byte 
> chunk size, and not touch
>   >       >             A. This is where the memory balancer comes in; it 
> will move pages of
>   >       >             memory between slab classes to keep the tail age 
> roughly the same between
>   >       >             classes. It does this slowly.
>   >       >
>   >       >             > Example:
>   >       >             > In below scenario, when we try to store item 
> with 3MB, even when there was memory in class with smaller chunk_size, it
>   evicts
>   >       >             items from 512K class and
>   >       >             > other memory is blocked by smaller slabs.
>   >       >
>   >       >             Large (> 512KB) items are an exception. It will 
> try to 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread Shweta Agrawal
We use AWS for deployment and don't have that information. What in 
particular looks odd in the settings? 

On Wednesday, July 8, 2020 at 8:10:04 AM UTC+5:30, Dormando wrote:
>
> what're your start arguments? the settings look a little odd. ie; the full 
> commandline (censoring anything important) that you used to start 
> memcached 
>
> On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
>
> > Sorry. Here it is. 
> > 
> > On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando wrote: 
> >   'stats settings' file is empty 
> > 
> >   On Tue, 7 Jul 2020, Shweta Agrawal wrote: 
> > 
> >   > Hi Dormando, 
> >   > Got the stats for production. Please find attached files for 
> stats settings. stats items, stats, stats slabs. Summary for all slabs. 
> >   > 
> >   > Other details that might help: 
> >   >  *  TTL is two days or more.  
> >   >  *  Key length is in the range of 40-80 bytes. 
> >   > Below are the parameters that we plan to change from the current 
> settings: 
> >   >  1. slab_automove : from 0 to 1 
> >   >  2. hash_algorithm: from jenkins to murmur 
> >   >  3. chunk_size: from 48 to 297 (as we don't have data of size 
> less than that) 
> >   >  4. growth_factor: 1.25 to 1.20 ( Can reducing this more help? 
> Do more slab classes affect performance?) 
> >   >  5. max_item_size : from 4MB to 1MB (as our data will never be 
> more than 1MB large) 
> >   > Please let me know if different values for above paramters can 
> be more beneficial. 
> >   > Are there any other parameters which we should consider to 
> change or set? 
> >   > 
> >   > Also below are the calculations used for columns in the summary 
> shared. Can you please confirm if calculations are fine. 
> >   > 1) Total_Mem = total_pages*page_size  --> total memory  
> >   > 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 100 
> --> storage overhead 
> >   > 3) Free Memory = free_chunks * chunk_size   ---> free memory 
> >   > 4) To Store = mem_requested  -->   actual memory requested 
> for storing data 
> >   > 
> >   > Thank you for your time and efforts in explaining concepts. 
> >   > Shweta 
> >   > 
> >   > > > the rest is free memory, which should be 
> measured separately. 
> >   > > free memory for a class will be : (free_chunks * 
> chunk_size)  
> >   > > And total memory reserved by a class will be : 
> (total_pages*page_size) 
> >   > > 
> >   > > > If you're getting evictions in class A but 
> there's too much free memory in classes C, D, etc  
> >   > > > then you have a balance issue. for example. An 
> efficiency stat which just  
> >   > > > adds up the total pages doesn't tell you what to 
> do with it.  
> >   > > I see. Got your point.Storage overhead can help in 
> deciding the chunk_size and growth_factor. Let me add storage-overhead and 
> >   > free memory as well for 
> >   > > calculation. 
> >   > 
> >   > Most people don't have to worry about growth_factor 
> very much. Especially 
> >   > since the large item code was added, but it has its 
> own caveats. Growth 
> >   > factor is only typically useful if you have _very_ 
> statically sized 
> >   > objects. 
> >   > 
> >   > > One curious question: If we have an item of 
> 500Bytes and there is free memory only in class A(chunk_size: 100Bytes). Do 
> cache 
> >   > evict items from class with 
> >   > > largeer chunk_size or use multiple chunks from 
> class A? 
> >   > 
> >   > No, it will evict an item matching the 500 byte 
> chunk size, and not touch 
> >   > A. This is where the memory balancer comes in; it 
> will move pages of 
> >   > memory between slab classes to keep the tail age 
> roughly the same between 
> >   > classes. It does this slowly. 
> >   > 
> >   > > Example: 
> >   > > In below scenario, when we try to store item with 
> 3MB, even when there was memory in class with smaller chunk_size, it evicts 
> >   > items from 512K class and 
> >   > > other memory is blocked by smaller slabs. 
> >   > 
> >   > Large (> 512KB) items are an exception. It will try 
> to evict from the 
> >   > "large item" bucket, which is 512kb. It will try to 
> do this up to a few 
> >   > times, trying to free up enough memory to make space 
> for the large item. 
> >   > 
> >   > So to make space for a 3MB item, if the tail item is 
> 5MB in size or 1MB in 
> >   > size, they will still be evicted. If the tail age is 
> low compared to all 
> >

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
what're your start arguments? the settings look a little odd. ie; the full
commandline (censoring anything important) that you used to start
memcached

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> Sorry. Here it is.
>
> On Wednesday, July 8, 2020 at 12:38:38 AM UTC+5:30, Dormando wrote:
>   'stats settings' file is empty
>
>   On Tue, 7 Jul 2020, Shweta Agrawal wrote:
>
>   > Hi Dormando,
>   > Got the stats for production. Please find attached files for stats 
> settings. stats items, stats, stats slabs. Summary for all slabs.
>   >
>   > Other details that might help:
>   >  *  TTL is two days or more. 
>   >  *  Key length is in the range of 40-80 bytes.
>   > Below are the parameters that we plan to change from the current 
> settings:
>   >  1. slab_automove : from 0 to 1
>   >  2. hash_algorithm: from jenkins to murmur
>   >  3. chunk_size: from 48 to 297 (as we don't have data of size less 
> than that)
>   >  4. growth_factor: 1.25 to 1.20 ( Can reducing this more help? Do 
> more slab classes affect performance?)
>   >  5. max_item_size : from 4MB to 1MB (as our data will never be more 
> than 1MB large)
>   > Please let me know if different values for above paramters can be 
> more beneficial.
>   > Are there any other parameters which we should consider to change or 
> set?
>   >
>   > Also below are the calculations used for columns in the summary 
> shared. Can you please confirm if calculations are fine.
>   > 1) Total_Mem = total_pages*page_size  --> total memory 
>   > 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 100 --> 
> storage overhead
>   > 3) Free Memory = free_chunks * chunk_size   ---> free memory
>   > 4) To Store = mem_requested      -->   actual memory requested for 
> storing data
>   >
>   > Thank you for your time and efforts in explaining concepts.
>   > Shweta
>   >
>   >             > > the rest is free memory, which should be measured 
> separately.
>   >             > free memory for a class will be : (free_chunks * 
> chunk_size) 
>   >             > And total memory reserved by a class will be : 
> (total_pages*page_size)
>   >             >
>   >             > > If you're getting evictions in class A but there's 
> too much free memory in classes C, D, etc 
>   >             > > then you have a balance issue. for example. An 
> efficiency stat which just 
>   >             > > adds up the total pages doesn't tell you what to do 
> with it. 
>   >             > I see. Got your point.Storage overhead can help in 
> deciding the chunk_size and growth_factor. Let me add storage-overhead and
>   >             free memory as well for
>   >             > calculation.
>   >
>   >             Most people don't have to worry about growth_factor very 
> much. Especially
>   >             since the large item code was added, but it has its own 
> caveats. Growth
>   >             factor is only typically useful if you have _very_ 
> statically sized
>   >             objects.
>   >
>   >             > One curious question: If we have an item of 500Bytes 
> and there is free memory only in class A(chunk_size: 100Bytes). Do cache
>   >             evict items from class with
>   >             > largeer chunk_size or use multiple chunks from class A?
>   >
>   >             No, it will evict an item matching the 500 byte chunk 
> size, and not touch
>   >             A. This is where the memory balancer comes in; it will 
> move pages of
>   >             memory between slab classes to keep the tail age roughly 
> the same between
>   >             classes. It does this slowly.
>   >
>   >             > Example:
>   >             > In below scenario, when we try to store item with 3MB, 
> even when there was memory in class with smaller chunk_size, it evicts
>   >             items from 512K class and
>   >             > other memory is blocked by smaller slabs.
>   >
>   >             Large (> 512KB) items are an exception. It will try to 
> evict from the
>   >             "large item" bucket, which is 512kb. It will try to do 
> this up to a few
>   >             times, trying to free up enough memory to make space for 
> the large item.
>   >
>   >             So to make space for a 3MB item, if the tail item is 5MB 
> in size or 1MB in
>   >             size, they will still be evicted. If the tail age is low 
> compared to all
>   >             other classes, the memory balancer will eventually move 
> more pages into
>   >             the 512K slab class.
>   >
>   >             If you tend to store a lot of very large items, it works 
> better if the
>   >             instances are larger.
>   >
>   >             Memcached is more optimized for performance with small 
> items. if you try
> 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-07 Thread dormando
'stats settings' file is empty

On Tue, 7 Jul 2020, Shweta Agrawal wrote:

> Hi Dormando,
> Got the stats for production. Please find attached files for stats settings. 
> stats items, stats, stats slabs. Summary for all slabs.
>
> Other details that might help:
>  *  TTL is two days or more. 
>  *  Key length is in the range of 40-80 bytes.
> Below are the parameters that we plan to change from the current settings:
>  1. slab_automove : from 0 to 1
>  2. hash_algorithm: from jenkins to murmur
>  3. chunk_size: from 48 to 297 (as we don't have data of size less than that)
>  4. growth_factor: 1.25 to 1.20 ( Can reducing this more help? Do more slab 
> classes affect performance?)
>  5. max_item_size : from 4MB to 1MB (as our data will never be more than 1MB 
> large)
> Please let me know if different values for the above parameters can be more 
> beneficial.
> Are there any other parameters which we should consider to change or set?
>
> Also below are the calculations used for columns in the summary shared. Can 
> you please confirm if calculations are fine.
> 1) Total_Mem = total_pages*page_size  --> total memory 
> 2) Strg_ovrHd = (mem_requested/(used_chunks*chunk_size)) * 100 --> storage 
> overhead
> 3) Free Memory = free_chunks * chunk_size   ---> free memory
> 4) To Store = mem_requested      -->   actual memory requested for storing 
> data
>
> Thank you for your time and efforts in explaining concepts.
> Shweta
>
> > > the rest is free memory, which should be measured separately.
> > free memory for a class will be : (free_chunks * chunk_size) 
> > And total memory reserved by a class will be : 
> (total_pages*page_size)
> >
> > > If you're getting evictions in class A but there's too much 
> free memory in classes C, D, etc 
> > > then you have a balance issue. for example. An efficiency 
> stat which just 
> > > adds up the total pages doesn't tell you what to do with it. 
> > I see. Got your point.Storage overhead can help in deciding the 
> chunk_size and growth_factor. Let me add storage-overhead and
> free memory as well for
> > calculation.
>
> Most people don't have to worry about growth_factor very much. 
> Especially
> since the large item code was added, but it has its own caveats. 
> Growth
> factor is only typically useful if you have _very_ statically 
> sized
> objects.
>
> > One curious question: If we have an item of 500Bytes and there 
> is free memory only in class A(chunk_size: 100Bytes). Do cache
> evict items from class with
> > largeer chunk_size or use multiple chunks from class A?
>
> No, it will evict an item matching the 500 byte chunk size, and 
> not touch
> A. This is where the memory balancer comes in; it will move pages 
> of
> memory between slab classes to keep the tail age roughly the same 
> between
> classes. It does this slowly.
>
> > Example:
> > In below scenario, when we try to store item with 3MB, even 
> when there was memory in class with smaller chunk_size, it evicts
> items from 512K class and
> > other memory is blocked by smaller slabs.
>
> Large (> 512KB) items are an exception. It will try to evict from 
> the
> "large item" bucket, which is 512kb. It will try to do this up to 
> a few
> times, trying to free up enough memory to make space for the 
> large item.
>
> So to make space for a 3MB item, if the tail item is 5MB in size 
> or 1MB in
> size, they will still be evicted. If the tail age is low compared 
> to all
> other classes, the memory balancer will eventually move more 
> pages into
> the 512K slab class.
>
> If you tend to store a lot of very large items, it works better 
> if the
> instances are larger.
>
> Memcached is more optimized for performance with small items. if 
> you try
> to store a small item, it will evict exactly one item to make 
> space.
> However, for very large items (1MB+), the time it takes to read 
> the data
> from the network is so large that we can afford to do extra 
> processing.
>
> > 3Mb_items_eviction.png
> >
> >
> > Thank you,
> > Shweta
> >
> >
> > On Sunday, July 5, 2020 at 1:13:19 AM UTC+5:30, Dormando wrote:
> >       (memory_requested / (chunk_size * chunk_used)) * 100
> >
> >       is roughly the storage overhead of memory used in the 
> system. the rest is
> >       free memory, which should be measured separately. If 
> you're getting
> >       evictions in class A but there's too much free memory in 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-05 Thread Shweta Agrawal
Got it. Thank you for the explanation :)

On Sunday, July 5, 2020 at 9:10:23 AM UTC+5:30, Dormando wrote:
>
> On Sat, 4 Jul 2020, Shweta Agrawal wrote: 
>
> > > the rest is free memory, which should be measured separately. 
> > free memory for a class will be : (free_chunks * chunk_size)  
> > And total memory reserved by a class will be : (total_pages*page_size) 
> > 
> > > If you're getting evictions in class A but there's too much free 
> memory in classes C, D, etc  
> > > then you have a balance issue. for example. An efficiency stat which 
> just  
> > > adds up the total pages doesn't tell you what to do with it.  
> > I see. Got your point.Storage overhead can help in deciding the 
> chunk_size and growth_factor. Let me add storage-overhead and free memory 
> as well for 
> > calculation. 
>
> Most people don't have to worry about growth_factor very much. Especially 
> since the large item code was added, but it has its own caveats. Growth 
> factor is only typically useful if you have _very_ statically sized 
> objects. 
>
> > One curious question: If we have an item of 500Bytes and there is free 
> memory only in class A(chunk_size: 100Bytes). Do cache evict items from 
> class with 
> > largeer chunk_size or use multiple chunks from class A? 
>
> No, it will evict an item matching the 500 byte chunk size, and not touch 
> A. This is where the memory balancer comes in; it will move pages of 
> memory between slab classes to keep the tail age roughly the same between 
> classes. It does this slowly. 
>
> > Example: 
> > In below scenario, when we try to store item with 3MB, even when there 
> was memory in class with smaller chunk_size, it evicts items from 512K 
> class and 
> > other memory is blocked by smaller slabs. 
>
> Large (> 512KB) items are an exception. It will try to evict from the 
> "large item" bucket, which is 512kb. It will try to do this up to a few 
> times, trying to free up enough memory to make space for the large item. 
>
> So to make space for a 3MB item, if the tail item is 5MB in size or 1MB in 
> size, they will still be evicted. If the tail age is low compared to all 
> other classes, the memory balancer will eventually move more pages into 
> the 512K slab class. 
>
> If you tend to store a lot of very large items, it works better if the 
> instances are larger. 
>
> Memcached is more optimized for performance with small items. if you try 
> to store a small item, it will evict exactly one item to make space. 
> However, for very large items (1MB+), the time it takes to read the data 
> from the network is so large that we can afford to do extra processing. 
>
> > 3Mb_items_eviction.png 
> > 
> > 
> > Thank you, 
> > Shweta 
> > 
> > 
> > On Sunday, July 5, 2020 at 1:13:19 AM UTC+5:30, Dormando wrote: 
> >   (memory_requested / (chunk_size * chunk_used)) * 100 
> > 
> >   is roughly the storage overhead of memory used in the system. the 
> rest is 
> >   free memory, which should be measured separately. If you're 
> getting 
> >   evictions in class A but there's too much free memory in classes 
> C, D, etc 
> >   then you have a balance issue. for example. An efficiency stat 
> which just 
> >   adds up the total pages doesn't tell you what to do with it. 
> > 
> >   On Sat, 4 Jul 2020, Shweta Agrawal wrote: 
> > 
> >   > > I'll need the raw output from "stats items" and "stats slabs". 
> I don't  
> >   > > think that efficiency column is very helpful. ohkay no 
> worries. I can get by Tuesday and will share.  
> >   > 
> >   > Efficiency for each slab is calcuated as  
> >   >  (("stats slabs" -> memory_requested) / (("stats slabs" -> 
> total_pages) * page_size)) * 100 
> >   > 
> >   > 
> >   > Attaching script which has calculations for the same. The script 
> is from memcahe repo with additional calculation for efficiency.  
> >   > Will it be possible for you to verify if the efficiency 
> calculation is correct? 
> >   > 
> >   > Thank you, 
> >   > Shweta 
> >   > 
> >   > On Saturday, July 4, 2020 at 1:08:23 PM UTC+5:30, Dormando 
> wrote: 
> >   >   ah okay. 
> >   > 
> >   >   I'll need the raw output from "stats items" and "stats 
> slabs". I don't 
> >   >   think that efficiency column is very helpful. 
> >   > 
> >   >   On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
> >   > 
> >   >   > 
> >   >   > 
> >   >   > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, 
> Dormando wrote: 
> >   >   >   No attachment 
> >   >   > 
> >   >   >   On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
> >   >   > 
> >   >   >   > 
> >   >   >   > Wooo...so quick. :):) 
> >   >   >   > > Correct, close. It actually uses more like 3 
> 512k chunks and then one  
> >   >   >   > > smaller chunk from a different class to fit 
> exactly 1.6MB.  

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread dormando
On Sat, 4 Jul 2020, Shweta Agrawal wrote:

> > the rest is free memory, which should be measured separately.
> free memory for a class will be : (free_chunks * chunk_size) 
> And total memory reserved by a class will be : (total_pages*page_size)
>
> > If you're getting evictions in class A but there's too much free memory in 
> > classes C, D, etc 
> > then you have a balance issue. for example. An efficiency stat which just 
> > adds up the total pages doesn't tell you what to do with it. 
> I see. Got your point. Storage overhead can help in deciding the chunk_size 
> and growth_factor. Let me add storage-overhead and free memory as well for
> calculation.

Most people don't have to worry about growth_factor very much. Especially
since the large item code was added, but it has its own caveats. Growth
factor is only typically useful if you have _very_ statically sized
objects.

> One curious question: If we have an item of 500Bytes and there is free memory 
> only in class A(chunk_size: 100Bytes). Do cache evict items from class with
> larger chunk_size or use multiple chunks from class A?

No, it will evict an item matching the 500 byte chunk size, and not touch
A. This is where the memory balancer comes in; it will move pages of
memory between slab classes to keep the tail age roughly the same between
classes. It does this slowly.

> Example:
> In below scenario, when we try to store item with 3MB, even when there was 
> memory in class with smaller chunk_size, it evicts items from 512K class and
> other memory is blocked by smaller slabs.

Large (> 512KB) items are an exception. It will try to evict from the
"large item" bucket, which is 512kb. It will try to do this up to a few
times, trying to free up enough memory to make space for the large item.

So to make space for a 3MB item, if the tail item is 5MB in size or 1MB in
size, they will still be evicted. If the tail age is low compared to all
other classes, the memory balancer will eventually move more pages into
the 512K slab class.

If you tend to store a lot of very large items, it works better if the
instances are larger.

Memcached is more optimized for performance with small items. If you try
to store a small item, it will evict exactly one item to make space.
However, for very large items (1MB+), the time it takes to read the data
from the network is so large that we can afford to do extra processing.
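
As a back-of-the-envelope illustration of that chunking (a sketch only; the 
actual splitting happens inside memcached's slab code):

# Rough arithmetic: a large (>512KB) value is stored as several 512KB
# chunks plus one smaller chunk from another class for the remainder.
def large_item_chunks(nbytes, chunk=512 * 1024):
    full, remainder = divmod(nbytes, chunk)
    return full, remainder

print(large_item_chunks(int(1.6 * 1024 * 1024)))  # (3, ~102KB): 3 full chunks + a smaller one
print(large_item_chunks(3 * 1024 * 1024))         # (6, 0): six 512KB chunks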

> 3Mb_items_eviction.png
>
>
> Thank you,
> Shweta
>
>
> On Sunday, July 5, 2020 at 1:13:19 AM UTC+5:30, Dormando wrote:
>   (memory_requested / (chunk_size * chunk_used)) * 100
>
>   is roughly the storage overhead of memory used in the system. the rest 
> is
>   free memory, which should be measured separately. If you're getting
>   evictions in class A but there's too much free memory in classes C, D, 
> etc
>   then you have a balance issue. for example. An efficiency stat which 
> just
>   adds up the total pages doesn't tell you what to do with it.
>
>   On Sat, 4 Jul 2020, Shweta Agrawal wrote:
>
>   > > I'll need the raw output from "stats items" and "stats slabs". I 
> don't 
>   > > think that efficiency column is very helpful. ohkay no worries. I 
> can get by Tuesday and will share. 
>   >
>   > Efficiency for each slab is calcuated as 
>   >  (("stats slabs" -> memory_requested) / (("stats slabs" -> 
> total_pages) * page_size)) * 100
>   >
>   >
>   > Attaching script which has calculations for the same. The script is 
> from memcahe repo with additional calculation for efficiency. 
>   > Will it be possible for you to verify if the efficiency calculation 
> is correct?
>   >
>   > Thank you,
>   > Shweta
>   >
>   > On Saturday, July 4, 2020 at 1:08:23 PM UTC+5:30, Dormando wrote:
>   >       ah okay.
>   >
>   >       I'll need the raw output from "stats items" and "stats slabs". 
> I don't
>   >       think that efficiency column is very helpful.
>   >
>   >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       >
>   >       >
>   >       > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando 
> wrote:
>   >       >       No attachment
>   >       >
>   >       >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>   >       >
>   >       >       >
>   >       >       > Wooo...so quick. :):)
>   >       >       > > Correct, close. It actually uses more like 3 512k 
> chunks and then one 
>   >       >       > > smaller chunk from a different class to fit exactly 
> 1.6MB. 
>   >       >       > I see.Got it.
>   >       >       >
>   >       >       > >Can you share snapshots from "stats items" and 
> "stats slabs" for one of 
>   >       >       > these instances? 
>   >       >       >
>   >       >       > Currently I have summary of it, sharing the same 
> below. I can get snapshot by Tuesday as need to request for it.
>   >   

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread dormando
(memory_requested / (chunk_size * chunk_used)) * 100

is roughly the storage overhead of memory used in the system. the rest is
free memory, which should be measured separately. If you're getting
evictions in class A but there's too much free memory in classes C, D, etc
then you have a balance issue. for example. An efficiency stat which just
adds up the total pages doesn't tell you what to do with it.
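
To make that concrete, here is a small sketch of computing those per-class 
numbers from parsed "stats slabs" output (it assumes the stats have already 
been fetched and parsed into a dict per slab class, keyed by the field names 
that appear in the stats output):

PAGE_SIZE = 1024 * 1024  # slab pages are always 1MB

# Sketch: per-class numbers from parsed "stats slabs" output. Assumes
# `slab` is a dict of one class's stats, e.g. {"chunk_size": 480,
# "used_chunks": ..., "free_chunks": ..., "total_pages": ..., "mem_requested": ...}.
def class_summary(slab):
    used_bytes = slab["used_chunks"] * slab["chunk_size"]
    # share of used chunk memory that stored items actually requested
    storage_pct = 100.0 * slab["mem_requested"] / used_bytes if used_bytes else 0.0
    free_bytes = slab["free_chunks"] * slab["chunk_size"]   # free memory, measured separately
    total_bytes = slab["total_pages"] * PAGE_SIZE           # memory reserved by this class
    return storage_pct, free_bytes, total_bytes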

On Sat, 4 Jul 2020, Shweta Agrawal wrote:

> > I'll need the raw output from "stats items" and "stats slabs". I don't 
> > think that efficiency column is very helpful. ohkay no worries. I can get 
> > by Tuesday and will share. 
>
> Efficiency for each slab is calculated as 
>  (("stats slabs" -> memory_requested) / (("stats slabs" -> total_pages) * 
> page_size)) * 100
>
>
> Attaching a script which has the calculations for the same. The script is from 
> the memcache repo, with an additional calculation for efficiency. 
> Will it be possible for you to verify if the efficiency calculation is 
> correct?
>
> Thank you,
> Shweta
>
> On Saturday, July 4, 2020 at 1:08:23 PM UTC+5:30, Dormando wrote:
>   ah okay.
>
>   I'll need the raw output from "stats items" and "stats slabs". I don't
>   think that efficiency column is very helpful.
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   >
>   >
>   > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>   >       No attachment
>   >
>   >       On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>   >
>   >       >
>   >       > Wooo...so quick. :):)
>   >       > > Correct, close. It actually uses more like 3 512k chunks 
> and then one 
>   >       > > smaller chunk from a different class to fit exactly 1.6MB. 
>   >       > I see.Got it.
>   >       >
>   >       > >Can you share snapshots from "stats items" and "stats slabs" 
> for one of 
>   >       > these instances? 
>   >       >
>   >       > Currently I have summary of it, sharing the same below. I can 
> get snapshot by Tuesday as need to request for it.
>   >       >
>   >       > pages have value from total_pages from stats slab for each 
> slab
>   >       > item_size have value from chunk_size from stats slab for each 
> slab
>   >       > Used memory is calculated as pages*page size ---> This has to 
> corrected now.
>   >       >
>   >       >
>   >       > prod_stats.png
>   >       >
>   >       >
>   >       > > 90%+ are perfectly doable. You probably need to look a bit 
> more closely
>   >       > > into why you're not getting the efficiency you expect. The 
> detailed stats
>   >       > > output should point to why. I can help with that if it's 
> confusing.
>   >       >
>   >       > Great. Will surely ask for your input whenever I have 
> question. It is really kind of you to offer help. 
>   >       >
>   >       > > Either the slab rebalancer isn't keeping up or you actually 
> do have 39GB
>   >       > > of data and your expecations are a bit off. This will also 
> depending on
>   >       > > the TTL's you're setting and how often/quickly your items 
> change size.
>   >       > > Also things like your serialization method / compression / 
> key length vs
>   >       > > data length / etc.
>   >       >
>   >       > We have much less data than 39 GB. As after facing evictions, 
> it has been always kept higher than expected data-size.
>   >       > TTL is two days or more. 
>   >       > From my observation items size(data-length) is in the range 
> of 300Bytes to 500K after compression.
>   >       > Key length is in the range of 40-80 bytes.
>   >       >
>   >       > Thank you,
>   >       > Shweta
>   >       >  
>   >       > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando 
> wrote:
>   >       >       Hey,
>   >       >
>   >       >       > Putting my understanding to re-confirm:
>   >       >       > 1) Page size will always be 1MB and we cannot change 
> it. Moreover, it's not required to be changed.
>   >       >
>   >       >       Correct.
>   >       >
>   >       >       > 2) We can store items larger than 1MB and it is done 
> by combining chunks together. (example: let's say item size: ~1.6MB -->
>   4 slab
>   >       >       chunks(512k slab) from
>   >       >       > 2 pages will be used)
>   >       >
>   >       >       Correct, close. It actually uses more like 3 512k 
> chunks and then one
>   >       >       smaller chunk from a different class to fit exactly 
> 1.6MB.
>   >       >
>   >       >       > We use memcache in production and in the past we saw 
> evictions even when free memory was present. Also currently we use a cluster 
> with 39GB RAM in total to 
>   >       >       > cache data even when data size we expect is ~15GB to 
> 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread Shweta Agrawal
> I'll need the raw output from "stats items" and "stats slabs". I don't 
> think that efficiency column is very helpful. 
Okay, no worries. I can get it by Tuesday and will share. 

Efficiency for each slab is calculated as 
 (("stats slabs" -> memory_requested) / (("stats slabs" -> total_pages) * 
page_size)) * 100


Attaching a script which has the calculations for the same. The script is from 
the memcached repo with an additional calculation for efficiency. 
Will it be possible for you to verify if the efficiency calculation is 
correct?
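
For reference, here is a minimal Python sketch of the same calculation (not 
the attached script; it assumes a raw "stats slabs" dump on stdin, and since 
recent builds report the stat as mem_requested rather than memory_requested, 
the sketch accepts either name):

# efficiency_sketch.py -- minimal sketch, not the script attached to this mail.
# Feed it a raw "stats slabs" dump, e.g.: echo "stats slabs" | nc host 11211
import sys
from collections import defaultdict

PAGE_SIZE = 1024 * 1024  # pages are always 1MB, per this thread

def parse_stats_slabs(lines):
    """Collect per-class values from lines like 'STAT 5:total_pages 12'."""
    classes = defaultdict(dict)
    for line in lines:
        parts = line.split()
        # skip global lines such as 'STAT active_slabs 9' and the END marker
        if len(parts) != 3 or parts[0] != "STAT" or ":" not in parts[1]:
            continue
        clsid, key = parts[1].split(":", 1)
        classes[int(clsid)][key] = int(parts[2])
    return classes

if __name__ == "__main__":
    stats = parse_stats_slabs(sys.stdin)
    for clsid in sorted(stats):
        c = stats[clsid]
        used = c.get("total_pages", 0) * PAGE_SIZE
        if used == 0:
            continue
        requested = c.get("mem_requested", c.get("memory_requested", 0))
        eff = requested / used * 100
        print(f"class {clsid:2d}: pages={c['total_pages']:5d} efficiency={eff:5.1f}%")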

Thank you,
Shweta

On Saturday, July 4, 2020 at 1:08:23 PM UTC+5:30, Dormando wrote:
>
> ah okay. 
>
> I'll need the raw output from "stats items" and "stats slabs". I don't 
> think that efficiency column is very helpful. 
>
> On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
>
> > 
> > 
> > On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote: 
> >   No attachment 
> > 
> >   On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
> > 
> >   > 
> >   > Wooo...so quick. :):) 
> >   > > Correct, close. It actually uses more like 3 512k chunks and 
> then one  
> >   > > smaller chunk from a different class to fit exactly 1.6MB.  
> >   > I see. Got it. 
> >   > 
> >   > >Can you share snapshots from "stats items" and "stats slabs" 
> for one of  
> >   > these instances?  
> >   > 
> >   > Currently I have a summary of it, shared below. I can get a 
> snapshot by Tuesday as I need to request it. 
> >   > 
> >   > pages has the value of total_pages from stats slabs for each slab 
> >   > item_size has the value of chunk_size from stats slabs for each 
> slab 
> >   > Used memory is calculated as pages * page size ---> This has to 
> be corrected now. 
> >   > 
> >   > 
> >   > prod_stats.png 
> >   > 
> >   > 
> >   > > 90%+ are perfectly doable. You probably need to look a bit 
> more closely 
> >   > > into why you're not getting the efficiency you expect. The 
> detailed stats 
> >   > > output should point to why. I can help with that if it's 
> confusing. 
> >   > 
> >   > Great. Will surely ask for your input whenever I have a question. 
> It is really kind of you to offer help.  
> >   > 
> >   > > Either the slab rebalancer isn't keeping up or you actually do 
> have 39GB 
> >   > > of data and your expectations are a bit off. This will also 
> depend on 
> >   > > the TTL's you're setting and how often/quickly your items 
> change size. 
> >   > > Also things like your serialization method / compression / key 
> length vs 
> >   > > data length / etc. 
> >   > 
> >   > We have much less data than 39 GB. After facing evictions, the 
> memory has always been kept higher than the expected data-size. 
> >   > TTL is two days or more.  
> >   > From my observation, item sizes (data-length) are in the range of 
> 300 bytes to 500K after compression. 
> >   > Key length is in the range of 40-80 bytes. 
> >   > 
> >   > Thank you, 
> >   > Shweta 
> >   >   
> >   > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando 
> wrote: 
> >   >   Hey, 
> >   > 
> >   >   > Putting my understanding to re-confirm: 
> >   >   > 1) Page size will always be 1MB and we cannot change 
> it. Moreover, it's not required to be changed. 
> >   > 
> >   >   Correct. 
> >   > 
> >   >   > 2) We can store items larger than 1MB and it is done by 
> combining chunks together. (example: let's say item size: ~1.6MB --> 4 slab 
> >   >   chunks(512k slab) from 
> >   >   > 2 pages will be used) 
> >   > 
> >   >   Correct, close. It actually uses more like 3 512k chunks 
> and then one 
> >   >   smaller chunk from a different class to fit exactly 1.6MB. 
> >   > 
> >   >   > We use memcache in production and in the past we saw 
> evictions even when free memory was present. Also currently we use a cluster 
> with 39GB RAM in total to 
> >   >   > cache data even when data size we expect is ~15GB to 
> avoid eviction of active items. 
> >   > 
> >   >   Can you share snapshots from "stats items" and "stats 
> slabs" for one of 
> >   >   these instances? 
> >   > 
> >   >   > But as our data varies in size, it is possible to avoid 
> evictions by tuning parameters: chunk_size, growth_factor, slab_automove. 
> >   >   > Also I believe memcache is efficient and we can reduce cost by 
> reducing memory size for the cluster. 
> >   >   > So I am trying to find the best possible memory size and 
> parameters we can have. So I want to be clear with my understanding and 
> calculations. 
> >   >   > 
> >   >   > So while trying different parameters and putting all 
> calculations, I observed that total_pages * item_size_max > physical memory 
> 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-04 Thread dormando
ah okay.

I'll need the raw output from "stats items" and "stats slabs". I don't
think that efficiency column is very helpful.
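
If it helps, a minimal sketch for capturing that raw output (host and port are 
placeholders; plain "telnet host 11211" or nc works just as well):

# capture_stats.py -- minimal sketch for dumping raw "stats items" / "stats slabs".
# Host and port below are placeholders; point them at your instance.
import socket

def dump_stats(host="127.0.0.1", port=11211,
               commands=("stats items", "stats slabs")):
    out = []
    with socket.create_connection((host, port), timeout=5) as sock:
        f = sock.makefile("rwb")
        for cmd in commands:
            f.write((cmd + "\r\n").encode())
            f.flush()
            out.append("# " + cmd)
            # each stats response is a series of STAT lines terminated by END
            while True:
                line = f.readline().decode().rstrip("\r\n")
                out.append(line)
                if line == "END":
                    break
    return "\n".join(out)

if __name__ == "__main__":
    print(dump_stats())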

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

>
>
> On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>   No attachment
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   >
>   > Wooo...so quick. :):)
>   > > Correct, close. It actually uses more like 3 512k chunks and then 
> one 
>   > > smaller chunk from a different class to fit exactly 1.6MB. 
>   > I see. Got it.
>   >
>   > >Can you share snapshots from "stats items" and "stats slabs" for one 
> of 
>   > these instances? 
>   >
>   > Currently I have a summary of it, shared below. I can get a 
> snapshot by Tuesday as I need to request it.
>   >
>   > pages has the value of total_pages from stats slabs for each slab
>   > item_size has the value of chunk_size from stats slabs for each slab
>   > Used memory is calculated as pages * page size ---> This has to 
> be corrected now.
>   >
>   >
>   > prod_stats.png
>   >
>   >
>   > > 90%+ are perfectly doable. You probably need to look a bit more 
> closely
>   > > into why you're not getting the efficiency you expect. The detailed 
> stats
>   > > output should point to why. I can help with that if it's confusing.
>   >
>   > Great. Will surely ask for your input whenever I have a question. It is 
> really kind of you to offer help. 
>   >
>   > > Either the slab rebalancer isn't keeping up or you actually do have 
> 39GB
>   > > of data and your expectations are a bit off. This will also 
> depend on
>   > > the TTL's you're setting and how often/quickly your items change 
> size.
>   > > Also things like your serialization method / compression / key 
> length vs
>   > > data length / etc.
>   >
>   > We have much less data than 39 GB. After facing evictions, the memory 
> has always been kept higher than the expected data-size.
>   > TTL is two days or more. 
>   > From my observation, item sizes (data-length) are in the range of 
> 300 bytes to 500K after compression.
>   > Key length is in the range of 40-80 bytes.
>   >
>   > Thank you,
>   > Shweta
>   >  
>   > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       > Putting my understanding to re-confirm:
>   >       > 1) Page size will always be 1MB and we cannot change 
> it. Moreover, it's not required to be changed.
>   >
>   >       Correct.
>   >
>   >       > 2) We can store items larger than 1MB and it is done by 
> combining chunks together. (example: let's say item size: ~1.6MB --> 4 slab
>   >       chunks(512k slab) from
>   >       > 2 pages will be used)
>   >
>   >       Correct, close. It actually uses more like 3 512k chunks and 
> then one
>   >       smaller chunk from a different class to fit exactly 1.6MB.
>   >
>   >       > We use memcache in production and in the past we saw evictions 
> even when free memory was present. Also currently we use a cluster with 
> 39GB RAM in total to 
>   >       > cache data even when data size we expect is ~15GB to avoid 
> eviction of active items.
>   >
>   >       Can you share snapshots from "stats items" and "stats slabs" 
> for one of
>   >       these instances?
>   >
>   >       > But as our data varies in size, it is possible to avoid 
> evictions by tuning parameters: chunk_size, growth_factor, slab_automove.
>   Also I
>   >       believe memcache
>   >       > is efficient and we can reduce cost by reducing memory size 
> for cluster. 
>   >       > So I am trying to find the best possible memory size and 
> parameters we can have. So I want to be clear with my understanding and
>   calculations.
>   >       >
>   >       > So while trying different parameters and putting all 
> calculations, I observed that total_pages * item_size_max > physical memory 
> for
>   a
>   >       machine. And from
> all blogs and docs it did not match my understanding. But it's 
> clear now. Thanks to you.
>   >       >
>   >       > One last question: From my trials I find that we can achieve 
> ~90% storage efficiency with memcache (i.e. we need 10MB of physical 
> memory to store 9MB of 
>   >       > data). Do you recommend any ideal memory-size in terms of 
> percentage of expected data-size? 
>   >
>   >       90%+ are perfectly doable. You probably need to look a bit more 
> closely
>   >       into why you're not getting the efficiency you expect. The 
> detailed stats
>   >       output should point to why. I can help with that if it's 
> confusing.
>   >
>   >       Either the slab rebalancer isn't keeping up or you actually do 
> have 39GB
>   >  

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread Shweta Agrawal


On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>
> No attachment 
>
> On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
>
> > 
> > Wooo...so quick. :):) 
> > > Correct, close. It actually uses more like 3 512k chunks and then one  
> > > smaller chunk from a different class to fit exactly 1.6MB.  
> > I see. Got it. 
> > 
> > >Can you share snapshots from "stats items" and "stats slabs" for one 
> of  
> > these instances?  
> > 
> > Currently I have a summary of it, shared below. I can get a 
> snapshot by Tuesday as I need to request it. 
> > 
> > pages has the value of total_pages from stats slabs for each slab 
> > item_size has the value of chunk_size from stats slabs for each slab 
> > Used memory is calculated as pages * page size ---> This has to be corrected 
> now. 
> > 
> > 
> > prod_stats.png 
> > 
> > 
> > > 90%+ are perfectly doable. You probably need to look a bit more 
> closely 
> > > into why you're not getting the efficiency you expect. The detailed 
> stats 
> > > output should point to why. I can help with that if it's confusing. 
> > 
> > Great. Will surely ask for your input whenever I have a question. It is 
> really kind of you to offer help.  
> > 
> > > Either the slab rebalancer isn't keeping up or you actually do have 
> 39GB 
> > > of data and your expectations are a bit off. This will also depend 
> on 
> > > the TTL's you're setting and how often/quickly your items change size. 
> > > Also things like your serialization method / compression / key length 
> vs 
> > > data length / etc. 
> > 
> > We have much less data than 39 GB. After facing evictions, the memory 
> has always been kept higher than the expected data-size. 
> > TTL is two days or more.  
> > From my observation, item sizes (data-length) are in the range of 
> 300 bytes to 500K after compression. 
> > Key length is in the range of 40-80 bytes. 
> > 
> > Thank you, 
> > Shweta 
> >   
> > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote: 
> >   Hey, 
> > 
> >   > Putting my understanding to re-confirm: 
> >   > 1) Page size will always be 1MB and we cannot change 
> it. Moreover, it's not required to be changed. 
> > 
> >   Correct. 
> > 
> >   > 2) We can store items larger than 1MB and it is done by 
> combining chunks together. (example: let's say item size: ~1.6MB --> 4 slab 
> >   chunks(512k slab) from 
> >   > 2 pages will be used) 
> > 
> >   Correct, close. It actually uses more like 3 512k chunks and then 
> one 
> >   smaller chunk from a different class to fit exactly 1.6MB. 
> > 
> >   > We use memcache in production and in the past we saw evictions even 
> when free memory was present. Also currently we use a cluster with 39GB RAM 
> in total to 
> >   > cache data even when data size we expect is ~15GB to avoid 
> eviction of active items. 
> > 
> >   Can you share snapshots from "stats items" and "stats slabs" for 
> one of 
> >   these instances? 
> > 
> >   > But as our data varies in size, it is possible to avoid 
> evictions by tuning parameters: chunk_size, growth_factor, slab_automove. 
> Also I 
> >   believe memcache 
> >   > is efficient and we can reduce cost by reducing memory size for 
> cluster.  
> >   > So I am trying to find the best possible memory size and 
> parameters we can have.So want to be clear with my understanding and 
> calculations. 
> >   > 
> >   > So while trying different parameters and putting all 
> calculations, I observed that total_pages * item_size_max > physical memory 
> for a 
> >   machine. And from 
> >   > all blogs and docs it did not match my understanding. But it's 
> clear now. Thanks to you. 
> >   > 
> >   > One last question: From my trials I find that we can achieve 
> ~90% storage efficiency with memcache. (i.e we need 10MB of physical memory 
> to 
> >   store 9MB of 
> >   > data. Do you recommend any ideal memory-size in terms of 
> percentage of expected data-size?  
> > 
> >   90%+ are perfectly doable. You probably need to look a bit more 
> closely 
> >   into why you're not getting the efficiency you expect. The 
> detailed stats 
> >   output should point to why. I can help with that if it's 
> confusing. 
> > 
> >   Either the slab rebalancer isn't keeping up or you actually do 
> have 39GB 
> >   of data and your expectations are a bit off. This will also 
> depend on 
> >   the TTL's you're setting and how often/quickly your items change 
> size. 
> >   Also things like your serialization method / compression / key 
> length vs 
> >   data length / etc. 
> > 
> >   -Dormando 
> > 
> >   > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando 
> wrote: 
> >   >   Hey, 
> >   > 
> >   >   Looks like I never updated the manpage. In the past the 
> item size max was 
> >   >   achieved by changing the slab page size, but that hasn't 
> been true 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
No attachment

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

>
> Wooo...so quick. :):)
> > Correct, close. It actually uses more like 3 512k chunks and then one 
> > smaller chunk from a different class to fit exactly 1.6MB. 
> I see. Got it.
>
> >Can you share snapshots from "stats items" and "stats slabs" for one of 
> these instances? 
>
> Currently I have a summary of it, shared below. I can get a snapshot by 
> Tuesday as I need to request it.
>
> pages has the value of total_pages from stats slabs for each slab
> item_size has the value of chunk_size from stats slabs for each slab
> Used memory is calculated as pages * page size ---> This has to be corrected now.
>
>
> prod_stats.png
>
>
> > 90%+ are perfectly doable. You probably need to look a bit more closely
> > into why you're not getting the efficiency you expect. The detailed stats
> > output should point to why. I can help with that if it's confusing.
>
> Great. Will surely ask for your input whenever I have a question. It is really 
> kind of you to offer help. 
>
> > Either the slab rebalancer isn't keeping up or you actually do have 39GB
> > of data and your expectations are a bit off. This will also depend on
> > the TTL's you're setting and how often/quickly your items change size.
> > Also things like your serialization method / compression / key length vs
> > data length / etc.
>
> We have much less data than 39 GB. After facing evictions, the memory has 
> always been kept higher than the expected data-size.
> TTL is two days or more. 
> From my observation, item sizes (data-length) are in the range of 300 bytes to 
> 500K after compression.
> Key length is in the range of 40-80 bytes.
>
> Thank you,
> Shweta
>  
> On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>   Hey,
>
>   > Putting my understanding to re-confirm:
>   > 1) Page size will always be 1MB and we cannot change it. Moreover, 
> it's not required to be changed.
>
>   Correct.
>
>   > 2) We can store items larger than 1MB and it is done by combining 
> chunks together. (example: let's say item size: ~1.6MB --> 4 slab
>   chunks(512k slab) from
>   > 2 pages will be used)
>
>   Correct, close. It actually uses more like 3 512k chunks and then one
>   smaller chunk from a different class to fit exactly 1.6MB.
>
>   > We use memcache in production and in the past we saw evictions even when 
> free memory was present. Also currently we use a cluster with 39GB RAM in 
> total to
>   > cache data even when data size we expect is ~15GB to avoid eviction 
> of active items.
>
>   Can you share snapshots from "stats items" and "stats slabs" for one of
>   these instances?
>
>   > But as our data varies in size, it is possible to avoid evictions by 
> tuning parameters: chunk_size, growth_factor, slab_automove. Also I
>   believe memcache
>   > is efficient and we can reduce cost by reducing memory size for 
> cluster. 
>   > So I am trying to find the best possible memory size and parameters 
> we can have. So I want to be clear with my understanding and calculations.
>   >
>   > So while trying different parameters and putting all calculations, I 
> observed that total_pages * item_size_max > physical memory for a
>   machine. And from
> all blogs and docs it did not match my understanding. But it's clear 
> now. Thanks to you.
>   >
>   > One last question: From my trials I find that we can achieve ~90% 
> storage efficiency with memcache. (i.e we need 10MB of physical memory to
>   store 9MB of
>   > data. Do you recommend any ideal memory-size in terms of percentage of 
> expected data-size? 
>
>   90%+ are perfectly doable. You probably need to look a bit more closely
>   into why you're not getting the efficiency you expect. The detailed 
> stats
>   output should point to why. I can help with that if it's confusing.
>
>   Either the slab rebalancer isn't keeping up or you actually do have 39GB
>   of data and your expectations are a bit off. This will also depend on
>   the TTL's you're setting and how often/quickly your items change size.
>   Also things like your serialization method / compression / key length vs
>   data length / etc.
>
>   -Dormando
>
>   > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       Looks like I never updated the manpage. In the past the item 
> size max was
>   >       achieved by changing the slab page size, but that hasn't been 
> true for a
>   >       long time.
>   >
>   >       From ./memcached -h:
>   >       -m, --memory-limit=  item memory in megabytes (default: 64)
>   >
>   >       ... -m just means the memory limit in megabytes, abstract from 
> the page
>   >       size. I think that was always true.
>   >
>   >       In any recentish version, any item larger than half a page size 
> (512k) is

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread Shweta Agrawal
Sorry, forgot to mention: the summary is from one instance. The instance has 
13 GB of RAM.

On Saturday, July 4, 2020 at 9:22:13 AM UTC+5:30, Shweta Agrawal wrote:
>
>
> Wooo...so quick. :):)
>
> > Correct, close. It actually uses more like 3 512k chunks and then one 
> > smaller chunk from a different class to fit exactly 1.6MB. 
>
> I see. Got it.
>
> >Can you share snapshots from "stats items" and "stats slabs" for one of 
> these instances? 
>
> Currently I have a summary of it, shared below. I can get a snapshot 
> by Tuesday as I need to request it.
>
> pages has the value of total_pages from stats slabs for each slab
> item_size has the value of chunk_size from stats slabs for each slab
> Used memory is calculated as pages * page size ---> This has to be corrected 
> now.
>
>
> [image: prod_stats.png]
>
> > 90%+ are perfectly doable. You probably need to look a bit more closely
> > into why you're not getting the efficiency you expect. The detailed stats
> > output should point to why. I can help with that if it's confusing.
>
> Great. Will surely ask for your input whenever I have a question. It is 
> really kind of you to offer help. 
>
> > Either the slab rebalancer isn't keeping up or you actually do have 39GB
> > of data and your expectations are a bit off. This will also depend on
> > the TTL's you're setting and how often/quickly your items change size.
> > Also things like your serialization method / compression / key length vs
> > data length / etc.
>
> We have much less data than 39 GB. After facing evictions, the memory has 
> always been kept higher than the expected data-size.
> TTL is two days or more. 
> From my observation, item sizes (data-length) are in the range of 300 bytes to 
> 500K after compression.
> Key length is in the range of 40-80 bytes.
>
> Thank you,
> Shweta
>  
> On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>>
>> Hey, 
>>
>> > Putting my understanding to re-confirm: 
>> > 1) Page size will always be 1MB and we cannot change it. Moreover, it's 
>> not required to be changed. 
>>
>> Correct. 
>>
>> > 2) We can store items larger than 1MB and it is done by combining 
>> chunks together. (example: let's say item size: ~1.6MB --> 4 slab 
>> chunks(512k slab) from 
>> > 2 pages will be used) 
>>
>> Correct, close. It actually uses more like 3 512k chunks and then one 
>> smaller chunk from a different class to fit exactly 1.6MB. 
>>
>> > We use memcache in production and in the past we saw evictions even when 
>> free memory was present. Also currently we use a cluster with 39GB RAM in 
>> total to 
>> > cache data even when data size we expect is ~15GB to avoid eviction of 
>> active items. 
>>
>> Can you share snapshots from "stats items" and "stats slabs" for one of 
>> these instances? 
>>
>> > But as our data varies in size, it is possible to avoid evictions by 
>> tuning parameters: chunk_size, growth_factor, slab_automove. Also I believe 
>> memcache 
>> > is efficient and we can reduce cost by reducing memory size for 
>> cluster.  
>> > So I am trying to find the best possible memory size and parameters we 
>> can have. So I want to be clear with my understanding and calculations. 
>> > 
>> > So while trying different parameters and putting all calculations, I 
>> observed that total_pages * item_size_max > physical memory for a machine. 
>> And from 
>> > all blogs and docs it did not match my understanding. But it's clear 
>> now. Thanks to you. 
>> > 
>> > One last question: From my trials I find that we can achieve ~90% 
>> storage efficiency with memcache. (i.e we need 10MB of physical memory to 
>> store 9MB of 
>> > data. Do you recommend any ideal memory-size in terms of percentage of 
>> expected data-size?  
>>
>> 90%+ are perfectly doable. You probably need to look a bit more closely 
>> into why you're not getting the efficiency you expect. The detailed stats 
>> output should point to why. I can help with that if it's confusing. 
>>
>> Either the slab rebalancer isn't keeping up or you actually do have 39GB 
>> of data and your expectations are a bit off. This will also depend on 
>> the TTL's you're setting and how often/quickly your items change size. 
>> Also things like your serialization method / compression / key length vs 
>> data length / etc. 
>>
>> -Dormando 
>>
>> > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote: 
>> >   Hey, 
>> > 
>> >   Looks like I never updated the manpage. In the past the item size 
>> max was 
>> >   achieved by changing the slab page size, but that hasn't been 
>> true for a 
>> >   long time. 
>> > 
>> >   From ./memcached -h: 
>> >   -m, --memory-limit=  item memory in megabytes (default: 64) 
>> > 
>> >   ... -m just means the memory limit in megabytes, abstract from 
>> the page 
>> >   size. I think that was always true. 
>> > 
>> >   In any recentish version, any item larger than half a page size 
>> (512k) is 
>> >   created by stitching page chunks together. 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
Hey,

> Putting my understanding to re-confirm:
> 1) Page size will always be 1MB and we cannot change it. Moreover, it's not 
> required to be changed.

Correct.

> 2) We can store items larger than 1MB and it is done by combining chunks 
> together. (example: let's say item size: ~1.6MB --> 4 slab chunks(512k slab) 
> from
> 2 pages will be used)

Correct, close. It actually uses more like 3 512k chunks and then one
smaller chunk from a different class to fit exactly 1.6MB.
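
As a rough illustration (a sketch only: it assumes the fixed 1MB page size, so
512KB large chunks, and the exact size of the trailing chunk depends on your
slab classes rather than this simple remainder):

# chunking_sketch.py -- rough illustration of the chunk math, not memcached's
# actual allocator logic.
ITEM_BYTES = int(1.6 * 1024 * 1024)   # a ~1.6MB value
BIG_CHUNK = 512 * 1024                # half of the fixed 1MB page size

full_chunks, remainder = divmod(ITEM_BYTES, BIG_CHUNK)
print(f"{full_chunks} x 512KB chunks = {full_chunks * BIG_CHUNK} bytes")
print(f"plus one smaller chunk of roughly {remainder} bytes "
      f"from whichever slab class fits it")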

> We use memcache in production and in the past we saw evictions even when free 
> memory was present. Also currently we use a cluster with 39GB RAM in total to
> cache data even when data size we expect is ~15GB to avoid eviction of active 
> items.

Can you share snapshots from "stats items" and "stats slabs" for one of
these instances?

> But as our data varies in size, it is possible to avoid evictions by tuning 
> parameters: chunk_size, growth_factor, slab_automove. Also I believe memcache
> is efficient and we can reduce cost by reducing memory size for cluster. 
> So I am trying to find the best possible memory size and parameters we can 
> have. So I want to be clear with my understanding and calculations.
>
> So while trying different parameters and putting all calculations, I observed 
> that total_pages * item_size_max > physical memory for a machine. And from
> all blogs and docs it did not match my understanding. But it's clear now. 
> Thanks to you.
>
> One last question: From my trials I find that we can achieve ~90% storage 
> efficiency with memcache. (i.e we need 10MB of physical memory to store 9MB of
> data. Do you recommend any ideal memory-size in terms of percentage of expected 
> data-size? 

90%+ are perfectly doable. You probably need to look a bit more closely
into why you're not getting the efficiency you expect. The detailed stats
output should point to why. I can help with that if it's confusing.

Either the slab rebalancer isn't keeping up or you actually do have 39GB
of data and your expectations are a bit off. This will also depend on
the TTL's you're setting and how often/quickly your items change size.
Also things like your serialization method / compression / key length vs
data length / etc.

-Dormando

> On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>   Hey,
>
>   Looks like I never updated the manpage. In the past the item size max 
> was
>   achieved by changing the slab page size, but that hasn't been true for a
>   long time.
>
>   From ./memcached -h:
>   -m, --memory-limit=  item memory in megabytes (default: 64)
>
>   ... -m just means the memory limit in megabytes, abstract from the page
>   size. I think that was always true.
>
>   In any recentish version, any item larger than half a page size (512k) 
> is
>   created by stitching page chunks together. This prevents waste when an
>   item would be more than half a page size.
>
>   Is there a problem you're trying to track down?
>
>   I'll update the manpage.
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   > Hi,
>   > Sorry if I am repeating the question, I searched the list but could 
> not find a definite answer. So posting it.
>   >
>   > Memcache version: 1.5.10 
>   > I have started memcached with option: -I 4m (setting maximum item size 
> to 4MB). Verified it is set by command stats settings; I can see STAT
>   item_size_max
>   > 4194304.
>   >
>   > Documentation from git repository here states that:
>   >
>   > -I, --max-item-size=
>   > Override the default size of each slab page. The default size is 1mb. 
> Default
>   > value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 
> * 1024).
>   > Adjusting this value changes the item size limit.
>   > My understanding from the documentation is this option will allow saving 
> items with sizes up to 4MB and the page size for each slab will be 4MB
>   (as I set it as
>   > -I 4m).
>   >
>   > I am able to save items till 4MB but the page-size is still 1MB.
>   >
>   > -m memory size is default 64MB.
>   >
>   > Calculation:
>   > -> Calculated total pages used from stats slabs output parameter 
> total_pages = 64 (If page size is 4MB then total pages should not be more
>   than 16. Also
>   > when I store 8 items of ~3MB it uses 25 pages but if page size is 
> 4MB, it should use 8 pages right.)
>   >
>   > Can you please help me in understanding the behaviour?
>   >
>   > Attached files with details for output of command stats settings and 
> stats slabs.
>   > Below is the summarized view of the distribution. 
>   > First added items with variable sizes, then added items with 3MB 
> and above.
>   >
>   > data_distribution.png
>   >
>   >
>   >
>   > Please let me know in case more details are required or question is 
> not clear.
>   >  
>  

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread Shweta Agrawal

Hi Dormando,

Thanks a lot for the quick and prompt reply.

*Putting my understanding to re-confirm:*
1) Page size will always be 1MB and we cannot change it. Moreover, it's not 
required to be changed.
2) We can store items larger than 1MB and it is done by combining chunks 
together. (example: let's say item size: ~1.6MB --> 4 slab chunks(512k 
slab) from 2 pages will be used)

*>Is there a problem you're trying to track down?*
We use memcache in production and in the past we saw evictions even when free 
memory was present. Also currently we use a cluster with 39GB RAM in total to 
cache data even though the data size we expect is ~15GB, to avoid eviction of 
active items.
But as our data varies in size, it is possible to avoid evictions by tuning 
parameters: chunk_size, growth_factor, slab_automove. Also I believe 
memcache is efficient and we can reduce cost by reducing memory size for 
the cluster. 
So I am trying to find the best possible memory size and parameters we can 
have. So I want to be clear with my understanding and calculations.

So while trying different parameters and putting all calculations, I 
observed that *total_pages * item_size_max > physical memory for a machine. 
*And from all blogs and docs it did not match my understanding. But it's 
clear now. *Thanks to you.*

*One last question:* From my trials I find that we can achieve ~90% storage 
efficiency with memcache (i.e. we need 10MB of physical memory to store 9MB 
of data). Do you recommend any ideal memory-size in terms of percentage of 
expected data-size? 

Very grateful for the reply. Thanks a lot.
Shweta

On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>
> Hey, 
>
> Looks like I never updated the manpage. In the past the item size max was 
> achieved by changing the slab page size, but that hasn't been true for a 
> long time. 
>
> From ./memcached -h: 
> -m, --memory-limit=  item memory in megabytes (default: 64) 
>
> ... -m just means the memory limit in megabytes, abstract from the page 
> size. I think that was always true. 
>
> In any recentish version, any item larger than half a page size (512k) is 
> created by stitching page chunks together. This prevents waste when an 
> item would be more than half a page size. 
>
> Is there a problem you're trying to track down? 
>
> I'll update the manpage. 
>
> On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
>
> > Hi, 
> > Sorry if I am repeating the question, I searched the list but could not 
> find definite answer. So posting it. 
> > 
> > Memcache version: 1.5.10  
> > I have started memcached with option: -I 4m (setting maximum item size to 
> 4MB). Verified it is set by command stats settings; I can see STAT 
> item_size_max 
> > 4194304. 
> > 
> > Documentation from git repository here states that: 
> > 
> > -I, --max-item-size= 
> > Override the default size of each slab page. The default size is 1mb. 
> Default 
> > value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 * 
> 1024). 
> > Adjusting this value changes the item size limit. 
> > My understanding from the documentation is this option will allow saving 
> items with sizes up to 4MB and the page size for each slab will be 4MB (as I 
> set it as 
> > -I 4m). 
> > 
> > I am able to save items till 4MB but the page-size is still 1MB. 
> > 
> > -m memory size is default 64MB. 
> > 
> > Calculation: 
> > -> Calculated total pages used from stats slabs output 
> parameter total_pages = 64 (If page size is 4MB then total pages should not 
> be more than 16. Also 
> > when I store 8 items of ~3MB it uses 25 pages but if page size is 4MB, 
> it should use 8 pages right.) 
> > 
> > Can you please help me in understanding the behaviour? 
> > 
> > Attached files with details for output of command stats settings and 
> stats slabs. 
> > Below is the summarized view of the distribution.  
> > First added items with variable sizes, then added items with 3MB 
> and above. 
> > 
> > data_distribution.png 
> > 
> > 
> > 
> > Please let me know in case more details are required or question is not 
> clear. 
> >   
> > Thank You, 
> >  Shweta 
> > 



Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
Hey,

Looks like I never updated the manpage. In the past the item size max was
achieved by changing the slab page size, but that hasn't been true for a
long time.

From ./memcached -h:
-m, --memory-limit=  item memory in megabytes (default: 64)

... -m just means the memory limit in megabytes, abstract from the page
size. I think that was always true.

In any recentish version, any item larger than half a page size (512k) is
created by stitching page chunks together. This prevents waste when an
item would be more than half a page size.
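
As a rough sanity check against the page counts in your first mail (a sketch
only: it ignores item headers, key/suffix overhead and per-class rounding, so
real numbers will differ a bit):

# page_estimate.py -- rough estimate of pages consumed by large chunked items,
# assuming the fixed 1MB page size described above.
PAGE_SIZE = 1024 * 1024

def pages_for(item_bytes, count):
    # each large item is stitched from chunks carved out of 1MB pages, so the
    # data alone needs roughly item_bytes / PAGE_SIZE pages per item
    return count * item_bytes / PAGE_SIZE

# 8 items of ~3MB each -> about 24 pages, close to the 25 total_pages observed
print(round(pages_for(3 * 1024 * 1024, 8)))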

Is there a problem you're trying to track down?

I'll update the manpage.

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

> Hi,
> Sorry if I am repeating the question, I searched the list but could not find 
> a definite answer. So posting it.
>
> Memcache version: 1.5.10 
> I have started memcached with option: -I 4m (setting maximum item size to 
> 4MB). Verified it is set by command stats settings; I can see STAT 
> item_size_max
> 4194304.
>
> Documentation from git repository here states that:
>
> -I, --max-item-size=
> Override the default size of each slab page. The default size is 1mb. Default
> value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 * 1024).
> Adjusting this value changes the item size limit.
> My understanding from the documentation is this option will allow saving items 
> with sizes up to 4MB and the page size for each slab will be 4MB (as I set it as
> -I 4m).
>
> I am able to save items till 4MB but the page-size is still 1MB.
>
> -m memory size is default 64MB.
>
> Calculation:
> -> Calculated total pages used from stats slabs output parameter total_pages 
> = 64 (If page size is 4MB then total pages should not be more than 16. Also
> when I store 8 items of ~3MB it uses 25 pages but if page size is 4MB, it 
> should use 8 pages right.)
>
> Can you please help me in understanding the behaviour?
>
> Attached files with details for output of command stats settings and stats 
> slabs.
> Below is the summarized view of the distribution. 
> First added items with variable sizes, then added items with 3MB and 
> above.
>
> data_distribution.png
>
>
>
> Please let me know in case more details are required or question is not clear.
>  
> Thank You,
>  Shweta
>
