Re #2 - when more objects are stored in the cache, the hit ratio should be
higher, so the application might run faster, i.e. with fewer DB accesses.
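To make that concrete, here is a rough back-of-the-envelope sketch (the latency numbers are made up, just for illustration):

```python
# Hypothetical latencies: cache hit ~0.2 ms, DB query ~5 ms.
T_CACHE_MS = 0.2
T_DB_MS = 5.0

def effective_latency_ms(hit_ratio):
    # Average per-lookup cost: hits are served from the cache,
    # misses fall through to the database.
    return hit_ratio * T_CACHE_MS + (1 - hit_ratio) * T_DB_MS

print(round(effective_latency_ms(0.80), 2))  # 1.16
print(round(effective_latency_ms(0.95), 2))  # 0.44
```

So going from an 80% to a 95% hit ratio cuts the average lookup cost by more than half in this toy model, which is why a fuller cache usually means a faster application.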

Anyway, it sounds to me more related to slab allocation, as only a restart
solves it (not flush_all), if I understood you correctly.

Does it happen for any object size, or only for a specific object size range?
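One way to answer that is to look at the per-class `stats slabs` output from the memcached text protocol and see which size classes are out of free chunks. Here is a minimal sketch that parses that output (the sample text below is made up, but the stat key names are the real ones memcached emits):

```python
# Sample "stats slabs" output in the memcached text-protocol format.
SAMPLE = """\
STAT 1:chunk_size 96
STAT 1:total_chunks 10922
STAT 1:used_chunks 10922
STAT 1:free_chunks 0
STAT 2:chunk_size 120
STAT 2:total_chunks 8738
STAT 2:used_chunks 4200
STAT 2:free_chunks 4538
END"""

def parse_slab_stats(text):
    """Group 'STAT <class>:<key> <value>' lines into a dict per slab class."""
    slabs = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3 or parts[0] != "STAT" or ":" not in parts[1]:
            continue
        clsid, key = parts[1].split(":", 1)
        slabs.setdefault(int(clsid), {})[key] = int(parts[2])
    return slabs

for clsid, s in sorted(parse_slab_stats(SAMPLE).items()):
    print(f"class {clsid}: chunk_size={s['chunk_size']} "
          f"free_chunks={s['free_chunks']}")
```

A size class showing `free_chunks 0` while others have plenty free would point at exactly the kind of per-class pressure I mean: new sets in that class have to evict, while the rest of the memory can't be reused for it.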

On Tue, Jul 17, 2012 at 4:04 PM, David Morel <[email protected]> wrote:

>
>
> On Tuesday, July 17, 2012 12:26:16 PM UTC+2, Yiftach wrote:
>>
>> Few things that may help understanding your problem:
>>
>> 1. What is the status of your slab allocation? Is there enough room for
>> all slabs?
>>
>
> This happens when the memory gets close to full. However, there is not a
> large number of evictions.
> I would expect evictions to be made whenever needed, but not the process
> of making room for 1 object to take half a second.
>
>
>> 2. Do you see an increase in the request rate when your Memcached memory
>> becomes full with objects?
>>
>
> I don't think so. Why would that be the case? It's application-dependent,
> not server-dependent, right?
>
>
>> 3. How many threads are configured ?
>>
>
> The default, 4.
>
>
>>
>>
>> On Tue, Jul 17, 2012 at 1:11 PM, David Morel <[email protected]> wrote:
>>
>>> Hi memcached users/devs,
>>>
>>> I'm seeing occasional slowdowns (tens of milliseconds) in setting some
>>> keys on some big servers (80GB RAM allocated to memcached) which contain
>>> a large number of keys (many millions). The current version I use is
>>> 1.4.6 on RH6.
>>>
>>> The thing is, once I bounce the service (restart, not flush_all),
>>> everything becomes fine again. So could a large number of keys be the
>>> source of the issue (some memory allocation slowdown or something)?
>>>
>>> I don't see that many evictions on the box, and anyway, evicting an
>>> object to make room for another shouldn't take long, should it? Is there
>>> a remote possibility the large number of keys is at fault and splitting
>>> the daemons, like 2 or more instances per box, would fix it? Or is that
>>> a known issue fixed in a later release?
>>>
>>> Thanks for any insight.
>>>
>>> David Morel
>>>
>>
>>
>>
>> --
>> Yiftach Shoolman
>> +972-54-7634621
>>
>


-- 
Yiftach Shoolman
+972-54-7634621
