One more guess: could it be that you are limited by the network bandwidth
of your server?

I have come across many Memcached deployments where this was the case.
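
A quick way to check: sample the bytes_read / bytes_written counters from
the server's stats over a short interval and compare the implied throughput
with your NIC's capacity. A minimal sketch, assuming the python-memcached
client and a server on 127.0.0.1:11211 (adjust both to your setup):

  import time
  import memcache  # python-memcached, assumed available

  mc = memcache.Client(['127.0.0.1:11211'])  # your server here

  def sample():
      # get_stats() returns [(server, {stat_name: value, ...}), ...]
      server, stats = mc.get_stats()[0]
      return int(stats['bytes_read']), int(stats['bytes_written'])

  r1, w1 = sample()
  time.sleep(10)
  r2, w2 = sample()

  # if the outbound rate sits near the NIC's limit (roughly 110 MB/s on
  # gigabit), the network is the bottleneck rather than memcached itself
  print('in:  %.1f MB/s' % ((r2 - r1) / 10.0 / 1e6))
  print('out: %.1f MB/s' % ((w2 - w1) / 10.0 / 1e6))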

On Tue, Jul 17, 2012 at 7:47 PM, David Morel <[email protected]> wrote:

> On 17 Jul 2012, at 15:33, Yiftach Shoolman wrote:
>
>> Re #2 - when more objects are stored in the cache, the hit ratio should be
>> higher --> the application might run faster, i.e. with fewer DB accesses.
>>
>
> No relation here: I was really talking about the timing of bare set() calls,
> regardless of anything else.
>
>
>> Anyway, it sounds to me more related to slab allocation, as only a restart
>> solves it (not a flush), if I understood you correctly.
>>
>
> All the slabs are already allocated, with a moderate rate of evictions, so
> it's not slab allocation either. It's just one key now and then, regardless
> (it seems) of the key itself or the size of the object.
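>
> For what it's worth, the kind of timing I mean could be reproduced with
> something like this (a sketch with the python-memcached client; the
> payload size and the 50 ms threshold are just placeholders):
>
>   import time
>   import memcache  # python-memcached, assumed
>
>   mc = memcache.Client(['127.0.0.1:11211'])
>   value = 'x' * 1024  # placeholder payload
>
>   for i in range(100000):
>       t0 = time.time()
>       mc.set('probe:%d' % i, value)
>       dt = (time.time() - t0) * 1000
>       if dt > 50:  # flag anything slower than 50 ms
>           print('slow set #%d: %.1f ms' % (i, dt))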
>
>
>> Does it happen with any object size, or only in a specific size range?
>>
>
> Anything, really. Puzzling, eh?
>
> thanks!
>
>
>> On Tue, Jul 17, 2012 at 4:04 PM, David Morel <[email protected]>
>> wrote:
>>
>>
>>>
>>> On Tuesday, July 17, 2012 12:26:16 PM UTC+2, Yiftach wrote:
>>>
>>>>
>>>> A few things that may help in understanding your problem:
>>>>
>>>> 1. What is the status of your slab allocation? Is there enough room for
>>>> all slabs?
>>>>
>>>>
>>> This happens when the memory gets close to full; however, there is not a
>>> large number of evictions.
>>> I would expect evictions to happen whenever needed, but not for the
>>> process of making room for one object to take half a second.
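>>>
>>> For reference, the per-slab-class eviction counters can be pulled from
>>> 'stats items'. A sketch, assuming the python-memcached client and that
>>> your version of it accepts a stats sub-command argument:
>>>
>>>   import memcache  # python-memcached, assumed
>>>
>>>   mc = memcache.Client(['127.0.0.1:11211'])
>>>
>>>   # 'stats items' exposes per-slab-class counters such as
>>>   # items:<class>:evicted and items:<class>:number
>>>   server, items = mc.get_stats('items')[0]
>>>   for name in sorted(items):
>>>       if name.endswith(':evicted') and int(items[name]) > 0:
>>>           print('%s = %s' % (name, items[name]))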
>>>
>>>
>>>> 2. Do you see an increase in the request rate when your Memcached memory
>>>> becomes full with objects?
>>>>
>>>>
>>> I don't think so. Why would that be the case? That's application-dependent,
>>> not server-dependent, right?
>>>
>>>
>>>> 3. How many threads are configured?
>>>>
>>>>
>>> The default, 4.
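>>>
>>> A quick way to double-check is reading the server's 'threads' stat (a
>>> sketch, python-memcached assumed):
>>>
>>>   import memcache  # python-memcached, assumed
>>>
>>>   mc = memcache.Client(['127.0.0.1:11211'])
>>>   server, stats = mc.get_stats()[0]
>>>   # memcached reports its worker thread count here; it is set at
>>>   # startup with -t and defaults to 4
>>>   print(stats['threads'])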
>>>
>>>
>>>
>>>>
>>>> On Tue, Jul 17, 2012 at 1:11 PM, David Morel <[email protected]>
>>>> wrote:
>>>>
>>>>> hi memcached users/devvers,
>>>>>
>>>>> I'm seeing occasional slowdowns (tens of milliseconds) in setting some
>>>>> keys on some big servers (80GB RAM allocated to memcached) which
>>>>> contain
>>>>> a large number of keys (many millions). The current version I use is
>>>>> 1.4.6 on RH6.
>>>>>
>>>>> The thing is, once I bounce the service (restart, not flush_all),
>>>>> everything becomes fine again. So could a large number of keys be the
>>>>> source of the issue (some memory allocation slowdown or something)?
>>>>>
>>>>> I don't see that many evictions on the box, and anyway, evicting an
>>>>> object to make room for another shouldn't take long, should it? Is there
>>>>> a remote possibility that the large number of keys is at fault, and that
>>>>> splitting the daemons into 2 or more instances per box would fix it? Or
>>>>> is that a known issue fixed in a later release?
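>>>>>
>>>>> If splitting is the way to go, I assume it would be transparent on the
>>>>> client side, since the client hashes each key to one server in the
>>>>> list. A sketch with the python-memcached client; the ports and the -m
>>>>> split are purely illustrative:
>>>>>
>>>>>   import memcache  # python-memcached, assumed
>>>>>
>>>>>   # two instances on the same box, each with half the memory, e.g.
>>>>>   # started as: memcached -m 40960 -p 11211 / memcached -m 40960 -p 11212
>>>>>   mc = memcache.Client(['127.0.0.1:11211', '127.0.0.1:11212'])
>>>>>
>>>>>   # keys are hashed across the instances automatically
>>>>>   mc.set('some:key', 'value')
>>>>>   print(mc.get('some:key'))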
>>>>>
>>>>> Thanks for any insight.
>>>>>
>>>>> David Morel
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>
>
> David Morel
> --
> Booking.com <http://www.booking.com/>
> Lyon office: 93 rue de la Villette, 5e étage, F-69003 Lyon
> phone: +33 4 20 10 26 63
> gsm: +33 6 80 38 56 83
>



-- 
Yiftach Shoolman
+972-54-7634621
