or some part of the code that
might cause worker threads to not engage?
Thanks,
Saman
That is pretty weird. I've not run it on a quad socket but plenty of intel
machines without problem. Modern ones too.
How many clients are you telling memslap to use? Can you try
https://github.com/dormando
maintainer thread running
LRU maintainer thread sleeping
// ... endless...
Expected, but a bit annoying
On Tue, Jan 20, 2015 at 12:37 AM, dormando dorma...@rydia.net wrote:
Thanks!
No crashes is interesting/useful at least? No errors or other problems?
I'm still
for an A/B test. The tool can fake traffic of
random expire-time and random length, and also can specify the weights
of different expire-time and length, and lots of other
functions. It is almost completed, and I can post a result next Monday.
On Fri, Jan 16, 2015 at 11:12 AM, dormando
nobody -d -c 10240 -o tail_repair_time=7200
-m 64 -p 11811,
I sum up the stats of all memcache instances on the host and make followings
analysis:
On Wed, Jan 14, 2015 at 1:58 AM, dormando dorma...@rydia.net wrote:
Last update to the branch was 3 days ago. I'm
-compile.
On Tue, Jan 13, 2015 at 4:20 AM, dormando dorma...@rydia.net wrote:
That sounds like an okay place to start. Can you please make sure the
other dev server is running the very latest version of the branch? A lot
changed since last friday... a few pretty bad bugs
The only data stored are when the item expires, and when the last time it
was accessed.
The age field (and evicted_time) is how long ago the oldest item in the
LRU was accessed. You can roughly tell how wide your LRU is with that.
On Mon, 12 Jan 2015, 'Jay Grizzard' via memcached wrote:
I
GMT+08:00 dormando dorma...@rydia.net:
Hey,
To all three of you: Just run it anywhere you can (but not more than one
machine, yet?), with the options prescribed in the PR. Ideally you have
graphs of the hit ratio and maybe cache fullness and can compare
before/after
folks involved in testing. I'll give it a good
soak before merging. Please and thanks!
On Wed, 7 Jan 2015, dormando wrote:
Hey,
To all three of you: Just run it anywhere you can (but not more than one
machine, yet?), with the options prescribed in the PR. Ideally you have
graphs of the hit ratio
to monitor it?
-Original Message-
From: memcached@googlegroups.com [mailto:memcached@googlegroups.com] On
Behalf Of dormando
Sent: Thursday, January 8, 2015 9:25 PM
To: memcached@googlegroups.com
Subject: Re: memory efficiency / LRU refactor branch
Hi,
https://github.com/memcached
.
thanks,
-Dormando
To be extra clear; you can send feeback here or the PR. I don't care
either way.
On Wed, 7 Jan 2015, dormando wrote:
Hey,
To all three of you: Just run it anywhere you can (but not more than one
machine, yet?), with the options prescribed in the PR. Ideally you have
graphs of the hit ratio
?
-Original Message-
From: memcached@googlegroups.com [mailto:memcached@googlegroups.com] On
Behalf Of dormando
Sent: Wednesday, January 7, 2015 3:52 AM
To: memcached@googlegroups.com
Subject: memory efficiency / LRU refactor branch
Yo,
https
https://code.google.com/p/memcached/wiki/ReleaseNotes1422
: 0.00031982)
real 53m25.697s
user 0m17.721s
sys 0m11.093s
Even though we saw 14 failures during this time period. Will look more to see
if this is a problem on our end
On Sat, Nov 29, 2014 at 4:46 PM, dormando dorma...@rydia.net wrote:
Hey,
http://memcached.org/timeouts
Please use a newer source tarball from http://memcached.org/ - this was
fixed ages ago.
On Sat, 29 Nov 2014, vivek verma wrote:
Hi,
Can you please specify how to manually remove pthread?
I don't have certain rights in the system, so can't follow other solutions.
Thanks
On Wednesday,
Hey,
http://memcached.org/timeouts - sounds like you've already done some tcp
dumping, so checking the stats as mentioned in here and running the test
script a bit should illuminate things a bit.
On Fri, 21 Nov 2014, kgo...@bepress.com wrote:
A couple months ago, we moved our memcached nodes
Are your sets or any other functions failing sometimes? Are you just more
likely to notice with a delete?
The only issues have always been with the client. Old clients would send
invalid args to the delete command (though it doesn't seem like you're
doing that here). You might just be failing to
You're probably getting spaces or newlines into your keys, which can cause
the client protocol to desync with the server. Then you'll get all sorts
of junk into random keys (or random responses from keys which're fine).
Either filtering those or using the binary protocol should fix that for
you.
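The filtering advice above can be sketched as a client-side key check. This is a hypothetical helper (`is_safe_key` is not part of any real client library), assuming the ascii protocol's usual limits of 250-byte keys with no whitespace or control characters:

```python
# Hypothetical helper: reject keys that would desync the memcached ascii
# protocol. Keys must be at most 250 bytes and contain no spaces or
# control characters (a stray "\r\n" inside a key shifts the framing).
def is_safe_key(key: bytes) -> bool:
    if not key or len(key) > 250:
        return False
    # Allow only printable ASCII, excluding the space character.
    return all(0x20 < b < 0x7f for b in key)

print(is_safe_key(b"user:1234"))   # a well-formed key
print(is_safe_key(b"bad key\n"))   # space + newline would desync the stream
```

Filtering like this (or switching to the binary protocol, which length-prefixes keys) keeps one bad key from corrupting responses for every later request on the connection.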
conn_close after for a while, such as
waiting for next event when getting TRANSMIT_HARD_ERROR error then to
conn_close immediately?
On Friday, October 31, 2014 at 3:01:06 PM UTC+8, Dormando wrote:
Hey,
How are you reproducing this? How many connections do you typically have
open?
It's
Hey,
How are you reproducing this? How many connections do you typically have
open?
It's really bizarre that your curr_conns is 5, but your connections are
disabled? Even if there's still a race, as more connections close they
each have an opportunity to flip the acceptor back on.
Can you print
memory usage. there're a lot of buffers/etc
outside of that. Also the hash table, which is measured separately.
On Fri, 31 Oct 2014, Samdy Sun wrote:
@Dormando,
I try my best to reproduce this in my environment, but failed. This just
happened on my servers.
I use stats command to check
You're absolutely sure the running version was 1.4.20? that looks like a
bug that was fixed in .19 or .20
hmmm... maybe a unix domain bug?
On Tue, 28 Oct 2014, Samdy Sun wrote:
Hello, I got a memcached-1.4.20 stuck problem when EMFILE happen.
Here are my memcached's cmdline memcached -s
with the ascii protocol, yes. It would not work otherwise.
with the binary protocol, the answer is also currently yes, but the
ordering isn't strict and could be up to the individual commands.
On Wed, 22 Oct 2014, Yaowen Tu wrote:
If I have a client that creates a TCP connection, and send
to dig deeper to see if it is because of ordering of memcached
responses.
Based on your answer it is highly possible, so I would really appreciate it
if you could share more detailed information with me.
Thanks,
Yaowen
On Thu, Oct 23, 2014 at 5:19 PM, dormando dorma...@rydia.net
Internally, there's a per-item lock, so an item can only be updated by one
thread at a time.
This is *just* during the internal update, not while a client is uploading
or downloading data to the key. You can probably do several thousand
updates per second to the same key without problem (like
The hash table buckets are chained.
By default memcached autoresizes the hash table as the number of items
grows, so bucket collision is relatively rare. In recent versions you can
also switch the internal hash algorithm between jenkins and murmur if you
want to test.
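A toy illustration of the per-item locking described above, using a lock stripe: the item's hash selects one of N mutexes, so two threads updating the same key serialize while updates to different keys usually proceed in parallel. This is an assumption-laden Python sketch, not memcached's actual C implementation:

```python
import threading

# Toy sketch (not memcached's code): a fixed stripe of locks stands in for
# memcached's item-partitioned locks.
N_LOCKS = 64
locks = [threading.Lock() for _ in range(N_LOCKS)]
store = {}

def update(key, value):
    # The key's hash picks a lock; the same key always maps to the same lock,
    # so concurrent updates to one item are serialized.
    with locks[hash(key) % N_LOCKS]:
        store[key] = value

threads = [threading.Thread(target=update, args=("k%d" % i, i)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(store))  # every update landed exactly once
```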
On Sun, 19 Oct 2014, Deepak
Is out: https://code.google.com/p/memcached/wiki/ReleaseNotes1421 -
targeted release just for the OOM issues reported by Box + some misc
fixes.
--
---
You received this message because you are subscribed to the Google Groups
memcached group.
To unsubscribe from this group and stop receiving
No idea, sorry :/
On Thu, 2 Oct 2014, Sheel Shah wrote:
Understood.
Do you know where I can find a supported windows version of the memcached
exe? The most recent one I was able to find was version 1.4.4.
Thanks,
Sheel
--
---
You received this message because you are subscribed to
Hey,
Sorry but that version is well over 5 years old, and a forked windows port
at that. It's unsupported.
On Wed, 1 Oct 2014, Sheel Shah wrote:
I believe the version number on our current memcached EXE is 1.2.6.
The error I see in my independent log is the following:
Item could not be
What version of memcached are you running (the server, not the client).
What is the exact error you're seeing in the logs?
On Tue, 30 Sep 2014, Sheel Shah wrote:
Hello,
I apologize for the vagueness of this post, as I am new to using and
supporting memcached. For the last couple of months,
Recent versions use a monotonic clock, so changing the system clock can't
cause memcached to lose its mind.
Why are you trying to do this on purpose?
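The difference is easy to demonstrate: a monotonic clock only moves forward, so intervals measured with it are immune to wall-clock changes. A minimal sketch (memcached itself uses the platform's monotonic clock where available, not Python):

```python
import time

start_wall = time.time()        # wall clock: can jump if someone resets it
start_mono = time.monotonic()   # monotonic clock: only ever moves forward
time.sleep(0.05)

# An interval computed from time.time() could be negative after a clock
# reset; one computed from time.monotonic() cannot.
mono_elapsed = time.monotonic() - start_mono
print(mono_elapsed > 0)  # always True
```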
On Tue, 16 Sep 2014, Yu Liu wrote:
Today I found memcached could not work. So I checked and found memcached's
stats time being changed before the OS
... the data stored in the memcached server needs to be flushed,
or other clients may get the wrong data.
On Wednesday, September 3, 2014 at 3:04:59 PM UTC+8, Dormando wrote:
If a client is uploading something and it does not complete the upload,
the data will be dropped.
Otherwise, no.
On Wed, 3 Sep 2014
this thing is deserving of a party… ;)
-j
On Thu, Aug 21, 2014 at 12:33 PM, dormando dorma...@rydia.net wrote:
Okay cool.
As I mentioned with the original link I will be adding some sort of
sanity
checking to break the loop. I just have to reorganize the whole thing
/dormando/memcached/pull/1). I *think* I
got the right things in the right places, though you
may take issue with the stat name (“reflocked”).
Now that I have the stats, I’m going to work on putting a patched copy out
under production load to make sure it holds up there,
and at least see about
Apparently I lied about the weekend, sorry...
On Mon, 11 Aug 2014, Jay Grizzard wrote:
Well, sounds like whatever process was asking for that data is dead (and
possibly pissing off a customer) so you should
indeed figure out what
that's about.
Yeah, we’ll definitely hunt this one down.
Hello there,
Is there a method to remove items from the cache using a regular
expression on the key? For example, we want to remove all keys like
my_key_*.
We tried to parse all the slabs with the stats cachedump command, but our
slabs contain several pages and it is
and never write anything (similar to the case you
suggest in your last paragraph), but that’s a situation I’m totally willing
to accept. ;)
Anyhow, looking forward to a patch, and will gladly help test!
Here, try out this branch:
https://github.com/dormando/memcached/tree/refchuck
It needs
Thanks! It might take me a while to look into it more closely.
That conn_mwrite is probably bad, however a single connection shouldn't be
able to do it. Before the OOM is given up, memcached walks up the chain
from the bottom of the LRU by 5ish. So all of them have to be locked, or
possibly some
other stats that could help explain this issue
if necessary.
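The tail walk described above can be sketched roughly as follows. The item layout and the depth of 5 come from the description ("walks up the chain ... by 5ish"); the real code operates on C structs with refcounts:

```python
SEARCH_DEPTH = 5  # "walks up the chain from the bottom of the LRU by 5ish"

def try_evict(lru_tail):
    """Walk from the LRU tail toward the head, returning the first item
    not locked (refcounted) by another thread, or None -> OOM error."""
    node, tries = lru_tail, 0
    while node is not None and tries < SEARCH_DEPTH:
        if node["refcount"] == 0:
            return node       # evictable: no other thread holds a reference
        node = node["prev"]   # skip locked items and keep walking up
        tries += 1
    return None               # every candidate was locked

# Tail item is locked, but the next one up is free.
free = {"refcount": 0, "prev": None}
tail = {"refcount": 1, "prev": free}
print(try_evict(tail) is free)
```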
On Mon, Aug 4, 2014 at 9:36 PM, dormando dorma...@rydia.net wrote:
You could run one instance with one thread and serve all of that just
fine. have you actually looked at graphs of the CPU usage of the host?
memcached
, stats items and stats slabs.
the only commands executed are remove, incr, add, get, set and cas.
I'm running now with 6 threads per instance with 3 per server and haven't had
the issue again, not that this change fixed it.
I'll definitely update.
On Aug 7, 2014 6:13 PM, dormando dorma
completely different keys). On the other servers sometimes the issue goes
away on its own or the spike is not at 100pct.
On Aug 7, 2014 6:36 PM, dormando dorma...@rydia.net wrote:
Those three stats commands aren't problematic. The others I listed are.
Sadly there aren't stats counters
I have no idea what you're talking about.
On Wed, 6 Aug 2014, skt8u...@gmail.com wrote:
Dear All,
I'm developing a system using memcached 1.4
and I'll release it to the other country (Italy).
Could you please give me, the US Export Control Classification Number (ECCN)
of memcached 1.4 ?
On Mon, 4 Aug 2014, Claudio Santana wrote:
I have this Memcached cluster where 3 instances of Memcached run in a single
server. These servers have 24 cores, each instance
is configured with 8 threads. Each individual instance serves
about 5000G gets/sets a day and about 3k
You could run one instance with one thread and serve all of that just
fine. have you actually looked at graphs of the CPU usage of the host?
memcached should be practically idle with load that low.
One with -t 6 or -t 8 would do it just fine.
On Mon, 4 Aug 2014, Claudio Santana wrote:
Dormando
Hello Dormando,
Thanks for the answer.
The LRU fiddling only happens once a minute per item, so hot items don't
affect the lock as much. The more you lean toward hot
items the better it scales as-is.
For linked-list traversal, threads acquire an item-partitioned lock. But
threads acquire
On Jul 31, 2014, at 10:01 AM, Byung-chul Hong byungchul.h...@gmail.com
wrote:
Hello,
I'm testing the scalability of memcached-1.4.20 version in a GET dominated
system.
For a linked-list traversal in a hash table (do_item_get), it is protected by
interleaved lock (per bucket),
so
Dormando,Sure, I waited till Monday (our usual tailrepair/oom errors day) but
we did not have any issues today :). I will continue to monitor and
will grab stats conns next time.
Great, thanks!
As for network issues during the last time - i do not see any but still
trying to find
better.
On Thursday, July 3, 2014 at 1:30:29 PM UTC+8, Dormando wrote:
the item lock is already held for that key when do_item_get is called,
which is why the nolock code is called there.
slab rebalance has that second short-circuiting of fetches to ensure
very
hot items don't permanently jam
Samoylov wrote:
Dormando, sure, we will add option to preset hashtable. (as i see nn should
be 26).
One question: as i see in logs for the servers there is no change for
hash_power_level before incident (it would be hard to say for crushed but .20
just had outofmemory and i have solid stats
, Dormando wrote:
Cool. That is disappointing.
Can you clarify a few things for me:
1) You're saying that you were getting OOM's on slab 13, but it
recovered
on its own? This is under version 1.4.20 and you did *not* enable tail
repairs?
2) Can you share
the item lock is already held for that key when do_item_get is called,
which is why the nolock code is called there.
slab rebalance has that second short-circuiting of fetches to ensure very
hot items don't permanently jam a page move.
On Wed, 2 Jul 2014, Zhiwei Chan wrote:
Hi all, I have
Hey,
Can you presize the hash table? (-o hashpower=nn) to be large enough on
those servers such that hash expansion won't happen at runtime? You can
see what hashpower is on a long running server via stats to know what to
set the value to.
If that helps, we might still have a bug in hash
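A rough way to pick that `-o hashpower=nn` value, assuming 2**hashpower buckets and aiming for at least one bucket per expected item. The starting power of 16 and the one-bucket-per-item target are assumptions of this sketch; memcached actually expands based on a load factor:

```python
def hashpower_for(expected_items):
    """Smallest hashpower giving at least one bucket per expected item."""
    power = 16  # assumed default starting hashpower
    while (1 << power) < expected_items:
        power += 1
    return power

# e.g. for ~50 million items:
print(hashpower_for(50_000_000))  # 2**26 buckets
```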
i saw Dormando comment about some fixes in .17 but I cannot trace any
fix related). My question is actually slightly different - i do grep and i do
not see where we initialize slabclass_t->slots. It is set to 0 (zero)
in slabs_init (by memset). And also I see 8 usages across the file slabs.c
==
On Tuesday, 27 May 2014 19:09:56 UTC-7, Dormando wrote:
You're completely sure that's the 1.4.20 source tree?
That bug was pretty well fixed...
If you are definitely testing a 1.4.20 binary, here's the way to grab a
trace:
start memcached-debug under gdb:
gdb
Can you try this patch?
https://github.com/dormando/memcached/commit/724bfb34484347963a27051fed2b4312e189ace3
Either apply it yourself, or just download the raw file:
https://raw.githubusercontent.com/dormando/memcached/724bfb34484347963a27051fed2b4312e189ace3/t/lru-crawler.t
On Wed, 28 May 2014
:11211 prove -v t/lru-crawler.t
... wait until it's been spinning cpu for a few seconds. Then ^C the GDB
window and run thread apply all bt
.. and send me that info.
On Tue, 27 May 2014, Alex Gemmell wrote:
Hello Dormando,
I am having exactly the same issue but with Memcached 1.4.20.
My server
memcached's operations are all atomic. Always have been, always will be,
barring bugs.
Wouldn't be very useful to anyone if you could have a get come back with
half a set... I answer this question a lot and it's pretty bizarre that
people think it's how it works.
Internally, items are generally
fixes a hang regression seen in .18 and .19. does not affect .17 or newer.
no other changes.
, then
it's only the server increasing the packet load...
On Fri, 9 May 2014, Byung-chul Hong wrote:
Hello, Ryan, dormando,
Thanks a lot for the clear explanation and the comments.
I'm trying to find out how many requests I can batch as a multi-get within the
allowed latency.
I think multi
with 2 threads.
Why do you say it will be less likely to happen with 2 threads than 4?
On Wednesday, May 7, 2014 5:38:47 PM UTC-7, Dormando wrote:
That doesn't really tell us anything about the nature of the problem
though. With 2 threads it might still happen, but is a lot less likely
conditions: the more threads you have running the more
likely you are to hit them, sometimes on order of magnitudes.
It doesn't really change the fact that this has worked for many years and
the code *barely* changed recently. I just don't see it.
On Wednesday, May 7, 2014 5:38:47 PM UTC-7, Dormando wrote
To that note, it *is* useful if you try that branch I posted, since so far
as I can tell that should emulate the .17 behavior.
On Thu, 8 May 2014, dormando wrote:
I am just speculating, and by no means have any idea what I am really
talking about here. :)
With 2 threads, still solid
Dormando,Yes, have to admit - we cache too aggressively (just do not want to
use different less polite word :)).
Going to do two test experiments: enable compression and auto reallocation.
Before doing this:
1) why auto reallocation is not enabled by default, what issues/disadvantage
Hey,
try this branch:
https://github.com/dormando/memcached/tree/double_close
so far as I can tell that emulates the behavior in .17...
to build:
./autogen.sh && ./configure && make
run it in screen like you were doing with the other tests, see if it
prints ERROR: Double Close [somefd
Hello,
For now, I'm trying to evaluate the performance of memcached server by using
several client workloads.
I have a question about multi-get implementation in binary protocol.
As I know, in ascii protocol, we can send multiple keys in a single request
packet to implement multi-get.
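For reference, here is how an ascii multi-get is framed: all keys share one request line, and the server answers with zero or more VALUE blocks followed by END. A minimal sketch of building the request bytes (not a full client):

```python
def build_multiget(keys):
    # One "get" line carries every key, separated by spaces, ended by CRLF.
    return b"get " + b" ".join(keys) + b"\r\n"

print(build_multiget([b"k1", b"k2", b"k3"]))
```

In the binary protocol the rough equivalent is pipelining quiet gets (getq/getk) and flushing them with a final non-quiet get or noop.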
, then revert back to 4 threads and see if timeout
errors come up again. That will tell us the problem lies in spawning more
than 2 threads.
On Wednesday, May 7, 2014 5:19:13 PM UTC-7, Dormando wrote:
Hey,
try this branch:
https://github.com/dormando/memcached/tree
real traffic. Super frustrating.
On Sunday, May 4, 2014 10:12:08 AM UTC-7, Dormando wrote:
I'm stumped. (also, your e-mails aren't updating the ticket...).
It's impossible for a connection to get into the closed state without
having event_del() and close() called on the socket
Hi,
Does anybody know good way to handle OOM during set operation? Server is
fully calcified :) (no new pages to allocate) and i have this issue for
slab 17
STAT items:17:number 16128
STAT items:17:age 90
STAT items:17:evicted 246790897
STAT items:17:evicted_nonzero 246790874
STAT
Hi Dormando,
Full Slabs and Items stats are below. The problem is that other slabs are
full too, so rebalancing is not trivial. I will try to create a wrapper
that will do some analysis and do slab rebalancing based on stats (the idea
to move try to shrink slabs with low eviction but need
I'm stumped. (also, your e-mails aren't updating the ticket...).
It's impossible for a connection to get into the closed state without
having event_del() and close() called on the socket. A socket slot isn't
event_add()'ed again until after the state is reset to 'init_state'.
There was no code
http://code.google.com/p/memcached/wiki/ReleaseNotes1419
Thanks to everyone who helped out with the bugfixes for this release.
Don't want to get my hopes up but I think we're finally running out of
segfaults and refcount leaks (until we go changing more stuff again..).
What's the output of:
$ prove -v t/lru-crawler.t
How long are the tests taking to run? This has definitely been tested on
ubuntu 12.04 (which is what I assume you meant?), but not something with
so little RAM.
On Thu, 1 May 2014, Wilfred Khalik wrote:
Hi guys,
I get the below failure error
I don't know. I need to see the output of that program.
On Thu, 1 May 2014, Wilfred Khalik wrote:
By the way, how RAM is enough RAM?
On Friday, May 2, 2014 1:28:57 PM UTC+12, Dormando wrote:
What's the output of:
$ prove -v t/lru-crawler.t
How long are the tests taking
http://memcached.org/timeouts
also, you haven't said what version you're on of memcached? or provided
stats, or etc...
On Fri, 25 Apr 2014, Filippe Costa Spolti wrote:
Helle guys,
Anyone already had a problem similar to this:
Caused by: java.util.concurrent.ExecutionException:
what version are you testing?
On Wed, 23 Apr 2014, Filippe Costa Spolti wrote:
Hello everyone.
This python script crashes memcached.
import sys
import socket
print "Memcached Remote DoS - Bursting Clouds yo!"
if len(sys.argv) != 3:
    print "Usage: %s host port" % (sys.argv[0])
Well I haven't read the lease paper yet. Ryan, can folks more familiar
with the actual implementation have a look through it maybe?
On Thu, 17 Apr 2014, Zhiwei Chan wrote:
I m working on a trading system, and getting stale data for the system is
unaccepted at most of the time. But the high
...
On Thursday, April 17, 2014 6:28:24 PM UTC-5, Dormando wrote:
http://code.google.com/p/memcached/wiki/ReleaseNotes1418
I just tried building the Arch Linux package for this and got failures when
running the test suite. This was the output from the 32-bit i686 build;
I saw the same
On Sat, Apr 19, 2014 at 12:43 PM, dormando dorma...@rydia.net wrote:
Well, that learns me for trying to write software without the 10+ VM
buildbots...
The i386 one, can you include the output of stats settings, and also
manually run: lru_crawler enable (or start
Er... reading comprehension fail. I meant 64bit binary still at the
bottom there.
On Sat, 19 Apr 2014, dormando wrote:
On Sat, Apr 19, 2014 at 12:43 PM, dormando dorma...@rydia.net wrote:
Well, that learns me for trying to write software without the 10+ VM
buildbots
On Sat, 19 Apr 2014, Dan McGee wrote:
On Sat, Apr 19, 2014 at 1:45 PM, dormando dorma...@rydia.net wrote:
On Sat, Apr 19, 2014 at 12:43 PM, dormando dorma...@rydia.net wrote:
Well, that learns me for trying to write software without the
10+ VM
buildbots
.
https://github.com/dormando/memcached/tree/crawler_fix
Can you try this? The lock elision might've made my undefined behavior
mistake of not holding a lock before initially waiting on the condition
fatal.
A further fix might be required, as it's possible someone
On Sat, Apr 19, 2014 at 6:05 PM, dormando dorma...@rydia.net wrote:
Once I wrapped my head around it, figured this one out. This cheap
patch fixes the test, although I'm not sure it is the best actual solution.
Because we don't set the lru_crawler_running flag on the main
exhausted memory isn't going to cause it to pause...
http://memcached.org/timeouts for the typical run-through of timeout
problems.
On Tue, 15 Apr 2014, Suraj Narkhede wrote:
Or maybe its like you have exhausted your memory. Can you please check in the
stats if there is any eviction_count?
at all, so if
something is broken it won't harm people.
have fun,
-Dormando
http://code.google.com/p/memcached/wiki/ReleaseNotes1418
What version are you using? If less than 1.4.17, please upgrade to the
latest version.
Also, -t 25 is a huge waste. Use -t 4 unless you're doing more than
several hundred thousand requests per second.
On Mon, 14 Apr 2014, Jon Hauksson wrote:
Hi,
I work at company where we use memcached and
memcached instances. It runs at low priority
and doesn't cause much latency that we notice. I should really get our
version back out there so that others can see how we did it and implement it
in the legit memcached :-)
~Ryan
On Fri, Apr 11, 2014 at 11:08 AM, dormando dorma...@rydia.net wrote
Hey Dormando...
Some quick question first... i have checked some Intel papers on their
memcached fork and for 1.6 it seems that there's some rather big lock
contention... have you thought about just gluing individual items to a
thread, using maybe item hash or some configurable method
2014 21:12:43 UTC+2, Dormando wrote:
Hey Dormando...
Some quick question first... i have checked some Intel papers on
their memcached fork and for 1.6 it seems that there's some rather
big lock
contention... have you thought about just
On Fri, 11 Apr 2014, Slawomir Pryczek wrote:
Hi Dormando, more about the behaviour... when we're using normal memcached
1.4.13 16GB of memory gets exhausted in ~1h, then we start to have
almost instant evictions of needed items (again these items aren't really
needed individually, just
s/pagging/padding/. gah.
On Fri, 11 Apr 2014, dormando wrote:
On Fri, 11 Apr 2014, Slawomir Pryczek wrote:
Hi Dormando, more about the behaviour... when we're using normal
memcached 1.4.13 16GB of memory gets exhausted in ~1h, then we start to have
almost instant evictions of needed
Hey Dormando, thanks again for the comments... appreciate the help.
Maybe I wasn't clear enough. I need only 1 minute of persistence, and I can lose
data sometimes; I just can't keep losing data every minute due to
constant evictions caused by the LRU. Actually I have just written that in my
1.4 is the latest stable. 1.6 is a development branch.
On Tue, 8 Apr 2014, Vakul Garg wrote:
Hi
Which is the latest memcached version (1.4 or 1.6)?
I do not see engine-pu branch on memcached git.
Is memcached version 1.6 deprecated?
Regards
Vakul
Hi Guys,
I'm running a specific case where I don't want (actually can't have)
evicted items (evictions = 0 ideally)... now I have created a simple
algo that locks the cache, goes through the linked list and evicts items... it
causes some problems, like 10-20ms cache locks in some cases.
just use master.
On Tue, 8 Apr 2014, Slawomir Pryczek wrote:
Is it safe to use master branch code in production enviroment? When adding
changes to code i can just fork master and use that safely, or i'll need
to make modifications on 1.4.17 code available from the website?
I noticed there
It's not presently possible to do either. I would like to allow people to
supply the slab classes specifically, but we haven't done it yet.
2000 slab classes has its own set of inefficiencies. You should still keep
the number relatively low.
On Wed, 2 Apr 2014, Slawomir Pryczek wrote:
Hi guys,
The memcached library that lighttpd uses, last I checked, was synchronous.
Lighttpd is an async webserver, which means each time it needs to fetch
something from memcached it will block the entire thing waiting for a
response.
It won't block very long, mind you, but it can't process in parallel.
On Wed, 12 Mar 2014, Joshua Miller wrote:
On Wed, Mar 12, 2014 at 12:43 AM, dormando dorma...@rydia.net wrote:
https://github.com/unrtst/Cache-Memcached/tree/20140223-patch-cas-support
This started as just wanting to get a couple small features into
Cache
https://github.com/unrtst/Cache-Memcached/tree/20140223-patch-cas-support
This started as just wanting to get a couple small features into
Cache::Memcached, but I ended up squashing a bunch of bugs (and merging
bugfixes
from existing and old RT tickets), and kept adding features.
The