a perfect opportunity to go
test it. Run your cluster with the traffic you expect to hit and try it
both ways with a simple bench script. Keep the one you're happiest with,
or pick the simplest if it doesn't make a difference.
-Dormando
Right. It does sound like we'll have to conduct some experiments.
Would've been nice to get some other inputs though.
We might make 2000-3000 mgets/sec. 100-500 keys each.
It's straight up math.
if (keylength * keycount > tcp_buffer) - latency benefit from having
shorter mgets (or just
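The "straight up math" above can be sketched out. This is an illustrative estimate only (the 64 KB buffer size and the request-size formula are assumptions for the example, not numbers from the thread):

```python
# Sketch: estimate whether a single ascii-protocol multiget request is
# likely to exceed one TCP buffer's worth of data. Buffer size here is
# an assumed default, not a measured value.

def mget_request_bytes(key_length: int, key_count: int) -> int:
    """Rough size of an ascii 'get k1 k2 ...\r\n' request line."""
    # "get " + keys + (key_count - 1) separating spaces + trailing "\r\n"
    return len("get ") + key_count * key_length + (key_count - 1) + len("\r\n")

def fits_in_buffer(key_length: int, key_count: int, tcp_buffer: int = 65536) -> bool:
    return mget_request_bytes(key_length, key_count) <= tcp_buffer

# 500 keys of 40 bytes each comes to ~20 KB: well under a 64 KB buffer.
print(fits_in_buffer(40, 500))   # True
```

At 2000-3000 mgets/sec the difference either way is easy to measure with a bench script, which is the point of the advice above.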
Hey,
Enjoying 1.4.8? Thought I'd share some rough things that you guys may
enjoy:
https://github.com/dormando/mc-crusher
^ I've thrown my hat into the ring of benchmark utilities. This is
probably on par with some work Dustin's been doing, but I went in a
slightly different direction
Looks correct to me... Apologies to everyone if it isn't :P I applied it
and it's going into 1.4.8.
thanks!
On Thu, 29 Sep 2011, Nate wrote:
At some point it looks like start-memcached was changed to fork an
extra time, as the comments put it, now that the tty is closed. I'm
not sure why,
the rest of the stuff we added/fixed are totally fawesome.
-Dormando
packets until you first read some.
iirc libmemcached has a workaround for this by starting to call the read
handler callbacks while still sending requests.
-Dormando
.
This time you get some fancypants FEATURES like COMMANDS and SWITCHES and
FIDDLYDINKS and some awesome COUNTERS for judging how well you do things.
enjoy,
-Dormando
on the backlog with 1.4.
-Dormando
On Tue, 27 Sep 2011, Gonzalo de Pedro wrote:
It's coming up pretty soon in my TODO list though; we've been catching up
on the backlog with 1.4.
Are you planning to implement this for version 1.6?
I can't/won't predict what version number that change will be in.
How many do you want it to run? After a point you have to start tuning
your OS kernel to reserve less RAM per TCP connection, but it'll scale to
Which parameter is that?
It's a lot of parameters. Google for linux TCP tuning.
access to hardware
too..
Could also try the folks at Oregon State University. They support a
number of FOSS projects in that way and are generally a very cool bunch
of folks to work with. Can do an intro if you like.
If they can help, sure I'd love an intro!
-Dormando
Hi,
I am trying to do some scalability analysis (scaling the number of clients)
for Memcached.
Is there any available benchmark for such experiments ?
Also, I am curious to know about the number of clients that a memcached
server can service in a typical deployment.
By default, the
I know that probably it would depend on my specific scenario, but in
general, do you think it would be better to have a single instance of
60MB or 6 instances of 10MB on the same server?
I wonder if having multiple instances would highly improve the
concurrency capacity and if having
+dormando (who seems to have been dropped from the cc list)
On Fri, Sep 23, 2011 at 9:46 AM, Mark Wong mark...@gmail.com wrote:
On Fri, Sep 23, 2011 at 9:30 AM, Josh Berkus j...@postgresql.org wrote:
On 9/23/11 3:24 AM, Paul Lindner wrote:
Contact Josh Berkus he may be able to get you
Hey,
http://memcached.org/feedme :)
Preemptive thanks to anyone who decides to stand up and help!
-Dormando
the main tree
... then we can decide on if we want to distribute both engines and give
users a choice, or keep one in the repo and slowly adopt the scaling
changes from one to the other (if possible).
Thanks,
-Dormando
the changes into default_engine. Would you like me to experiment
with this approach?
Yes, if you could test that and show the results of either approach that'd
be great! You should still put it into its own engine for now, though.
-Dormando
Hi there,
I would like to contribute to the memcached brazilian community. Can
I translate all the content?
Sure? :)
Hi everyone,
I'm new to memcached. Here's my question:
When we are configuring clients with memcached, we enter server addresses
running memcached in our client application.
Our servers: S1, S2, S3, S4
Clients: C1, C2
Now suppose for C1, we give memcached servers as S1, S2, S3
for C2:
all
your clients. The rest is magic, and it's awesome magic. Don't resist it.
-Dormando
Yep I got that point, but what I'm saying is: suppose you have many, many
client servers. Now if you want to scale your system, you'd add more servers.
Now then you'll need to update server list on each client.
Basically creating a single end-point (which can be replicated too using a
same
with something completely different after playing with it more. The
bulk of that bug entry explains the current situation though.
-Dormando
Wrong mailing list. Go talk to the couchbase people.
On Thu, 25 Aug 2011, Sergio Garcia wrote:
Hi,
We are using memcached to cache .net serialized items, as a asp .net session
provider, a database cache, and to store real time data.
Now, we are using 4 server, with 40GB of data to
I am currently fiddling with the slab allocator and one thing messes
my understanding of the concept. Think about the following scenario:
1) start memcached with ./memcached -m 1 (with 1 Mb of memory limit)
2) set mykey 0 0 11 -- allocates a 1 Mb slab and a chunk is returned
for about the
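The slab behavior in that scenario can be sketched. This is a hypothetical illustration of memcached-style slab class sizing, not the real allocator (the 96-byte minimum chunk and 1.25 growth factor are assumed defaults; the actual logic lives in slabs.c):

```python
# Hypothetical sketch of slab class sizing: chunk sizes grow by a fixed
# factor, each slab page is 1 MB, and an item goes into the smallest
# class whose chunks fit it. Defaults below are assumptions.

PAGE_SIZE = 1024 * 1024  # each slab page is 1 MB

def slab_chunk_sizes(min_chunk=96, factor=1.25, page_size=PAGE_SIZE):
    """Generate the chunk size for each slab class."""
    sizes = []
    size = min_chunk
    while size <= page_size // 2:
        # round each chunk size up to an 8-byte boundary
        sizes.append((size + 7) // 8 * 8)
        size = int(size * factor)
    sizes.append(page_size)  # largest class: one chunk per page
    return sizes

def class_for_item(item_size, sizes):
    """Pick the smallest slab class whose chunks fit the item."""
    for i, chunk in enumerate(sizes):
        if item_size <= chunk:
            return i, chunk
    raise ValueError("item larger than a slab page")

sizes = slab_chunk_sizes()
idx, chunk = class_for_item(64, sizes)  # tiny set lands in the smallest class
```

This is why a tiny `set` still grabs a whole 1 MB page: pages are carved into same-sized chunks per class, and the first store in a class allocates its first page.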
I am trying to build memcached 1.4.7 with a specific libevent version.
To do this, I ran configure as follows:
./configure --prefix=/home/radhesh/memcached-dist --with-libevent=/usr/
lib/libevent.so.1
--with-libevent=/usr/lib (path to the lib install dir, not the full path
to the file)
if your presentation is disparaging or
claims ownership over the project at all? :)
-Dormando
hi all
I have memcache 1.4.5, Apache 2.2.3 and libmemcache.so all on 64bit
Since upgrading to php 5.2.17 we have had two outages where apache
segfaults.
Close. You give all the versions except for the php memcached client. What
exact software client are you using and what exact version is
test?
-Dormando
... if your
app errors that should go to log somewhere, which you should really be
watching anyway.
-Dormando
pylibmc, which is based off of libmemcached. That may be able to do
multisets and get you better speed. The other library is pure python, I
think. Would be slower than a native DB driver.
-Dormando
to do for the 1.4.* versions?
I'm kicking out one release of 1.4.* monthly until 1.6 supersedes it. That
said I have a backlog of bugs and higher priority changes that will likely
keep me busy for a few months. Unless of course someone sponsors me to
spend more time on it :)
-Dormando
there is counterproductive. Headers only is often useless.
stats cachedump is most often used for the latter, and everyone needs to
remember that users never get to 1) if they can't figure out 2). Maybe I
should flip those priorities around?
-Dormando
of it.
thanks,
-Dormando
to use sflow today, they can apply your patches and use it.
As is such with open source.
-Dormando
No changes since -rc1 as nobody reported any bugs, and I haven't found any
myself.
Fetch and enjoy: http://memcached.org/
In the roughly every three weeks schedule, expect 1.4.8 around September
6th.
-Dormando
I deal with it when I have time...
...but I don't usually see that bug tracker, so it's easily forgotten.
Won't have time for a week or two, but I can kill some of those bugs,
though someone else is welcome to try as well. If you do though, please
actually test the fixes and ensure they won't do
1.4.8 will
have more feature updates ;)
Thanks!
-Dormando
summaries the same as an
external daemon could. Which could make everyone happy in this case.
I like the sFlow stuff, I'm just at a loss for why it's so important that
everything be generated on top of sFlow. So far nobody's addressed my
specific arguments as listed above.
-Dormando
Thanks for doing this!
On Wed, 3 Aug 2011, Paul Lindner wrote:
FYI -- please give this a test and upvote this build if it works for you.
Thanks!
-- Forwarded message --
From: upda...@fedoraproject.org
Date: Wed, Aug 3, 2011 at 3:54 AM
Subject: [Fedora Update] [comment]
should
even consider, if you have better ones. Please help distribute this ML
post around as much as possible so we can have a better chance of having
an intelligent discussion about it.
Thanks,
-Dormando
I owe all of you better tap documentation (the last couple of weeks
have really killed me). It does some pretty great stuff in this area
and has many practical uses.
Now would be a great time to sell us on it, then :)
: 0.00046550)
That seems fine... half a millisecond-ish? normal for an over the network
ping.
-Dormando
http://code.google.com/p/memcached/wiki/NewPerformance
On Mon, 25 Jul 2011, Chetan Gadgilwar wrote:
I have one question, if multiple users are accessing a DB then the speed get
decreases then is it the same thing with memcached if number of users
access the same cached data? or it would
Hello,
I have been playing with Memcached between two servers using Perl and
Memcached::Client which is working great. I would like to take it one step
further and use AnyEvent to trigger when a new key and value has been written
to Memcached. I am struggling to understand how to do it
complicated than
that, but that's the idea).
-Dormando
Is it normal to have a 16 percent virtual memory overhead in memcached
on x86_64 linux? memcached STAT bytes is reporting 3219 megabytes of
data, but virtual memory is 16 percent higher at 3834. Resident memory
is 14 percent higher at 3763 megabytes.
Is there a way to tune linux/memcached
releases.
Unless it's please add tags or something, in that case please chew
rocks.
-Dormando
be trying to release even sooner than that.
The 1.6 tree will continue to see beta releases whenever something
interesting happens. The engine-pu branch receives random upstream
merges, but I am not able to personally follow them very closely.
-Dormando
know there're a few other patches/fixes/reports/etc people have for 1.4;
I'd like to reiterate again that you should be working off of 1.6, the
posted beta or the memcached/engine-pu branch on github.
Thanks!
-Dormando
In the logs:
Notice: Memcache::get() [<a href='memcache.get'>memcache.get</a>]:
Server localhost (tcp 11211) failed with: Failed reading line from
stream (0) in <b>/var/www/html/inc/chat/libraries/server/NodeJS.php</b>
on line <b>37</b><br />
At line 37:
$session =
the port is open.
But the problem is actually the memcache or the code? Thanks!
On 28 juin, 19:26, dormando dorma...@rydia.net wrote:
In the logs:
Notice: Memcache::get() [<a href='memcache.get'>memcache.get</a>]:
Server localhost (tcp 11211) failed with: Failed reading line from
stream (0
I just noticed that I didn't answer this question from Dormando: Is
the only reason to keep it exactly the way it is because it's already
done and you have customers who rely on it?
I was actually asking the couchbase folks why they were so insistent on
pushing the feature the way
for an item to reclaim.
It will search the last few items in the tail for one which has already
been expired, and is thus free for reuse.'
So no, not random, but it tries to walk up the tail a little to find
something expired if the very bottom isn't expired.
-Dormando
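The tail-walk described above can be sketched roughly like this (a simplified illustration, not the server's code; the search depth of 5 is an assumption):

```python
# Sketch of the reclaim tail-walk: starting at the LRU tail (oldest
# items first), check a handful of items for one whose expiry time has
# passed and reuse the first match found.

import time

def find_reclaimable(lru_tail, depth=5, now=None):
    """lru_tail: list ordered oldest-first; items are (key, exptime)."""
    now = time.time() if now is None else now
    for key, exptime in lru_tail[:depth]:
        # an exptime of 0 means the item never expires
        if exptime != 0 and exptime <= now:
            return key
    return None  # nothing expired near the tail; caller evicts instead

tail = [("a", 100), ("b", 0), ("c", 50), ("d", 9999)]
print(find_reclaimable(tail, now=200))  # "a" has expired and can be reused
```

If nothing near the tail has expired, a normal eviction of the tail item happens instead, which is why reclaims are cheap and evictions are the stat to watch.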
,
memcached does not stop accepting connections when it gets full. It's
supposed to do more useful things instead.
-Dormando
Hello Dormando,
thanks for your feedback. In fact I'm using the latest stable version from the debian
repositories (http://packages.debian.org/lenny/memcached). Why will it no
longer accept new connections? Can I determine the cause based on stats?
I'm collecting data from memcache with cacti
to your reply.
I don't think he meant that you should go fix spymemcached, I think he
meant you should've sent a bug report to their mailing list to see if they
can fix it before hauling off and writing your own client.
He's not a loyal user, he's one of the authors.
-Dormando
trying to compare raw speed vs speed you probably want to start
with the Memcached::libmemcached library. That's the faster C based guy.
There's also a wrapper to make it look more like Cache::Memcached's
interface...
-Dormando
The funny thing is, while in real production, the queries are not this
simple, in most web apps I make, the queries are really not all that
complicated. They do retrieve data from large data stores, but the SQL
itself is relatively straightforward. Besides, none of the web sites I
make are
Ha ha! I installed Cache::Memcached::Fast, which seems to be a C based
drop in replacement for C::M, and now I get the following results
(this includes the get_multi method to get many keys in one shot)
query_dbh: 5 wallclock secs ( 3.31 usr + 1.03 sys = 4.34 CPU) @
2304.15/s (n=1)
?
http://code.google.com/p/memcached/wiki/Timeouts -- walk through this
wiki to narrow down why your memcached server is occasionally taking too
long to respond. It covers most of the bases, and provides a tool to help
narrow down trouble.
-Dormando
nice dormando, could you check if this line is ok in your link and if
not correct it?
./mc_conn_tester.pl memcached-host:11211 5000 4 log_three_seconds
5000 4 log_three?
4 != three
hehehe, maybe should be 3 log_three or
4 log_four
right?
thanks nice guide
That's correct
hum, if this line is ok, why the next
8, shouldn't it be 9? or log_seven?
./mc_conn_tester.pl memcached-host:11211 5000 4 log_three_seconds
./mc_conn_tester.pl memcached-host:11211 5000 8 log_eight_seconds
That should be seven, but odd numbers bother my OCD.
We're getting this error sometimes on a memcache call in php on
memcache-get( some key );
PHP Notice: Memcache::get() [<a href='memcache.get'>memcache.get</a>]:
Server 192.168.100.53 (tcp 11211) failed with: (null) (0)
And I can't find anything online about this error. Is this a time out
or
the same information multiple times, etc.
-Dormando
On Fri, 29 Apr 2011, ilkinulas wrote:
Hi, i am using memcached 1.4.3
http://code.google.com/p/memcached/issues/detail?id=74 -- might be
related to this if you're solaris?
In either way, can you reproduce the issue under the latest version?
,
but possibly not OpenBSD.
Make evaluating! Give major feedback.
-Dormando
On Mon, 11 Apr 2011, Trond Norbye wrote:
What's new in memcached
===
(part two - new feature proposals)
Table of Contents
=
1 Protocol
On Mon, 11 Apr 2011, Adam Lee wrote:
is there somewhere i can copy edit this document?
a bit nitpicky, i know, but i found a few mistakes just while browsing it...
section 2.1 both "suites" should be "suits", section 3.4 "it's" should
be "its", etc.
awl
Does anything besides the english not
On Wed, 6 Apr 2011, Mohit Anchlia wrote:
Thanks! These points are on my list but none of them are useful. The
reason is I think I mentioned before that most of these servers that
are sending requests to us are hosted inside the co. but by different
group. So geoReplication will not work in
(like a feed list or twitter
timeline)
6) I can think of more variations of #1 while using backhauls, but tbh
they're all super gross.
n' stuff.
-Dormando
What the ... urgh.
I have no idea where you're getting that RPM of memcached, but it looks
like the packager didn't remove the deps for the damemtop script I
shoved in the scripts/ directory. Yum is being helpful and trying to
install a ton of useless perl dependencies.
If you just tell it to
sigh.
yum install --skip-broken memcached
or whatever combo actually works.
On Wed, 6 Apr 2011, smiling dream wrote:
can you give me the rpm for installing memcached
On 4/6/11, dormando dorma...@rydia.net wrote:
What the ... urgh.
I have no idea where you're getting that RPM
what time does it start? ;)
On Wed, 16 Mar 2011, Matt Ingenthron wrote:
Hi all,
Since a few of the core folks are in the same place at the same time,
we're going to hold a small hackathon... this time really hacking... to
work on some merging of a couple of branches together to push to the
after the data has been safely applied.
No race or anything, I guess the one downside would be if you were
intending to have the caches updated ahead of the database being updated.
-Dormando
On 3/9/11 8:36 PM, dormando wrote:
So when you CRUD against the database, you send another command with the
UDF in it to DELETE or SET against the memcached cluster local to that
database. You can pair it inside a dummy INSERT or UPDATE or trigger or
sproc or whatever floats your boat
guys, the creators of this much loved tool -- viz-a-viz memcache -- designed
it with one goal in mind: CACHING!!
using sessions with memcache would only make sense from a CACHING standpoint,
i.e. cache the session values in your memcache server and if the
caching fails for some reason or
).
the original point (and something I still see as a feature) is the ability
to elastically add/remove cache space in front of things which don't scale
as well or take too much time to process.
For everything else there's
mastercard^Wredis^Wmembase^Wcassandra^Wsomeotherproduct
-Dormando
Have you walked through those links I gave you? You haven't mentioned
exactly what you're seeing and those links walk you through narrowing it
down a lot as well as listing a lot of things to look for.
On Mon, 21 Feb 2011, Patrick Santora wrote:
Hrmm. Still having issues. Here is the latest
settings look ok too. Quite
frustrating...
On Feb 21, 2011 11:51 AM, Patrick Santora patwe...@gmail.com wrote:
I will need to look at those further today. This weekend went a little
haywire for me. :)
On Feb 21, 2011 11:42 AM, dormando dorma...@rydia.net wrote:
Have you walked through
the issue comes up. Does it matter if I run
the tester on the same box the clients are on? It should not matter but
thought I would ask.
Thanks!
On Feb 21, 2011 6:25 PM, dormando dorma...@rydia.net wrote:
Have you been running the connection tester tool while observing the
client slowdown
What I am seeing is that when my memcached container hits around 10MB
of written traffic it starts to bottleneck causing my front end
systems to slow WAY down. I've turned on verbose debugging and see no
issues and there are no complaints on the front end stating that the
connection clients
Or wiki - protocol/commands (two clicks!) though the key stuff should
be
repeated at the top there (just fixed that). Probably should be repeated
somewhere else too, which we'll improve next time.
The wiki says that only spaces and newlines are disallowed, have you changed
error.
On Feb 8, 2:46 pm, dormando dorma...@rydia.net wrote:
I don't think anyone knows why that particular port of memcached has
trouble; I'm assuming it's a buggy libevent, or buggy interaction with
libevent. Given that it's an unhandled socket error :P
Where did you get the windows
Where do I find the invalid characters for a memcached key?
Buried in the wiki somewhere + protocol.txt.
in short, for ascii; no spaces or newlines.
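The rule from protocol.txt can be turned into a small validator. A minimal sketch, not the server's exact check, assuming the documented ascii rules (no spaces, no control characters such as newlines, at most 250 bytes):

```python
# Minimal memcached ascii key validator: keys must be 1-250 bytes and
# may not contain spaces or control characters (which covers newlines).

MAX_KEY_LENGTH = 250

def is_valid_key(key: bytes) -> bool:
    if not key or len(key) > MAX_KEY_LENGTH:
        return False
    # reject space (0x20), control chars (0x00-0x1f), and DEL (0x7f)
    return all(0x20 < b < 0x7f for b in key)

print(is_valid_key(b"user:1234:session"))  # True
print(is_valid_key(b"bad key"))            # False: contains a space
```

Most clients do a check like this for you (or hash long keys down), but validating at the edge gives clearer errors than a protocol failure.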
kind of janky.
- Marc
On Tue, Feb 8, 2011 at 9:55 AM, dormando dorma...@rydia.net wrote:
Where do I find the invalid characters for a memcached key?
Buried in the wiki somewhere + protocol.txt.
in short, for ascii; no spaces or newlines.
I don't think anyone knows why that particular port of memcached has
trouble; I'm assuming it's a buggy libevent, or buggy interaction with
libevent. Given that it's an unhandled socket error :P
Where did you get the windows binary from?
On Tue, 8 Feb 2011, Roberto Spadim wrote:
=] can you use
Hi,
I don't want to be rude but can you perhaps stop advocating using UDP?
It's not actually faster if using persistent connections and is full of
bugs and limitations (like a max packet size of 1.4k).
Uhm. Actually in general your information is a little off from how we
usually go about things;
connection very well and you can end up leaking connections. You should
try to use persistent conns, then fail back to non persistent if they
don't work.
enjoy.
-Dormando
#1) Should i use consistent hashing.
I am not expecting instances to go down randomly. But whenever one
machine has to be taken out for maintenance etc, would like to
minimize the impact. i read about a reduced performance when switched
to consistent hashing. Not sure whether it is still
and documentation of your client library (for better help on the memcached
mailing list, use memcached-based libraries; try not to use independent
libraries, since you can use different hash algorithms and put data on
one server and read from another = bad cache hit rate)
2011/2/8 dormando dorma...@rydia.net
data loss with udp
speed isn't the point, the data size, the network layout and the cache
hit rate is the point; with broken packets we have no communication =]
2011/2/8 dormando dorma...@rydia.net:
Hi,
I don't want to be rude but can you perhaps stop advocating using UDP?
It's
s/leaks/leads to
On Mon, 7 Feb 2011, dormando wrote:
Well you were saying speed is the point, but RAM is there as well.
If you properly tune the TCP stack the memory usage isn't bad at all. I've
run a number of hosts with 100,000+ tcp connections on them at once and
while RAM gets sorta
http://code.google.com/p/memcached/wiki/NewPerformance -- how fast it
should be
Also make sure that your server is running the latest software available
(1.4.5). 1.2.x releases are not supported anymore.
-Dormando
I've found (part of) the answer to my own question - the virtual
memory comes from the thread stacks. Setting -t 1 reduces the virtual
memory to around 20MB.
I've read from other posts that it is no longer possible to have a non-
threaded memcached version. It appears on my system that the
problem:
today i need 4 packets to make 'lock'/'unlock' using memcache
i need to use less ROM/CPU/RAM/network at client side
solution:
1) make server side lock function with 2 packet (with atomic operations)
2) make a proxy if function = lock/unlock, proxy make these packets to
server (i
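The usual way to get a lock in two round trips is memcached's atomic `add` command: `add` only succeeds when the key does not already exist, so whoever adds first holds the lock. A sketch of the pattern using an in-memory stand-in client (swap in a real client such as pylibmc in practice; the stand-in and the `lock:` key prefix are illustrative, not from the thread):

```python
# Lock via atomic add: one round trip to acquire, one to release.

class FakeClient:
    """In-memory stand-in mimicking a memcached client's add/delete."""
    def __init__(self):
        self.store = {}
    def add(self, key, value):
        if key in self.store:
            return False  # key exists: someone else holds the lock
        self.store[key] = value
        return True
    def delete(self, key):
        self.store.pop(key, None)

def acquire_lock(client, name):
    return client.add("lock:" + name, "1")  # atomic: first adder wins

def release_lock(client, name):
    client.delete("lock:" + name)

mc = FakeClient()
print(acquire_lock(mc, "pic"))   # True: lock taken
print(acquire_lock(mc, "pic"))   # False: already held
release_lock(mc, "pic")
print(acquire_lock(mc, "pic"))   # True: free again after release
```

With a real client you'd also pass an expiry time on the `add` so a crashed holder can't leak the lock forever.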
in parallel?
That sounds like a win/win to me, if gross :)
-Dormando
for data i use this:
key= pic ip number_fragment
lock key = pic ip number_lock
Shouldn't need the lock_key, then?
That sounds like a win/win to me, if gross :)
what's win/win ? hehehe
It's when there's no lose.
Hi to all,
First of all please accept my apologies if this question answered
before.
I've searched for a straight answer but I couldn't find one. Maybe I'm
not looking in the right place.
So, I've started using the memcached for user sessions and I back it
up with a database storage (in
Think we worked this out on IRC, there was some thing running flush_all
over and over on his server.
On Fri, 21 Jan 2011, Ivan wrote:
Hello,
i have memcached 1.4.5 on Centos 5.5, my sets seem to expire (no
matter what i put as expire time, infinite or 5mins 1 hour etc) after
few seconds and
So, pretty general question:
Seems against the recommendation of this list, Memcached is often used as a
session store. I'm working with a client now that uses two clusters of
memcached servers and every write is saved on two
clusters and on failed reads the backup is read. Poor man's HA
4, 2010 at 10:40 PM, dormando dorma...@rydia.net wrote:
reclaims are good, evictions are bad
On Thu, 4 Nov 2010, Kate Wang wrote:
We are experiencing high reclaims instead of evictions. Could slab
distribution shift cause that as well?
If the slab distribution shifted could