, I would not care about consistency, right?
What do you mean by 'between clusters'? In a typical setup where
you have multiple servers, there is still only one copy of any value
stored, so as far as single key/value pairs go, it has to be consistent.
--
Les Mikesell
lesmikes...@gmail.com
the storage over the remaining
servers.
--
Les Mikesell
lesmikes...@gmail.com
--
---
You received this message because you are subscribed to the Google Groups
memcached group.
To unsubscribe from this group and stop receiving emails from it, send an email
to memcached+unsubscr
if one goes offline.
--
Les Mikesell
lesmikes...@gmail.com
that can be reused many times without bothering the
backend database.
--
Les Mikesell
lesmikes...@gmail.com
seems like repcached adds overhead.
It isn't necessary in the general case - which is probably why it is a
separate project. It might help if you have a small number of nodes or
a database that can't handle a flurry of cache misses.
--
Les Mikesell
lesmikes...@gmail.com
--
Les Mikesell
lesmikes...@gmail.com
?
--
Les Mikesell
lesmikes...@gmail.com
it with client code. If you have a large enough number
of servers, losing one will just add a small percentage of extra load
on your backend DB to cover the extra cache misses.
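The "small percentage of extra load" falls out of how client-side consistent hashing remaps keys. A toy sketch (hypothetical server names, a simplified ring; real clients such as libmemcached's ketama mode do this with more care):

```python
import hashlib
from bisect import bisect

def h(s):
    # Stable 32-bit hash (Python's built-in hash() is randomized per process).
    return int(hashlib.md5(s.encode()).hexdigest()[:8], 16)

class Ring:
    """Toy consistent-hash ring: each server owns many points on a circle."""
    def __init__(self, servers, points=100):
        self.ring = sorted((h(f"{s}#{i}"), s) for s in servers for i in range(points))
        self.hashes = [p[0] for p in self.ring]

    def node(self, key):
        # A key belongs to the first server point at or after its hash.
        i = bisect(self.hashes, h(key)) % len(self.ring)
        return self.ring[i][1]

servers = ["cache1", "cache2", "cache3", "cache4"]
keys = [f"key-{n}" for n in range(2000)]

before = Ring(servers)
after = Ring(servers[:-1])          # cache4 goes down
moved = sum(before.node(k) != after.node(k) for k in keys)
print(f"remapped after losing 1 of 4: {moved / len(keys):.0%}")  # roughly a quarter
```

Only the keys the dead server owned become misses; the rest stay put, so the backend DB absorbs roughly 1/N extra load rather than a full cache flush.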
--
Les Mikesell
lesmikes...@gmail.com
failure of a node is for the client to fetch a fresh copy of the missed
data from the backing storage.
--
Les Mikesell
lesmikes...@gmail.com
online.
Can you point me to a document or wiki link that gives more information on
how to set up a memcached cluster?
The server side is packaged for some Linux distributions. You just
configure the amount of memory for it to use on each node.
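For example, a typical per-node invocation is just a few flags (hostnames and the user are illustrative, not prescriptive):

```shell
# Start a memcached instance with 1 GB of cache memory on the default port.
# -d: daemonize, -m: memory limit in MB, -p: TCP port, -u: user to run as.
memcached -d -m 1024 -p 11211 -u memcached

# The "cluster" lives entirely in the clients, which are configured with
# the full node list, e.g.:
#   cache1.example.com:11211, cache2.example.com:11211, ...
```

There is no server-to-server configuration at all; the nodes never talk to each other.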
--
Les Mikesell
lesmikes...@gmail.com
there's something going on where memcached isn't closing
connections.
That sounds like your client is opening persistent connections but not
reusing them.
--
Les Mikesell
lesmikes...@gmail.com
the data when it is not already
current in the cache. There may be some clients embedded in database
or database-like packages, but in general your client can use any
persistent data store along with memcache.
--
Les Mikesell
lesmikes...@gmail.com
?
--
Les Mikesell
lesmikes...@gmail.com
people on this list would
know.
--
Les Mikesell
lesmikes...@gmail.com
are distributed over the cluster nodes, and if data is not
found in the cache it is up to the client to pull a new copy from the
persistent data store and refresh it in the cache.
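That read path is the classic cache-aside pattern. A minimal sketch, with a dict standing in for a real memcached client and `load_from_db` as a hypothetical backing-store call:

```python
cache = {}  # stands in for a real memcached client

def load_from_db(key):
    # Hypothetical persistent-store lookup; here it just derives a value.
    return f"value-for-{key}"

def get(key):
    value = cache.get(key)
    if value is None:                 # miss: expired, evicted, or node down
        value = load_from_db(key)     # fall back to the persistent store
        cache[key] = value            # refresh the cache for later readers
    return value

print(get("user:42"))   # miss: hits the "DB", then caches
print(get("user:42"))   # hit: served from the cache
```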
--
Les Mikesell
lesmikes...@gmail.com
for how the client handles a server failure:
http://code.google.com/p/xmemcached/wiki/FailureMode_StandbyNode
Failure doesn't mean 'key doesn't exist', though, it means 'server
connection fails'.
--
Les Mikesell
lesmikes...@gmail.com
On Wed, Oct 17, 2012 at 1:56 PM, Kiran Kumar krn1...@gmail.com wrote:
Les Mikesell, thanks for the link, but unfortunately that is nowhere
related to my question above.
Anyway, once again: what I was asking is that, as there is some delay in data
replication, will the Memcache Client
?
--
Les Mikesell
lesmikes...@gmail.com
of
requests always hitting the data store directly when a cache server is
down, but avoids the chance of inconsistency if the clients notice the
outage/recovery at slightly different times.
--
Les Mikesell
lesmikes...@gmail.com
them from the backend data store.
They seem to be talking about running another memcached instance on
the same server but a different port, but that doesn't make any
difference to the client.
--
Les Mikesell
lesmikes...@gmail.com
it will take care of itself.
--
Les Mikesell
lesmikes...@gmail.com
point of failure in the sense that the hash
re-balancing continues to provide the clients a place to cache freshly
obtained data.
--
Les Mikesell
lesmikes...@gmail.com
something that
looks like it will work with the same clients but attempts to
provide reliable storage). There have been other similar attempts,
but I haven't followed their current status.
--
Les Mikesell
lesmikes...@gmail.com
to
provide reliable storage instead of just using something designed to
be a cache even with multiple instances.
--
Les Mikesell
lesmikes...@gmail.com
of
memcache would be to have enough members in a single cluster that the
backend database can survive if you temporarily lose one or two of the
cache members and some percentage of queries hit it directly.
--
Les Mikesell
lesmikes...@gmail.com
your two
applications wouldn't work the way you want in this configuration or
why you would want separate independent servers for each application.
Being able to distribute the cache over multiple servers is the main
reason people would use memcache.
--
Les Mikesell
lesmikes...@gmail.com
limited by the number of
servers you want to throw into the pool.
--
Les Mikesell
lesmikes...@gmail.com
On Mon, Jul 23, 2012 at 9:19 PM, Evan Buswell
evan.busw...@accellion.com wrote:
Lots of examples in the docs. But yeah; maybe I should add a quick
complete example to the front page? I'll do this soon.
Did I miss where you describe the language(s) it supports?
--
Les Mikesell
problem for you?
--
Les Mikesell
lesmikes...@gmail.com
, secondary indexes, etc.
--
Les Mikesell
lesmikes...@gmail.com
it? I
suppose you want it to be reliable too.
--
Les Mikesell
lesmikes...@gmail.com
expire time on everything since it is being
evicted anyway, then write back anything you are actively reusing to
bump up the time to live? That way less active data gets out of the
way sooner with no extra work.
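The suggested scheme (short default expiry, then extend the lifetime of anything actively reused, which is what memcached's `touch` command does) can be sketched with a toy cache that uses an explicit clock instead of real time:

```python
class TTLCache:
    """Toy cache with an explicit clock, illustrating short default TTLs
    plus a touch() that extends the lifetime of actively reused items."""
    def __init__(self):
        self.now = 0
        self.store = {}          # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self.store[key] = (value, self.now + ttl)

    def get(self, key):
        item = self.store.get(key)
        if item is None or item[1] <= self.now:
            self.store.pop(key, None)    # expired: drop it
            return None
        return item[0]

    def touch(self, key, ttl):
        # Bump the time-to-live without rewriting the value.
        if self.get(key) is not None:
            self.store[key] = (self.store[key][0], self.now + ttl)

c = TTLCache()
c.set("hot", 1, ttl=10)
c.set("cold", 2, ttl=10)
c.now = 8
c.touch("hot", ttl=10)              # actively reused: extend
c.now = 12
print(c.get("hot"), c.get("cold"))  # 1 None -- idle data ages out first
```

As the post says, the less active data then gets out of the way on its own, with no extra eviction logic.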
--
Les Mikesell
lesmikes...@gmail.com
down the
entire application
If you have one memcached server and it goes down, you lose 100% of
your caching.
--
Les Mikesell
lesmikes...@gmail.com
with your backend persistent storage before
you get to that point - especially if you expect to recover from any
major failure that dumps most of your cache at once.
--
Les Mikesell
lesmikes...@gmail.com
independently there as well.
--
Les Mikesell
lesmikes...@gmail.com
-- connection or otherwise --
on the client side to, say, warrant a direct fetch from the database?
You will likely have to worry about the persistent backend database
scaling long before that point.
--
Les Mikesell
lesmikes...@gmail.com
On Sat, Nov 26, 2011 at 3:19 PM, Arjen van der Meijden a...@tweakers.net
wrote:
On 26-11-2011 19:28 Les Mikesell wrote:
In the first one you may end up with 16 different TCP/IP connections per
client. Obviously, connection pooling and proxies can alleviate some of
that
overhead. Still
instances over many machines so that restarting one (or
a failure) just invalidates a small portion of the cache that your
persistent data store can easily handle. If you want persistence,
there are probably better tools.
--
Les Mikesell
lesmikes...@gmail.com
persistent storage or way to generate the data and you want to
avoid the overhead of making a query for repeated requests. If you
are going to use something that also provides the persistent storage
you need to consider whether you need more than key/value operations.
--
Les Mikesell
lesmikes
copies of data on other servers until it expires.
--
Les Mikesell
lesmikes...@gmail.com
has updated 2) periodically uses
the memcache data to perform a task.
Would be very interested in your input.
That's not what memcache does. The value is only sent to and stored
on one node, determined by hashing the key.
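A sketch of that placement rule, with hypothetical server names and a deliberately naive modulo hash (real clients use consistent hashing, but the point is the same: each key lives on exactly one node):

```python
import zlib

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def node_for(key):
    # Hash the key, pick one server from the shared list.  Every client
    # with the same list computes the same answer, so there is only ever
    # one copy of the value in the whole pool.
    return servers[zlib.crc32(key.encode()) % len(servers)]

print(node_for("session:alice"))   # always the same single node
```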
--
Les Mikesell
lesmikes...@gmail.com
clients are configured with the same list of servers in
the same order they will find a single copy.
--
Les Mikesell
lesmikes...@gmail.com
are running through a proxy it will repeat the DNS
lookup frequently and switch addresses if you have alternatives.
--
Les Mikesell
lesmikes...@gmail.com
the
pool at a single data center (the L in LTM meaning local...) where the
servers probably all have access to the same memcache service.
--
Les Mikesell
lesmikes...@gmail.com
would say a key exists but when
you try to retrieve it, it doesn't (expired, evicted, deleted by a
concurrent operation, etc.)?
--
Les Mikesell
lesmikes...@gmail.com
with it the underlying value
for the key might have already been changed or removed. It just doesn't
seem to mesh with the operations that work in a cache.
--
Les Mikesell
lesmikes...@gmail.com
of what it just changed. The real problem is the speed at which the replication
propagates, so having the DB push the cache update at each location probably
can't fix it.
--
Les Mikesell
lesmikes...@gmail.com
servers, some don't.
Either way, the next attempt to use a failed server should detect it is
down and retrieve the data from the backing DB.
--
Les Mikesell
lesmikes...@gmail.com
is a cache, not a key-value store.
--
Les Mikesell
lesmikes...@gmail.com
and my server have a lot of memory
Yes, it is very good for holding values for quick access that you are
able to retrieve in some slower way.
--
Les Mikesell
lesmikes...@gmail.com
PIC.
Or if you are using it like a database, membase might be a better fit
without making much difference on the client side. If you have another
way of loading the data you could remove the part that handles cache
misses from the client.
--
Les Mikesell
lesmikes...@gmail.com
:/
If you really need atomic operations, maybe redis would be better.
--
Les Mikesell
lesmikes...@gmail.com
of a library to make http client requests? A
lot of the servers provide a rest interface.
--
Les Mikesell
lesmikes...@gmail.com
there and just speak http from the client?
--
Les Mikesell
lesmikes...@gmail.com
implements the lock but also does
most or all of what the client would have done while holding the lock?
--
Les Mikesell
lesmikes...@gmail.com
On 10/17/10 6:07 AM, Tobias wrote:
Is it ever possible that your compute takes longer than your timeout?
no, the return value of memcache.delete(lock + x) is true.
But wouldn't that also be true if another process found the expired lock and set
a new one?
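That race is exactly why expiring locks usually store a unique owner token, so a client can only release a lock it still holds. A sketch with a dict standing in for memcached (with the real server, the check-then-delete in `release` would need `gets`/`cas` to be atomic):

```python
import uuid

locks = {}   # stands in for memcached; the value is the owner's token

def acquire(name):
    # Like memcached `add`: succeeds only if the key is absent.
    if name in locks:
        return None
    token = uuid.uuid4().hex
    locks[name] = token
    return token

def release(name, token):
    # Only delete the lock if we still own it; otherwise another process
    # has re-acquired it after our lock expired.
    if locks.get(name) == token:
        del locks[name]
        return True
    return False

t1 = acquire("lock:x")
assert acquire("lock:x") is None             # second caller is blocked
locks.pop("lock:x"); t2 = acquire("lock:x")  # simulate expiry + re-acquire
print(release("lock:x", t1))                 # False: not ours any more
print(release("lock:x", t2))                 # True
```

With a bare `delete(lock + x)` returning true either way, the first process can't tell whose lock it just removed.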
--
Les Mikesell
lesmikes
or removed?
(Other than the different client interface...).
--
Les Mikesell
lesmikes...@gmail.com
. You should generally assume that the disk head is going to be halfway
across the disk from the data you want and add up the seek time it will take to
get there. On the other hand, using real memory across several machines is very
fast.
--
Les Mikesell
lesmikes...@gmail.com
(for any reason, including expirations) and the point of the
cache is just to not overwhelm it with thousands of requests for the
same thing.
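One common way to keep those thousands of identical requests off the backend is dog-pile protection: on a miss, whoever wins an `add` on a short-lived sentinel key recomputes, and everyone else backs off. A single-process sketch (a dict stands in for memcached, `expensive_query` is a hypothetical backend call):

```python
cache = {}
db_hits = 0

def expensive_query(key):
    global db_hits
    db_hits += 1
    return f"result-for-{key}"

def get(key):
    if key in cache:
        return cache[key]
    sentinel = f"recompute:{key}"
    if sentinel not in cache:            # like memcached `add` with a short TTL
        cache[sentinel] = True           # we won: we do the one recompute
        cache[key] = expensive_query(key)
        del cache[sentinel]
        return cache[key]
    return None                          # lost the race: retry or serve stale

print(get("report"), db_hits)   # first call computes once
print(get("report"), db_hits)   # second call is served from the cache
```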
--
Les Mikesell
lesmikes...@gmail.com
] part without
re-creating all of the logic in the client library anyway.
--
Les Mikesell
lesmikes...@gmail.com
On Wed, Jul 28, 2010 at 12:03 PM, Les Mikesell lesmikes...@gmail.com wrote:
On 7/28/2010 10:16 AM, jsm wrote:
Gavin,
You are right about the overhead and also saw that APIs exist for
most of the languages as well.
I thought REST API
it handles the spikes that would appear from restarts and
value rollovers.
--
Les Mikesell
lesmikes...@gmail.com
is
closely coupled to the rest of the logic. I think OpenNMS might do
it with values it can pick up with http requests but I'm not sure
how well it handles the spikes that would appear from restarts and
value rollovers.
--
Les Mikesell
lesmikes...@gmail.com
.
--
Les Mikesell
lesmikes...@gmail.com
should use multiple NICs on the servers and spread
the clients over different networks?
--
Les Mikesell
lesmikes...@gmail.com
a duplex mismatch at a switch port.
--
Les Mikesell
lesmikes...@gmail.com
of failure is to
have intermediary clients, which can do that.
Now I think we can call it that way.
A cache server failure shouldn't have any visible effect other than making the
source servers work harder while the data it held is refreshed onto the remapped
servers.
--
Les Mikesell
.
Is my understanding correct? If it is correct, we should remove
'Distributed' from the above definition.
The data is distributed - but the servers don't need to know anything about
that. Doesn't that still make it a distributed system?
--
Les Mikesell
lesmikes...@gmail.com
On 5/19/2010 1:46 PM, Sun-N-Fun wrote:
Apache Traffic Server looks good! Has commands for deleting a
specific object from the cache.
I hadn't been paying attention. Is that released and ready for prime
time now?
--
Les Mikesell
lesmikes...@gmail.com
that provide replication across
server instances?
--
Les Mikesell
lesmikes...@gmail.com
help things a bit by
splitting pages into iframe/image components that do/don't need sessions, and
you can make the client do more of the work by sending back values in cookies
instead of just the session key, but I'm not sure how far you can go.
--
Les Mikesell
lesmikes...@gmail.com
percentage should not double when you add another server.
--
Les Mikesell
lesmikes...@gmail.com
find free space instead of
evicting something even though space is available elsewhere.
--
Les Mikesell
lesmikes...@gmail.com
had trouble with that long ago with a BerkeleyDB version that I think was
eventually fixed. As things work now, if the new storage has to move to a
larger block, is the old space immediately freed?
--
Les Mikesell
lesmikes...@gmail.com
at all? Perhaps you can
point me to resources providing more details on this?
'Enough' memory may not be what you expect unless you understand how
your data fits in the allocated slabs. And I'm not sure what happens if
the keys have hash collisions.
--
Les Mikesell
lesmikes...@gmail.com
it can operate if you
don't use something like memcache in front of it.
--
Les Mikesell
lesmikes...@gmail.com
of how
much data you throw at it?
--
Les Mikesell
lesmikes...@gmail.com
how
many potential keys might be in use at once (constructing them from
arbitrary sql queries, etc.). Or to do anything to remove data. You
really can't iterate over it to see what needs to be removed - and where
else would you store the keys so you'd know about them?
--
Les Mikesell
The link is actually to a BSD-ish 'retain the copyright notice' license, with
the GPL permitted as an alternative.
Trond Norbye wrote:
Then I guess the answer is no, because memcached is BSD...
Trond
Sent from my iPhone
On 20. feb. 2010, at 16.23, Ryan Chan ryanchan...@gmail.com wrote:
a bit more frequently. If you
are counting on everything having a time-synchronized view of exactly the same
data, you probably shouldn't be using a distributed cache.
--
Les Mikesell
lesmikes...@gmail.com
.
Since these files are large, memcached probably isn't the best bet for
this.
You could also redirect the client to the proxy/cache after computing
the filename, but that exposes the name in a way that might be reusable.
--
Les Mikesell
lesmikes...@gmail.com
buffers without memcache, then allocate a portion
to memcache, neither one would have enough space and both would have to
continuously reload the data as it is evicted.
--
Les Mikesell
lesmikes...@gmail.com
reduced filesystem buffers
and end up making them both thrash.
--
Les Mikesell
lesmikes...@gmail.com
. It would be a little handier for this style of
control if memcached had a command line option like '-o logfile' so it
could be controlled in the options the script sources from
/etc/sysconfig/memcached and expands on the command line.
--
Les Mikesell
lesmikes...@gmail.com
the cache as these items
are generated the best way to avoid this - or does it really just show
up in automated testing?
--
Les Mikesell
lesmikes...@gmail.com
Clint Webb wrote:
On Sun, May 3, 2009 at 8:58 AM, Les Mikesell lesmikes...@gmail.com wrote:
Clint Webb wrote:
Rather than using memcached as a global site cache (which it is not really
designed to be), you might have more success actually using it the way it
was intended.
Which means
of an update
count somewhere that you use in a key prefix so when a new post happens
you just stop using the old copies and they'll age out naturally.
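That version-prefix trick is a standard way to "invalidate" a whole namespace in memcached without iterating keys. A sketch (key names are illustrative, a dict stands in for the cache):

```python
cache = {"thread:123:ver": 1}   # stands in for memcached

def page_key(thread_id):
    # Build the real key under the current version of this thread.
    ver = cache.get(f"thread:{thread_id}:ver", 1)
    return f"thread:{thread_id}:v{ver}:page"

cache[page_key(123)] = "<rendered page>"

# A new post arrives: bump the version instead of deleting cached pages.
cache["thread:123:ver"] += 1

print(cache.get(page_key(123)))   # None - old copies are simply never read again
```

The stale entries are never deleted; they just stop being referenced and age out of the LRU naturally, exactly as the post describes.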
--
Les Mikesell
lesmikes...@gmail.com
addresses of any that fail, or do you just make them all active and let
the client re-balancing take care of any problems?
--
Les Mikesell
lesmikes...@gmail.com
of the
negotiation time would be me trying to talk you out of it. :)
If you put this in the server, don't you set up conditions for:
A) all clients trigger the wait and thus deadlock
and/or
B) some number of clients run the server out of resources
??
--
Les Mikesell
lesmikes...@gmail.com
gf wrote:
How does the updater distinguish itself from the rest?
acquire() (atomic add).
So a whole bunch of clients try to add some sort of key that you hope
are identical so all but one fail, the one that succeeds is supposed to
do some more work? What if its next step fails?
--
Les
bothering the underlying backend database.
--
Les Mikesell
lesmikes...@gmail.com