Hey Toby, yes, that is exactly what Stick tries to do: by storing data in the datastore, your data can survive memcache restarts while keeping the speed of in-memory access for some data.
I have always hated with a passion the APIs of JCache and EHCache.
On 22 Jun 2010, at 14:43, Toby wrote:
Hello John,
Thank you for your message. I was looking at your project and indeed
it does pretty much what I was looking for.
I think memcache is good for basic caching problems, but it could be made more efficient if it kept the most-used data in local machine memory to cut the network overhead. The problem is that, from the application side, our local data gets lost when the application is cycled out. Google could provide a solution that survives this time-out and uses resources more efficiently.
Other discussions about ehcache show that there is a need for something more sophisticated. Maybe something for the roadmap?
Cheers,
Toby
On Jun 18, 10:33 am, John Patterson <[email protected]> wrote:
Hi Toby,
I made some code public that does what you describe: it is a simple cache interface with implementations for in-memory, memcache, and the datastore. You get about 100MB of heap space to use, which can significantly speed up your caching.
There is also a CompositeCache class that allows you to layer the caches so that it first checks in-memory, then memcache, then the datastore. Puts go to all levels, and cache hits refresh the higher levels; e.g. if an item is not in-memory and has been flushed from memcache but is still present in the datastore, then the other two will be updated.
http://code.google.com/p/stick-cache/
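The layering described above (check the fastest layer first, refresh higher levels on a hit, write puts to every level) can be sketched roughly as follows. The interface and class names here are illustrative only, not Stick's actual API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal cache abstraction; names are illustrative, not Stick's real API.
interface Cache {
    Object get(String key);
    void put(String key, Object value);
}

// Simple map-backed layer; stands in for in-memory, memcache, or datastore.
class MapCache implements Cache {
    private final Map<String, Object> map = new HashMap<>();
    public Object get(String key) { return map.get(key); }
    public void put(String key, Object value) { map.put(key, value); }
}

// Checks each layer in order (fastest first); a hit refreshes all faster
// layers above it, and puts go to every level.
class CompositeCache implements Cache {
    private final List<Cache> layers;
    CompositeCache(List<Cache> layers) { this.layers = layers; }

    public Object get(String key) {
        for (int i = 0; i < layers.size(); i++) {
            Object value = layers.get(i).get(key);
            if (value != null) {
                for (int j = 0; j < i; j++) {
                    layers.get(j).put(key, value); // refresh higher levels
                }
                return value;
            }
        }
        return null; // miss at every level
    }

    public void put(String key, Object value) {
        for (Cache layer : layers) { // puts go to all levels
            layer.put(key, value);
        }
    }
}
```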
Hope this helps,
John
On 18 Jun 2010, at 15:08, Toby wrote:
Hello Ikai,
Sorry, I might have worded my post incorrectly. I do not doubt memcache; in fact I use it very heavily and it is great.
At Google I/O I learned that memcache is actually hosted on another machine and that each request involves significant network overhead. In AppStats I see about 20ms for a single request; simple datastore requests are also between 20-50ms. So in fact memcache is good as a second-level cache, but it becomes a bottleneck for data that I use heavily over and over again (plus it costs valuable API time).
So the idea was to have some "first level" in-memory cache living on the same machine, holding heavily used data, to avoid the server round trip to the cache machine. Now you might tell me that my application is designed wrong, and indeed I could just put this in myself. But as you also pointed out, there are tricky parts like memory boundaries and other things to take care of. This is why I started this thread: to see if someone has come up with a good solution.
I think multiple cache layers are a standard approach that has shown its usefulness in many places. It would be good to have that as part of GAE. Of course this is not the most urgent issue.
Cheers,
Toby
On Jun 17, 6:46 pm, "Ikai L (Google)" <[email protected]> wrote:
What aspect of Memcache is too slow? Have you run AppStats yet?
The overhead of Memcache is low enough for many of the top sites on the internet to use it. Some sites are listed on the main page here:
http://memcached.org/
As you move closer and closer to local memory, the volatility of your cache will increase, so the only items I would store in local memory are items that are okay to lose. If you want, you can probably layer your application to fetch from memcache -> fetch from authoritative source and place into local memory on a cache miss. Just be aware that there are process memory limits, and exceeding these will force a restart.
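Because of those process memory limits, a local layer is usually size-bounded. One common sketch (not GAE-specific) uses `LinkedHashMap` in access order as a small LRU cache, with a capacity chosen to fit comfortably in the instance's heap; the class name and bound are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded local cache: LinkedHashMap with accessOrder = true gives
// LRU iteration order, so the in-process layer cannot grow without bound.
class LocalLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LocalLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true -> LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry
    }
}
```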
On Thu, Jun 17, 2010 at 2:30 AM, Toby <[email protected]> wrote:
Hello,
I wonder if there is a framework (such as Objectify) also for memcache. As memcache is not on the local machine it is rather slow, especially for recurring requests. So at Google I/O they suggested building your own in-memory layer around it. I know that is an easy task, but still I wonder if there might already be a framework for that :-)
Also, I wonder if someone can give me some ideas about how to build an in-memory cache. I guess it is just a static hashmap. But will it survive multiple requests? How much can I put in there?
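For what it's worth, a static map does survive across requests as long as the same JVM instance serves them; it is lost whenever the instance is recycled, so it is best-effort only. A minimal thread-safe sketch (the class name is hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// A static map lives for the lifetime of the JVM instance and is shared by
// all requests that instance serves. It is lost when the instance is
// recycled, so treat it as a best-effort cache, never authoritative storage.
public final class InstanceCache {
    private static final ConcurrentMap<String, Object> CACHE =
            new ConcurrentHashMap<>();

    private InstanceCache() {}

    public static void put(String key, Object value) {
        CACHE.put(key, value);
    }

    public static Object get(String key) {
        return CACHE.get(key); // null on a miss
    }
}
```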
As the problem of memcache is apparently the high latency of the network traffic to the server, I had the idea to store the in-memory cache in memcache, de-serialize it and then use it?
Do you have other ideas how to speed up caching?
Thank you for your advice,
Toby
--
You received this message because you are subscribed to the Google Groups "Google App Engine for Java" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/google-appengine-java?hl=en.
--
Ikai Lan
Developer Programs Engineer, Google App Engine
Blog: http://googleappengine.blogspot.com
Twitter: http://twitter.com/app_engine
Reddit: http://www.reddit.com/r/appengine