Hi Niall, thanks for the prompt reply! Also, I just realized I posted
to the wrong list; I'll make sure to post any future questions to the
users list.
We're using the lateral option with default Java serialization and only
strong references; there are no static fields storing state on our
cached objects, and no transient fields either. I'm currently hunting
through the application and looking into the possibility that we're
simply making changes to already-cached data, inadvertently clearing
collections (and thus effectively invalidating the objects) without
clearing the parent objects from JCS. Notably, this isn't happening
randomly for any old objects we're caching but only for this specific
collection of child objects, and the intervals between occurrences
appear to be random as well.
Also, I like your idea of a deep clone as a possible stop-gap
solution, as well as a way to ferret out the underlying problem. I
may give that a shot.
And I'll post any interesting updates/solutions here as I find them.
Thanks again!
Zac
On Mar 11, 2009, at 6:32 AM, Niall Gallagher wrote:
Hi Zac,
This is very unusual behaviour. Are you using the JCS lateral or remote
cache option, or are you maintaining independent caches on all servers
in the cluster?
JCS by default uses standard Java serialization to transfer objects
between JVMs; it doesn't look inside your objects or do anything special
to them as far as I know. Whether you're storing a simple object or one
to which a large graph of other objects is attached, Java serialization
should take care of it.
It sounds like a serialization/deserialization issue. Do any of your
objects:
- have 'transient' fields
- use weak or soft references
- implement custom readObject or writeObject methods for serialization
- store state in static fields
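To make the transient-field pitfall concrete, here's a minimal sketch (the Parent class is just a made-up example, not your actual cached type). A transient collection is silently dropped when the object is serialized, so it comes back null after a round trip through the lateral cache:

```java
import java.io.*;

// Hypothetical cached object: the 'children' list is marked transient,
// so Java serialization skips it entirely.
class Parent implements Serializable {
    private static final long serialVersionUID = 1L;
    String name = "parent";
    transient java.util.List<String> children = new java.util.ArrayList<>();
}

public class TransientDemo {
    // Serialize to a byte array and read it back, as JCS would when
    // shipping an object to another JVM.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T roundTrip(T obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Parent p = new Parent();
        p.children.add("child-1");
        Parent copy = roundTrip(p);
        // Field initializers don't run during deserialization, so the
        // transient collection comes back null, not as an empty list.
        System.out.println("children after round trip: " + copy.children);
    }
}
```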
You can check the items above if you're using JCS clustering or disk
cache options: lateral or remote cache, indexed disk cache, etc. These
are JCS options which require objects to leave the JVM, and so require
them to be serialized.
If you've instead configured *independent* JCS object caches on each
server in the cluster, then JCS would have no need to serialize your
objects in the first place. Without serialization JCS won't ever be
"copying" your objects. In this case, when you "put" an object into the
cache you're really just storing a reference to that object in the
cache. Usually your application should then drop any other references
it holds to that object.
If your application does not drop other references to objects once it
has put them into the cache, then your application code could continue
to modify objects already stored in the cache via another reference.
This would be a bad bug in the application: if your application held
additional references to objects, then when the cache evicted objects
they would not be garbage collected, defeating the purpose of the
cache. But more to the point, your application could inadvertently
continue to modify objects in the cache, causing the behaviour you are
seeing.
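A tiny sketch of the shared-reference bug, using a plain HashMap as a stand-in for the in-memory cache region (JCS's memory cache likewise stores only a reference to the object you "put"):

```java
import java.util.*;

public class SharedReferenceDemo {
    public static void main(String[] args) {
        // Stand-in for an in-memory cache region.
        Map<String, List<String>> cache = new HashMap<>();

        List<String> children = new ArrayList<>(List.of("child-1", "child-2"));
        cache.put("parent-42", children);   // no copy is made; only the reference is stored

        // Elsewhere in the application, the original reference is reused...
        children.clear();                   // ...inadvertently emptying the collection

        // The "cached" collection is now empty too: it's the same object.
        System.out.println(cache.get("parent-42")); // prints []
    }
}
```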
Actually, this last point could also apply to situations where you are
using JCS serialization to share objects between servers. It could
affect "originating" servers: servers which initially "put" objects
into the cache. Within the originating JVM, your application could
continue to modify objects in that JVM's cache. Other servers would see
an earlier snapshot of the objects, approximately from the point at
which they were put into the cache (replication is asynchronous).
To test whether your application might be holding references, as a
temporary measure you could try deep-cloning objects before putting
them into the cache. That way you'd be guaranteed that what you put
into the cache is what you get out of the cache in all cases:
http://weblogs.java.net/blog/emcmanus/archive/2007/04/cloning_java_ob.html
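The serialize-and-deserialize approach from the linked article looks roughly like this (a sketch; it assumes the whole object graph is Serializable, which yours already must be for the lateral cache to work):

```java
import java.io.*;

// Deep clone by round-tripping through Java serialization, so the
// cached copy shares no mutable state with the original object graph.
public final class DeepClone {
    private DeepClone() {}

    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepClone(T obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (T) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("deep clone failed", e);
        }
    }
}
```

You'd then put the clone rather than the original into the cache (e.g. `cache.put(key, DeepClone.deepClone(value))`), so any later mutation of the original can't touch the cached copy.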
Anyway above are just suggestions. It could of course be something
entirely different, but anyway I hope you get it sorted.
Kind regards,
Niall
On Tue, 2009-03-10 at 15:01 -0600, Zachary Bradshaw wrote:
Hi, I'm new to JCS dev and wanted to know if anyone else had
experienced this problem: we've got a cluster of production servers
running JCS in LRUMemory mode, which we're using to store fairly
dense, heavily nested complex objects. A strange thing is happening
whereby sometimes I will pull an object out of the cache and one of
its child collections is empty, and I was wondering if perhaps JCS was
dropping this data due to expiration or space management policies? It
seems strange that only part of a cached object would be released, but
I want to be certain one way or another (there are other possibilities
I'm looking into too).
Any feedback is appreciated!
Thanks,
Zac
---------------------------------------------------------------------
To unsubscribe, e-mail: jcs-dev-unsubscr...@jakarta.apache.org
For additional commands, e-mail: jcs-dev-h...@jakarta.apache.org