Hi Rick,
In OSCache I can find a #destroy() method. Is this call needed to force
OSCache to do a node cleanup? If so, the call would be needed in the
onDestroyContext of your webapp.
Thus you either need access to the application cache instance
(ObjectCacheOSCacheImpl) used by ObjectCacheTwoLevelImpl, or we should
provide a shutdown method.
Currently this is not possible, but we can fix it for the next upcoming
release. Internally OJB uses an ObjectCacheInternal interface, so we can
add a shutdown/destroy method there. In ObjectCacheOSCacheImpl this
method would call GeneralCacheAdministrator#destroy().
Also, in the next release OJB will provide a PBF#shutdown() method, so
OJB can internally clean up resources (and this will call shutdown on
all caches too).
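Roughly, the internal change could look like the sketch below
(hypothetical: neither the shutdown() method nor the
getCacheAdministrator() accessor exists in OJB 1.0.3 yet):

    // hypothetical addition to org.apache.ojb.broker.cache.ObjectCacheInternal:
    /** Release all resources held by this cache implementation. */
    void shutdown();

    // hypothetical implementation in ObjectCacheOSCacheImpl:
    public void shutdown()
    {
        // GeneralCacheAdministrator#destroy() shuts down OSCache and
        // closes the clustering (JGroups) channel
        getCacheAdministrator().destroy();
    }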
To check this, please test this dirty hack in your webapp's
#onDestroyContext:
- Add a public #getGeneralCacheAdministrator() method to
ObjectCacheOSCacheImpl.
- Since we use a static variable for the GeneralCacheAdministrator
instance, you can then do:

    (new ObjectCacheOSCacheImpl(null, null))
            .getGeneralCacheAdministrator().destroy();
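Wired into a context listener, the whole hack could look like the sketch
below (it assumes the getGeneralCacheAdministrator() accessor from the
first step, and uses the standard ServletContextListener#contextDestroyed
as the place for your onDestroyContext logic):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    import org.apache.ojb.broker.cache.ObjectCacheOSCacheImpl;

    public class CacheShutdownListener implements ServletContextListener
    {
        public void contextInitialized(ServletContextEvent evt)
        {
            // nothing to do on startup
        }

        public void contextDestroyed(ServletContextEvent evt)
        {
            // GeneralCacheAdministrator is held in a static field, so a
            // throwaway ObjectCacheOSCacheImpl gives access to the shared
            // instance. destroy() shuts down OSCache and closes its
            // clustering channel, so the node leaves the JGroups group
            // cleanly and the coordinator role can be handed over.
            new ObjectCacheOSCacheImpl(null, null)
                    .getGeneralCacheAdministrator().destroy();
        }
    }

Register it in web.xml with a <listener> element so Tomcat invokes it on
undeploy/shutdown.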
HTH
regards,
Armin
OJB Dev wrote:
Hi All,
I am using the OSCache clustered cache impl and it's working fine, with
one exception: if I restart/redeploy the instance of Tomcat that first
created the cluster (the JGroups "coordinator"), it and any other new
nodes can't join the cluster.
Versions: Tomcat 5.5.9, OJB 1.0.3, OSCache 2.1.1, JGroups 2.2.8.
OJB:
<object-cache class="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl">
  <attribute attribute-name="cacheExcludes" attribute-value=""/>
  <attribute attribute-name="applicationCache"
    attribute-value="org.apache.ojb.broker.cache.ObjectCacheOSCacheImpl"/>
  <attribute attribute-name="copyStrategy"
    attribute-value="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl$CopyStrategyImpl"/>
</object-cache>
OSCache: using default multicast config for JGroups
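For reference, that means the clustering part of my oscache.properties is
just the broadcasting listener entry, roughly as below (a sketch, please
check the class name against the OSCache 2.1.1 docs):

    # enable JGroups-based clustering; with no cache.cluster.* overrides
    # the listener falls back to its default multicast configuration
    cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener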
Reproduction procedure:
1. start webapp 1
- this instance becomes the JGroups "coordinator"
2. start webapp 2
3. check cluster: logs show both nodes are talking, cache notifications are
working.
4. restart webapp 2
- the OJB cache leaves and rejoins the cluster with no trouble
5. restart webapp 1
- the OJB cache leaves the cluster, but when it tries to rejoin, I get
the following warning, repeated over and over:
"WARN orj.jgroups.protocols.pbcast.ClientGmsImpl join(<my ip>:<my port>)
failed, retrying"
"WARN orj.jgroups.protocols.pbcast.ClientGmsImpl join(<my ip>:<my port>)
failed, retrying"
"WARN orj.jgroups.protocols.pbcast.ClientGmsImpl join(<my ip>:<my port>)
failed, retrying"
...
- the other node holds the "channel", but there is no "coordinator" to
accept the new node.
At this point, if I restart webapp 2, the cluster goes away, and a new one
is created and both nodes join and start talking.
Question: what am I missing so that I don't have to restart all of my
other nodes when the "coordinator" node is lost? I wrote to the JGroups
list and was told that I need to do something like an OSCache close()
when the webapp shuts down, but how do I make OJB do that? From what I
could understand from the guys in the JGroups forum, OSCache wasn't
cleaning up when it was closing down, so I figured I had better check:
does OJB have any way to notify OSCache when it's about to close? Is
there something I should be doing in the onDestroyContext of my webapp
to make a node clean up and leave the cluster properly? I have been
unable to find any OJB docs on the subject.
Anyway, I would love to hear from anyone else using the OSCache cluster
model who does or doesn't have this issue. Thanks for any suggestions.
Rick Gavin