Ok, I just checked in a project that contains some useful code for
handling Terracotta events. See a description of how to use it here:
http://www.terracotta.org/confluence/display/labs/Terracotta+Util
Basically, you implement a set of abstract methods from the
SimpleListener class (an adapter for ClusterEvents.Listener):
public synchronized void setMyNodeId(Object nodeId);
public void initialClusterMembers(Object[] nodeIds);
public void nodeConnected(Object nodeId);
public void nodeDisconnected(Object nodeId);
Hopefully these are self-explanatory, except initialClusterMembers.
That method receives the set of nodeIds currently connected to the
cluster. In it you want to call something like a
"purgeAllBut(Object[] ids)" method, which iterates over all of the
nodes believed to be in the cluster and, for any node that does not
appear in the list, runs the nodeRemoved logic. Since a node that
rejoins gets a new id, you can always safely purge any node that does
not match this list (in other words, there are no race conditions).
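A minimal sketch of that purge logic, using a plain Set as a stand-in for whatever registry of previously seen node ids your listener keeps (the class and method names here are hypothetical, not Terracotta APIs):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PurgeSketch {

    // Remove from the local registry every node that is not in the
    // authoritative list passed to initialClusterMembers(). Returns
    // the purged ids so nodeRemoved logic can be run for each one.
    public static Set<Object> purgeAllBut(Set<Object> knownNodes,
                                          Object[] currentIds) {
        Set<Object> current = new HashSet<>(Arrays.asList(currentIds));
        Set<Object> purged = new HashSet<>();
        for (Object id : knownNodes) {
            if (!current.contains(id)) {
                purged.add(id);           // node vanished while we were away
            }
        }
        // Safe because a rejoining node always gets a fresh id.
        knownNodes.removeAll(purged);
        return purged;
    }

    public static void main(String[] args) {
        Set<Object> known =
            new HashSet<>(Arrays.asList("node-1", "node-2", "node-3"));
        Set<Object> purged = purgeAllBut(known, new Object[] { "node-2" });
        System.out.println("purged=" + purged + " remaining=" + known);
    }
}
```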
Take a look at my WorkManager project:
http://www.terracotta.org/confluence/display/labs/WorkManager
In particular, look at org.terracotta.workmanager.impl.AbstractNode for
an example usage. It does essentially what you describe: the
WorkManager relies on a "NodeManager" (and a PipeManager) that holds a
registry of connected nodes, and the NodeManager implements the
added/removed/purgeAllBut methods.
Taylor Gautier wrote:
I disagree that a shutdown hook would be the right place to put that
code - validation at node startup is the right place for it.
If a shutdown hook were the only way application exits from the
cluster were handled, then immediate termination (e.g. kill -9, power
loss, or network failure) would mean your shutdown hook fails to
execute (or, in a network failure scenario, it may run but cannot have
any cluster effect, since by definition the node is no longer
connected).
On Aug 28, 2007, at 12:28 PM, "Prasad Bopardikar" <[EMAIL PROTECTED]>
wrote:
I have a TC-shared collection object (say a registry) where every
clustered app (I mean the ones that are sharing this registry)
registers & unregisters itself. I can have my app gracefully
shutdown where it will unregister itself but in case of an abrupt
shutdown, I thought it would have been cool to have a shutdown hook
that would do the unregistration.
Instead, I have had to write a mechanism where, at the startup of an
app, while it registers itself, it also validates the other registered
nodes and cleans up the registry. This could have been avoided if my
shutdown hook worked.
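For reference, the shutdown-hook pattern being described looks like the plain-JVM sketch below (the Registry here is an in-process stand-in for the TC-shared collection, and the class/method names are hypothetical). Note the caveat from the rest of the thread: the hook only runs on a graceful exit, so kill -9 or power loss skips it, and startup validation is still needed:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ShutdownHookSketch {
    // Stand-in for the TC-shared registry collection.
    private static final Set<String> registry =
            Collections.synchronizedSet(new HashSet<>());

    public static void register(final String nodeId) {
        registry.add(nodeId);
        // Unregister on graceful JVM exit; never runs on kill -9,
        // power failure, or network partition.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override public void run() {
                registry.remove(nodeId);
            }
        });
    }

    // Helper so callers (and tests) can inspect the registry.
    public static Set<String> snapshot() {
        return new HashSet<>(registry);
    }

    public static void main(String[] args) {
        register("node-42");
        System.out.println("registered: " + snapshot());
        // On normal exit the hook removes "node-42" from the registry.
    }
}
```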
Thanks
Prasad
"Steven Harris" <[EMAIL PROTECTED]> 8/28/2007 1:58 PM >>>
In order for Terracotta to make sure that all actions taken in the
process before exiting are fully sent out, it has its own shutdown hook
for flushing. I think we might be able to make sure our shutdown hook
runs last and allow people to do things in their own shutdown hooks.
That said, I'm curious why you need a shutdown hook in a Terracotta
world?
Cheers,
Steve
On Aug 28, 2007, at 11:52 AM, Prasad Bopardikar wrote:
Why does Terracotta not allow a shutdown hook thread to update a
shared object?
_______________________________________________
tc-dev mailing list
[email protected]
http://lists.terracotta.org/mailman/listinfo/tc-dev