On 12/02/2009, at 6:08 AM, Chance Yeoman wrote:


On Feb 11, 2009, at 9:32 AM, Chance Yeoman wrote:


Thank you for the information,

My understanding of the plugin-based farming is incomplete as well.
From the Geronimo 2.1 documentation on plugin-based farming, it did
not seem that node server configuration was checked and updated on
startup.  Is this new to Geronimo 2.2 farming?

I think the plugin based farming is only in 2.2.  IIRC the deployment
based farming in 2.1 relies on pushing stuff.


As an alternative to multicast, our configuration could handle node
discovery through a configured admin server, using a virtual IP that
routes to a master server.  Unlike multicast, this approach would
require configuration external to Geronimo, such as virtual IP
routing or DNS management, to remain flexible.  While multicast would
be the better choice for most configurations, would it be plausible
to include admin server hostname/IP based node discovery as a
configuration option, with multicast as the default?

That sounds reasonable to me, although I don't know how virtual IP
works.

Since we're talking about a new feature it would be better to move the
discussion to the dev list, and if you would open a jira that would
also help.

Depending on your requirements I can think of a couple of possible
strategies:

1. when a node starts up it requests the plugin list from the admin
server.  In this case the admin server doesn't track the cluster
members, and if you update a plugin list the nodes won't know until
they restart.

2. when a node starts up it starts pinging the admin server.  The
admin server tracks cluster members similarly to how it does now with
multicast.  Changes to plugin lists will be propagated quickly to
running nodes.

I think (2) would be easier to implement as it just replaces the
multicast heartbeat with a more configured one.
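
To make (2) concrete, here is a minimal sketch of the admin server
side, assuming a plain periodic heartbeat; the class and method names
are made up for illustration and are not existing Geronimo code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical admin-server side of strategy 2: nodes ping
// periodically and are dropped when their heartbeat goes stale.
public class AdminServerNodeRegistry {

    private static final long EXPIRY_MILLIS = 30000L;  // assumed timeout

    // node id -> timestamp of last heartbeat
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<String, Long>();

    // Called whenever a node pings the admin server.
    public void heartbeat(String nodeId) {
        lastSeen.put(nodeId, System.currentTimeMillis());
    }

    // Called periodically to drop stale nodes, mirroring what the
    // multicast-based tracking does today.
    public void expireStaleNodes() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> entry : lastSeen.entrySet()) {
            if (now - entry.getValue() > EXPIRY_MILLIS) {
                lastSeen.remove(entry.getKey());
            }
        }
    }

    public boolean isMember(String nodeId) {
        return lastSeen.containsKey(nodeId);
    }
}

Changes to a plugin list could then be pushed to every node currently
tracked in lastSeen, which is what makes propagation quick compared
to (1).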

Would you be interested in contributing an implementation?

thanks
david jencks


Absolutely. I'm taking a peek at the MulticastDiscoveryAgent code to get an idea of how a new implementation would plug in.

I agree that strategy 2 would be a good place to start as the behavior of the server farm would be the same regardless of the node discovery mechanism.
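
On the node side, a rough sketch of what a configured (non-multicast)
agent might look like; the shape below and the /heartbeat path are
assumptions for illustration, not the actual interface that
MulticastDiscoveryAgent implements:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical node-side agent: instead of announcing itself over
// multicast, it pings a configured admin host on a fixed interval.
public class AdminHostDiscoveryAgent {

    private final String adminHost;  // would come from configuration
    private final int adminPort;
    private final String nodeId;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public AdminHostDiscoveryAgent(String adminHost, int adminPort, String nodeId) {
        this.adminHost = adminHost;
        this.adminPort = adminPort;
        this.nodeId = nodeId;
    }

    public void start() {
        // replaces the multicast heartbeat with a directed one
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    // "/heartbeat" is a made-up endpoint for this sketch
                    URL url = new URL("http://" + adminHost + ":" + adminPort
                            + "/heartbeat?node=" + nodeId);
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setConnectTimeout(5000);
                    conn.getResponseCode();
                    conn.disconnect();
                } catch (Exception e) {
                    // admin server unreachable; try again on the next tick
                }
            }
        }, 0, 10, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}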

One requirement that this would not address is the ability to rotate plugin deployments to cluster member nodes one at a time or in separate batches rather than all at once. This functionality would probably belong elsewhere though.
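
Purely to illustrate the rotation idea (none of this is existing
Geronimo code), the member list could be split into batches and the
plugin list applied one batch at a time:

import java.util.List;

// Illustrative only: roll a plugin-list change out to cluster
// members in batches instead of all at once.
public class RollingDeployer {

    public void deployInBatches(List<String> nodeIds, int batchSize) {
        for (int i = 0; i < nodeIds.size(); i += batchSize) {
            List<String> batch =
                    nodeIds.subList(i, Math.min(i + batchSize, nodeIds.size()));
            for (String nodeId : batch) {
                pushPluginList(nodeId);  // hypothetical per-node deployment call
            }
            // a real implementation would wait for the batch to report
            // healthy before moving on to the next one
        }
    }

    private void pushPluginList(String nodeId) {
        // placeholder for whatever mechanism pushes the plugin list
        System.out.println("deploying plugin list to " + nodeId);
    }
}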

I will open a jira about optionally configuring an admin host for node discovery and begin work on an implementation.



Hi,

FWIW, WADI provides an API to implement this kind of tracking and to trigger remote services. A ServiceSpace can be executed on each node:

http://wadi.codehaus.org/wadi-core/xref/org/codehaus/wadi/servicespace/ServiceSpace.html

When a node joins or stops, you can receive events by registering a ServiceSpaceListener:

http://wadi.codehaus.org/wadi-core/xref/org/codehaus/wadi/servicespace/ServiceSpaceListener.html
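
To show the kind of tracking this enables, here is a self-contained
sketch; the listener interface below only mirrors the role of WADI's
ServiceSpaceListener (linked above), it is not the WADI API itself:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of membership tracking driven by join/leave events.
public class MembershipTrackerSketch {

    // Hypothetical stand-in for the callbacks a ServiceSpaceListener
    // would receive; check the linked xref for the real signatures.
    public interface ClusterMembershipListener {
        void nodeJoined(String nodeName);
        void nodeLeft(String nodeName);
    }

    private final Set<String> members =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    // In WADI this would be wired up by registering a listener on the
    // ServiceSpace running on each node.
    public ClusterMembershipListener listener() {
        return new ClusterMembershipListener() {
            public void nodeJoined(String nodeName) {
                members.add(nodeName);
                // e.g. push the current plugin list to the new node
            }
            public void nodeLeft(String nodeName) {
                members.remove(nodeName);
            }
        };
    }

    public Set<String> currentMembers() {
        return members;
    }
}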

If you are interested in a specific remote service running on a node, e.g. a service tracking the plugins installed on the admin server, then you can also receive events throughout its lifecycle by registering a ServiceListener:

http://wadi.codehaus.org/wadi-core/xref/org/codehaus/wadi/servicespace/ServiceListener.html

To invoke a remote service, e.g. to retrieve all the plugins installed on the admin server, you can dynamically create a service proxy by using a ServiceProxyFactory (more info here: http://docs.codehaus.org/display/WADI/2.+Distributed+Services):

http://wadi.codehaus.org/wadi-core/xref/org/codehaus/wadi/servicespace/ServiceProxyFactory.html
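
As an illustration of the proxy-based invocation, the interface and
caller below are hypothetical stand-ins for a service registered in a
ServiceSpace and proxied through a ServiceProxyFactory; the real WADI
calls are in the xref and docs linked above:

import java.util.List;

// Hypothetical remote-service contract: a service on the admin server
// that reports which plugins are currently installed there.
public interface PluginTracker {
    List<String> listInstalledPlugins();
}

// In the WADI approach, 'adminServerProxy' would be a proxy created by
// a ServiceProxyFactory and bound to the admin node.
class PluginSync {
    public void syncFrom(PluginTracker adminServerProxy) {
        for (String plugin : adminServerProxy.listInstalledPlugins()) {
            // compare with locally installed plugins and install
            // whatever is missing
        }
    }
}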


This is the approach I used to track the location of specific stateful SessionBean types to support the fail-over of EJB clients. This is also the approach I used to implement the WADI admin console which can track where sessions are located and gather various statistics.

If you want to implement a node discovery feature from scratch, then I believe it would be great to capture the key contracts as part of the geronimo-clustering project (which already has support for tracking cluster members).
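
If the contracts do end up there, they might look something like the
following; these names are purely illustrative and not existing
geronimo-clustering interfaces:

// Illustrative shape for a node discovery contract that both a
// multicast and an admin-host based implementation could sit behind.
public interface NodeDiscovery {

    // Begin announcing this node and watching for other members.
    void start() throws Exception;

    // Stop announcing and release any resources.
    void stop();

    // Register for join/leave notifications.
    void addDiscoveryListener(DiscoveryListener listener);

    interface DiscoveryListener {
        void nodeAdded(String nodeName);
        void nodeRemoved(String nodeName);
    }
}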

Thanks,
Gianny


Thanks for your help,

Chance
