> This leads me back to my proposal of generating a collective of servers
> (yeah, you're right, the BORG collective, because each server should know
> the whole configuration, allowing the collective to work even when a
> member dies or is temporarily unavailable, IN CONTRAST to the
> community of specialized servers like the master/slave model) where Jini
> builds up the collective and JMX is the communication channel.
I still think it is a bad idea to have every node in a cluster hold the
complete configuration information for every other node. This simply
will not scale.
If you have 10 nodes and change one bit of config on node0, then node1-9 will
have to sync up their configuration. There will probably have to be some
distributed tx going on here too, so that we are sure that no new nodes
start up and use a stale config.
With 10 nodes on a fast LAN and a minimal configuration data set this is
not really an issue, but once you start thinking about 50+ nodes distributed
across the net (some with really slow or poor quality links), then you have
a problem.
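To put rough numbers on the sync cost, here is a back-of-envelope sketch. It assumes a simple two-phase (prepare/commit) exchange with an ack per message per peer; the counts are illustrative, not measured, and the class and method names are made up:

```java
// Back-of-envelope message counts for pushing one config change,
// assuming a 2PC-style sync: prepare, prepare-ack, commit, commit-ack
// per participating peer. Illustrative only.
public class SyncCost {
    // Full replication: every other node in the cluster joins the tx.
    static int fullReplication(int nodes) {
        int peers = nodes - 1;
        return 4 * peers;
    }

    // Dedicated config subset: only the config servers must agree,
    // no matter how large the cluster grows.
    static int configSubset(int configServers) {
        int peers = configServers - 1;
        return 4 * peers;
    }

    public static void main(String[] args) {
        System.out.println(fullReplication(10)); // 36 messages per change
        System.out.println(fullReplication(50)); // 196 messages per change
        System.out.println(configSubset(5));     // 16, independent of cluster size
    }
}
```

The point is the growth curve: full replication scales linearly with cluster size (and gets much worse over slow links), while a small dedicated subset stays constant.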
Compare the config server to the Jini lookup service. You could have a
lookup on each node which knew about every other node and kept up to date
with the latest snapshot of available services. With 10 nodes, when node0
starts service0, the lookup on node1-9 will have to sync with the
client stub for service0. Again, with a small number of nodes on a high
speed network this would work. Once you scale up the numbers it is a
completely different game.
Obviously you would want to segment large collections of nodes into groups
or clusters, but you may need to have a large group, which leaves you to
artificially segment or look for an alternate solution.
In most cases when a large group of nodes runs, it will not need to deal
with configuration except during startup or when an async event asks it to
reconfigure. When this does happen, all you really want to know is where to
get that configuration from. From this perspective it does not matter whether
any given node has that config or only a subset does. From an efficiency and
scaling standpoint it would make the most sense to have a small subset of the
total cluster available for serving up configs. If one of them goes down
(crash, network loss, maintenance reboot), any of the hot spares can stand in.
This is very similar to how the lookup service in Jini works. You don't
care which lookup you get, just that it can service the needs of the group
you are in.
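The "just know where to get it" idea could look something like the sketch below. Every name here (ConfigServer, fetchConfig, ConfigClient) is hypothetical, not an existing JBoss or Jini API:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: a node holds only a short list of config servers,
// not everyone's config. It takes the first server that answers; the
// rest are hot spares.
interface ConfigServer {
    boolean isAlive();
    String fetchConfig(String nodeName);
}

class ConfigClient {
    private final List<ConfigServer> servers;

    ConfigClient(List<ConfigServer> servers) {
        this.servers = servers;
    }

    // We don't care which server answers, just that one in the group can.
    Optional<String> getConfig(String nodeName) {
        for (ConfigServer s : servers) {
            if (s.isAlive()) {
                return Optional.of(s.fetchConfig(nodeName));
            }
        }
        return Optional.empty(); // entire config subset is down
    }
}
```

This mirrors the lookup service behavior described above: the client's state is just a handful of addresses, which is cheap to replicate to every node even over poor links.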
Another example would be DNS. There is no way that every DNS server could
sync up with every other DNS server, unless their numbers were small and they
were on a fast network.
I am no statistician, but I am sure there are numbers available to show that,
given a large number of machines of which a subset is dedicated to some
critical system, the chances of all of those machines going down at the
same time are not very high.
Of course that all depends on how you segment: if you have 100 nodes and
1% is dedicated to configuration, then you are screwed if that one node goes
down, but if it is 5% or 10% (for the paranoid), then you are statistically
in good shape.
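A quick illustration of those odds, naively assuming independent failures where each machine is down with probability p at any given moment (real failures correlate, so treat this as a lower bound on risk):

```java
// If each machine is independently down with probability p, all k
// dedicated config servers are down together with probability p^k.
public class Odds {
    static double allDown(double p, int k) {
        return Math.pow(p, k);
    }

    public static void main(String[] args) {
        System.out.println(allDown(0.05, 1));  // 1% of 100 nodes: stuck 5% of the time
        System.out.println(allDown(0.05, 5));  // 5 dedicated: about 3e-7
        System.out.println(allDown(0.05, 10)); // 10 for the paranoid: about 1e-13
    }
}
```

Even with a pessimistic 5% per-machine downtime, five dedicated servers already make a total outage vanishingly rare under this model.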
I think that Jini could really help take JBoss to the next level if applied
thoughtfully. Perhaps simply for the lookup service (discovery & join), the
event model, and the leasing system. We would still use JMX for all local
control, but add some Jini services which link JMX agents together, allowing
them to notify each other when they go up, go down, need config, or have new
configs. Leasing could be used to detect system locks and other fancy
stuff.
--jason
_______________________________________________
Jboss-development mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/jboss-development