Jason Dillon wrote:
> Why not use ActiveMQ for communication and take advantage of its broker
> network for failover?
>
The JMS provider would be a pluggable comm strategy. For performance
reasons, I want to start with TCP communication. I definitely want to
have a JMS strategy...maybe next. But initially I don't want any
dependencies on other servers or brokers. With that said, after looking
at OpenWire, the comm marshaller for ActiveMQ, there is a lot to
leverage there, and using it will prevent a rewrite of the comm layer.
So, there will be some use of that code base initially.

Jeff

> --jason
>
>
> On Sep 12, 2006, at 9:19 AM, Jeff Genender wrote:
>
>> I wanted to go over a high-level design for a gcache cache component,
>> get some feedback and input, and invite folks who are interested to
>> join in...so here it goes...
>>
>> The gcache will be one of several cache/clustering offerings...but
>> starting off with the first one...
>>
>> For the first pass, I want to go with a master/slave full-replication
>> implementation. This means a centralized caching server, known as the
>> master, which runs a cache implementation (likely ehcache underneath).
>> My interest in ehcache is that it can be configured to persist session
>> state if full failure recovery is needed (no need to reinvent the
>> wheel on a great cache). The master will communicate with N slave
>> servers, also running a gcache implementation.
>>
>>  +--------+        +---------+        +---------+
>>  |        |        |         |        |         |
>>  | MASTER |        | SLAVE 1 |        | SLAVE 2 |  ... n-slaves
>>  |        |        |         |        |         |
>>  +--------+        +---------+        +---------+
>>    |    |               |                  |
>>    |    |               |                  |
>>    |    |_______________|                  |
>>    |                                       |
>>    |_______________________________________|
>>
>> We then have client component(s) that "plug in" and communicate with
>> the server. The configuration for the client should be very light: it
>> only really needs to know about the master/slave/slave/nth-slave
>> list. In other words, it communicates only with the master.
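The "very light" client configuration described above could be sketched roughly as follows. This is a minimal sketch under assumptions: `GCacheClientConfig` and its method names are hypothetical illustrations, not actual gcache API.

```java
// Illustrative sketch only -- GCacheClientConfig is a hypothetical name,
// not part of gcache. It models the "very light" client configuration:
// an ordered server list with the master first.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GCacheClientConfig {
    // "host:port" entries; index 0 is always the current master.
    private final List<String> servers;

    public GCacheClientConfig(List<String> servers) {
        if (servers == null || servers.isEmpty()) {
            throw new IllegalArgumentException("at least one server is required");
        }
        this.servers = new ArrayList<>(servers);
    }

    /** The node the client talks to: always the head of the list. */
    public String master() {
        return servers.get(0);
    }

    /** Called when the master times out: drop it and promote slave #1. */
    public void failover() {
        if (servers.size() > 1) {
            servers.remove(0);
        }
    }

    public List<String> servers() {
        return Collections.unmodifiableList(servers);
    }
}
```

On a timeout the client would call `failover()` and retry against the promoted slave; how a recovered master rejoins is left open here, as it is in the proposal.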
>> The master is responsible for "pushing" anything it receives to its
>> slaves and other nodes in the cluster. The slaves basically look like
>> clients to the master.
>>
>>  +--------+     +---------+     +---------+
>>  |        |     |         |     |         |
>>  | MASTER |-----| SLAVE 1 |     | SLAVE 2 |
>>  |        |     |         |     |         |
>>  +--------+     +---------+     +---------+
>>      |               |               |
>>      |               +---------------+
>>      |
>>  ,-------.
>> ( CLIENT )
>>  `-------'
>>
>> In the event the master goes down, the client notes the timeout and
>> then automatically communicates with slave #1 as the new master.
>> Since slave #1 is also a client of the master, it can determine,
>> either by itself or from the first request that comes in asking for
>> data, that it is the new master.
>>
>>  +--------+     +---------+     +---------+
>>  |  OLD   |     |NEW MSTER|     |         |
>>  | MASTER |     |   WAS   |-----| SLAVE 2 |
>>  |        |     | SLAVE 1 |     |         |
>>  +--------+     +---------+     +---------+
>>      |             _,'
>>      X           ,'
>>      |        ,-'
>>  ,-------.<-'
>> ( CLIENT )
>>  `-------'
>>
>> I think this is a fairly simple, yet fairly robust, implementation.
>> Since we are not doing heartbeats and multicast, we cut down on a lot
>> of network traffic.
>>
>> Communication will be done over TCP/IP sockets, and I would probably
>> like to use NIO.
>>
>> I would like to see this component be able to run on its own...i.e.,
>> no Geronimo needed. We can build a Geronimo GBean and deployer around
>> it, but I would like to see this component usable in many other
>> areas, including outside of Geronimo. Open source needs more "free"
>> clustering implementations. I would like this component to be broken
>> down into two major categories...server and client.
>>
>> After a successful implementation of master/slave, I would like to
>> make the strategies pluggable, so we can provide for more of a
>> distributed cache, partitioning, and other types of joins, such as
>> mcast/heartbeat for those who want it.
>>
>> Thoughts and additional ideas?
>>
>> Thanks,
>>
>> Jeff
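The client-side failover in the proposal (note the timeout on the master, then treat slave #1 as the new master) might be sketched like this. `FailoverConnector` and its methods are assumptions made for illustration, not gcache code.

```java
// Illustrative sketch only -- FailoverConnector is a hypothetical name,
// not gcache code. It walks the server list in order; the first node
// that accepts a connection within the timeout is treated as the master,
// mirroring "slave #1 becomes the new master" when the master is down.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

public class FailoverConnector {
    private final List<InetSocketAddress> servers; // master first, then slaves
    private final int timeoutMillis;

    public FailoverConnector(List<InetSocketAddress> servers, int timeoutMillis) {
        this.servers = servers;
        this.timeoutMillis = timeoutMillis;
    }

    /** Connect to the first reachable server, falling through on failure. */
    public Socket connect() throws IOException {
        IOException last = null;
        for (InetSocketAddress address : servers) {
            Socket socket = new Socket();
            try {
                socket.connect(address, timeoutMillis);
                return socket; // this node is our master now
            } catch (IOException e) {
                last = e; // timed out or refused: try the next slave
                try {
                    socket.close();
                } catch (IOException ignored) {
                    // socket already failed; nothing more to do
                }
            }
        }
        throw last != null ? last : new IOException("no servers configured");
    }
}
```

A real implementation would likely use NIO channels and selectors, as the proposal suggests; blocking sockets just keep the sketch short.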
