Hey Jeff,

Good to see this stuff gaining some traction!

Let me try to summarize to make sure I've got what you are saying: the intent is to rework the communication back end that ehcache uses, making it more performant than the existing RMI impl, and to use the existing ehcache master/slave concept in order to keep it simple.

Then we create a thin veneer over ehcache for now, and may or may not replace ehcache with different topologies in the future; but to keep the initial impl simple we go with master/slave.

Am I following you correctly here?

I'm excited to get started on this. Sorry I've been so late to the party; I've been trying to tie up all the other loose ends I've been working on.

TTFN,

-bd


On Sep 12, 2006, at 10:19 AM, Jeff Genender wrote:

I wanted to go over a high-level design for a gcache component, get some feedback and input, and invite folks who are interested to join in.
...so here it goes...

The gcache will be one of several cache/clustering offerings... but
let's start with the first one...

For the first pass I want to go with a master/slave full-replication
implementation.  What this means is a centralized caching server that
runs a cache implementation (likely ehcache underneath); this server is
known as the master.  My interest in ehcache is that it provides the
ability to persist session state via configuration if full failure
recovery is needed (no need to reinvent the wheel on a great cache).
The master will communicate with N slave servers, each also running a
gcache implementation.

   +--------+   +---------+  +---------+
   |        |   |         |  |         |
   | MASTER |   | SLAVE 1 |  | SLAVE 2 | ... n-slaves
   |        |   |         |  |         |
   +--------+   +---------+  +---------+
      |   |            |           |
      |   |            |           |
      |   |____________|           |
      |                            |
      |____________________________|
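To make the master side concrete, here is a rough sketch of what the
server-side wrapper over ehcache could look like.  This assumes the
ehcache 1.2 API; the GCacheServer class and replicateToSlaves() hook are
made-up names for illustration, not a committed design.  Disk
persistence / failure recovery would be switched on in ehcache.xml
rather than in code.

import java.io.Serializable;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// Hypothetical sketch of the master's store: it simply delegates to
// ehcache and pushes every update out to the slaves.
public class GCacheServer {

    private final CacheManager manager;
    private final Cache sessions;

    public GCacheServer(String ehcacheConfig) {
        // ehcache.xml would define the "sessions" cache and enable
        // disk persistence for the failure-recovery case.
        this.manager = new CacheManager(ehcacheConfig);
        this.sessions = manager.getCache("sessions");
    }

    public void put(Serializable key, Serializable value) {
        sessions.put(new Element(key, value));
        replicateToSlaves(key, value);   // push to the N slaves (not shown)
    }

    public Serializable get(Serializable key) {
        Element e = sessions.get(key);
        return e == null ? null : e.getValue();
    }

    private void replicateToSlaves(Serializable key, Serializable value) {
        // placeholder: would write the update down the slave sockets
    }

    public void shutdown() {
        manager.shutdown();
    }
}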



We then have client component(s) that "plug in" and communicate with
the server.  The configuration for the client should be very light; it
is really only concerned with the master/slave/slave/nth-slave list.
In other words, it communicates only with the master.  The master is
responsible for "pushing" anything it receives to its slaves and the
other nodes in the cluster.  The slaves basically look like clients to
the master.

   +--------+   +---------+  +---------+
   |        |   |         |  |         |
   | MASTER |---| SLAVE 1 |  | SLAVE 2 |
   |        |   |         |  |         |
   +--------+   +---------+  +---------+
       |  |                       |
       |  +-----------------------+
       |
   ,-------.
  ( CLIENT  )
   `-------'
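As a sketch of how light that client configuration could be (all names
here are hypothetical, just to illustrate "hand it the server list,
talk only to the master"):

import java.io.Serializable;

// Hypothetical client-side API.  The only configuration is the ordered
// server list, master first; the client only ever talks to the master
// and never fans updates out to the slaves itself.
public interface GCacheClient {

    Serializable get(Serializable key);

    void put(Serializable key, Serializable value);
}

// Illustrative wiring (the factory name is made up):
//
//   GCacheClient cache = GCacheClientFactory.connect(
//           new String[] { "master:4000", "slave1:4000", "slave2:4000" });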

In the event the master goes down, the client notes the timeout and
then automatically communicates with slave #1 as the new master.  Since
slave #1 is also a client of the MASTER, it can determine, either on
its own or from the first request that comes in asking for data, that
it is the new master.

   +--------+   +---------+  +---------+
   |  OLD   |   |NEW MSTER|  |         |
   | MASTER |   |   WAS   |--| SLAVE 2 |
   |        |   | SLAVE 1 |  |         |
   +--------+   +---------+  +---------+
       |           _,'
       X         ,'
       |      ,-'
   ,-------.<'
  ( CLIENT  )
   `-------'
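The failover piece of that client could be as simple as the sketch
below: on a timeout, advance to the next entry in the configured list
and retry there.  Again, the class and method names are placeholders,
and the wire protocol is deliberately left open.

import java.io.IOException;
import java.io.Serializable;
import java.net.SocketTimeoutException;

// Hypothetical failover logic inside the client.  Slave 1 becomes the
// new master simply because it is next in the configured list.
public abstract class FailoverSupport {

    private final String[] servers;   // "master:4000", "slave1:4000", ...
    private int current = 0;

    protected FailoverSupport(String[] servers) {
        this.servers = servers;
    }

    protected Serializable sendWithFailover(Serializable request) throws IOException {
        for (int i = current; i < servers.length; i++) {
            try {
                return send(servers[i], request);   // blocking send with a socket timeout
            } catch (SocketTimeoutException timedOut) {
                current = i + 1;                    // note the timeout, promote the next server
            }
        }
        throw new IOException("no cache servers reachable");
    }

    // actual wire protocol left open -- see the NIO note below
    protected abstract Serializable send(String server, Serializable request)
            throws IOException;
}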

I think this is a fairly simple implementation, yet fairly robust.
Since we are not doing heartbeats and mcast, we cut down on a lot of
network traffic.

Communication will be done over TCP/IP sockets, and I would probably
like to use NIO.
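To make the NIO leaning concrete, a minimal non-blocking accept loop
for the server side might look something like this (standard java.nio
only; the class name and port are made up, and the protocol handling is
omitted):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Minimal NIO accept loop: one selector thread handles the master's
// client and slave connections instead of a thread per socket.
public class GCacheAcceptor {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(4000));   // port is arbitrary here
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();
            Iterator keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = (SelectionKey) keys.next();
                keys.remove();

                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // read a cache request and push the update to the
                    // slaves (protocol handling omitted in this sketch)
                }
            }
        }
    }
}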

I would like to see this component be able to run on its own... i.e.,
no Geronimo needed.  We can build a Geronimo GBean and deployer around
it, but I would like to see this component usable in many other areas,
including outside of Geronimo.  Open source needs more "free"
clustering implementations.  I would like this component to be broken
down into two major categories... server and client.
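As a sketch of the "runs on its own" goal, the server category could
bootstrap from a plain main() that takes a config file (reusing the
hypothetical GCacheServer from above), with the Geronimo GBean being
nothing more than a thin lifecycle wrapper around the same class:

// Hypothetical standalone entry point -- no Geronimo required.
public class Main {

    public static void main(String[] args) throws Exception {
        String config = args.length > 0 ? args[0] : "ehcache.xml";

        final GCacheServer server = new GCacheServer(config);
        // the network acceptor (e.g. the NIO loop above) would be
        // started here and keep the JVM alive

        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                server.shutdown();
            }
        });

        System.out.println("gcache server started with " + config);
    }
}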

After a successful implementation of master/slave, I would like to add
pluggable strategies, so we can provide more of a distributed cache,
partitioning, and other ways of joining the cluster, such as
mcast/heartbeat for those who want it.
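For the pluggable part, one way to slice it (names here are purely
illustrative, not a committed API) is a small strategy interface the
server consults for membership and replication:

import java.io.Serializable;

// Hypothetical strategy interface: the master/slave behaviour described
// above becomes just the first implementation; partitioned or
// mcast/heartbeat-discovered clusters would plug in behind the same calls.
public interface ClusterStrategy {

    // how a node finds its peers (static master/slave list, mcast discovery, ...)
    void join(String[] configuredServers);

    // where a key lives (everywhere for full replication, one bucket for partitioning)
    String[] ownersOf(Serializable key);

    // how an update is propagated after a put on this node
    void replicate(Serializable key, Serializable value);
}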

Thoughts and additional ideas?

Thanks,

Jeff
