Hello Filip,

a very interesting proposal indeed. May I point you to some important
use cases and ask whether your proposal has solutions for them?

1. Providing an easy way to customize diffs.

As far as I understand, customizable diffs are possible by a) patching
the StandardSession or b) configuring an alternative SessionManager
which would create a wrapper around the StandardSession. Still, I'd
prefer an easier way, for example a DiffCreator object inside the
session which can be replaced upon session creation.
The background of the question is the following:
we have a large webfarm with huge traffic on it. Our sessions are
partly heavyweight, since we use them for caching (we have much too
much memory in Tomcat and want to use it). For example, we cache
search results (very heavyweight) which are provided by a third-party
search engine. If a user just switches the view or navigates within
cached parts of the search result, they are served from the cache,
reducing the load on the third-party system. In case of a server crash
and failover to another server we would accept losing the cached
version in favour of reducing the traffic. Therefore such a
customization would be very useful.
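
Just to make this concrete, a rough sketch of what I mean (the
DiffCreator interface and everything in it is invented by me, not
existing Tomcat API; for simplicity it filters the whole session
instead of computing a real diff):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Enumeration;
import javax.servlet.http.HttpSession;

// Hypothetical customization point -- not existing Tomcat API.
// The session would ask this object for its replication payload
// instead of serializing every attribute itself.
public interface DiffCreator {
    byte[] createDiff(HttpSession session) throws IOException;
    void applyDiff(HttpSession session, byte[] diff)
            throws IOException, ClassNotFoundException;
}

// Our implementation would simply skip the heavyweight cache
// attributes, accepting that they are lost on failover.
class SkipCacheDiffCreator implements DiffCreator {

    public byte[] createDiff(HttpSession session) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        for (Enumeration names = session.getAttributeNames(); names.hasMoreElements();) {
            String name = (String) names.nextElement();
            if (name.startsWith("cache.")) {
                continue; // e.g. cached third-party search results
            }
            oos.writeObject(name);
            oos.writeObject(session.getAttribute(name));
        }
        oos.writeObject(null); // end marker
        oos.flush();
        return bos.toByteArray();
    }

    public void applyDiff(HttpSession session, byte[] diff)
            throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(diff));
        String name;
        while ((name = (String) ois.readObject()) != null) {
            session.setAttribute(name, ois.readObject());
        }
    }
}

Being able to plug such an object into the session at creation time
would be all we need.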

2. Session sharing between different webapps.
The use case is the following: as soon as the user submits personal
information, it is sent over https instead of http to protect it from
man-in-the-middle attacks. Our https is handled by the (hardware)
loadbalancer, so we aren't able to provide https for every user
operation. The application which handles the personal data contains
exactly the same classes as the primary application, but a different
configuration, so it can be considered a different webapp. For
tracking purposes we need data from the user's primary session, which
we can't access in the https application. It would be very cool (and
actually a task in my bugzilla account at work) to provide a service
which could maintain a part of the session centrally and allow other
servers/webapps to get this session data.
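
Roughly what I have in mind (the interface is invented by me, nothing
like it exists today): a small, centrally maintained slice of the
session, keyed by a shared tracking id that both webapps know,
readable from any webapp in the farm.

import java.io.Serializable;

// Invented illustration only, not a concrete API proposal.
// The shared id would be a tracking id carried e.g. in a cookie
// that both the http and the https webapp can see.
public interface SharedSessionService {

    // called from the primary (http) webapp
    void put(String sharedId, String name, Serializable value);

    // called from the https webapp to read the shared part
    Serializable get(String sharedId, String name);

    void remove(String sharedId, String name);
}

In the https application we would then only need something like
service.get(trackingId, "userId") instead of duplicating the whole
session.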

3. Controlled failover:
In order to do soft releases and maintenance (without logging out the
users) it would be very cool to be able to transfer a session from one
server to another and back.
Use case:
The webfarm consists of two servers, A and B. The admin issues a
command telling server A not to accept new sessions anymore. Server A
(customizable code, of course) rewrites the loadbalancer cookie to
point to server B. The user makes the next request and reaches server
B, which then fetches the session from A. After all of A's sessions
have expired (or after a timeout), server A goes down for maintenance.
Once server A is back up again, the same game continues with B.
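
The "rewrite the loadbalancer cookie" part could look like the sketch
below (the Valve, the cookie name and the route value are specific to
our setup and purely illustrative, nothing here is existing Tomcat
functionality):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

// Illustration only: a Valve that, while this node is "draining",
// rewrites the routing cookie so the loadbalancer sends the user to
// the failover node on the next request.
public class DrainValve extends ValveBase {

    private volatile boolean draining = false;  // flipped by an admin command
    private String failoverRoute = "serverB";   // route id of the node taking over

    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        if (draining) {
            // cookie name is an assumption, it depends on the loadbalancer
            Cookie route = new Cookie("LB_ROUTE", failoverRoute);
            route.setPath("/");
            response.addCookie(route);
        }
        getNext().invoke(request, response);
    }

    public void setDraining(boolean draining) { this.draining = draining; }
    public void setFailoverRoute(String failoverRoute) { this.failoverRoute = failoverRoute; }
}

The "fetch the session from A" part is exactly where we would hope to
reuse whatever the new replication code offers.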

Those are the three main use cases we would be very interested in. I
understand that none of them is solvable without internal webapp
knowledge, but they could be solved in such a way that the webapp only
needs to provide/configure one or two custom handlers with a few lines
of code and doesn't have to reimplement everything each time.

Since we have to develop these three use cases either way, and in case
the Tomcat team is interested in supporting them too, we would gladly
participate in the development and do it "the Tomcat-conformant" way,
so it can be contributed.

best regards
Leon

P.S. One more question... If I understood you correctly, you plan to
share the session with exactly one other server. I haven't found
anything about a central replication service in your mail, so how does
a server decide which node is its replication partner? Some kind of
random algorithm or a broadcast request? Please enlighten me :-)
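
(Purely guessing on my side, I could imagine something as trivial as
taking the next member after yourself in the ordered membership list,
along these lines:)

import java.util.List;

// Pure speculation, not taken from Filip's proposal: choose the member
// that follows this node in the ordered cluster membership list as the
// backup partner. A member is represented by a plain node name here.
public class BackupSelector {

    public static String selectBackup(List<String> members, String self) {
        int idx = members.indexOf(self);
        if (idx < 0 || members.size() < 2) {
            return null; // alone in the cluster, no backup possible
        }
        return members.get((idx + 1) % members.size());
    }
}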




On 3/3/06, Filip Hanik - Dev Lists <[EMAIL PROTECTED]> wrote:
> I wrote together a little idea (also emailed to geronimo-dev for
> feedback) on how the next generation of session replication should be done.
> Today's replication is an all-to-all replication, and within that realm,
> it's pretty poor. It creates way too much network traffic as each request
> results in X number of network transmits (where X is the number of nodes
> in the cluster/domain).
>
> The suggested solution offers two advantages:
> 1. Each request with a dirty session should only result in 1 network
> send (unless session create, session delete, or failover)
> 2. The session Manager (StandardManager, DeltaManager, etc) should not
> have to know about replication; the DeltaManager today is too
> complicated and too involved in the replication logic.
>
> I propose a very simple, yet very efficient replication mechanism that
> is so straightforward that with a simple extension of the
> StandardSession (if not even built in) you can have session replication.
> It will even support future efforts for replication when we can have AOP
> monitor the sessions themselves for data that has changed without
> setAttribute/removeAttribute, and the implementation wouldn't change much.
>
> in my opinion, the Session API should not have to know about clustering
> or session replication, nor should it need to worry about location.
> the clustering API should take care of all of that.
>
> the solution that we plan to implement for Tomcat is fairly
> straightforward. Let me see if I can give an idea of why the API
> shouldn't need to worry; it's a little lengthy, but it shows that the
> Session and the SessionManager need to know zero about clustering or
> session locations.
> (this is only one solution, and other solutions should demonstrate the
> same point: the Session API needs to know nothing about clustering or
> session locations)
>
> 1. Requirements to be implemented by the Session.java API
>   bool isDirty - (has the session changed in this request)
>   bool isDiffable - is the session able to provide a diff
>   byte[] getSessionData() - returns the whole session
>   byte[] getSessionDiff() - optional, see isDiffable, resets the diff data
>   void setSessionDiff(byte[] diff) - optional, see isDiffable, apply
> changes from another node
>
> 2. Requirements to be implemented by the SessionManager.java API
>   void setSessionMap(HashMap map) - makes the map implementation pluggable
>
> 3. And the key to this, is that we will have an implementation of a
> LazyReplicatedHashMap
>   The key object in this map is the session Id.
>   The map entry object is an object that looks like this
>   ReplicatedEntry {
>      String id;         // session id
>      boolean isPrimary; // does this node hold the primary data
>      boolean isBackup;  // does this node hold backup data
>      Session session;   // non-null only on the primary and backup nodes
>      Member primary;    // information about the primary node
>      Member backup;     // information about the backup node
>   }
>
>   The LazyReplicatedHashMap overrides get(key) and put(id,session)
>
> So all the nodes will have the sessionId/ReplicatedEntry combinations
> in their session map. But only two nodes will have the actual data.
> This solution is for sticky LB only, but when failover happens, the LB
> can pick any node as each node knows where to get the data.
> The newly selected node will keep the backup node or select a new one,
> and publish the new locations to the entire cluster.
>
> As you can see, all-to-all communication only happens when a Session is
> (created|destroyed|failover). Other than that it is primary-to-backup
> communication only, and this can be in terms of diffs or entire sessions
> using isDirty or getDiff. This is triggered either by an interceptor at
> the end of each request or by a batch process, which gives less network
> jitter but less accuracy (still adequate) for failover.
>
> What makes this possible will be that Tribes will have true state
> replication and other true RPC calls into the cluster.
>
> positive thoughts, criticism and bashing are all welcome :)
> (remember that I work with the KISS principle)
> http://people.apache.org/~fhanik/kiss.html
>
> Filip
>
>
