It seems that we are all thinking along similar lines.

We could have a solution like this:
We have a database instance that stores the sessions.
The session table could be something like this:
sessionId, sessionVersion, sessionLocker, extraSessionData, sessionStream

sessionId : a unique id for the session [yes, the id ;-)]
sessionVersion : a counter that says when the session was last modified (not
a timestamp but a number)
sessionLocker : who is holding the lock on the session (an id for the
virtual machine)
extraSessionData : some extra data, like when the session should expire (in
db time) or, well, I don't know.
sessionStream : the serialized session (so all the objects stored in the
session must implement Serializable).
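To illustrate the sessionStream column, here is a minimal sketch (in modern Java, with hypothetical names) of turning a session's attribute map into the bytes we would store, and back again:

```java
import java.io.*;
import java.util.HashMap;

public class SessionStreamDemo {
    // Serialize the session's attribute map into the bytes we would store
    // in the sessionStream column. Every attribute value must implement
    // Serializable, or writeObject() throws.
    static byte[] toSessionStream(HashMap<String, Serializable> attributes)
            throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(attributes);
        }
        return bos.toByteArray();
    }

    // Deserialize the bytes read back from the sessionStream column.
    @SuppressWarnings("unchecked")
    static HashMap<String, Serializable> fromSessionStream(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (HashMap<String, Serializable>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, Serializable> session = new HashMap<>();
        session.put("user", "gaston");
        session.put("hits", 3);
        byte[] stream = toSessionStream(session);
        HashMap<String, Serializable> copy = fromSessionStream(stream);
        System.out.println(copy.get("user") + " " + copy.get("hits"));
    }
}
```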

Session creation is a little tricky because we should make sure that no
other virtual machine is trying to create the same session (for the same
user) at the same time, and if so, synchronize them.
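One way to handle that race (an assumption on my part, not the only way) is an atomic conditional insert: only one VM's insert for a given sessionId succeeds, and the losers adopt the winner's row. A minimal in-memory sketch of the idea, with a ConcurrentHashMap standing in for the table's unique-key constraint:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SessionCreateDemo {
    // Stands in for the sessions table; the key is sessionId, and the
    // uniqueness of the key is what makes creation race-free.
    static final ConcurrentHashMap<String, byte[]> table =
        new ConcurrentHashMap<>();

    // Try to create the session; if another VM got there first,
    // use the row it created instead of failing.
    static byte[] createOrJoin(String sessionId, byte[] freshStream) {
        byte[] existing = table.putIfAbsent(sessionId, freshStream);
        return existing != null ? existing : freshStream;
    }

    public static void main(String[] args) {
        byte[] a = createOrJoin("abc123", new byte[] {1});
        byte[] b = createOrJoin("abc123", new byte[] {2}); // loses the race
        System.out.println(a == b); // prints true: both VMs share one session
    }
}
```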

On every request:

- get the lock via sessionLocker (check whether the locker is 0; if it is,
we take it; if it is not, we wait until the locker is 0 and try again).
[sessionLocker = me]
- check whether it is the same sessionVersion as our in-memory copy.
- if yes, just use the session in memory (why deserialize it if we already
have it there?).
- if not, get the session from the database (sessionStream) and deserialize
it.

* now the user request runs as normal; when it is finished ..

- sessionVersion++
- sessionStream gets updated.
- sessionLocker = 0
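The per-request cycle above can be sketched as follows. This is a single-process simulation, not the real thing: an AtomicInteger stands in for the sessionLocker column, plain fields for the other columns, and ME is a hypothetical VM id.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RequestCycleDemo {
    static final int ME = 7; // hypothetical id of this virtual machine

    // These model the session row's columns.
    static final AtomicInteger sessionLocker = new AtomicInteger(0);
    static int sessionVersion = 0;
    static byte[] sessionStream = new byte[0];

    // Our cached copy of the session and the version it corresponds to.
    static byte[] cachedSession = new byte[0];
    static int cachedVersion = 0;

    static void handleRequest(byte[] newState) throws InterruptedException {
        // 1. take the lock: wait until sessionLocker is 0, then set it to ME.
        while (!sessionLocker.compareAndSet(0, ME)) {
            Thread.sleep(10);
        }
        // 2. version check: only deserialize when another VM changed it.
        if (cachedVersion != sessionVersion) {
            cachedSession = sessionStream; // "deserialize" from the db
            cachedVersion = sessionVersion;
        }
        // ... the request runs as normal, mutating the session ...
        cachedSession = newState;
        // 3. after the response is flushed: bump the version, publish, unlock.
        sessionVersion++;
        sessionStream = cachedSession;
        cachedVersion = sessionVersion;
        sessionLocker.set(0);
    }

    public static void main(String[] args) throws InterruptedException {
        handleRequest(new byte[] {42});
        System.out.println(sessionVersion + " " + sessionLocker.get()); // 1 0
    }
}
```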

There are other optimizations, but I will not discuss them now so as not to
make a mess.


The problems I see are mostly user problems, like not making the objects
serializable, making bad use of the singleton pattern (we are running on
multiple machines), or some dirty finalize methods.
Okay, there will be some overhead, but it comes after the http stream has
been flushed. And there is no such thing as a free meal. (Yes, it should
only get activated if the user desires so (user = apache admin).)
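The "not serializable" user problem, for what it's worth, shows up immediately as a NotSerializableException the moment the session is written out. A tiny demonstration (ShoppingCart is a made-up example class):

```java
import java.io.*;

public class NotSerializableDemo {
    // A user object that forgot to implement Serializable.
    static class ShoppingCart { }

    public static void main(String[] args) {
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(new ShoppingCart()); // throws
            System.out.println("serialized ok");
        } catch (NotSerializableException e) {
            System.out.println("cannot persist session: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("io error");
        }
    }
}
```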

Does Apache have a means of doing load balancing? If so, can we make it
sticky so we optimize resources?

Chau,

Gaston


----- Original Message -----
From: "Reilly, John" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, August 24, 2001 8:51 PM
Subject: RE: Addition of 'dirty' field to Session interface


>
>
> > > This is just an idea from the top of my head, would
> > > it be possible
> > > having a second vector that contains a footprint(not
> > > a full clone) of
> > > the
> > > object for a session and have a reaper thread
> > > checking the footprints
> > > against
> > > the "real" objects and determine if they changed or
> > > not and based on
> > > that
> > > replicate of whatever we want to do.
> >
> > My thoughts exactly.  If you want to be able to
> > support transparent fail-over for sessions within a
> > cluster, you are going to have to take the performance
> > hit of persisting the session data on at least 1 other
> > machine in the cluster after every request.  If you're
> > already taking that step, you might as well maintain
> > an in-memory image of the serialized session object.
> > You could compare an MD5 on the bytes comprising the
> > session before the request was handled with the MD5
> > for after the request completed.
> >
> > Could this work?
>
> The overhead could be fairly signifigant.
>
> >
> >   - osama.
