Comments inline below.

On 3/4/06, Filip Hanik - Dev Lists <[EMAIL PROTECTED]> wrote:
> Leon Rosenberg wrote:
> > Hello Filip,
> >
> > very interesting proposal indeed. May I point you to some important
> > use cases and ask whether your proposal has solutions for them.
> >
> > 1. Providing an easy way to customize diffs.
> >
> > As far as I understand, customizable diffs are possible by a) patching
> > the StandardSession or b) configuring an alternative SessionManager which
> > would create a wrapper around the StandardSession. Still, I'd prefer an
> > easier way, for example a DiffCreator object inside the session which
> > can be replaced upon session creation.
> > The background of the question is the following:
> > we have a large webfarm with huge traffic on it. Our sessions are
> > partly heavyweight, since we use them for caching (we have much too
> > much memory in Tomcat and want to use it). For example, we are
> > caching search results (very heavyweight) which are provided by a
> > third-party search engine. In case a user just switches the view or
> > navigates in cached parts of the search result, they are served from
> > the cache, reducing load on the third-party system. In case of a server
> > crash and failover to another server, we would accept losing the
> > cached version in favour of reducing the traffic. Therefore a
> > customization would be very useful.
> >
> This scenario sounds like you shouldn't use the session as a cache;
> implement a cache object that does the same thing.
> But point taken, you want a way to customize the diffs. My guess is that
> you could have a pluggable diff object that attaches to the session.
> This object can be configured through server.xml (global) or context.xml
> (per webapp).
>

If we stored the result set (a list of already created beans) in
the session, we'd have to store it twice, once in the "cache" and
once in the request for presentation. However, a pluggable diff object
would be great!
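
To make it concrete what I mean by "pluggable" (just a sketch with invented
names -- SessionDiffer is not an existing Tomcat interface, only what I
imagine), something like this would already cover our cache use case:

import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpSession;

// Hypothetical SPI sketch only -- SessionDiffer is an invented name, not an
// existing Tomcat interface. The idea: the session manager asks this object
// which attributes should go into the replication diff.
public interface SessionDiffer {
    Map createDiff(HttpSession session);
}

// Example: skip our heavyweight cache attributes (anything prefixed "cache.")
// so only the real user state gets replicated.
class SkipCacheDiffer implements SessionDiffer {
    public Map createDiff(HttpSession session) {
        Map diff = new HashMap();
        for (java.util.Enumeration e = session.getAttributeNames(); e.hasMoreElements();) {
            String name = (String) e.nextElement();
            if (!name.startsWith("cache.")) {
                diff.put(name, session.getAttribute(name));
            }
        }
        return diff;
    }
}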

Btw, another point: the object/handler/whatever which decides whether
a session-create event should be distributed at all should be
configurable/replaceable too.
Background: most, or at least many, hardware load balancers use URLs for
service health monitoring. They do not send any cookies back, so in
fact each heartbeat creates a new session. Our LB + failover LB each
send heartbeats every 8 seconds. With a session timeout of 30
minutes we always have 450 active LB sessions on each server.
Distributing those sessions should be considered spam and a waste of
network resources :-)
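
Again only a sketch with made-up names, but the decision could be a single
method we can override, e.g. to ignore the health-check URL (the path is an
assumption, of course):

import javax.servlet.http.HttpServletRequest;

// Hypothetical hook, name invented for illustration. The cluster manager
// would consult it before broadcasting a session-create event.
public interface ReplicationPolicy {
    boolean shouldReplicate(HttpServletRequest request, String sessionId);
}

// Example: never replicate sessions created by the load balancer's
// health-check URL (the LB sends no cookies, so every ping creates one).
class SkipHealthCheckPolicy implements ReplicationPolicy {
    public boolean shouldReplicate(HttpServletRequest request, String sessionId) {
        return !"/monitoring/ping".equals(request.getRequestURI()); // assumed health-check path
    }
}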



>
>
> > 2. Session sharing between different webapps.
> > Following use case: as soon as the user submits personal information,
> > it's sent over https instead of http to secure it from a
> > man-in-the-middle. Our https is handled by the load balancer
> > (hardware), so we aren't able to provide https for every user
> > operation. The application which handles personal data contains
> > all the same classes as the primary application, but another
> > configuration, so it can be considered a different webapp. For
> > tracking purposes we need data from the user's primary session, which we
> > can't access in the https application. It would be very cool (and
> > actually a task in my bugzilla account at work) to provide a service
> > which could maintain a part of the session centrally and allow other
> > servers/webapps to get this session's data.
> >
> An easier way to do this would be to create a centralized cache and put it
> in <tomcat>/shared/lib/.
> This might be out of scope for Tomcat, and out of scope for replication
> for sure.

Forgot to mention that the webapps are running on different instances
in different service pools :-) What I had in mind was a kind of second
session cookie, which is set per domain (configurable), read out by
any webapp in the same domain and synchronized with a "central session
holder". But you're right, this is probably out of Tomcat's scope and
could be solved with a central service instance available over the
network plus filters in the webapps (or whatever). Point taken :-)
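
Roughly what I pictured on the webapp side (everything here is invented --
the cookie name, the filter, the central holder lookup -- just to illustrate
the idea):

import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import javax.servlet.*;
import javax.servlet.http.*;

// Sketch only. A domain-wide cookie identifies a record in a central
// "session holder" service that both the http and https webapps can read.
public class SharedSessionFilter implements Filter {

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (int i = 0; i < cookies.length; i++) {
                if ("SHARED_SESSION_ID".equals(cookies[i].getName())) {
                    // In a real setup this would call the central service over
                    // the network; here it is stubbed out.
                    Map shared = loadFromCentralHolder(cookies[i].getValue());
                    request.setAttribute("sharedSession", shared);
                    break;
                }
            }
        }
        chain.doFilter(req, res);
    }

    private Map loadFromCentralHolder(String sharedId) {
        return Collections.EMPTY_MAP; // placeholder for the network call
    }

    public void destroy() {}
}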

>
> > 3. Controlled failover:
> > In order to do soft releases and maintenance (without logging out
> > the users) it would be very cool to transfer a session from one server
> > to another and back.
> > Use case:
> > The webfarm consists of two servers, A and B. The admin issues a command
> > to server A not to accept new sessions anymore. Server A (customizable
> > code of course) rewrites the load balancer cookie to point to server B.
> > The user makes the next request and comes to server B, which then gets the
> > session from A. After all of A's sessions expire (or after a timeout),
> > server A goes down for maintenance. Afterwards server A comes back up
> > again and the game continues with B.
> >
> This is automatic. It will happen exactly the way you describe. The way
> the LazyReplicatedMap works is as follows:
> 1. Backup node fails -> the primary node chooses a new backup node.
> 2. Primary node fails -> since Tomcat doesn't know which node the user
>    will come to on their next http request, nothing is done.
>    When the user makes a request and the session manager calls
>    LazyMap.getSession(id) and that session is not yet on the server,
>    the lazy map will request the session from the backup server, load it
>    up, and mark this node as primary.
>    That is why it is called lazy: it won't load the session until it is
>    actually needed, and it doesn't know in advance which node will become
>    primary, because that is decided by the load balancer.

Understood... that means that all Tomcats communicate with each other,
sending at least discovery requests on session create/destroy, which
you mentioned in the previous post.
I'm still not quite sure whether this can work efficiently without a central
place for session management, but you sound very confident.
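
Just to check my understanding of the lazy lookup, in pseudo-Java (this is
not the actual LazyReplicatedMap code, all helpers are stand-ins for the
cluster communication you described):

import java.util.HashMap;
import java.util.Map;

// My reading of the lazy lookup, not the real implementation. The helper
// methods stand in for discovery, session transfer and primary election.
public class LazySessionLookupSketch {
    private final Map local = new HashMap();

    public Object getSession(String id) {
        Object session = local.get(id);
        if (session == null) {
            // Session lives on some backup node; fetch it over the network
            // and make this node the primary for it.
            session = requestFromBackupNode(id);
            local.put(id, session);
            announcePrimary(id);
        }
        return session;
    }

    private Object requestFromBackupNode(String id) { return new Object(); /* stub */ }
    private void announcePrimary(String id) { /* stub: broadcast new primary */ }
}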

One problem that I still see with your approach: in large clusters
(say more than 20 servers) the chance for the user to come out on the backup
node is practically null (well, 1/19, about 5.26%, which is pretty near null
in a production environment). This means that immediately after the primary
node fails, a lot of traffic between the backup node and the other nodes will
take place. In our use case, where we want to take down 10 of 20 servers for
release purposes, this can mean VERY much unnecessary traffic.

If the primary node knows that it will go down in the near future and
should send all its users away, it could stop accepting new requests
and redirect existing users directly to the backup node(s). That way the
performance risk of fetching sessions across the network could be
reduced.
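
Something like this hypothetical "drain mode" filter is what I have in mind
(the names and the cookie rewriting are assumptions, not existing Tomcat
behaviour):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

// Sketch of a "drain mode" filter for the node that is about to be taken
// down: reject new users so the LB routes them elsewhere, and repoint
// existing users' routing cookie to the backup node.
public class DrainModeFilter implements Filter {
    private volatile boolean draining = false;

    // Flipped by some admin command, e.g. via JMX (assumption).
    public void enterDrainMode() { this.draining = true; }

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        if (draining) {
            HttpSession session = request.getSession(false);
            if (session == null) {
                // No session yet: refuse, so the load balancer sends new users elsewhere.
                response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
                return;
            }
            // Existing session: rewrite the LB routing cookie so the next
            // request goes straight to the backup node (assumed cookie name).
            response.addCookie(new Cookie("LB_ROUTE", "nodeB"));
        }
        chain.doFilter(req, res);
    }

    public void destroy() {}
}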

What do you think about it?

> http://people.apache.org/~fhanik/kiss.html ;)

I fully agree with the KISS principle, and I follow it in my job and all
my projects; that's why we never use anything we don't need, like
app servers, OR mappers and such, until someone proves that using the
thing really makes life easier rather than more complicated.

Therefore I understand that implementing support for everything
everyone needs and keeping the code simple are contrary goals, but
making the code fine-grained and its elements exchangeable wouldn't
violate KISS, would it?


>
> Filip
>

Leon

