On Jul 9, 2008, at 8:07 AM, Bernard Bernstein wrote:

Running Resin 3.0.21

We are running many Resin servers, each hosting applications that manage their own clustering using Tangosol for their shared data. However, the applications do not handle their own user sessions; we use Resin for that, and generally that has been fine as long as our load balancers keep sessions sticky, so a given user keeps talking to the same server. Now we are considering making our sessions less sticky, which means we'll need to start using clustering for sessions at the Resin level.

Here's the hard part.

We have many servers running many instances of applications, with at least a pair of any given instance running at all times. Currently, when we deploy to a pair of servers, there is no per-server configuration to do. Each instance has its own port across all servers, so Tangosol can use broadcast to find its cluster members and figure out its cluster set. When a machine goes down, we simply copy an instance to another machine, fire it up, and the pair for that app is running again.

Is it possible to have Resin clustering work with no per-machine configuration? Is there any solution that would let Resin instances on the same port find each other, or a single configuration shared by all instances regardless of their IP addresses, so that they can be clustered no matter where they run?

Not currently. Dynamic servers are something we're working on for 3.2.x, but even 3.2.0 is a fairly early version.

Depending on how your load balancing is configured, you might just use jdbc-based sessions with a mechanism like you've described. That way, the individual servers won't try to connect to each other at all.
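
The server-level configuration for that would look roughly like the sketch below. This is only a sketch, not tested against 3.0.21; the jdbc/session-store data source name is a placeholder, and you would define the backing <database> yourself:

    <!-- resin.conf sketch: JDBC-backed persistent session store -->
    <server>
      <!-- assumes a data source defined elsewhere, e.g.
           <database jndi-name="jdbc/session-store"> ... </database> -->
      <persistent-store type="jdbc">
        <init>
          <data-source>jdbc/session-store</data-source>
        </init>
      </persistent-store>
      <!-- cluster, http, and host configuration as before -->
    </server>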

One thought is to list all possible IP addresses (or just the final octet of each address) as server ids, and then have the launch script pass in the id matching the machine's local IP address, roughly like the sketch below. That means we'd have up to 254 servers listed in the configuration when in many cases only two actually exist.
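
For example (the hosts and ids here are made up purely to illustrate), the cluster block would enumerate every possible address, and each machine's launch script would pass its own id at startup -- something like httpd.sh -server s10 start, assuming the -server option behaves that way on 3.0.21:

    <!-- resin.conf sketch: one srun entry per possible address on the subnet -->
    <cluster>
      <srun server-id="s1"   host="192.168.1.1"   port="6802"/>
      <srun server-id="s2"   host="192.168.1.2"   port="6802"/>
      <!-- ... one entry per possible address ... -->
      <srun server-id="s254" host="192.168.1.254" port="6802"/>
    </cluster>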

I'm thinking this solution would cause logs full of "connection failed" errors and lots of overhead dealing with failed connections.

There would be some overhead, but Resin does keep track of failed connections and avoids reconnecting to a failed server until a timeout expires (30s or so). The bigger problem is when Resin chooses a non-existent server as the backup; in that case, your sessions wouldn't get backed up at all.

Another thought: is there some way to use a persistent store with always-load-session and always-save-session set, so that the cluster members don't need to know about each other and instead handle their sessions through the database with no interaction between them? Better yet, they could store these sessions through a memcached instance, which could save/restore using a database but provide memory-based access to the session store.

The jdbc-store would handle the first part. The instances do still need to know about each other to get the index/cookie generation right, but otherwise they don't need to interact.
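
The web-app side of that would be roughly the following (again just a sketch, not tested against 3.0.21 -- check the session-config reference for the exact element names):

    <!-- resin-web.xml / web.xml sketch: load and save the session from the
         persistent store on every request, so no srun-to-srun backup is needed -->
    <web-app>
      <session-config>
        <use-persistent-store/>
        <always-load-session/>
        <always-save-session/>
      </session-config>
    </web-app>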

-- Scott

All suggestions welcome. I'm hoping there's a known "best-practice" solution for this, so let me know.
Thanks,
Bernie


_______________________________________________
resin-interest mailing list
resin-interest@caucho.com
http://maillist.caucho.com/mailman/listinfo/resin-interest
