Running Resin 3.0.21

We are running many Resin servers, each of which hosts applications that manage their own clustering using Tangosol for their shared data. However, the applications do not handle their own user sessions. We use Resin for that, and generally that has been fine as long as our load-balancers keep sessions sticky, always talking to the same server. Now we are considering making our sessions less sticky, which means we'll need to start using clustering for sessions at the Resin level.

Here's the hard part.

We have many servers running many instances of applications, with at least a pair of any given instance always running. Currently, when we deploy to a pair of servers, there is no configuration we need to do on those servers. Each instance has its own port across all servers, so Tangosol is able to use broadcast to find its cluster members and figure out its cluster set. When a machine goes down, we simply copy an instance to another machine, fire it up, and the pair for that app is running again.

Is it possible to have Resin clustering work with no per-machine configuration? Is there any solution that would allow Resin instances on the same port to find each other, or a single configuration shared by all instances regardless of their IP address, so that they can be clustered no matter where they are?

One thought is to list all possible IP addresses (or just the final part of the address) as the id for each, then have the launch script pass in the id derived from the machine's local IP address. That means we'd have 254 servers listed in the configuration when, in many cases, only two will actually exist.

  <http id='2' port='${hport}'/>
  <http id='3' port='${hport}'/>
  <http id='4' port='${hport}'/>
  <http id='5' port='${hport}'/>
  <http id='254' port='${hport}'/>

  <srun id='2' host='' port='${aport}'/>
  <srun id='3' host='' port='${aport}'/>
  <srun id='4' host='' port='${aport}'/>
  <srun id='5' host='' port='${aport}'/>
  <srun id='254' host='' port='${aport}'/>

And let's say the launch script calls something like this:
${ipend} = <final part of the full ip address>
<launch-command> -Dhport=10080 -Daport=20080 -server=${ipend}
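As a sketch, the launch wrapper could derive the id from the machine's address like this (assumptions: each box has a single primary interface, and `httpd.sh` stands in for whatever the actual Resin launch command is):

```shell
#!/bin/sh
# Hypothetical launch wrapper: use the final octet of the local IP
# address as the Resin server id, per the scheme described above.
ip=$(hostname -i 2>/dev/null || echo 10.0.0.2)  # local address; fallback is for illustration
ipend=${ip##*.}                                  # final part of the full ip address
echo "launching: httpd.sh -Dhport=10080 -Daport=20080 -server ${ipend}"
```

The `${ip##*.}` parameter expansion strips everything up to and including the last dot, leaving just the final octet.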

I'm thinking this solution would fill the logs with "connection failed" errors and add a lot of overhead dealing with failed connections. Another thought: is there some way to use a persistent store with always-load-session and always-save-session set, so that the cluster members don't need to know about each other, but instead handle their sessions through the database with no interaction among themselves? Perhaps better yet, they could store these sessions through a memcached instance, which could save/restore using a database but provide memory-based access to the session store.
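For reference, the database-backed variant of that idea might look roughly like this in resin.conf (a sketch based on my reading of the 3.0 docs; the `jdbc/session` DataSource name is a placeholder and would need to be defined elsewhere in the configuration):

```xml
<!-- Sketch only: JDBC-backed persistent sessions, so instances share
     state through the database instead of talking to each other.
     Assumes a DataSource named 'jdbc/session' is configured elsewhere. -->
<server>
  <persistent-store type="jdbc">
    <init>
      <data-source>jdbc/session</data-source>
    </init>
  </persistent-store>

  <web-app-default>
    <session-config>
      <use-persistent-store/>
      <always-load-session/>
      <always-save-session/>
    </session-config>
  </web-app-default>
</server>
```

With always-load-session and always-save-session on, every request would round-trip the session through the store, which is what would let the load-balancer send requests anywhere.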

All suggestions welcome. I'm hoping there's a known "best-practice" solution for this, so let me know.

resin-interest mailing list