Beautiful. Thanks a LOT for sharing this info. It took a big uncertainty
off the table, as I was unsure whether OTRS could handle load balancing
without tweaking it in some ugly ways.

The DMS integration is very nice. If authentication/authorization against
the DMS was an issue in your setup, how did you handle it? I can see
single sign-on covering authentication, but I can't imagine how
authorization would work.



On Tue, Jan 29, 2013 at 11:12 PM, David Boyes <[email protected]> wrote:

>    I'm not familiar with LVS or Linux-HA (I mostly used MS platforms
> until recently), so the next question may be born of confusion: you have
> the load balancing performed by machines running LVS, and Linux-HA is
> running on the app nodes?
>
>
> Correct. LVS handles session distribution and session affinity. Linux-HA
> handles resilience for the application nodes: if one node fails or goes
> unresponsive, Linux-HA can move the workload to the other server, shoot
> the unresponsive node in the head (STONITH), and take over. In the
> future, I plan to have it provision an entirely new server node in one
> of the virtual machine pools, once I get a decent grasp of the VMware
> orchestration API.
>
>
> Just to make sure I understood some of your points:
> a. You have a load balancer in front of your app nodes using a generic
> sticky-sessions algorithm that you didn't have to configure in any way
> for the OTRS nodes. Correct?
>
> Correct.
>
> We took the simple approach of assigning affinity based on the incoming
> IP address, because that’s how the Cisco boxes we had before did it. We
> had to do a bit of fiddling and measuring of our traffic load to tell
> LVS how often to time out the affinity relationship (how long an IP is
> associated with a specific OTRS server instance). You also have to
> accept that SSL protection of the session ends at the LVS systems, but
> in our case that’s acceptable (if you have physical access to the
> wires, we’re already screwed). I suspect that OTRS session data in the
> database is sufficient nowadays, but we’ve never bothered to change the
> LVS setup. Ain’t broke, don’t fix it.
>
>
> b. In addition, because of your clever attachment juggling, and because
> the user session is stored in the DB, you did not have to set up shared
> storage between the app nodes. Correct?
>
> Correct. Data management really shouldn’t be OTRS’ job.
>
> One last question: if you have no shared storage, how did you solve the
> default ticket number generator's dependency on the TicketCounter.log
> file? A custom generator?
>
> Yes. A simple piece of code that uses an autoincrement field in a
> database table. I’ll see if I can get approval to post it. OTRS
> mainline really should move to something like it.
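The idea can be sketched like this. This is a hypothetical illustration,
not the poster's actual code (which is OTRS Perl and wasn't published);
SQLite stands in for the real MySQL backend:

```python
import sqlite3

# A counter table whose only job is handing out monotonically increasing
# IDs. The database guarantees atomicity, so any number of OTRS app
# nodes can allocate ticket numbers without shared storage.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ticket_counter (id INTEGER PRIMARY KEY AUTOINCREMENT)"
)

def next_ticket_number(conn):
    """Allocate the next ticket number via an autoincrement insert."""
    cur = conn.execute("INSERT INTO ticket_counter DEFAULT VALUES")
    return cur.lastrowid

print(next_ticket_number(conn))  # -> 1
print(next_ticket_number(conn))  # -> 2
```

With MySQL the table would use an AUTO_INCREMENT column and the same
insert-then-read-last-id pattern.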
>
> One thing that is a bit weird if you use DB replication (rather than
> cluster mode) is that your autoincrement values are a function of the
> number of replicated nodes (i.e., our ticket numbers increment by 3,
> not 1). That’s a MySQL thing, though.
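That increment-by-3 behaviour comes from MySQL's replication settings;
on a three-node setup the servers are typically configured along these
lines (hypothetical values, not the poster's config):

```
# my.cnf fragment on node 1 of a 3-node multi-master ring
auto_increment_increment = 3   # step size = number of writable nodes
auto_increment_offset    = 1   # unique per node: 1, 2, or 3
```

Each node then allocates a disjoint series (1,4,7… / 2,5,8… / 3,6,9…),
which avoids collisions at the cost of non-consecutive ticket numbers.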
>
> ---------------------------------------------------------------------
> OTRS mailing list: otrs - Webpage: http://otrs.org/
> Archive: http://lists.otrs.org/pipermail/otrs
> To unsubscribe: http://lists.otrs.org/cgi-bin/listinfo/otrs
>