Doug,

What about something like monit to make sure sshd is up and running, and restart it if it crashes?
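For what it's worth, a monit stanza for this is only a few lines. This is a hypothetical sketch, not a tested config; the pid-file path and init-script location vary by distro:

```
# /etc/monit/monitrc (excerpt) -- paths are assumptions, adjust per distro
check process sshd with pidfile /var/run/sshd.pid
    start program = "/etc/init.d/ssh start"
    stop program  = "/etc/init.d/ssh stop"
    if failed port 22 protocol ssh then restart
    if 5 restarts within 5 cycles then timeout
```

The `protocol ssh` test makes monit actually speak the SSH banner exchange on port 22 rather than just checking that the process exists.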
Amy

-----Original Message-----
From: Doug Lochart <[EMAIL PROTECTED]>
Sent: Thursday, February 14, 2008 10:30am
To: General Linux-HA mailing list <[email protected]>
Subject: Re: [Linux-HA] resource script question (runlevel config)

> First off you don't want to make sshd an HA service or you will not be
> able to ssh to any node that is not the primary node. SSHD should be
> running on all servers at all times.

Unfortunately, the main app we are trying to make highly available is an rsync-based backup system. Originally we were running rsync as a daemon and tunneling over SSH. We discovered a security hole in that setup: a user could edit their scripts and change module names to another client's, gaining access to that client's data, because they were already authenticated to the box via the SSH tunnel and the rsync daemon has access to all the modules.

We needed more control and better security, so we switched to running rsync within an SSH connection. This starts an rsync process in daemon mode, but it runs as that user and can ONLY access the modules defined in that user's own configuration in their home directory. We also added a lot of protection in an rsync wrapper script that runs first to intercept and augment/protect/reject commands coming in via SSH, to keep a user from doing ANYTHING else besides the rsync command.

To make a long story short, SSH is the backbone of our app. I had not thought about SSH being unavailable on the secondary node, so that is a concern, especially for remote administration.

> We start heartbeat at boot on all of our clusters and I can't see a reason
> why you would not. You, of course, cannot start the services which you are
> making HA as they are started by heartbeat when it runs.

Thanks. Just trying to be clear. I appreciate the tutorials, as I know they are a give-back to others like me, but when you are struggling to come up to speed quickly (as is always the case), little details omitted make all the difference in understanding.
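Doug's actual wrapper isn't shown here, but the usual pattern for this kind of lockdown is a forced command in `~/.ssh/authorized_keys` that inspects `SSH_ORIGINAL_COMMAND`. The sketch below is a hypothetical minimal version, not Doug's script; the allowed command string, config path, and script location are all assumptions:

```shell
#!/bin/sh
# rsync-wrapper.sh -- hypothetical forced-command wrapper sketch.
# Installed via authorized_keys so sshd always runs it instead of the
# user's requested command:
#   command="/usr/local/bin/rsync-wrapper.sh",no-port-forwarding,no-pty ssh-rsa AAAA...

# Return 0 only if the incoming command is the single allowed rsync
# invocation; reject anything containing shell metacharacters outright.
check_command() {
    case "$1" in
        *\;*|*\|*|*\&*|*\`*|*\$*)
            return 1 ;;                      # metacharacters: reject
        "rsync --server --daemon .")
            return 0 ;;                      # the one permitted command
        *)
            return 1 ;;                      # anything else: reject
    esac
}

# Dispatch only when sshd actually handed us a command.
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    if check_command "$SSH_ORIGINAL_COMMAND"; then
        # Single-use daemon mode, confined to the user's own module config.
        exec rsync --server --daemon --config="$HOME/rsyncd.conf" .
    else
        echo "rejected: only the rsync server command is allowed" >&2
        exit 1
    fi
fi
```

Because the wrapper re-execs rsync with an explicit per-user `--config`, a client can never name a module belonging to another customer, which closes exactly the hole described above.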
> In any failover the nodes do know their roles when they come up because
> they talk to the cluster. How this is handled depends on your
> configuration. I would advise using v2 configs and letting heartbeat
> manage the resources. We have both v1 and v2 in our environment and
> while v2 is insane to set up and get working just right it is worth it
> in the long run. The short answer is that, in your scenario, server2
> would indeed know that it is primary.

Great. So if V1 works, why is V2 so much better? I would like to get this up and running in a test environment first under V1, so I can at least get a grip on how it all works and do some testing. After that I may try the leap to V2.

regards,

Doug

> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>
> --
> What profits a man if he gains the whole world yet loses his soul?

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
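For anyone following along: a v1 heartbeat setup really is just a couple of flat files, which is why it is the easier place to start. A hypothetical minimal example (node names, interface, IP, and the `myrsyncservice` resource script are all made up for illustration):

```
# /etc/ha.d/ha.cf (excerpt) -- both nodes get the same file
node server1 server2
bcast eth0
auto_failback on

# /etc/ha.d/haresources -- identical on both nodes.
# server1 is the preferred owner; on failover heartbeat moves the
# cluster IP and the resource script to server2, which is how
# server2 "knows" it is now primary.
server1 192.168.1.100 myrsyncservice
```

Here `myrsyncservice` would be a start/stop script in `/etc/ha.d/resource.d/` or `/etc/init.d/`; heartbeat calls it with `start` and `stop`, which is why HA-managed services must not also be started by the normal runlevel scripts.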
