Andrew Beekhof wrote:
On Fri, May 2, 2008 at 2:56 PM, Travis Sidelinger <[EMAIL PROTECTED]> wrote:
Hello everyone,

 This is my first time setting up heartbeat, and I'm running into a few
problems.

 My objective is to have a 2.x cluster of 4+ servers with the following
resources: different apache instances, ip addresses, application servers,
databases, NFS, and ISCSI.

 Currently, I have a version 1.x cluster of 3 servers up and running with
apache and IP address resources.

highly dangerous!
Why is that?


 Documentation on setting up a 1.x versus a 2.x configuration has been a
little confusing.

 I've set up a 4th server to help develop a 2.x configuration.  The following
messages keep showing up in the logs.

 ---------------------------------
 May  2 08:56:05 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication
 May  2 08:56:07 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication
 May  2 08:56:09 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication
 May  2 08:56:09 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication
 May  2 08:56:11 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication
 May  2 08:56:13 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication
 May  2 08:56:15 ocdcweb037 heartbeat: [27911]: WARN: string2msg_ll: node
[ocdcweb034] failed authentication

 ---------------------------------

 How can I fix this?

I'm guessing you either forgot to copy the authkeys file to the new
node, or not all the nodes are listed in ha.cf.

Nope, both the authkeys and ha.cf files are identical on all 3 servers.
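
One quick sanity check worth running anyway: "failed authentication" in the
heartbeat logs typically means the authkeys contents (or permissions) differ
between nodes, even when the files look identical. A sketch of a checksum
comparison follows; the ssh loop in the comment, the hostnames, and the
/etc/ha.d/authkeys path are assumptions based on the defaults, and the demo
below just uses two local copies to stand in for two nodes:

```shell
#!/bin/sh
# On a real cluster you might compare all nodes at once, e.g.:
#   for h in ocdcweb034 ocdcweb037; do ssh "$h" md5sum /etc/ha.d/authkeys; done
# Self-contained demo: create two copies of a (made-up) authkeys file and
# verify their checksums agree.
tmpdir=$(mktemp -d)
printf 'auth 1\n1 sha1 SomeSharedSecret\n' > "$tmpdir/authkeys.node1"
printf 'auth 1\n1 sha1 SomeSharedSecret\n' > "$tmpdir/authkeys.node2"
sum1=$(md5sum "$tmpdir/authkeys.node1" | awk '{print $1}')
sum2=$(md5sum "$tmpdir/authkeys.node2" | awk '{print $1}')
if [ "$sum1" = "$sum2" ]; then
    echo "authkeys match"
else
    echo "authkeys DIFFER"
fi
rm -rf "$tmpdir"
```

If the checksums agree, also check that the file is mode 0600 and owned by
root on every node; heartbeat refuses overly permissive authkeys files.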


 My ha.cf file looks like this:

 -------------------------------------
 autojoin none           # autojoin other nodes
 crm on                  # Use version 1.x or version 2.x style
 logfacility local0      # syslog facility
 use_logd no             # another logging service
 debug 0                 # levels 0-255
 #bcast eth0             # node(s) to send heartbeats on
 ucast eth0 ocdcweb037   # nodes to send heartbeats to
 keepalive 2             # time between heartbeats
 warntime 10             # time before a late warning shows in the logs
 deadtime 30             # time before the node is pronounced dead
 initdead 120            # deadtime after a reboot, gives time for the
network to come up
 udpport 694             # udp port for heartbeat broadcast
 node ocdcweb037         # cluster node
 auto_failback on        # enables favorite member node

 #apiauth  mgmtd   uid=hacluster
 respawn   root    /usr/lib64/heartbeat/mgmtd -v
 #respawn  root    /sbin/evmsd
 #apiauth  evms    uid=hacluster,root

 # 2.x settings
 #apiauth stonithd        uid=root
 #apiauth crmd            uid=hacluster
 #apiauth cib             uid=hacluster
 #respawn hacluster       ccm
 #respawn hacluster       cib
 #respawn root            stonithd
 #respawn root            lrmd
 #respawn hacluster       crmd
 -------------------------------------
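
For what it's worth, the ha.cf above contains only a single `node ocdcweb037`
line and a single ucast target, yet the log shows ocdcweb034 sending packets.
A sketch of what the membership portion of a two-node config might look like
(hostnames taken from the logs above; whether eth0 is the right interface for
your network is an assumption):

```
# one ucast line per peer, one node line per cluster member
ucast eth0 ocdcweb034
ucast eth0 ocdcweb037    # listing the local node too lets ha.cf be identical everywhere
node ocdcweb034
node ocdcweb037
```

With `autojoin none`, a node whose hostname is missing from the node lines
will be rejected, which can surface in the logs much like the warnings above.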

 No resources have been configured yet for the 2.x configuration.

 Thanks

 -Travis Sidelinger
 _______________________________________________
 Linux-HA mailing list
 [email protected]
 http://lists.linux-ha.org/mailman/listinfo/linux-ha
 See also: http://linux-ha.org/ReportingProblems


