Hello Mailing List,

Problem solved: the location "rule" caused it. I used "crm configure edit" to remove the lines, and that did the trick. So I now know that I must do some reading to understand the tools and infrastructure of Pacemaker & Corosync.
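For the archives, the same cleanup can also be done from the shell instead of hand-editing the CIB. This is only a sketch, assuming the constraint name `cli-standby-failover-ip` from the config quoted below:

```shell
# Delete the leftover "move" constraint by name (equivalent to removing
# the lines interactively with "crm configure edit"):
crm configure delete cli-standby-failover-ip

# Or, as the warning printed by "crm resource move" itself suggests,
# clear the move constraint for the resource with crm_resource:
crm_resource -U -r failover-ip

# Verify that the constraint is gone:
crm configure show | grep cli-standby
```

Both commands need a live cluster, so run them on one of the nodes while the CIB is reachable.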
Thanks a lot anyway. I'm sure that I'll ask for help again in the near future,
Ben

On Sat, May 7, 2011 at 3:39 PM, Ben Schmidt <[email protected]> wrote:
> Hello,
>
> I just started with Pacemaker + Corosync a couple of hours ago, so please
> excuse my major screw-ups.
> What I have achieved so far is a two-node cluster whose nodes talk to each
> other, and one resource on that cluster. My problem is that the
> resource, a failover IP, doesn't fail over to the other node when I
> take down the node the resource is started on, although I can move the
> resource manually.
>
> Status:
> ###############
> deb-cluster01:~# crm_mon --one-shot
> ============
> Last updated: Sat May 7 15:15:56 2011
> Stack: openais
> Current DC: deb-cluster02 - partition with quorum
> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
> 2 Nodes configured, 2 expected votes
> 1 Resources configured.
> ============
>
> Online: [ deb-cluster01 deb-cluster02 ]
>
> failover-ip (ocf::heartbeat:IPaddr2): Started deb-cluster01
> ###############
>
> This happens when I take down deb-cluster01:
> ###############
> deb-cluster02:/var/log# crm_mon --one-shot
> ============
> Last updated: Sat May 7 15:16:40 2011
> Stack: openais
> Current DC: deb-cluster02 - partition WITHOUT quorum
> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
> 2 Nodes configured, 2 expected votes
> 1 Resources configured.
> ============
>
> Online: [ deb-cluster02 ]
> OFFLINE: [ deb-cluster01 ]
> ###############
>
>
> Here is my config:
> ###############
> deb-cluster02:/var/log# crm configure show
> node deb-cluster01 \
>         attributes standby="off"
> node deb-cluster02
> primitive failover-ip ocf:heartbeat:IPaddr2 \
>         params ip="10.0.2.250" cidr_netmask="22" \
>         op monitor interval="1s" \
>         meta is-managed="true"
> location cli-standby-failover-ip failover-ip \
>         rule $id="cli-standby-rule-failover-ip" -inf: #uname eq deb-cluster02
> property $id="cib-bootstrap-options" \
>         dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> ###############
>
> A failover to deb-cluster02:
> ###############
> deb-cluster01:~# crm_mon --one-shot | grep Started
> failover-ip (ocf::heartbeat:IPaddr2): Started deb-cluster01
> deb-cluster01:~# crm resource move failover-ip
> WARNING: Creating rsc_location constraint 'cli-standby-failover-ip'
> with a score of -INFINITY for resource failover-ip on deb-cluster01.
> This will prevent failover-ip from running on deb-cluster01 until the
> constraint is removed using the 'crm_resource -U' command or manually
> with cibadmin.
> This will be the case even if deb-cluster01 is the last node in the
> cluster.
> This message can be disabled with -Q.
> deb-cluster01:~# crm_mon --one-shot | grep Started
> failover-ip (ocf::heartbeat:IPaddr2): Started deb-cluster02
> ###############
>
> Here is the log of deb-cluster01 from when failover-ip was running on the
> other node and I restarted the node: http://pastebin.com/ftLXLBfT
>
> I'm using a plain Debian Squeeze amd64; everything comes from the
> distribution.
>
>
> Could somebody please point me in a direction on where to look?
>
> Thanks a lot,
>
> Ben

_______________________________________________
Pacemaker mailing list: [email protected]
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
