Sure!

server2:/home/milesf# crm configure show
node $id="0444a71c-c1b0-40c2-83e5-c37005379450" server3 \
    attributes standby="off"
node $id="14d2d0ed-59d1-4729-a149-d5421b6a4988" server2 \
    attributes standby="off"
primitive production2 ocf:heartbeat:Xen \
    meta target-role="Started" is-managed="true" \
    op monitor interval="10s" timeout="60s" requires="nothing" \
    op start interval="0" timeout="60s" start-delay="0" \
    op stop interval="0" timeout="300s" \
    params xmfile="/etc/xen/production2.cfg" \
    meta target-role="Started" is-managed="true"
primitive server1 ocf:heartbeat:Xen \
    meta target-role="Started" is-managed="true" \
    op monitor interval="10s" timeout="60s" requires="nothing" \
    op start interval="0" timeout="60s" start-delay="0" \
    op stop interval="0" timeout="300s" \
    params xmfile="/etc/xen/newserver1.cfg" \
    meta target-role="Started" is-managed="true"
location prefer-production2 production2 100: server2
location prefer-server1 server1 100: server3
property $id="cib-bootstrap-options" \
    dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
    is-managed-default="true" \
    cluster-infrastructure="Heartbeat" \
    last-lrm-refresh="1375018083" \
    stonith-enable="false" \
    no-quorum-policy="ignore" \
    stonith-enabled="false" \
    default-resource-stickiness="1000"
rsc_defaults $id="rsc-options" \
    resource-stickiness="1000"
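One detail worth flagging in the output above: the cluster properties include both stonith-enable="false" (a misspelling, which the CIB stores but Pacemaker never reads) and the correctly spelled stonith-enabled="false", and each Xen primitive carries a duplicated meta block. Assuming the standard crm_attribute tool that ships with Pacemaker, the stray property could be removed with something like:

```shell
# Delete the misspelled cluster property; the correctly spelled
# stonith-enabled="false" entry is the one Pacemaker actually honors.
crm_attribute --type crm_config --name stonith-enable --delete

# Confirm only the correctly spelled property remains.
crm configure show | grep stonith
```

This is a cleanup sketch, not a fix for the failover problem itself; the misspelled property is harmless noise, but removing it avoids confusion when reading the config later.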

emmanuel segura <emi2fast at gmail.com> wrote:
Hello

Can you show us the output of "crm configure show"?
Thanks


2013/7/27 Miles Fidelman <mfidelman at meetinghouse.net>:

> Hi Folks,
>
> Dual-node, pacemaker cluster, DRBD-backed Xen virtual machines - one of
> our VMs will run on one node, but not the other, and "crm status" yields a
> failure message saying that starting the resource failed for unknown
> reasons.  The log is only slightly less useless:
>
> (server2 and server3 are the nodes, server1 is the resource)
> <server3, running server1, crashes>
> <node entries from server2 trying to failover the resource>
>
> Jul 27 06:27:06 server2 pengine: [1365]: info: get_failcount: server1 has
> failed INFINITY times on server2
> Jul 27 06:27:06 server2 pengine: [1365]: WARN: common_apply_stickiness:
> Forcing server1 away from server2 after 1000000 failures (max=1000000)
> Jul 27 06:27:06 server2 pengine: [1365]: info: native_color: Resource
> server1 cannot run anywhere
> Jul 27 06:27:06 server2 pengine: [1365]: notice: LogActions: Leave
> resource server1#011(Stopped)
>
> Attempts to migrate the server fail with the same errors.  Failover USED
> to work just fine.  It still works for other VMs.  Any idea how to track
> down what's failing?
>
> Thanks very much,
>
> Miles Fidelman
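The "failed INFINITY times" line in the log above is the key symptom: once a resource's failcount on a node reaches the threshold, the policy engine bans it from that node, and the count persists in the CIB until it is cleared. A hedged sketch of the usual inspect-and-clear steps (tool names are standard Pacemaker/crmsh; exact option spellings vary between versions):

```shell
# Inspect the stored failcount for server1 on node server2
# (crm_failcount ships with Pacemaker; older releases use -U <uname>
# instead of --node).
crm_failcount --query --resource server1 --node server2

# Clear the failcount and failed-operation history so the policy
# engine will consider the node again.
crm resource cleanup server1
```

Note that cleanup only resets the ban; if the Xen resource agent still fails to start the domU on server2, the failcount will climb right back to INFINITY, so the underlying start failure still needs diagnosing.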

--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra

_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
