Thanks for looking, Dejan.

Yes, I am able to start /etc/init.d/nfs. I have the same script named 
/etc/init.d/nfsserver, since the OCF agent is now using /etc/init.d/nfsserver 
as its nfs_init_script.
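
For what it's worth, here is roughly how the script can be sanity-checked by 
hand outside the cluster (a minimal sketch on my part, assuming LSB-style 
exit codes; adjust the path if yours differs):

  # Start, check status, stop, then check status again, printing each exit
  # code. An LSB-compliant init script should return 0 from "status" while
  # the services run and 3 after a clean stop.
  /etc/init.d/nfsserver start  ; echo "start:  rc=$?"
  /etc/init.d/nfsserver status ; echo "status: rc=$?"   # expect rc=0
  /etc/init.d/nfsserver stop   ; echo "stop:   rc=$?"
  /etc/init.d/nfsserver status ; echo "status: rc=$?"   # expect rc=3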

This is my log file from when I add the NFS resource using hb_gui. All my 
nodes have an equal score; is that a problem?
In the GUI it says NFS_home2 is unmanaged on all nodes. Why is it trying to 
start/stop this resource on all nodes?
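
From what I can tell, the NFS_home2_monitor_0 actions in the log below are 
the one-off probes the cluster runs on every node to discover the resource's 
state, not real starts: they were expected to return 7 (OCF_NOT_RUNNING) but 
came back 0 and 1, so the PE concluded the resource was already active on all 
three nodes and scheduled a stop everywhere. A minimal sketch of the exit 
codes a probe should produce (my assumption about the monitor logic, not the 
shipped agent):

  # Hypothetical monitor/probe skeleton with proper OCF exit codes.
  # 0 (OCF_SUCCESS) means running, 7 (OCF_NOT_RUNNING) means cleanly
  # stopped; anything else is treated as a failure, as seen in the log.
  nfsserver_monitor() {
      if /etc/init.d/nfsserver status >/dev/null 2>&1; then
          return 0   # OCF_SUCCESS: NFS is running on this node
      else
          return 7   # OCF_NOT_RUNNING: the normal answer to an idle probe
      fi
  }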



Feb 24 09:14:00 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (102) for running the listed resources (chose hos001a):
Feb 24 09:14:00 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP-004   (hos001a)
Feb 24 09:14:00 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa1 (hos001a)
Feb 24 09:14:00 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa1  (hos001a)
Feb 24 09:14:00 hos002a pengine: [14393]: info: process_pe_message: Transition 
101: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-153.bz2

------------------ Added resources -----------

Feb 24 09:15:01 hos002a mgmtd: [14382]: info: on_add_rsc:<group 
id="GROUP_002"><primitive id="NFS_home2" class="ocf" type="nfsserver" 
provider="heartbeat"><instance_attributes id="NFS_home2_instance_attrs"> 
<attributes><nvpair id="2b5a6ec2-bdfa-4484-a835-ac47bd828e2d" 
name="nfs_init_script" value="/etc/init.d/nfsserver"/><nvpair 
id="90f160f1-87cf-4843-b042-152b1f32e9bd" name="nfs_shared_infodir" 
value="/var/lib/nfs"/><nvpair id="ffbc10b8-33d0-4aaf-9a72-4bcc381415d6" 
name="nfs_ip" 
value="10.26.16.6"/></attributes></instance_attributes></primitive></group>
Feb 24 09:15:01 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: - <cib 
epoch="325"/>
Feb 24 09:15:01 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE 
origin=route_message ]
Feb 24 09:15:01 hos002a tengine: [14392]: info: update_abort_priority: Abort 
priority upgraded to 1000000
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: + <cib 
epoch="326">
Feb 24 09:15:01 hos002a crmd: [14378]: info: do_state_transition: All 3 cluster 
nodes are eligible to run resources.
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +   
<configuration>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +     
<resources>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
<group id="GROUP_002">
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
  <primitive id="NFS_home2" class="ocf" type="nfsserver" provider="heartbeat">
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
    <instance_attributes id="NFS_home2_instance_attrs">
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
      <attributes>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
        <nvpair id="2b5a6ec2-bdfa-4484-a835-ac47bd828e2d" 
name="nfs_init_script" value="/etc/init.d/nfsserver"/>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
        <nvpair id="90f160f1-87cf-4843-b042-152b1f32e9bd" 
name="nfs_shared_infodir" value="/var/lib/nfs"/>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
        <nvpair id="ffbc10b8-33d0-4aaf-9a72-4bcc381415d6" name="nfs_ip" 
value="10.26.16.6"/>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
      </attributes>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
    </instance_attributes>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
  </primitive>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +       
</group>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +     
</resources>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: +   
</configuration>
Feb 24 09:15:01 hos002a cib: [14373]: info: log_data_element: cib:diff: + </cib>
Feb 24 09:15:01 hos002a cib: [8850]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
Feb 24 09:15:01 hos002a cib: [8850]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
Feb 24 09:15:01 hos002a cib: [8850]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: unpack_config: On loss of CCM 
Quorum: Ignore
Feb 24 09:15:01 hos002a pengine: [14393]: info: determine_online_status: Node 
hos002a is online
Feb 24 09:15:01 hos002a pengine: [14393]: info: determine_online_status: Node 
hos001a is online
Feb 24 09:15:01 hos002a pengine: [14393]: info: determine_online_status: Node 
hos003a is online
Feb 24 09:15:01 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_001
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     IP_001      
(heartbeat::ocf:IPaddr2):       Started hos001a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     LVM_home1   
(heartbeat::ocf:LVM):   Started hos001a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     FS_home1    
(heartbeat::ocf:Filesystem):    Started hos001a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_002
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     IP_002      
(heartbeat::ocf:IPaddr2):       Started hos002a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     LVM_home2   
(heartbeat::ocf:LVM):   Started hos002a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     LVM_tpa3    
(heartbeat::ocf:LVM):   Started hos002a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     FS_home2    
(heartbeat::ocf:Filesystem):    Started hos002a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     FS_tpa3     
(heartbeat::ocf:Filesystem):    Started hos002a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     NFS_home2   
(heartbeat::ocf:nfsserver):     Stopped 
Feb 24 09:15:01 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_003
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     IP_003      
(heartbeat::ocf:IPaddr2):       Started hos003a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     LVM_home3   
(heartbeat::ocf:LVM):   Started hos003a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     LVM_tpa2    
(heartbeat::ocf:LVM):   Started hos003a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     FS_home3    
(heartbeat::ocf:Filesystem):    Started hos003a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     FS_tpa2     
(heartbeat::ocf:Filesystem):    Started hos003a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_004
Feb 24 09:15:01 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     IP-004      
(heartbeat::ocf:IPaddr2):       Started hos001a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     LVM_tpa1    
(heartbeat::ocf:LVM):   Started hos001a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: native_print:     FS_tpa1     
(heartbeat::ocf:Filesystem):    Started hos001a
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_001   (hos001a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home1        (hos001a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home1 (hos001a)
Feb 24 09:15:01 hos002a pengine: [14393]: WARN: native_color: Resource 
NFS_home2 cannot run anywhere
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_002   (hos002a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home2        (hos002a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa3 (hos002a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home2 (hos002a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa3  (hos002a)
Feb 24 09:15:01 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (104) for running the listed resources (chose hos003a):
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_003   (hos003a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home3        (hos003a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa2 (hos003a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home3 (hos003a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa2  (hos003a)
Feb 24 09:15:01 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (102) for running the listed resources (chose hos001a):
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP-004   (hos001a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa1 (hos001a)
Feb 24 09:15:01 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa1  (hos001a)
Feb 24 09:15:01 hos002a cib: [8850]: info: write_cib_contents: Wrote version 
0.326.1 of the CIB to disk (digest: e395799738e119d85ba46f01882b248a)
Feb 24 09:15:01 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS 
cause=C_IPC_MESSAGE origin=route_message ]
Feb 24 09:15:01 hos002a tengine: [14392]: info: unpack_graph: Unpacked 
transition 102: 7 actions in 7 synapses
Feb 24 09:15:01 hos002a tengine: [14392]: info: send_rsc_command: Initiating 
action 3: NFS_home2_monitor_0 on hos001a
Feb 24 09:15:01 hos002a tengine: [14392]: info: send_rsc_command: Initiating 
action 5: NFS_home2_monitor_0 on hos002a
Feb 24 09:15:01 hos002a tengine: [14392]: info: send_rsc_command: Initiating 
action 7: NFS_home2_monitor_0 on hos003a
Feb 24 09:15:01 hos002a crmd: [14378]: info: do_lrm_rsc_op: Performing 
op=NFS_home2_monitor_0 key=5:102:df84890a-e35d-4e96-bb8e-01c854321e83)
Feb 24 09:15:01 hos002a lrmd: [14374]: info: rsc:NFS_home2: monitor
Feb 24 09:15:01 hos002a cib: [8850]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
Feb 24 09:15:01 hos002a cib: [8850]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
Feb 24 09:15:01 hos002a pengine: [14393]: WARN: process_pe_message: Transition 
102: WARNINGs found during PE processing. PEngine Input stored in: 
/var/lib/heartbeat/pengine/pe-warn-356.bz2
Feb 24 09:15:01 hos002a pengine: [14393]: info: process_pe_message: 
Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" 
to identify issues.
Feb 24 09:15:01 hos002a crmd: [14378]: info: process_lrm_event: LRM operation 
NFS_home2_monitor_0 (call=70, rc=0) complete 
Feb 24 09:15:01 hos002a cib: [14373]: info: cib_stats: Processed 68 operations 
(8970.00us average, 0% utilization) in the last 10min
Feb 24 09:15:01 hos002a tengine: [14392]: info: status_from_rc: Re-mapping op 
status to LRM_OP_ERROR for rc=0
Feb 24 09:15:01 hos002a tengine: [14392]: WARN: status_from_rc: Action monitor 
on hos002a failed (target: 7 vs. rc: 0): Error
Feb 24 09:15:01 hos002a tengine: [14392]: info: update_abort_priority: Abort 
priority upgraded to 1
Feb 24 09:15:01 hos002a tengine: [14392]: info: update_abort_priority: Abort 
action 0 superceeded by 2
Feb 24 09:15:01 hos002a tengine: [14392]: info: match_graph_event: Action 
NFS_home2_monitor_0 (5) confirmed on hos002a (rc=4)
Feb 24 09:15:02 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:02 hos002a tengine: [14392]: WARN: status_from_rc: Action monitor 
on hos003a failed (target: 7 vs. rc: 1): Error
Feb 24 09:15:02 hos002a tengine: [14392]: info: match_graph_event: Action 
NFS_home2_monitor_0 (7) confirmed on hos003a (rc=4)
Feb 24 09:15:02 hos002a tengine: [14392]: WARN: status_from_rc: Action monitor 
on hos001a failed (target: 7 vs. rc: 1): Error
Feb 24 09:15:02 hos002a tengine: [14392]: info: match_graph_event: Action 
NFS_home2_monitor_0 (3) confirmed on hos001a (rc=4)
Feb 24 09:15:02 hos002a tengine: [14392]: info: run_graph: 
====================================================
Feb 24 09:15:02 hos002a tengine: [14392]: notice: run_graph: Transition 102: 
(Complete=3, Pending=0, Fired=0, Skipped=1, Incomplete=3)
Feb 24 09:15:02 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC 
cause=C_IPC_MESSAGE origin=route_message ]
Feb 24 09:15:02 hos002a crmd: [14378]: info: do_state_transition: All 3 cluster 
nodes are eligible to run resources.
Feb 24 09:15:02 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:02 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS 
cause=C_IPC_MESSAGE origin=route_message ]
Feb 24 09:15:02 hos002a tengine: [14392]: info: unpack_graph: Unpacked 
transition 103: 6 actions in 6 synapses
Feb 24 09:15:02 hos002a pengine: [14393]: notice: unpack_config: On loss of CCM 
Quorum: Ignore
Feb 24 09:15:02 hos002a lrmd: [14374]: info: rsc:NFS_home2: stop
Feb 24 09:15:02 hos002a crmd: [14378]: info: do_lrm_rsc_op: Performing 
op=NFS_home2_stop_0 key=28:103:df84890a-e35d-4e96-bb8e-01c854321e83)
Feb 24 09:15:02 hos002a tengine: [14392]: info: te_pseudo_action: Pseudo action 
31 fired and confirmed
Feb 24 09:15:02 hos002a pengine: [14393]: info: determine_online_status: Node 
hos002a is online
Feb 24 09:15:02 hos002a tengine: [14392]: info: send_rsc_command: Initiating 
action 1: NFS_home2_stop_0 on hos001a
Feb 24 09:15:02 hos002a pengine: [14393]: info: determine_online_status: Node 
hos001a is online
Feb 24 09:15:02 hos002a tengine: [14392]: info: send_rsc_command: Initiating 
action 2: NFS_home2_stop_0 on hos003a
Feb 24 09:15:02 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_monitor_0 on hos001a: Error
Feb 24 09:15:02 hos002a tengine: [14392]: info: send_rsc_command: Initiating 
action 28: NFS_home2_stop_0 on hos002a
Feb 24 09:15:02 hos002a pengine: [14393]: ERROR: native_add_running: Resource 
ocf::nfsserver:NFS_home2 appears to be active on 2 nodes.
Feb 24 09:15:02 hos002a pengine: [14393]: ERROR: See 
http://linux-ha.org/v2/faq/resource_too_active for more information.
Feb 24 09:15:02 hos002a pengine: [14393]: info: determine_online_status: Node 
hos003a is online
Feb 24 09:15:02 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_monitor_0 on hos003a: Error
Feb 24 09:15:02 hos002a pengine: [14393]: ERROR: native_add_running: Resource 
ocf::nfsserver:NFS_home2 appears to be active on 3 nodes.
Feb 24 09:15:02 hos002a pengine: [14393]: ERROR: See 
http://linux-ha.org/v2/faq/resource_too_active for more information.
Feb 24 09:15:02 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_001
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     IP_001      
(heartbeat::ocf:IPaddr2):       Started hos001a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     LVM_home1   
(heartbeat::ocf:LVM):   Started hos001a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     FS_home1    
(heartbeat::ocf:Filesystem):    Started hos001a
Feb 24 09:15:02 hos002a rpcsvcgssd: rpc.svcgssd shutdown failed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_002
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     IP_002      
(heartbeat::ocf:IPaddr2):       Started hos002a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     LVM_home2   
(heartbeat::ocf:LVM):   Started hos002a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     LVM_tpa3    
(heartbeat::ocf:LVM):   Started hos002a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     FS_home2    
(heartbeat::ocf:Filesystem):    Started hos002a
Feb 24 09:15:02 hos002a nfsserver: rpc.mountd shutdown failed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     FS_tpa3     
(heartbeat::ocf:Filesystem):    Started hos002a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     NFS_home2   
(heartbeat::ocf:nfsserver)
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:         0 : 
hos002a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:         1 : 
hos001a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:         2 : 
hos003a
Feb 24 09:15:02 hos002a nfsserver: nfsd shutdown failed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_003
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     IP_003      
(heartbeat::ocf:IPaddr2):       Started hos003a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     LVM_home3   
(heartbeat::ocf:LVM):   Started hos003a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     LVM_tpa2    
(heartbeat::ocf:LVM):   Started hos003a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     FS_home3    
(heartbeat::ocf:Filesystem):    Started hos003a
Feb 24 09:15:02 hos002a nfsserver: rpc.rquotad shutdown failed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     FS_tpa2     
(heartbeat::ocf:Filesystem):    Started hos003a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_004
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     IP-004      
(heartbeat::ocf:IPaddr2):       Started hos001a
Feb 24 09:15:02 hos002a nfsserver: Shutting down NFS services:  failed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     LVM_tpa1    
(heartbeat::ocf:LVM):   Started hos001a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: native_print:     FS_tpa1     
(heartbeat::ocf:Filesystem):    Started hos001a
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_001   (hos001a)
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home1        (hos001a)
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home1 (hos001a)
Feb 24 09:15:02 hos002a pengine: [14393]: WARN: native_color: Resource 
NFS_home2 cannot run anywhere
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_002   (hos002a)
Feb 24 09:15:02 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home2        (hos002a)
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa3 (hos002a)
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home2 (hos002a)
Feb 24 09:15:02 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:02 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa3  (hos002a)
Feb 24 09:15:03 hos002a pengine: [14393]: ERROR: native_create_actions: 
Attempting recovery of resource NFS_home2
Feb 24 09:15:03 hos002a pengine: [14393]: notice: StopRsc:   hos002a    Stop 
NFS_home2
Feb 24 09:15:03 hos002a pengine: [14393]: notice: StopRsc:   hos001a    Stop 
NFS_home2
Feb 24 09:15:03 hos002a pengine: [14393]: notice: StopRsc:   hos003a    Stop 
NFS_home2
Feb 24 09:15:03 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (104) for running the listed resources (chose hos003a):
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_003   (hos003a)
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home3        (hos003a)
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa2 (hos003a)
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home3 (hos003a)
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa2  (hos003a)
Feb 24 09:15:03 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (102) for running the listed resources (chose hos001a):
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP-004   (hos001a)
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa1 (hos001a)
Feb 24 09:15:03 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa1  (hos001a)
Feb 24 09:15:03 hos002a nfsserver[8889]: [8901]: INFO: +++++STOP++++++Stopping 
NFS server ...
Feb 24 09:15:03 hos002a pengine: [14393]: ERROR: process_pe_message: Transition 
103: ERRORs found during PE processing. PEngine Input stored in: 
/var/lib/heartbeat/pengine/pe-error-90.bz2
Feb 24 09:15:03 hos002a pengine: [14393]: info: process_pe_message: 
Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" 
to identify issues.
Feb 24 09:15:03 hos002a nfsserver[8889]: [8943]: ERROR: IT FAILED stop NFS 
server
Feb 24 09:15:03 hos002a crmd: [14378]: ERROR: process_lrm_event: LRM operation 
NFS_home2_stop_0 (call=71, rc=1) Error unknown error
Feb 24 09:15:03 hos002a tengine: [14392]: WARN: status_from_rc: Action stop on 
hos002a failed (target: <null> vs. rc: 1): Error
Feb 24 09:15:03 hos002a tengine: [14392]: WARN: update_failcount: Updating 
failcount for NFS_home2 on a5793c77-363e-4a23-80df-cb7d552f998f after failed 
stop: rc=1
Feb 24 09:15:03 hos002a tengine: [14392]: info: update_abort_priority: Abort 
priority upgraded to 1
Feb 24 09:15:03 hos002a tengine: [14392]: info: update_abort_priority: Abort 
action 0 superceeded by 2
Feb 24 09:15:03 hos002a tengine: [14392]: info: match_graph_event: Action 
NFS_home2_stop_0 (28) confirmed on hos002a (rc=4)
Feb 24 09:15:04 hos002a cib: [14373]: info: sync_our_cib: Syncing CIB to hos003a
Feb 24 09:15:04 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:04 hos002a tengine: [14392]: WARN: status_from_rc: Action stop on 
hos003a failed (target: <null> vs. rc: 1): Error
Feb 24 09:15:04 hos002a tengine: [14392]: WARN: update_failcount: Updating 
failcount for NFS_home2 on 2cff3fca-3825-4429-a204-550885f4d952 after failed 
stop: rc=1
Feb 24 09:15:04 hos002a tengine: [14392]: info: match_graph_event: Action 
NFS_home2_stop_0 (2) confirmed on hos003a (rc=4)
Feb 24 09:15:04 hos002a haclient: on_event:evt:cib_changed
Feb 24 09:15:04 hos002a cib: [14373]: info: sync_our_cib: Syncing CIB to hos001a
Feb 24 09:15:04 hos002a tengine: [14392]: WARN: status_from_rc: Action stop on 
hos001a failed (target: <null> vs. rc: 1): Error
Feb 24 09:15:04 hos002a tengine: [14392]: WARN: update_failcount: Updating 
failcount for NFS_home2 on f3b0907e-b907-4057-89fe-a813ff5ef021 after failed 
stop: rc=1
Feb 24 09:15:04 hos002a tengine: [14392]: info: match_graph_event: Action 
NFS_home2_stop_0 (1) confirmed on hos001a (rc=4)
Feb 24 09:15:04 hos002a tengine: [14392]: info: run_graph: 
====================================================
Feb 24 09:15:04 hos002a tengine: [14392]: notice: run_graph: Transition 103: 
(Complete=4, Pending=0, Fired=0, Skipped=2, Incomplete=0)
Feb 24 09:15:04 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC 
cause=C_IPC_MESSAGE origin=route_message ]
Feb 24 09:15:04 hos002a crmd: [14378]: info: do_state_transition: All 3 cluster 
nodes are eligible to run resources.
Feb 24 09:15:04 hos002a pengine: [14393]: notice: unpack_config: On loss of CCM 
Quorum: Ignore
Feb 24 09:15:04 hos002a pengine: [14393]: info: determine_online_status: Node 
hos002a is online
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_stop_0 on hos002a: Error
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Compatability 
handling for failed op NFS_home2_stop_0 on hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: info: determine_online_status: Node 
hos001a is online
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_monitor_0 on hos001a: Error
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_stop_0 on hos001a: Error
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Compatability 
handling for failed op NFS_home2_stop_0 on hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: info: native_add_running: resource 
NFS_home2 isnt managed
Feb 24 09:15:04 hos002a pengine: [14393]: info: determine_online_status: Node 
hos003a is online
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_monitor_0 on hos003a: Error
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Processing 
failed op NFS_home2_stop_0 on hos003a: Error
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: unpack_rsc_op: Compatability 
handling for failed op NFS_home2_stop_0 on hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: info: native_add_running: resource 
NFS_home2 isnt managed
Feb 24 09:15:04 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_001
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     IP_001      
(heartbeat::ocf:IPaddr2):       Started hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     LVM_home1   
(heartbeat::ocf:LVM):   Started hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     FS_home1    
(heartbeat::ocf:Filesystem):    Started hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_002
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     IP_002      
(heartbeat::ocf:IPaddr2):       Started hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     LVM_home2   
(heartbeat::ocf:LVM):   Started hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     LVM_tpa3    
(heartbeat::ocf:LVM):   Started hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     FS_home2    
(heartbeat::ocf:Filesystem):    Started hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     FS_tpa3     
(heartbeat::ocf:Filesystem):    Started hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     NFS_home2   
(heartbeat::ocf:nfsserver)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:         0 : 
hos002a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:         1 : 
hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:         2 : 
hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_003
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     IP_003      
(heartbeat::ocf:IPaddr2):       Started hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     LVM_home3   
(heartbeat::ocf:LVM):   Started hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     LVM_tpa2    
(heartbeat::ocf:LVM):   Started hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     FS_home3    
(heartbeat::ocf:Filesystem):    Started hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     FS_tpa2     
(heartbeat::ocf:Filesystem):    Started hos003a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: group_print: Resource Group: 
GROUP_004
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     IP-004      
(heartbeat::ocf:IPaddr2):       Started hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     LVM_tpa1    
(heartbeat::ocf:LVM):   Started hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: native_print:     FS_tpa1     
(heartbeat::ocf:Filesystem):    Started hos001a
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_001   (hos001a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home1        (hos001a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home1 (hos001a)
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: native_color: Resource 
NFS_home2 cannot run anywhere
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_002   (hos002a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home2        (hos002a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa3 (hos002a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home2 (hos002a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa3  (hos002a)
Feb 24 09:15:04 hos002a pengine: [14393]: ERROR: native_create_actions: 
Attempting recovery of resource NFS_home2
Feb 24 09:15:04 hos002a pengine: [14393]: WARN: custom_action: Action 
NFS_home2_stop_0 (unmanaged)
Feb 24 09:15:04 hos002a last message repeated 2 times
Feb 24 09:15:04 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (104) for running the listed resources (chose hos003a):
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP_003   (hos003a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_home3        (hos003a)
Feb 24 09:15:04 hos002a haclient: on_event: from message queue: evt:cib_changed
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa2 (hos003a)
Feb 24 09:15:04 hos002a haclient: on_event: from message queue: evt:cib_changed
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_home3 (hos003a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa2  (hos003a)
Feb 24 09:15:04 hos002a pengine: [14393]: info: native_assign_node: 3 nodes 
with equal score (102) for running the listed resources (chose hos001a):
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
IP-004   (hos001a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
LVM_tpa1 (hos001a)
Feb 24 09:15:04 hos002a pengine: [14393]: notice: NoRoleChange: Leave resource 
FS_tpa1  (hos001a)
Feb 24 09:15:04 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS 
cause=C_IPC_MESSAGE origin=route_message ]
Feb 24 09:15:04 hos002a tengine: [14392]: info: unpack_graph: Unpacked 
transition 104: 0 actions in 0 synapses
Feb 24 09:15:04 hos002a tengine: [14392]: info: run_graph: Transition 104: 
(Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0)
Feb 24 09:15:04 hos002a tengine: [14392]: info: notify_crmd: Transition 104 
status: te_complete - <null>
Feb 24 09:15:04 hos002a crmd: [14378]: info: do_state_transition: State 
transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS 
cause=C_IPC_MESSAGE origin=route_message ]
Feb 24 09:15:04 hos002a pengine: [14393]: ERROR: process_pe_message: Transition 
104: ERRORs found during PE processing. PEngine Input stored in: 
/var/lib/heartbeat/pengine/pe-error-91.bz2
Feb 24 09:15:04 hos002a pengine: [14393]: info: process_pe_message: 
Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" 
to identify issues.
Feb 24 09:15:05 hos002a cib: [14373]: info: sync_our_cib: Syncing CIB to hos003a
Feb 24 09:15:06 hos002a cib: [14373]: info: sync_our_cib: Syncing CIB to hos001a
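
The stops then failed too (rc=1; "Shutting down NFS services: failed"), the 
failcount was bumped on every node, and the resource went unmanaged, which is 
why the PE says NFS_home2 cannot run anywhere. Once the init script stops 
cleanly, something like this should clear the leftover state (hedged: verify 
the crm_resource/crm_failcount options against your heartbeat version):

  # Clean up the failed operations and reset the failcount on each node
  # (hypothetical invocation).
  for n in hos001a hos002a hos003a; do
      crm_resource -C -r NFS_home2 -H "$n"
      crm_failcount -D -U "$n" -r NFS_home2
  done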




> Date: Tue, 24 Feb 2009 14:02:38 +0100
> From: [email protected]
> To: [email protected]
> Subject: Re: [Linux-HA] Sample NFS ocf resource script
> 
> Hi,
> 
> On Mon, Feb 16, 2009 at 09:27:50PM +0000, sachin patel wrote:
> > 
> > I can now see this in the GUI, but it fails with the following error 
> > message. Does anyone know about it?
> 
> Did you check if /etc/init.d/nfs can start/stop the nfs services
> properly?
> 
> Thanks,
> 
> Dejan
> 
> > Feb 16 15:16:35 hos002a mgmtd: [8577]: info: on_add_rsc:<group 
> > id="GROUP_002"><primitive id="NFS_home2" class="ocf" type="nfsserver" 
> > provider="heartbeat"><instance_attributes id="NFS_ho
> > me2_instance_attrs"> <attributes><nvpair 
> > id="d0be014c-c4b5-4a30-ad44-049705e92b72" name="nfs_init_script" 
> > value="/etc/init.d/nfs"/><nvpair id="efdf12fa-d756-4f63-b549-d1c653a8a515" 
> > name="nfs_notify_cmd" value="/sbin/sm-notify"/><nvpair 
> > id="b41e11a8-ec84-4f1a-9694-31866bd399f6" name="nfs_shared_infodir" 
> > value="/var/lib/nfs"/><nvpair id="87ef03ce-72c6-407b-b007-642dbbd74e99" 
> > name="nfs_ip" 
> > value="10.26.16.6"/></attributes></instance_attributes></primitive></group>
> > Feb 16 15:16:37 hos002a crmd: [8576]: info: do_lrm_rsc_op: Performing 
> > op=NFS_home2_monitor_0 key=5:27:036ae0ed-0d86-4d0d-8424-28b9115cd200)
> > Feb 16 15:16:37 hos002a lrmd: [8573]: info: rsc:NFS_home2: monitor
> > Feb 16 15:16:37 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:37 hos002a cib: [10956]: info: retrieveCib: Reading cluster 
> > configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
> > /var/lib/heartbeat/crm/cib.xml.sig)
> > Feb 16 15:16:37 hos002a cib: [10956]: info: retrieveCib: Reading cluster 
> > configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
> > /var/lib/heartbeat/crm/cib.xml.sig)
> > Feb 16 15:16:37 hos002a cib: [10956]: info: retrieveCib: Reading cluster 
> > configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
> > /var/lib/heartbeat/crm/cib.xml.sig.last)
> > Feb 16 15:16:37 hos002a cib: [10956]: info: write_cib_contents: Wrote 
> > version 0.174.2 of the CIB to disk (digest: 
> > 21f4280fd2a5b520add71bc2a1cb5a27)
> > Feb 16 15:16:37 hos002a cib: [10956]: info: retrieveCib: Reading cluster 
> > configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
> > /var/lib/heartbeat/crm/cib.xml.sig)
> > Feb 16 15:16:37 hos002a cib: [10956]: info: retrieveCib: Reading cluster 
> > configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
> > /var/lib/heartbeat/crm/cib.xml.sig.last)
> > Feb 16 15:16:37 hos002a crmd: [8576]: info: process_lrm_event: LRM 
> > operation NFS_home2_monitor_0 (call=35, rc=0) complete 
> > Feb 16 15:16:37 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:37 hos002a cib: [8572]: WARN: G_SIG_dispatch: Dispatch 
> > function for SIGCHLD was delayed 420 ms (> 100 ms) before being called 
> > (GSource: 0x523710)
> > Feb 16 15:16:37 hos002a cib: [8572]: info: G_SIG_dispatch: started at 
> > 1813867318 should have started at 1813867276
> > Feb 16 15:16:38 hos002a crmd: [8576]: info: do_lrm_rsc_op: Performing 
> > op=NFS_home2_stop_0 key=27:28:036ae0ed-0d86-4d0d-8424-28b9115cd200)
> > Feb 16 15:16:38 hos002a lrmd: [8573]: info: rsc:NFS_home2: stop
> > Feb 16 15:16:38 hos002a nfsserver[10964]: [10976]: INFO: Stopping NFS 
> > server ...
> > Feb 16 15:16:38 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:38 hos002a rpcsvcgssd: rpc.svcgssd shutdown failed
> > Feb 16 15:16:38 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:38 hos002a nfs: rpc.mountd shutdown failed
> > Feb 16 15:16:38 hos002a nfs: nfsd shutdown failed
> > Feb 16 15:16:38 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:38 hos002a nfs: rpc.rquotad shutdown failed
> > Feb 16 15:16:38 hos002a nfs: Shutting down NFS services:  failed
> > Feb 16 15:16:38 hos002a nfsserver[10964]: [11018]: ERROR: Failed to stop 
> > NFS server
> > Feb 16 15:16:38 hos002a crmd: [8576]: ERROR: process_lrm_event: LRM 
> > operation NFS_home2_stop_0 (call=36, rc=1) Error unknown error
> > Feb 16 15:16:39 hos002a mgmtd: [8577]: ERROR: native_add_running: Resource 
> > ocf::nfsserver:NFS_home2 appears to be active on 2 nodes.
> > Feb 16 15:16:39 hos002a mgmtd: [8577]: ERROR: See 
> > http://linux-ha.org/v2/faq/resource_too_active for more information.
> > Feb 16 15:16:39 hos002a mgmtd: [8577]: ERROR: native_add_running: Resource 
> > ocf::nfsserver:NFS_home2 appears to be active on 3 nodes.
> > Feb 16 15:16:39 hos002a mgmtd: [8577]: ERROR: See 
> > http://linux-ha.org/v2/faq/resource_too_active for more information.
> > Feb 16 15:16:39 hos002a mgmtd: [8577]: ERROR: native_add_running: Resource 
> > ocf::nfsserver:NFS_home2 appears to be active on 4 nodes.
> > Feb 16 15:16:39 hos002a mgmtd: [8577]: ERROR: See 
> > http://linux-ha.org/v2/faq/resource_too_active for more information.
> > Feb 16 15:16:39 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:39 hos002a haclient: on_event:evt:cib_changed
> > Feb 16 15:16:40 hos002a haclient: on_event: from message queue: 
> > evt:cib_changed
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > > From: [email protected]
> > > To: [email protected]
> > > Subject: RE: [Linux-HA] Sample NFS ocf resource script
> > > Date: Mon, 16 Feb 2009 20:12:58 +0000
> > > 
> > > 
> > > OK, found the script and put it in /usr/lib/ocf/resource.d/heartbeat, 
> > > but when I start the GUI and try to add a resource, "nfsserver" is not 
> > > listed. How do I get the GUI to recognize it?
> > > 
> > > 
> > > 
> > > > From: [email protected]
> > > > To: [email protected]
> > > > Subject: RE: [Linux-HA] Sample NFS ocf resource script
> > > > Date: Wed, 11 Feb 2009 22:42:00 +0000
> > > > 
> > > > 
> > > > Great, found it:
> > > > 
> > > > http://hg.linux-ha.org/dev/raw-diff/90af633d8164/resources/OCF/nfsserver
> > > > 
> > > > We will test it.
> > > > 
> > > > 
> > > > > Date: Mon, 9 Feb 2009 09:20:48 +0100
> > > > > From: [email protected]
> > > > > To: [email protected]
> > > > > Subject: Re: [Linux-HA] Sample NFS ocf resource script
> > > > > 
> > > > > Hi,
> > > > > 
> > > > > On Fri, Feb 06, 2009 at 09:25:44PM +0000, sachin patel wrote:
> > > > > > 
> > > > > > 
> > > > > > I have four nodes in my setup; the LVM/Filesystem resources work 
> > > > > > fine, i.e. they fail over back and forth without problems.
> > > > > > 
> > > > > > I am now trying to configure an NFS resource, but I don't see any 
> > > > > > NFS resource script in the /usr/lib/ocf/resource.d/heartbeat dir.
> > > > > > 
> > > > > > I can't find any on Google either; can someone point me in the 
> > > > > > right direction?
> > > > > 
> > > > > There's nfsserver, but probably you have a heartbeat release
> > > > > which is a bit older. You can download just the RA from
> > > > > hg.linux-ha.org/dev.
> > > > > 
> > > > > Thanks,
> > > > > 
> > > > > Dejan
> > > > > 
> > > > > > Thanks
> > > > > > 
> > > > 
> > > 
> > 
