Hi,

OK, perhaps there is some problem with the services on node1. Are you
able to run these services on node1 without the cluster? First stop the
cluster, then try to run the services on node1 directly.

They should run.

Re,
Rajveer Singh
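
P.S. If instead you want the services to stay on node2 when node1 rejoins,
the failover domain can be told not to fail back. A sketch of the relevant
fragment, based on the domain already in your cluster.conf, with
nofailback="1" instead of "0" (untested here, so please verify against your
own setup):

```xml
<failoverdomain name="DOMINIOFAIL" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
</failoverdomain>
```

Remember to bump config_version and propagate the change (e.g. with
ccs_tool update) after editing cluster.conf.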

2009/2/13 ESGLinux <[email protected]>

> Hello,
>
> That's what I want: when node1 comes up I want the services to relocate to
> node1, but what I get is all my services stopped and in a failed state.
>
> With my configuration I expect to have the services running on node1.
>
> Any idea about this behaviour?
>
> Thanks
>
> ESG
>
>
> 2009/2/12 rajveer singh <[email protected]>
>
>
>>
>> 2009/2/12 ESGLinux <[email protected]>
>>
>>>  Hello all,
>>>
>>> I'm testing a cluster using luci as the admin tool. I have configured 2 nodes
>>> with 2 services, http + mysql. This configuration works almost fine. I have
>>> the services running on node1, and I reboot node1. Then the services relocate
>>> to node2 and everything continues working, but when node1 comes back up all
>>> the services stop.
>>>
>>> I think that node1, when it comes back alive, tries to run the services and
>>> that makes the services stop. Can that be true? I think node1 should not
>>> start anything, because the services are running on node2.
>>>
>>> Perhaps it is a problem with the configuration, perhaps with fencing (I
>>> have not configured fencing at all).
>>>
>>> here is my cluster.conf. Any idea?
>>>
>>> Thanks in advance
>>>
>>> ESG
>>>
>>>
>>> <?xml version="1.0"?>
>>> <cluster alias="MICLUSTER" config_version="29" name="MICLUSTER">
>>>         <fence_daemon clean_start="0" post_fail_delay="0"
>>> post_join_delay="3"/>
>>>         <clusternodes>
>>>                 <clusternode name="node1" nodeid="1" votes="1">
>>>                         <fence/>
>>>                 </clusternode>
>>>                 <clusternode name="node2" nodeid="2" votes="1">
>>>                         <fence/>
>>>                 </clusternode>
>>>         </clusternodes>
>>>         <cman expected_votes="1" two_node="1"/>
>>>         <fencedevices/>
>>>         <rm>
>>>                 <failoverdomains>
>>>                         <failoverdomain name="DOMINIOFAIL" nofailback="0"
>>> ordered="1" restricted="1">
>>>                                 <failoverdomainnode name="node1" priority="1"/>
>>>                                 <failoverdomainnode name="node2" priority="2"/>
>>>                         </failoverdomain>
>>>                 </failoverdomains>
>>>                 <resources>
>>>                         <ip address="192.168.1.183" monitor_link="1"/>
>>>                 </resources>
>>>                 <service autostart="1" domain="DOMINIOFAIL" exclusive="0"
>>> name="HTTP" recovery="relocate">
>>>                         <apache config_file="conf/httpd.conf" name="http"
>>> server_root="/etc/httpd" shutdown_wait="0"/>
>>>                         <ip ref="192.168.1.183"/>
>>>                 </service>
>>>                 <service autostart="1" domain="DOMINIOFAIL" exclusive="0"
>>> name="BBDD" recovery="relocate">
>>>                         <mysql config_file="/etc/my.cnf"
>>> listen_address="192.168.1.183" name="mydb" shutdown_wait="0"/>
>>>                         <ip ref="192.168.1.183"/>
>>>                 </service>
>>>         </rm>
>>> </cluster>
>>>
>>>
>>> --
>>> Linux-cluster mailing list
>>> [email protected]
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>>
>>
>> Hi ESG,
>>
>> Of course: as you have defined the priority of node1 as 1 and node2 as 2,
>> node1 has the higher priority. So whenever it comes up, it will try to
>> run the services on itself, and so it will relocate the services from
>> node2 to node1.
>>
>>
>> Re,
>> Rajveer Singh
>>
>>
>
>
>
