Hi,

On Tue, Feb 19, 2008 at 06:53:51PM +0100, Abraham Iglesias wrote:
> Hi Deja,
>
> I updated to 2.1.3-3 and created a new configuration with an unordered 
> group. Resources within a group are restarted when one of them fails! :( 
> The group resources are collocated but unordered.

The resources are not restarted if the rsc_order score between
the two is set to 0. I expected that to be the case with
unordered groups, but it seems it is not. I suggest that you
file a bug for this (please use hb_report to collect all the
information).

> I  guess this does not work for me...
>
> any advice?

You could achieve this by creating a chain of colocation
constraints (1->2->3...->8):

- resources must run on the same node (colocate INFINITY)
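Such a chain might look like this in the CIB (just a sketch; the resource ids tomcat1..tomcat3 are made up, and the exact attribute names should be checked against the DTD shipped with your 2.1.3):

```xml
<constraints>
  <!-- each resource must run on the same node as its predecessor -->
  <rsc_colocation id="col_t1_t2" from="tomcat2" to="tomcat1" score="INFINITY"/>
  <rsc_colocation id="col_t2_t3" from="tomcat3" to="tomcat2" score="INFINITY"/>
  <!-- ... and so on up to tomcat8 -->
</constraints>
```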

Thus you would simulate an unordered group. If that doesn't work,
then perhaps you would also need a chain of order constraints
with the score set to "0":

- rsc_order score is 0 (<rsc_order from=1 to=2 score="0">)
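For example (again only a sketch with made-up ids, following the syntax above; whether the score is honoured here is exactly the open question from the first paragraph):

```xml
<constraints>
  <!-- score="0" should make the start order advisory rather than mandatory -->
  <rsc_order id="ord_t1_t2" from="tomcat2" to="tomcat1" type="after" score="0"/>
  <rsc_order id="ord_t2_t3" from="tomcat3" to="tomcat2" type="after" score="0"/>
  <!-- ... and so on up to tomcat8 -->
</constraints>
```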

Perhaps this solution is too complex, but I can't think of a
better one.

Thanks,

Dejan

> -Abraham
>
> Dejan Muhamedagic escribió:
>> Hi,
>>
>> On Tue, Feb 19, 2008 at 04:24:19PM +0100, Abraham Iglesias wrote:
>>   
>>> I think there wouldn't be any problem in upgrading heartbeat. Would
>>> my 2.0.8 configuration be compatible?
>>>     
>>
>> It should be, with the possible exception of crm_config, where
>> underscores in all option names are replaced by dashes. But that
>> change could have happened earlier. It is still strongly
>> recommended to first test the existing configuration with the new
>> version on a test cluster.
>>
>>   
>>> With an unordered group, only 1 resource within the group would be 
>>> restarted?
>>>     
>>
>> I think so. Since there's no order I don't see any reason for
>> other resources to be affected. BTW, note that there will be a
>> herd of tomcats starting at the same time. And they are not
>> particularly lightweight.
>>
>>   
>>> The problem is that I need to have a drbd partition mounted before
>>> tomcat starts, so... in some way, I need an order... but I might think
>>> about removing this requirement provided the unordered group helps me
>>> with my problem.
>>>     
>>
>> That shouldn't be a problem. What you would need is a group
>> within a group, but nested groups are not supported. Anyway, you
>> can create an order constraint between the group of tomcats and
>> the drbd resource.
>>
>> Thanks,
>>
>> Dejan
>>
>>   
>>> Thank you very much.
>>>
>>> -Abraham
>>>
>>>
>>> Dejan Muhamedagic escribió:
>>>     
>>>> Hi,
>>>>
>>>> On Tue, Feb 19, 2008 at 12:40:15PM +0100, Abraham Iglesias wrote:
>>>>         
>>>>> Hi all,
>>>>> I have configured a 2-node v2 HA cluster with heartbeat 2.0.8. So far,
>>>>> I have included all resources in the same group. It is an easy way to
>>>>> get colocation and ordering features.
>>>>>
>>>>> The problem is that I have 8 tomcat instances within the same group, so 
>>>>> in a loaded environment it takes 3 minutes to start all tomcat 
>>>>> resources in the group.
>>>>>             
>>>> What could help is an unordered group of resources. But to do
>>>> that you'd have to upgrade, which you should do for other
>>>> reasons as well. Can you run 2.1.3?
>>>>
>>>> Thanks,
>>>>
>>>> Dejan
>>>>
>>>>         
>>>>> I implemented a status function to give every tomcat LSB script a
>>>>> better way to measure process health. Heartbeat uses this function
>>>>> to get information about resource health.
>>>>> If a tomcat fails, then heartbeat restarts it. That's perfect! The
>>>>> problem is that in the case of groups, all resources within the
>>>>> group are restarted!!!
>>>>>
>>>>> To improve uptime of the different services, I would like to make them 
>>>>> independent. I don't want "tomcat2-tomcat8" to be restarted when 
>>>>> "tomcat1" fails. I just want "tomcat1" to be restarted and leave all 
>>>>> other resources running normally in the cluster .
>>>>>
>>>>> The problem is that all tomcats need to run on the same node. If I
>>>>> set colocation constraints to INFINITY, then resources will not move
>>>>> to the passive node in case of continuous failure of a resource.
>>>>>
>>>>> Does anyone have advice on how to configure the colocation
>>>>> constraints? Or any other solution?
>>>>>
>>>>> Thank you very much!!
>>>>>
>>>>> -Abraham
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Linux-HA mailing list
>>>>> [email protected]
>>>>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>>>>> See also: http://linux-ha.org/ReportingProblems