Hi,

> From my experience I recommend not using the GUI. It never did the job
> for me (especially not for creating a configuration).
Yeah, the GUI is a bit...

> 
> From your description I assume you have timing problems. Keep in mind that
> cluster node startups really generate a load on the HA system. Each
> resource is probed (basically it runs a 'monitor' operation on each
> resource on each cluster node).
>
> So if you have 2 nodes with 40 resources, a node startup ---> 80 monitor
> actions initiated ---> 80 responses ---> 80 changes in the CIB ---> 80
> redistributions (not to mention the engine calculating your failover
> rules for all the resources).
That's a good hint, thanks.

> 
> Did you write Resource Agents on your own? Or do you use only the standard
> HA RA?
I only use the standard resource agents in that cluster; only for STONITH
do I use self-written scripts that kill the other nodes via ssh to the iLO
board.
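
In essence the script does something like this (a very simplified sketch:
the parameter names and the iLO command syntax here are only illustrative,
and the real plugin has the remaining stonith operations filled in):

#!/bin/sh
# simplified external stonith sketch: power-cycle a node through its
# iLO board over ssh; hostlist/ilo_ip and the iLO CLI commands are
# placeholders, the exact syntax depends on the iLO firmware version
case "$1" in
gethosts)
        echo "$hostlist"
        ;;
reset|off)
        # hard power reset of the managed server via the iLO CLI
        ssh -l Administrator "$ilo_ip" "power reset"
        ;;
status)
        # treat the device as healthy if the iLO answers on ssh at all
        ssh -l Administrator "$ilo_ip" "power" >/dev/null 2>&1
        ;;
*)
        # getconfignames, getinfo-* etc. omitted in this sketch
        exit 1
        ;;
esac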

> 
> Are you using clones?

I have about 9 clone sets, and the same number of groups, each group 
containing 9 or 10 resources. 
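
To give an idea of the shape, one of those groups looks roughly like this
in the CIB (IDs and the address are invented here, and the other 8 or 9
primitives are elided):

<group id="group_web1">
  <primitive id="ip_web1_a" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="ip_web1_a_ia">
      <attributes>
        <nvpair id="ip_web1_a_ip" name="ip" value="192.168.10.21"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <!-- ... 8 or 9 more IPaddr primitives like the one above ... -->
</group>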

Maybe I can try changing the IPaddr script to allow me to give it a list
of IP addresses and a list of devices. Then each group would only consist
of two or fewer resources. As far as I can see, that could take load off
the cluster and maybe fix the timing problems.
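
Something along these lines, as a rough sketch (OCF_RESKEY_ips and
OCF_RESKEY_nic are made-up parameter names, and the netmask is hardcoded
here; a real agent would take it as a parameter too):

#!/bin/sh
# sketch of an IPaddr variant that takes a whole list of addresses,
# so one resource replaces 9 or 10 of them
IPS=${OCF_RESKEY_ips}           # e.g. "192.168.10.21 192.168.10.22"
NIC=${OCF_RESKEY_nic:-eth0}

case "$1" in
start)
        for addr in $IPS; do
                ip addr add "$addr/24" dev "$NIC" || exit 1  # OCF_ERR_GENERIC
        done
        exit 0
        ;;
stop)
        for addr in $IPS; do
                ip addr del "$addr/24" dev "$NIC"
        done
        exit 0
        ;;
monitor)
        # one probe now covers the whole list instead of one per address
        for addr in $IPS; do
                ip addr show dev "$NIC" | grep -qF "$addr" || exit 7  # OCF_NOT_RUNNING
        done
        exit 0
        ;;
*)
        exit 3  # OCF_ERR_UNIMPLEMENTED
        ;;
esac

One resource like that instead of 9 or 10 IPaddr primitives per group
should cut the probe storm at startup considerably.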

> 
> You see ... attaching the CIB as an attachment would help ;-)
> 
> > 
> > I don't want to add my resources as a CIB file here, because it is
> > more than 20 pages printed out :)
> Well you could attach the CIB (bzip2 is your friend). Without it no one
> can help here. So we can see which resource failed and maybe where the
> problem in your configuration is.
> 
I think the hint about the timing is a good one, so I'll first try to
change the script. If that doesn't help, I'll post the CIB here.

thanks
Sebastian
