On Nov 5, 2008, at 5:12 PM, Annette Jäkel wrote:

On 30.10.2008 at 15:34, "Annette Jäkel" <[EMAIL PROTECTED]> wrote:

On 30.10.2008 at 14:27, "Lars Marowsky-Bree" <[EMAIL PROTECTED]> wrote:

On 2008-10-30T12:40:43, Annette Jäkel <[EMAIL PROTECTED]> wrote:

But my problem is that specific ordering of the nfsserver resource after all filesystem resources. This works fine until one of the file systems crashes: then nfsserver stops and the whole system doesn't work. That's not what I want. If a file system crashes, for example because a physical device broke, the remaining devices and file systems should keep working and stay exported to clients. So maybe the nfsserver should start without any ordering? But I'm not sure that is a good idea. I also don't want to run NFS if there is no device it can manage.

Ah, OK, so the interleaving is not your highest priority.

I think rsc_order/_colocation constraints with score=0 achieve exactly
what you need.
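
For reference, such a pair of score=0 constraints might look like this in the heartbeat 2.x CIB. The resource names and constraint ids here are placeholders, and the from/type/to attribute style is assumed to match the examples later in this thread:

```xml
<constraints>
  <!-- advisory ordering: start nfsserver after fs1 when both are starting -->
  <rsc_order id="order_nfs_after_fs1" from="nfsserver" type="after" to="fs1" score="0"/>
  <!-- advisory colocation: prefer the node running fs1, but do not depend on it -->
  <rsc_colocation id="colo_nfs_fs1" from="nfsserver" to="fs1" score="0"/>
</constraints>
```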

I think I have to learn more about individual scores. Until now I have used heartbeat's default scores and never set a score manually. Meanwhile I have read the Linux-HA score basics docs. I understand your proposal this way: score=0 means that nothing happens, because no score is changed. So if a filesystem resource fails, the nfsserver resource should still run. That's fine. But what about the starting order of the resources? Does it still happen in the right order if the order score is 0? And what about the requirement that mounted filesystems and nfsserver MUST run on the same node? Doesn't that mean the colocation
score should be INFINITY?

BTW: All order constraints in my current cib.xml are written without a
score. According to the score documentation, the score then defaults to 0. All
colocation constraints default to INFINITY.

I will test the case on a test cluster with score=0 set explicitly.


Meanwhile I tested my resource definitions for the NFS service and the corresponding orders/colocations between the resources mounting the filesystems and the one starting
nfsserver. I tested:

1) order nfsserver after filesystems without an explicit score setting and
without any colocation definition
2) order nfsserver after filesystems with score=0 and without any colocation
definition
3) stopping a filesystem resource and running an rcheartbeat stop/start
results:
- nfsserver started after the filesystems
- regardless of the order scores: if a filesystem resource was stopped,
nfsserver kept running, and was started again on rcheartbeat start

4) order nfsserver after filesystems without an explicit score setting and with
a colocation from nfsserver to the filesystems, score=INFINITY
result: stopping any of the filesystem resources also stops the nfsserver
resource
4a) same constraints, stop a filesystem resource, rcheartbeat stop/start
result: nfsserver stopped before heartbeat shut down and did not restart
after rcheartbeat start

5) order nfsserver after filesystems without an explicit score setting and with
a colocation from nfsserver to the filesystems, score=0
results:
- if a filesystem resource was stopped, nfsserver kept running
- after stopping a filesystem resource and an rcheartbeat stop/start, the nfsserver
resource started

Until now I thought I had a problem with the orders, but now I think it is
a problem with the colocations. I have new questions:
1) is the behaviour of resources with regard to orders and colocations the same when a resource is stopped as when a resource fails (in other words: was
my test set complete?)

Normally yes, but it's possible to construct situations for which it isn't (by using non-symmetrical ordering constraints).


2) Does a colocation with score=0 make any sense?

Not really, but it's not an either/or proposition.
You can also have 0 < score < INFINITY, which is "advisory" but not mandatory.
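
As a sketch (the resource names and id are placeholders), such an advisory colocation with a finite score would look like:

```xml
<!-- a finite score is weighed against other placement scores, not enforced -->
<rsc_colocation id="colo_nfs_fs1_advisory" from="nfsserver" to="fs1" score="1000"/>
```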

Isn't that a hint that I can also remove this colocation?

right



discussion:
My intention for my current production colocations between the filesystem
resources and the nfsserver resource is: nfsserver should always run on the node with the devices. But maybe I should remove the colocations from nfsserver to all the individual filesystem resources and set a colocation only from nfsserver to
the resource that sets up the MD devices (but keep the ordering of
nfsserver after the filesystem resources!!)? My idea is: then I have the right starting order, and if the MD devices fail, nfsserver stops - and that is
ok.
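
A minimal sketch of that setup, assuming placeholder resource ids md_devices, fs1 and fs2, could be:

```xml
<constraints>
  <!-- mandatory: nfsserver must run where the MD devices are assembled -->
  <rsc_colocation id="colo_nfs_md" from="nfsserver" to="md_devices" score="INFINITY"/>
  <!-- ordering only, no colocation, for the individual filesystem resources -->
  <rsc_order id="order_nfs_after_fs1" from="nfsserver" type="after" to="fs1"/>
  <rsc_order id="order_nfs_after_fs2" from="nfsserver" type="after" to="fs2"/>
</constraints>
```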


But if a single filesystem resource fails, nfsserver keeps running,
as in my tests.

This is possible even with a mandatory colocation... you just have to get the constraint right.

   <rsc_colocation from="X" to="Y" score="INFINITY"/>

is not the same as

   <rsc_colocation from="Y" to="X" score="INFINITY"/>


The first version will allow Y to remain active when X stops, the second will not.
Does that help?



Any opinion?

Best regards,
Annette

BTW: Now I will play with groups for all the filesystem resources.



Meanwhile I read Michael Schwartzkopff's Linux-HA book. He presents an example NFS service with DRBD devices and, if I understand all setup steps correctly, there is really no ordering from any resource to nfsserver. I don't
understand this.

That would be wrong.

Within this context I am thinking about setting up a group of all filesystem resources, setting the attribute "ordered" to "false", and defining a single nfsserver resource ordered after the group - but not waiting for the complete start of the group; instead, nfsserver should start as soon as any resource of the group is
running.
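
The first half of that idea, an unordered group plus a single order constraint, might be sketched as follows (resource ids and the plain "ordered" attribute are assumptions; Filesystem parameters are omitted for brevity). Note that the order constraint waits for the whole group - the "start after any member" behaviour is not expressible this way:

```xml
<group id="filesystems" ordered="false">
  <!-- members start in parallel because ordered="false"; params omitted -->
  <primitive id="fs1" class="ocf" provider="heartbeat" type="Filesystem"/>
  <primitive id="fs2" class="ocf" provider="heartbeat" type="Filesystem"/>
</group>
<constraints>
  <!-- nfsserver starts only once the whole group is up -->
  <rsc_order id="order_nfs_after_group" from="nfsserver" type="after" to="filesystems"/>
</constraints>
```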

Well, that is the interleave part, which is not necessarily required for
correctness, but for speeding up the start. That is not currently
possible.

I'm also not sure it is indeed what you want; you want to have tried starting everything before bringing "nfsserver" online, I think, so that
they are all present to be exported - but not refuse to start nfsserver if
just one of them has failed to start.


I think you're right. But doesn't this sound like a scenario for
on_fail=ignore|block on the start and monitor operations of every filesystem
resource?
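
As a sketch of that idea (the operation ids, timeouts and intervals are made up, and the Filesystem parameters are omitted), on_fail on the start and monitor operations of one filesystem resource might look like:

```xml
<primitive id="fs1" class="ocf" provider="heartbeat" type="Filesystem">
  <operations>
    <!-- do not trigger recovery or stop dependents when this resource fails -->
    <op id="fs1_start" name="start" timeout="60s" on_fail="ignore"/>
    <op id="fs1_monitor" name="monitor" interval="20s" timeout="40s" on_fail="ignore"/>
  </operations>
</primitive>
```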
Regards,
Annette

In another thread (HA-NFS strategic question) Xinwei Hu suggested not managing nfs itself but the step of exporting the devices, but it seems there is no RA for
this specific task. Until now I write /etc/exports by hand and call
"exportfs -a" outside of heartbeat whenever I add or delete a filesystem.

Yes, I think this is currently missing. Even Xinwei's nfsserver RA is "global" and doesn't support managing individual exports. It would be
great if this were added.


Regards,
   Lars


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

