On 2008-08-25T14:04:14, Christoph Eßer <[EMAIL PROTECTED]> wrote:

> Hi there,
> 
> meanwhile I managed to get a logfile by simply rebooting both nodes.
> 
> The real problem I have is that Heartbeat doesn't start a configured
> Filesystem resource. The logfile entry is rather cryptic and doesn't
> help me understand the problem. Here is my cib.xml.

That logfile message, cryptic as it is, is exactly what we need in
order to solve your problem.

>        <group id="rg_ip_fs">
>          <primitive class="ocf" type="IPaddr2" provider="heartbeat"
> id="ip_common">
>            <instance_attributes id="58d0fb5b-186d-4408-97aa-20c2dd129bd0">
>              <attributes>
>                <nvpair name="ip" value="212.66.145.22"
> id="85a68a28-fe8f-4722-a136-1210c34f75d8"/>
>                <nvpair name="nic" value="eth0"
> id="e2be8f61-583c-48a5-bbe4-79f46a8bf7c0"/>
>              </attributes>
>            </instance_attributes>
>            <instance_attributes id="14bdaa30-27b2-4983-a949-c04cc70213b8">
>              <attributes>
>                <nvpair name="ip" value="212.66.145.22"
> id="a50b4af0-5cd6-454b-8ded-4e2a0da96dc0"/>
>                <nvpair name="nic" value="eth0"
> id="6d4ff239-09a6-4d68-ac25-95fef42e951e"/>
>              </attributes>
>            </instance_attributes>

This configuration looks rather broken too, btw. Two sets of instance
attributes with exactly the same settings? Why are you doing that?
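
If the duplication is unintentional, the primitive would normally carry a
single attribute set - a sketch, keeping only your first set and its ids
(the second instance_attributes block would simply be dropped):

```xml
<primitive class="ocf" type="IPaddr2" provider="heartbeat" id="ip_common">
  <instance_attributes id="58d0fb5b-186d-4408-97aa-20c2dd129bd0">
    <attributes>
      <nvpair name="ip" value="212.66.145.22" id="85a68a28-fe8f-4722-a136-1210c34f75d8"/>
      <nvpair name="nic" value="eth0" id="e2be8f61-583c-48a5-bbe4-79f46a8bf7c0"/>
    </attributes>
  </instance_attributes>
</primitive>
```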
>          <primitive id="fs_mount" class="ocf" type="Filesystem"
> provider="heartbeat"> <!-- this one does not start! -->
>            <instance_attributes id="fs_mount_instance_attrs">
>              <attributes>
>                <nvpair name="fstype" value="ext3"
> id="d3e224b6-6573-47b9-8dfa-44f6e"/>
>                <nvpair name="device" value="/dev/drbd0"
> id="d3e224b6-6573-47b9-8dfa-44f6e09b3b"/>
>                <nvpair name="directory" value="/mnt/drbd0"
> id="d3e224b6-6573-47b9-8dfa-44f6e09b3cb"/>
>              </attributes>
>            </instance_attributes>
>            <meta_attributes id="fs_mount_meta_attrs">
>              <attributes>
>                <nvpair id="fs_mount_metaattr_target_role"
> name="target_role" value="started"/>
>              </attributes>
>            </meta_attributes>

> 
> This is the corresponding error message in ha-log when I try to start
> it via GUI:
> 
> mgmtd[2994]: 2008/08/25_13:44:51 ERROR: unpack_rsc_op: Remapping
> fs_mount_start_0 (rc=2) on viktor-02 to an ERROR
> mgmtd[2994]: 2008/08/25_13:44:51 ERROR: unpack_rsc_op: Remapping
> fs_mount_start_0 (rc=2) on viktor-01 to an ERROR
> 
> ha-debug says:

Set "use_logd yes" in the ha.cf file and log to syslog; the logs are so
much more readable.
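For reference, the logging part of ha.cf would then look roughly like
this (a sketch; with use_logd enabled you should drop any logfile/debugfile
directives and configure the logging daemon itself, typically in
/etc/logd.cf):

```
# /etc/ha.d/ha.cf -- logging-related part only
use_logd yes    # hand all logging to the logging daemon (ha_logd)
# remove any logfile/debugfile lines here; configure logd in /etc/logd.cf
```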

> mgmtd[2994]: 2008/08/25_13:59:25 ERROR: unpack_rsc_op: Remapping
> fs_mount_start_0 (rc=2) on viktor-02 to an ERROR
> mgmtd[2994]: 2008/08/25_13:59:25 ERROR: unpack_rsc_op: Remapping
> fs_mount_start_0 (rc=2) on viktor-01 to an ERROR

You want to read the system's logfiles for the lines logged by
"Filesystem" - the resource agent will log why the mount failed. The
mgmtd has absolutely nothing to do with this.
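Incidentally, the rc=2 in those messages is a standard OCF resource-agent
exit code. A minimal sketch of the mapping (code names per the OCF RA API;
this only tells you the failure class - the Filesystem agent's own log
line tells you the actual cause):

```shell
#!/bin/sh
# Decode an OCF resource-agent exit status into its symbolic name.
ocf_rc_name() {
  case "$1" in
    0) echo "OCF_SUCCESS" ;;
    1) echo "OCF_ERR_GENERIC" ;;
    2) echo "OCF_ERR_ARGS" ;;           # invalid or missing parameters
    3) echo "OCF_ERR_UNIMPLEMENTED" ;;
    4) echo "OCF_ERR_PERM" ;;
    5) echo "OCF_ERR_INSTALLED" ;;
    6) echo "OCF_ERR_CONFIGURED" ;;
    7) echo "OCF_NOT_RUNNING" ;;
    *) echo "unknown ($1)" ;;
  esac
}

ocf_rc_name 2   # the rc logged for fs_mount_start_0 -> OCF_ERR_ARGS
```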


Regards,
    Lars

-- 
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
