Hi,

On Tue, Sep 16, 2008 at 10:24:12AM -0400, Jill Schaumloeffel wrote:
> Actually I eventually figured out the problem. What is
> happening is that when I reboot the server, my private
> container gets deported, so my EVMS-Failover can't mount
> it because it is not there. Once I go in and make the container
> private again, the EVMS-Failover works.
> 
> From reading other documents etc., I think it has something to do with 
> the order in which Heartbeat and EVMS are started, but I can't figure out 
> where to go to change it.
>  
> Here are the ha.cf:
>  
> #debug 1
> keepalive 1
> deadtime 10
> warntime 5
> use_logd yes
> crm true
> bcast eth1
> bcast eth3
> node nodeb
> node nodea
> respawn root /sbin/evmsd
> apiauth evms uid=hacluster,root
> apiauth crm uid=hacluster,root
> 
>  
> and cib files: (I also attached them)
>  
>  <cib generated="true" admin_epoch="0" have_quorum="true" ignore_dtd="false" 
> num_peers="2" cib_feature_revision="2.0" crm_feature_set="2.0" 
> ccm_transition="2" dc_uuid="c868f4ab-1423-4438-b716-7135c4a03e7f" epoch="773" 
> num_updates="8" cib-last-written="Tue Sep 16 10:24:40 2008">
>    <configuration>
>      <crm_config>
>        <cluster_property_set id="cib-bootstrap-options">
>          <attributes>
>            <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" 
> value="2.1.3-node: a3184d5240c6e7032aef9cce6e5b7752ded544b3"/>
>            <nvpair name="last-lrm-refresh" 
> id="cib-bootstrap-options-last-lrm-refresh" value="1221571011"/>
>            <nvpair id="cib-bootstrap-options-stonith-enabled" 
> name="stonith-enabled" value="true"/>

It's good that you enabled stonith, but then you need to configure
stonith resources as well. There aren't any in this CIB.
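For instance, a stonith primitive looks roughly like this (a sketch only;
the ids are made up, and external/ssh is a testing-only plugin -- pick one
matching your power hardware, `stonith -L` lists what's available):

```xml
<primitive id="stonith_nodea" class="stonith" type="external/ssh">
  <instance_attributes id="stonith_nodea_instance_attrs">
    <attributes>
      <!-- hostlist: which node(s) this stonith device can fence -->
      <nvpair id="stonith_nodea_hostlist" name="hostlist" value="nodea"/>
    </attributes>
  </instance_attributes>
</primitive>
```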

>            <nvpair id="cib-bootstrap-options-default-resource-stickiness" 
> name="default-resource-stickiness" value="0"/>
>            <nvpair 
> id="cib-bootstrap-options-default-resource-failure-stickiness" 
> name="default-resource-failure-stickiness" value="0"/>
>          </attributes>
>        </cluster_property_set>
>      </crm_config>
>      <nodes>
>        <node uname="nodea" type="normal" 
> id="d64161a8-a178-448a-b32b-d81ca990e22b">
>          <instance_attributes id="nodes-d64161a8-a178-448a-b32b-d81ca990e22b">
>            <attributes>
>              <nvpair name="standby" 
> id="standby-d64161a8-a178-448a-b32b-d81ca990e22b" value="off"/>
>            </attributes>
>          </instance_attributes>
>        </node>
>        <node id="c868f4ab-1423-4438-b716-7135c4a03e7f" uname="nodeb" 
> type="normal">
>          <instance_attributes id="nodes-c868f4ab-1423-4438-b716-7135c4a03e7f">
>            <attributes>
>              <nvpair id="standby-c868f4ab-1423-4438-b716-7135c4a03e7f" 
> name="standby" value="on"/>
>            </attributes>
>          </instance_attributes>
>        </node>
>      </nodes>
>      <resources>
>        <clone id="evms_activate">
>          <meta_attributes id="evms_activate_meta_attrs">
>            <attributes>
>              <nvpair id="evms_activate_metaattr_clone_max" name="clone_max" 
> value="2"/>
>              <nvpair id="evms_activate_metaattr_clone_node_max" 
> name="clone_node_max" value="1"/>
>              <nvpair id="evms_activate_metaattr_globally_unique" 
> name="globally_unique" value="false"/>
>              <nvpair id="evms_activate_metaattr_notify" name="notify" 
> value="true"/>
>              <nvpair id="evms_activate_metaattr_target_role" 
> name="target_role" value="stopped"/>
>            </attributes>
>          </meta_attributes>
>          <primitive id="evms_css" class="ocf" type="EvmsSCC" 
> provider="heartbeat">
>            <meta_attributes id="evms_css:0_meta_attrs">
>              <attributes/>
>            </meta_attributes>
>          </primitive>
>        </clone>
>        <group id="scalix_group">
>          <meta_attributes id="scalix_group_meta_attrs">
>            <attributes>
>              <nvpair name="target_role" 
> id="scalix_group_metaattr_target_role" value="started"/>
>            </attributes>
>          </meta_attributes>
>          <primitive id="evms_container" class="heartbeat" 
> type="evms_failover" provider="heartbeat">
>            <instance_attributes id="evms_container_instance_attrs">
>              <attributes>
>                <nvpair id="edaa2115-f613-4b7f-b72c-7861b5ad2717" name="1" 
> value="sxcontainer"/>
>              </attributes>
>            </instance_attributes>
>            <meta_attributes id="evms_container_meta_attrs">
>              <attributes/>
>            </meta_attributes>
>          </primitive>
>          <primitive id="filesystem_mount" class="ocf" type="Filesystem" 
> provider="heartbeat">
>            <instance_attributes id="filesystem_mount_instance_attrs">
>              <attributes>
>                <nvpair id="8f07f984-2a40-4370-affe-ff0fea19c9da" 
> name="device" value="/dev/evms/sxcontainer/scalix"/>
>                <nvpair id="59b72296-6232-4a38-9ea4-4dd93d927ae0" 
> name="directory" value="/var/opt/scalix/na"/>
>                <nvpair id="09fd0409-c33c-4eaa-99e9-731b807a506c" 
> name="fstype" value="ext3"/>
>              </attributes>
>            </instance_attributes>
>            <meta_attributes id="filesystem_mount_meta_attrs">
>              <attributes/>
>            </meta_attributes>
>          </primitive>
>          <primitive id="ipaddr_scalix" class="ocf" type="IPaddr" 
> provider="heartbeat">
>            <instance_attributes id="ipaddr_scalix_instance_attrs">
>              <attributes>
>                <nvpair id="dde96480-b79e-4874-a1e4-98ec72bba1eb" name="ip" 
> value="192.168.0.6"/>
>              </attributes>
>            </instance_attributes>
>            <meta_attributes id="ipaddr_scalix_meta_attrs">
>              <attributes/>
>            </meta_attributes>
>          </primitive>
>          <primitive id="scalixserver" class="lsb" type="scalix" 
> provider="heartbeat">

There's no provider for LSB. The provider attribute applies only to
OCF resource agents, so drop it from all the class="lsb" primitives.

>            <instance_attributes id="scalixserver_instance_attrs">
>              <attributes>
>                <nvpair id="47762ffa-bc64-4128-942c-9710bcf477f9" name="start" 
> value="omrc"/>

Here, name should be set to "1". In other words, values are passed
unnamed (positionally) to LSB class resource agents; the nvpair name
is just the argument's position, not a parameter name.
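Put together with the provider note above, that primitive might look
like this (a sketch, keeping your ids):

```xml
<primitive id="scalixserver" class="lsb" type="scalix">
  <instance_attributes id="scalixserver_instance_attrs">
    <attributes>
      <!-- first positional argument passed to the init script -->
      <nvpair id="47762ffa-bc64-4128-942c-9710bcf477f9" name="1"
              value="omrc"/>
    </attributes>
  </instance_attributes>
</primitive>
```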

>              </attributes>
>            </instance_attributes>
>            <meta_attributes id="scalixserver_meta_attrs">
>              <attributes/>
>            </meta_attributes>
>          </primitive>
>          <primitive id="scalix_tomcat" class="lsb" type="scalix-tomcat" 
> provider="heartbeat">
>            <instance_attributes id="scalix_tomcat_instance_attrs">
>              <attributes>
>                <nvpair id="a61027ce-444a-4ecc-8605-abb5b8a08df2" name="start" 
> value="start"/>
>              </attributes>
>            </instance_attributes>
>            <meta_attributes id="scalix_tomcat_meta_attrs">
>              <attributes/>
>            </meta_attributes>
>          </primitive>
>          <primitive id="scalixpostgres" class="lsb" type="scalix-postgres" 
> provider="heartbeat">
>            <instance_attributes id="scalixpostgres_instance_attrs">
>              <attributes>
>                <nvpair id="843a2dbc-f904-49ee-b36a-a541365e7851" name="start" 
> value="start"/>
>              </attributes>
>            </instance_attributes>
>          </primitive>
>          <primitive id="ldapmapper" class="lsb" type="ldapmapper" 
> provider="heartbeat">
>            <instance_attributes id="ldapmapper_instance_attrs">
>              <attributes>
>                <nvpair id="17810dce-ef8f-4485-942c-7c809685c5db" name="start" 
> value="start"/>

All these "start=start" nvpairs look strange. I suppose you wanted the
cluster to run, say, "ldapmapper start"? If so, that happens
automatically: the cluster always invokes the init script with start
and stop, so you don't need any attributes at all.
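That is, such a primitive can be reduced to just this:

```xml
<primitive id="ldapmapper" class="lsb" type="ldapmapper"/>
```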

Thanks,

Dejan

>              </attributes>
>            </instance_attributes>
>          </primitive>
>        </group>
>      </resources>
>      <constraints/>
>    </configuration>
>  </cib>
> 
> Thanks
> 
> 
> Jill R. Schaumloeffel
> IT Engineer
> Garrett Container Systems, Inc.
> Engineering Department
> 123 N. Industrial Park Ave.
> Accident, MD  21520
> Tel: (301) 746-8970 x213
>  


> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems