Hi

A log and a configuration file are attached.

The error log output is as follows.
----------------------------------------
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: process_status_message: bad node
[192.168.40.1] in message
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG: Dumping message with 6
fields
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG[0] : [t=NS_st]
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG[1] : [st=ping]
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG[2] : [info=ping]
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG[3] : [src=192.168.40.1]
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG[4] : [ts=4689b43b]
heartbeat[3002]: 2007/07/03_11:28:13 ERROR: MSG[5] : [auth=2
7fc72b41da381e8a388d289eeaec25c4]
----------------------------------------------------------------------

At startup, Heartbeat reports an error when more than 100 nodes are configured.
I think this case should likewise be treated as a startup error.
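For context, the kind of configuration involved can be sketched as an ha.cf fragment like the one below. The host names and ping address match the attached CIB and the log above, but the fragment itself is illustrative, not a copy of the attached ha.cf:

----------------------------------------
# Illustrative ha.cf fragment (not the attached file).
bcast   eth1
node    hbtest03
node    hbtest04
# ... additional node lines; exceeding the compiled-in
# limit (100, per the startup check described above)
# causes Heartbeat to fail at start.
# The ping directive adds 192.168.40.1 as a pseudo node,
# which is the address rejected in the error log.
ping    192.168.40.1
----------------------------------------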

I also confirmed the development version (Heartbeat-Dev-94d9a8c98f92.tar.gz);
the result was the same.

I will file the Bugzilla report tomorrow.

Regards,
 Yamauchi


> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of YAMAUCHI HIDEO
> Sent: Sunday, July 01, 2007 9:20 AM
> To: General Linux-HA mailing list
> Subject: RE: [Linux-HA] Problem of check on number of nodes
>
>
> Sorry....
>
> I wanted to confirm the maximum number of nodes that can be handled.
> While testing that, I found this problem by chance.
>
> Regards,
>  Yamauchi.
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] Behalf Of Lars
> > Marowsky-Bree
> > Sent: Sunday, July 01, 2007 3:15 AM
> > To: General Linux-HA mailing list
> > Subject: Re: [Linux-HA] Problem of check on number of nodes
> >
> >
> > On 2007-06-30T06:14:35, YAMAUCHI HIDEO
> <[EMAIL PROTECTED]> wrote:
> >
> > > I confirmed the maximum checks of the number of nodes.
> > > It is not thought that it uses it by never 100 nodes.
> >
> > I apologize, but I do not understand what you wrote.
> >
> > Do you mean to say that you wanted to test the maximum node size, but
> > that you never intend to run 100 nodes in practice?
> >
> > If so, what node count are you interested in?
> >
> >
> > Regards,
> >     Lars
> >
> > --
> > Teamlead Kernel, SuSE Labs, Research and Development
> > SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
> > "Experience is the name everyone gives to their mistakes." --
> Oscar Wilde
> >
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems
> >
>
 <cib generated="true" admin_epoch="0" epoch="3" num_updates="40" have_quorum="true" ignore_dtd="false" num_peers="1" cib_feature_revision="1.3" cib-last-written="Tue Jul  3 11:02:02 2007" ccm_transition="1" dc_uuid="4f44ec47-0721-4087-95de-bac5f1fee953">
   <configuration>
     <crm_config>
       <cluster_property_set id="idCluseterPropertySet">
         <attributes>
           <nvpair id="symmetric-cluster" name="symmetric-cluster" value="true"/>
           <nvpair id="no-quorum-policy" name="no-quorum-policy" value="ignore"/>
           <nvpair id="stonith-enabled" name="stonith-enabled" value="false"/>
           <nvpair id="short-resource-names" name="short_resource_names" value="true"/>
           <nvpair id="is-managed-default" name="is-managed-default" value="true"/>
           <nvpair id="transition-idle-timeout" name="transition-idle-timeout" value="120s"/>
           <nvpair id="default-resource-stickiness" name="default-resource-stickiness" value="0"/>
           <nvpair id="stop-orphan-resources" name="stop-orphan-resources" value="true"/>
           <nvpair id="stop-orphan-actions" name="stop-orphan-actions" value="true"/>
           <nvpair id="remove-after-stop" name="remove-after-stop" value="false"/>
           <nvpair id="default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="-200"/>
           <nvpair id="stonith-action" name="stonith-action" value="reboot"/>
           <nvpair id="default-action-timeout" name="default-action-timeout" value="120s"/>
           <nvpair id="dc_deadtime" name="dc_deadtime" value="10s"/>
           <nvpair id="cluster_recheck_interval" name="cluster_recheck_interval" value="0"/>
           <nvpair id="election_timeout" name="election_timeout" value="2min"/>
           <nvpair id="shutdown_escalation" name="shutdown_escalation" value="20min"/>
           <nvpair id="crmd-integration-timeout" name="crmd-integration-timeout" value="3min"/>
           <nvpair id="crmd-finalization-timeout" name="crmd-finalization-timeout" value="10min"/>
           <nvpair id="cluster-delay" name="cluster-delay" value="120s"/>
           <nvpair id="pe-error-series-max" name="pe-error-series-max" value="-1"/>
           <nvpair id="pe-warn-series-max" name="pe-warn-series-max" value="-1"/>
           <nvpair id="pe-input-series-max" name="pe-input-series-max" value="-1"/>
           <nvpair id="startup-fencing" name="startup-fencing" value="true"/>
         </attributes>
       </cluster_property_set>
     </crm_config>
     <nodes>
       <node id="4f44ec47-0721-4087-95de-bac5f1fee953" uname="hbtest03" type="normal"/>
     </nodes>
     <resources>
       <group id="grpDummy1">
         <primitive id="prmDummy1" class="ocf" type="Dummy" provider="heartbeat">
           <operations>
             <op id="opDummy1Start" name="start" timeout="60s" on_fail="restart"/>
             <op id="opDummy1Monitor" name="monitor" interval="5s" timeout="10s" on_fail="restart"/>
             <op id="opDummy1Stop" name="stop" timeout="60s" on_fail="restart"/>
           </operations>
           <instance_attributes id="atrDummy1">
             <attributes>
               <nvpair id="atrDummy11" name="delay" value="1"/>
               <nvpair id="atrDummy12" name="state" value="/var/run/heartbeat/rsctmp/Dummy1.state"/>
               <nvpair id="atrDummy13" name="resource_stickiness" value="150"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </group>
       <group id="grpDummy2">
         <primitive id="prmDummy2" class="ocf" type="Dummy" provider="heartbeat">
           <operations>
             <op id="opDummy2Start" name="start" timeout="60s" on_fail="restart"/>
             <op id="opDummy2Monitor" name="monitor" interval="5s" timeout="10s" on_fail="restart"/>
             <op id="opDummy2Stop" name="stop" timeout="60s" on_fail="restart"/>
           </operations>
           <instance_attributes id="atrDummy2">
             <attributes>
               <nvpair id="atrDummy21" name="delay" value="1"/>
               <nvpair id="atrDummy22" name="state" value="/var/run/heartbeat/rsctmp/Dummy2.state"/>
               <nvpair id="atrDummy23" name="resource_stickiness" value="0"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </group>
     </resources>
     <constraints>
       <rsc_location id="rlcDummy1" rsc="grpDummy1">
         <rule score="300" id="rulNode11">
           <expression value="hbtest03" attribute="#uname" operation="eq" id="expNode11"/>
         </rule>
         <rule score="200" id="rulNode12">
           <expression value="hbtest04" attribute="#uname" operation="eq" id="expNode12"/>
         </rule>
       </rsc_location>
       <rsc_location id="rlcDummy2" rsc="grpDummy2">
         <rule score="300" id="rulNode21">
           <expression value="hbtest03" attribute="#uname" operation="eq" id="expNode21"/>
         </rule>
         <rule score="200" id="rulNode22">
           <expression value="hbtest04" attribute="#uname" operation="eq" id="expNode22"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
 </cib>

Attachment: ha.cf
Description: Binary data

Attachment: ha-log
Description: Binary data

