On 6/1/07, Yan Fitterer <[EMAIL PROTECTED]> wrote:
Yes, I thought about this, but haven't you already solved the problem
somehow with the "cib_feature_revision"? Supposedly, that as well could
differ between nodes?

That's the minimum protocol version that everyone has agreed to -
not completely the same thing.

In any case, maybe the version should be attached to the node anyway,
and we'd get the added benefit of being able to easily spot potentially
inconsistent versions across the cluster?

Well, it's allowed to be different - otherwise rolling upgrades won't work.

We can also grab the version info from the logs, if only people would
include them :-)
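For illustration only, here is a sketch of what a per-node version attribute could look like if one were added. The "heartbeat_version" nvpair name and its placement under the node's instance_attributes are assumptions for discussion, not an existing schema (the node id is taken from John's cib.xml below):

```xml
<node id="09405ec0-ae5d-4418-8b0b-a94f7b82f031" uname="sles102" type="normal">
  <instance_attributes id="versions-sles102">
    <attributes>
      <!-- hypothetical attribute: the heartbeat package version running on this node -->
      <nvpair id="hb-version-sles102" name="heartbeat_version" value="2.0.8"/>
    </attributes>
  </instance_attributes>
</node>
```

Attaching it per node, rather than as a single cluster-wide value, would keep differing versions visible during a rolling upgrade instead of hiding them.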


Andrew Beekhof wrote:
> The problem is that the cib is synchronized between nodes that may or
> may not have the same version.  But yes, it is something I've been
> mulling over.
>
> It still doesn't address the issue of missing logs or overly brief
> problem descriptions, but it would be better than nothing.
>
> On 6/1/07, Yan Fitterer <[EMAIL PROTECTED]> wrote:
>> Andrew, this is such a common issue (people not giving us the version...),
>> is there any way we could include the hb version in the cib? We already
>> have "cib_feature_revision", but maybe we should have
>> "heartbeat_version" as well?
>>
>> Feedback anyone?
>>
>> Yan
>>
>>
>> Andrew Beekhof wrote:
>> > logs? version?
>> >
>> > Come on, people - we can't read minds.
>> >
>> > On 5/30/07, John Moerenhout <[EMAIL PROTECTED]> wrote:
>> >> Hi all,
>> >>
>> >> I configured a cluster in a test environment: two SLES10 boxes as Xen
>> >> guests on a SLES10 host.
>> >> I managed to create 2 resources, one for a secondary IP address and
>> >> one for shared storage (iSCSI). This worked fine; the only thing I was
>> >> trying to get done was to specify on which node the resources start by
>> >> default.
>> >>
>> >> I created a group with hb_gui.
>> >> Instantly, hb_gui stopped responding; it now only says "can not get
>> >> information from cluster".
>> >> crm_mon says "Not connected". So basically nothing works anymore.
>> >>
>> >> Is there any way to revert to the old configuration, or (manually?)
>> >> change the configuration to something default? I found that the only
>> >> way out at this stage is to remove everything and reinstall from
>> >> scratch.
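One possible recovery path, sketched under the assumption of heartbeat 2.x default file locations on SLES10 (verify the paths on your system, and back everything up before removing anything):

```shell
# Sketch: reset a broken heartbeat 2.x CRM configuration.
# Assumes the default CIB location /var/lib/heartbeat/crm/ - check first.

# 1. Stop heartbeat on ALL cluster nodes before touching the files
/etc/init.d/heartbeat stop

# 2. Move the CIB and its companion files (signature, last-known copies)
#    out of the way instead of deleting them
mkdir -p /root/cib-backup
mv /var/lib/heartbeat/crm/cib.xml* /root/cib-backup/

# 3. Restart heartbeat; it should come up with an empty CIB that can
#    then be repopulated (e.g. via hb_gui or cibadmin)
/etc/init.d/heartbeat start
```

This avoids a full reinstall: the resource definitions can be re-added once the empty cluster is healthy again.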
>> >>
>> >> Here is the cib.xml file:
>> >> sles102:~ # cat /var/lib/heartbeat/crm/cib.xml
>> >> <cib generated="true" admin_epoch="0" have_quorum="true" num_peers="2"
>> >>      cib_feature_revision="1.3" ccm_transition="2"
>> >>      dc_uuid="05e40de9-ca83-413a-9490-438b3f9a1852" epoch="27"
>> >>      num_updates="271" cib-last-written="Wed May 30 02:20:20 2007">
>> >>   <configuration>
>> >>     <crm_config/>
>> >>     <nodes>
>> >>       <node id="09405ec0-ae5d-4418-8b0b-a94f7b82f031" uname="sles102" type="normal"/>
>> >>       <node id="05e40de9-ca83-413a-9490-438b3f9a1852" uname="sles101" type="normal">
>> >>         <instance_attributes id="standby-05e40de9-ca83-413a-9490-438b3f9a1852">
>> >>           <attributes>
>> >>             <nvpair id="standby-05e40de9-ca83-413a-9490-438b3f9a1852" name="standby" value="off"/>
>> >>           </attributes>
>> >>         </instance_attributes>
>> >>       </node>
>> >>     </nodes>
>> >>     <resources>
>> >>       <primitive id="secondary_ipaddr" class="heartbeat" type="IPaddr"
>> >>                  provider="heartbeat" restart_type="ignore" is_managed="default"
>> >>                  resource_stickiness="0" multiple_active="stop_start">
>> >>         <instance_attributes id="secondary_ipaddr_instance_attrs">
>> >>           <attributes>
>> >>             <nvpair id="secondary_ipaddr_target_role" name="target_role" value="started"/>
>> >>             <nvpair id="6cf1999d-dbdb-4be1-a8f7-20901585fc3f" name="1" value="192.168.69.123"/>
>> >>           </attributes>
>> >>         </instance_attributes>
>> >>       </primitive>
>> >>       <primitive id="mount_sharedstorage" class="heartbeat" type="Filesystem" provider="heartbeat">
>> >>         <instance_attributes id="mount_sharedstorage_instance_attrs">
>> >>           <attributes>
>> >>             <nvpair id="mount_sharedstorage_target_role" name="target_role" value="started"/>
>> >>             <nvpair id="a76926fb-6614-4b2c-8b14-0c641a0c0855" name="1" value="/dev/sda1"/>
>> >>             <nvpair id="02c99357-c47d-48be-80b8-af4ca835902c" name="2" value="/vol3"/>
>> >>           </attributes>
>> >>         </instance_attributes>
>> >>       </primitive>
>> >>       <group id="group_1">
>> >>         <instance_attributes id="group_1_instance_attrs">
>> >>           <attributes/>
>> >>         </instance_attributes>
>> >>       </group>
>> >>     </resources>
>> >>     <constraints/>
>> >>   </configuration>
>> >> </cib>
>> >>
>> >> TIA,
>> >> John
>> >> _______________________________________________
>> >> Linux-HA mailing list
>> >> [email protected]
>> >> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> >> See also: http://linux-ha.org/ReportingProblems
>> >>
