Tundra,

Yes, startup and shutdown change the membership.
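
If you want to keep an eye on the quorum server/disk state around a 
membership change, something like 'scstat -q' or 'clquorum status' should 
show its vote count and status. I'm going from the Solaris Cluster 3.2 
command names here; I haven't double-checked them against the OHAC 
2009.06 bits:

   # quorum votes and device/server status, run on any cluster node
   scstat -q
   clquorum status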

If they are not listed as supported it probably just means that they have 
never been tested, but that doesn't necessarily mean they won't work. If 
you are just doing a test configuration I seem to recall that you can 
configure a cluster with just one shared NIC. Maybe you could use the 
e1000g for that and the other cards for the public networks?
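
If you do try that, it may be worth checking up front what datalinks and 
drivers each node actually sees, and whether a tagged VLAN on the Intel 
card is still what you want for the interconnect. Roughly like this - the 
link name e1000g0 and VLAN ID 10 are only placeholders, not something 
I've verified on your boxes:

   # list the datalinks and the physical devices/drivers behind them
   dladm show-link
   dladm show-phys

   # a tagged VLAN over the Intel card, if you keep the shared-NIC setup
   dladm create-vlan -l e1000g0 -v 10 vlan10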

I don't know enough about the drivers to comment on what might cause the 
message you are seeing. If you feel inclined, you could look at the 
code! :-)

Tim
---


On 10/20/09 15:46, Tundra Slosek wrote:
>> Tundra,
>> 
>> A cluster is only reliant on a quorum server (QS) or quorum disk
>> (QD) when the cluster membership changes. Thus neither a single QS
>> nor a single QD is a single point of failure, because they are
>> essentially passive entities. Having said that, they should be
>> replaced as soon as a fault is detected to avoid any effect on the
>> cluster should nodes join or leave it.
> 
> Thank you for the clarification Tim. Does join/leave include
> 'shutdown/startup' as well?
> 
> In my experiment with dedicated NICs for the private interconnect,
> I'm starting to suspect something that I'm hoping you might be able
> to answer. The Open HA Cluster 2009.06 release notes at
> http://www.opensolaris.org/os/community/ha-clusters/ohac/Documentation/OHACdocs/relnotes/
> list only two supported x86 platforms, both Sun Fires. The hardware
> I have is neither of those. I'm wondering if there is a specific
> hardware requirement on the physical NIC used for the interconnect
> that an Intel card using the 'e1000g' driver satisfies but that
> devices handled by the 'dnet' and 'rge' drivers do not.
> 
> My initial setup is one 'rge' device (resident on the motherboard)
> and one 'e1000g' (server-grade Intel PCIe card) in each node, with a
> VLAN device on each for the interconnect.
> 
> In order to test dedicated NICs for the interconnect, I added a third
> NIC to each node - scraping up what I had on hand, I had one
> desktop-grade PCIe Intel e1000g, one older PCI Intel card that is
> seen as 'iprb', and two old PCI SMC cards that are seen as 'dnet'.
> 
> What I noticed is that 'scinstall' offered to use the second e1000g
> for the interconnect, but for the various other NICs I had to type in
> the NIC names explicitly and explicitly state that they are Ethernet.
> 
> I also see, in dmesg, the following warning that makes me wonder
> again about hardware capabilities:
> 
> WARNING: Received non interrupt heartbeat on mltproc1:dnet0 -
> mltproc0:dnet0 - path timeouts are likely.

-- 

Tim Read
Staff Engineer
Solaris Availability Engineering
Sun Microsystems Ltd
Springfield
Linlithgow
EH49 7LR

Phone: +44 (0)1506 672 684
Mobile: +44 (0)7802 212 137
Twitter: @timread

