Dear John, et al.,

I could bond all the nodes in my cluster (Dell 1750s and Dell 1850s with dual Gigabit Ethernet ports on the motherboard). However, I cannot tell whether I would actually get more bandwidth that way, or just more TCP retransmissions caused by out-of-order packets.

Do you get improved performance on your "bonded" cluster?

Does bonding two Ethernet cards add to the performance of an NFS Beowulf cluster, with the master node acting as the NFS server? Or is bonding just a way to keep a node reachable if one Ethernet card fails?
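
For concreteness, here is the kind of setup I have in mind. This is a minimal sketch assuming the standard Linux bonding driver; the interface names and the address are placeholders:

  # balance-rr (mode 0) stripes frames across both NICs for extra
  # bandwidth, but can deliver packets out of order; active-backup
  # (mode 1) gives failover only, with no bandwidth gain
  modprobe bonding mode=balance-rr miimon=100
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1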

I cannot find any metrics on bonding or teaming ethernet cards.
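
Failing published numbers, I suppose I could measure it myself. A rough sketch, assuming iperf is installed on the master and one compute node ("master" is a placeholder host name):

  # on the NFS server / master node
  iperf -s
  # on a compute node: four parallel TCP streams to exercise both links
  iperf -c master -P 4

With a per-flow hashing mode (XOR or 802.3ad) a single stream stays on one link, so parallel streams are needed to see the aggregate bandwidth; balance-rr can stripe a single stream, at the cost of the reordering mentioned above.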
------
Sincerely,

  Tom Pierce
 
Message: 11
Date: Fri, 05 May 2006 10:08:47 +0100
From: John Hearns <[EMAIL PROTECTED]>
Subject: Re: [Beowulf] 512 nodes Myrinet cluster Challenges
To: [EMAIL PROTECTED]
Cc: [email protected]
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain

On Fri, 2006-05-05 at 10:23 +0200, Alan Louis Scheinine wrote:
> Since you'all are talking about IPMI, I have a question.
> The newer Tyan boards have a plug-in IPMI 2.0 that uses
> one of the two Gigabit Ethernet channels for the Ethernet
> connection to IPMI.  If I use channel bonding (trunking) of the
> two GbE channels, can I still communicate with IPMI on Ethernet?

We recently put in a cluster with bonded gigabit; however, that was done
using a separate dual-port PCI card.
On Supermicro, the IPMI card by default uses the same MAC address as the
eth0 port it shares. I think you could reconfigure this:
(lan set 1 macaddr <x:x:x:x:x:x>)
Also, Supermicro have a riser card
which provides a separate network and serial port for the IPMI card.
Tyan probably have similar.
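
For reference, a rough sketch of setting the IPMI LAN parameters with
ipmitool, assuming the BMC sits on LAN channel 1 (the addresses are
placeholders; check your board's documentation for the channel number):

  # give the BMC its own MAC and IP so it no longer shadows eth0
  ipmitool lan set 1 macaddr <x:x:x:x:x:x>
  ipmitool lan set 1 ipaddr 192.168.2.10
  ipmitool lan set 1 netmask 255.255.255.0
  # verify the settings
  ipmitool lan print 1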

***************************************

------
Sincerely,

  Tom Pierce
   Bldg 7/ Rm 207D - Spring House, PA
 
_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
