On Thu, Feb 17, 2011 at 11:49 PM, Tracy Reed <tr...@ultraviolet.org> wrote:
> On Thu, Feb 17, 2011 at 10:11:18PM +0200, Kenneth Kalmer spake thusly:
>> Since vblade uses a specified device, should I use channel bonding to
>> aggregate multiple links together for more performance? If yes, is
>> 802.3ad the best bonding method, since the switch is involved in
>> deciding down which link the ethernet frames are sent, or am I missing
>> the plot on this one?
>
> I say use channel bonding, but understand that the connection between a
> particular pair of machines will only use one of the links, due to the
> MAC hashing 802.3ad uses to choose which link to transmit over to a
> particular host. So no single connection will get more than 1Gb, but the
> aggregate throughput from the AoE target to multiple initiators will be
> greater. And of course make sure you have enough disk performance to
> actually use the bandwidth.

Good points. Since I'm using 4Gb FC to the storage array, my thinking
with 4x GbE to the initiators was to eliminate everything but the
physical disks as a bottleneck. I'll check out the other bonding modes
as well and see if one could possibly give us higher throughput than a
single GbE link.
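
For reference, here's roughly what I had in mind on the target side.
This is only a sketch using newer iproute2 syntax and made-up interface
names (eth2/eth3); on the boxes I have now it would be the bonding
module options plus ifenslave, but the idea is the same:

    # create an 802.3ad (LACP) bond from two of the GbE ports
    modprobe bonding
    ip link add bond0 type bond mode 802.3ad miimon 100
    # slaves have to be down before they can be enslaved
    ip link set eth2 down; ip link set eth2 master bond0
    ip link set eth3 down; ip link set eth3 master bond0
    ip link set bond0 up
    # note: AoE frames carry no IP header, so the 802.3ad hash falls
    # back to MAC addresses and a given target/initiator pair will
    # stay on a single 1Gb link, exactly as you describe

vblade would then export over bond0 instead of a single NIC.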

>> I currently have 4 GBE ports per storage controller that I can
>> leverage, and am considering jumping to dual 10 GBE interfaces to the
>> switch.
>
> 10GbE interfaces would certainly get you faster individual connections
> than 1Gb. But worry about disk throughput first. Measure the bandwidth
> on your 1Gb links and make sure you can actually hit that before
> investing in 10Gb links and assuming the bandwidth is the issue. It
> takes a lot of disks to fill even a 1Gb pipe on anything but pure
> streaming workloads.

Duly noted, thanks for the reality check.
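
Before spending on 10GbE I'll benchmark what a single link actually
delivers from an initiator. Something as blunt as this should do for a
first pass (the device path and sizes below are just placeholders):

    # raw sequential read from the AoE device, bypassing the page cache
    # so we measure disk + wire rather than RAM
    dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=4096 iflag=direct

If that can't saturate one GbE link, faster links won't change much.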

>> Then, on the initiator side my understanding is that "aggregation"
>> comes for free. So in this case all I need to do is ensure I have a
>> vlan interface per physical interface on the server, and use
>> `aoe-interfaces` to restrict the scope to the multiple vlan
>> interfaces. If not, would I need to bond here as well? My plan is to
>> set up at least 2x GbE per Xen host.
>
> I suppose this would work although you wouldn't have protection against
> physical layer failure.

I'm a bit lost here. I'll have two physical links on the initiator
side; apart from switch failure, what else would I need to worry about
at the physical layer?
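
For completeness, this is how I intend to wire the initiator side; the
interface names and VLAN id are assumptions for illustration, not our
actual config:

    # one VLAN sub-interface per physical NIC, both on the SAN VLAN
    ip link add link eth1 name eth1.20 type vlan id 20
    ip link add link eth2 name eth2.20 type vlan id 20
    ip link set eth1.20 up
    ip link set eth2.20 up
    # restrict AoE discovery and traffic to just those interfaces
    aoe-interfaces eth1.20 eth2.20

My understanding is the aoe driver will then use whichever of those
paths it discovers the target on, which is where the "free" aggregation
I mentioned comes from.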

>> I hope to consolidate this information, and the other questions I'll
>> post over the coming days into several blog posts. The biggest
>> downside to learning AoE is cutting through the tons of AoE vs iSCSI
>> crap and getting to the useful parts. I plan to publish the useful
>> parts.
>
> Please post the link to your blog posts here when you get them up. I
> would love to read them. I have no idea what Coraid is doing these days
> but I never hear about them and I have long thought AoE is a poorly
> marketed and greatly under-appreciated storage technology.

I definitely will. With high quality feedback like this and some
testing in the lab this week, I'm sure it will turn into a steady
stream of documentation. Keep an eye on opensourcery.co.za, that is
where they'll be posted.

Thanks for the help so far, I'll fire off more questions in the week.

Best

-- 
Kenneth Kalmer
kenneth.kal...@gmail.com
http://opensourcery.co.za
@kennethkalmer
