On Feb 24, 2009, at 8:51 PM, Tracy Reed wrote:

> On Tue, Feb 24, 2009 at 07:55:03PM -0800, Matthew Ingersoll spake  
> thusly:
>> I'm hoping somebody can verify my logic on how MPIO works with aoe/
>> vblade.
>
> Awesome timing. I was just looking into setting up MPIO also. iSCSI
> has MC/S and certain people in my shop are a big fan of it. I want to
> show that we can do the same thing with AoE and MPIO more simply. What
> distribution are you using? Do you find dm-multipath to be stable?

Running Debian 5.0.  As for dm-multipath being stable, I haven't used
it enough to know, and hopefully I won't need it.

>
>
>> If an aoe target machine exported a file using a command like
>>
>>  vbladed 0 0 eth0 /path/to/file
>>
>> and repeated this process using the same file to broadcast on a  
>> second
>> ethernet port:
>>
>>  vbladed 0 0 eth1 /path/to/file
>>
>> On the initiator with 4 ethernet ports we would see:
>>  e0.0        xyzGB   eth0,eth1,eth2,eth3     up
>
> So the target has two interfaces but the initiator has 4?

Yep.  I wanted to make it clear that the initiator was set up
differently and had more than enough ports to keep up with the
target.  I'm going to bet that with only two interfaces on the
initiator you'd get the same throughput.

>
>> The results are what I expected:
>>
>> In my write tests,  I see traffic balancing on interfaces eth0-3.  On
>> the target side, traffic flows on eth0-1 in what appears to be a  
>> round-
>> robin manner.  On killing a single vblade, the throughput is almost
>> cut in half after aoe sees it as down.
>
> Almost in half? How linear is the scaling? I'm wondering what happens
> when we get 8 interfaces in a machine. We have quad port gig-e stuff
> in our lab. It would be fun to put two of them in each target and
> initiator.

An example with two ports (on target, always 4 on the initiator) using  
dd:
  dd if=/dev/zero of=test bs=1M count=128 oflag=direct
  182 MB/s

... and one port on the target:
  dd if=/dev/zero of=test bs=1M count=128 oflag=direct
  102 MB/s

So it doesn't quite double, which may come down to bottlenecks other
than disk I/O and the network (we're still pretty much hitting the
limit here).  These tests were run on an XFS filesystem.
One thing to be aware of is how fast the system can run locally.  If
the network keeps scaling as you add ports, I would keep putting in
NICs until I hit the local I/O speed or the vblades eat up too many
resources.  It's basically a race to see what gets saturated first.
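
To see where that local ceiling is, the quick sanity check I'd use is
the same dd run directly on the target, against the disk that holds
the exported file, nothing AoE-specific, just a baseline to compare
the network numbers against (localtest here is just a scratch file on
that same filesystem):

  # on the target: local write speed of the disk backing the export
  dd if=/dev/zero of=localtest bs=1M count=128 oflag=direct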

>
>
>> My question then is, can corruption occur when having two vblades
>> running for the same file or is this the recommended MPIO method or
>> are there better solutions (this one seems great to me)?
>
> AoE itself is stateless so that's a big plus. As both vblade processes
> are dealing with exactly the same kernel page cache I would expect it
> to be fine. I look forward to seeing what others have to say.
>

Looking at the vblade code, it uses the open, read, and write system
calls.  So I think the main question is: what keeps writes from the
two vblade processes from stepping on each other?  I didn't see any
locking; is it another subsystem (the page cache?) that handles the
ordering?
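
One thing that is easy to verify is the premise above, i.e. that both
vbladed processes have the same file open through the normal buffered
path and so are sharing one page cache.  Something along these lines
should show it (the pids are whatever ps reports for the two vbladed
instances):

  # both vbladed processes should show the same backing file open
  lsof /path/to/file
  # or look at the open descriptors of each vbladed pid directly
  ls -l /proc/<pid>/fd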

> -- 
> Tracy Reed
> http://tracyreed.org

