On Oct 31, 2007, at 2:58 PM, Salmon, Rene wrote:

Hi Sam,

Thanks for the reply.  You were right.  Taking out all the IB stuff
made a big difference.  Now I am seeing much better performance using
bmi_tcp.  Thanks.

Hi Rene,

Good to hear it worked out. The configure script will warn about enabling multiple BMI methods at once, although I think it actually assumes tcp is the slow one, whereas in your case it's IB.


hpca4000(salmr0)140:dd if=/dev/zero of=/pvfs2-fs/salmr0/foo1 bs=4M
count=2000
2000+0 records in
2000+0 records out
8388608000 bytes (8.4 GB) copied, 21.3792 seconds, 392 MB/s
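(For scale: 392 MB/s is roughly 3.1 Gbit/s, so dd alone is already well past the 1 Gbit/s ceiling discussed below.)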

hpca4000(salmr0)137:pvfs2-cp -b4194304 -t /pvfs2-fs/salmr0/Hmig-8GB
/pvfs2-fs/salmr0/Hmig-8GB-copy
Wrote 8590895600 bytes in 50.296533 seconds. 162.892271 MB/seconds

hpca4000(salmr0)142:pvfs2-cp -t -b 4194304 /vol0/salmr0/Hmig-8GB
/pvfs2-fs/salmr0/foo-4
Wrote 8590895600 bytes in 14.417761 seconds. 568.251657 MB/seconds
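(That is roughly 1.3 Gbit/s for the PVFS-to-PVFS copy versus roughly 4.5 Gbit/s for the local-to-PVFS write.)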

That's interesting: since the first test moves the data through PVFS twice (a read plus a write), I would have expected the second test (write to pvfs only) to be about twice as fast as the first, but it's roughly 3.5x faster, which suggests your reads are much slower than your writes. What numbers do you get copying a file from pvfs to local?
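Something along these lines (hypothetical paths, mirroring your earlier runs) would exercise the read side:

pvfs2-cp -t -b 4194304 /pvfs2-fs/salmr0/Hmig-8GB /vol0/salmr0/Hmig-8GB-local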

Also, pvfs2-cp is a little inefficient because it does blocking reads and writes using a single buffer. I've attached a patch to pvfs2-cp that uses multiple buffers. If you're inclined, you might see even better numbers with it.
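For the curious, the gist of the multiple-buffer approach is to overlap the write of one block with the read of the next instead of serializing them. Below is a minimal sketch of that double-buffering pattern; it's hypothetical, uses POSIX AIO on plain file descriptors rather than the PVFS system interface, and is meant to illustrate the technique, not reproduce the patch:

    /* Double-buffered copy sketch (hypothetical, not the actual patch):
     * the AIO write of block N proceeds while block N+1 is being read.
     * Link with -lrt on Linux. */
    #include <aio.h>
    #include <string.h>
    #include <unistd.h>

    #define BUFSZ (4 * 1024 * 1024)        /* matches -b 4194304 above */

    static char buf[2][BUFSZ];

    int copy_overlapped(int src, int dst)
    {
        struct aiocb cb;
        const struct aiocb *list[1] = { &cb };
        off_t off = 0;
        ssize_t n;
        int pending = 0, which = 0;

        while ((n = read(src, buf[which], BUFSZ)) > 0) {
            if (pending) {                 /* reap the previous write */
                while (aio_suspend(list, 1, NULL) != 0)
                    ;                      /* retry if interrupted */
                if (aio_return(&cb) < 0)
                    return -1;
            }
            memset(&cb, 0, sizeof(cb));    /* post write of this block */
            cb.aio_fildes = dst;
            cb.aio_buf    = buf[which];
            cb.aio_nbytes = (size_t)n;
            cb.aio_offset = off;
            if (aio_write(&cb) != 0)
                return -1;
            pending = 1;
            off += n;
            which ^= 1;                    /* next read fills the other buffer */
        }
        if (pending) {                     /* drain the final write */
            while (aio_suspend(list, 1, NULL) != 0)
                ;
            if (aio_return(&cb) < 0)
                return -1;
        }
        return n < 0 ? -1 : 0;
    }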

-sam

Attachment: async-pvfs2-cp.patch
Description: Binary data

 Rene

-----Original Message-----
From: Sam Lang [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 30, 2007 4:57 PM
To: Salmon, Rene
Cc: Scott Atchley; pvfs2-users
Subject: Re: [Pvfs2-users] Pvfs performance over 10Ge


Hi Rene,

With both ib and tcp methods enabled, both methods are
getting polled in succession.  This is a current limitation
in PVFS, and we've seen pretty much the same behavior that
you've described, so disabling the IB stuff should help fix
your bandwidth limitations.
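For example (hypothetical, adapted from the aliases you posted below), a tcp-only config would drop the ib entries entirely:

        BMIModules bmi_tcp

        Alias hpcxe001a tcp://hpcxe001:3334
        Alias hpcxe001b tcp://hpcxe001:3336
        Alias hpcxe003a tcp://hpcxe003:3334
        Alias hpcxe003b tcp://hpcxe003:3336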

-sam

On Oct 30, 2007, at 1:56 PM, Rene Salmon wrote:

Hi Scott,

I have pvfs compiled and configured to use both bmi_ib and bmi_tcp
(will probably try MX later):

        BMIModules bmi_ib,bmi_tcp

        Alias hpcxe001a ib://hpcxe001:3335,tcp://hpcxe001:3334
        Alias hpcxe001b ib://hpcxe001:3337,tcp://hpcxe001:3336
        Alias hpcxe003a ib://hpcxe003:3335,tcp://hpcxe003:3334
        Alias hpcxe003b ib://hpcxe003:3337,tcp://hpcxe003:3336


I guess maybe it's trying to use tcp over IB or something.  I will rip
out the IB stuff so there is no chance of that and try again.

Just looked in cvs and did not see the bmi_pingpong test; do you mind
sending me a copy?

Thanks
Rene


On Tue, 2007-10-30 at 11:08 +0000, Scott Atchley wrote:
On Oct 29, 2007, at 11:32 PM, Rob Ross wrote:

To answer your original question: no, there's nothing in PVFS that is
purposefully limiting you to 1 Gbit/sec.

Hi Rene,

When I used bmi_pingpong to test the network IO performance of PVFS2
(ignoring the file system parts) on Myri-10G Ethernet, bmi_tcp
achieved 625-650 MB/s using a single client and single server:

http://www.myri.com/scs/performance/MX-10G/PVFS2-MX/
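(For reference, 625-650 MB/s is roughly 5 Gbit/s, about half of the 10G line rate.)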

Performance of pvfs2-cp will be lower, but it should be above 1 Gb/s.

Scott


_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users