Eric:

Running "dd" uses the kernel module code, which will process the 1GB buffer
in 4MB chunks, which affects your throughput.  Try running this command:

dd if=/dev/zero of=/mnt3/my5gigfile bs=1G count=5

<OrangeFS Install dir>/bin/pvfs2-cp -t -b 1073741824 /mnt3/my5gigfile /dev/shm/my5gigfile
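
The -t flag makes pvfs2-cp report timing, and -b sets the transfer buffer
size in bytes (1073741824 bytes = 1 GiB), so the copy moves data in 1 GiB
chunks through the direct client library instead of the kernel module's
4 MB chunks.  To exercise the write path the same way, you can reverse the
copy; a minimal sketch under the same assumptions (the /dev/shm staging
file is just an example):

# Stage a 5 GiB source file in local shared memory first...
dd if=/dev/zero of=/dev/shm/my5gigfile bs=1G count=5
# ...then write it into OrangeFS through the direct interface, not the mount.
<OrangeFS Install dir>/bin/pvfs2-cp -t -b 1073741824 /dev/shm/my5gigfile /mnt3/my5gigfile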

You will see better performance by NOT going through the kernel module.  I
ran this test on a 10GE network and got 360 MB/s.  My filesystem uses 4
servers, not 1 as in your case.  The more servers you have, the more data
can be written simultaneously, which is where you get the performance
bump!
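
If you want to confirm how a given file is actually striped, the pvfs2-ping
and pvfs2-viewdist utilities that ship with OrangeFS will show you; a quick
sketch, assuming they sit in the same bin directory and /mnt3 is your mount
point:

# Check that the servers the filesystem knows about are up and responding
<OrangeFS Install dir>/bin/pvfs2-ping -m /mnt3
# Show the distribution (stripe layout) used for a specific file
<OrangeFS Install dir>/bin/pvfs2-viewdist -f /mnt3/my5gigfile

With a single server, every byte funnels through one host, so your ceiling
is that host's CPU and network path, which matches the 97%/100% CPU you saw.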

Becky


On Fri, May 16, 2014 at 8:16 AM, Eric Ulmer <[email protected]> wrote:

> Setup:
> Version 2.8.8
>
> Server volume:
> A tmpfs volume on RHEL6U4
>
> Connectivity:
> QDR IB between hosts
> Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0)
>
> Host/Client CPU Info (2x sockets each):
> model name      : Intel(R) Xeon(R) CPU           X5675  @ 3.07GHz
> cpu MHz         : 3066.821
>
>
> I got the following results using IPoIB TCP transport, as I haven't been
> able to get IB Verbs working yet...
>
> Write Tests
> =============
> ---- Connected mode, 2044 MTU
> [root@esekilx6301 mnt3]# echo connected > /sys/class/net/ib0/mode
> [root@esekilx6301 mnt3]# dd if=/dev/zero of=test bs=1G count=9
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 75.3552 s, 128 MB/s
>
> ---- Datagram, 2044 MTU
> [root@esekilx6301 mnt3]# echo datagram > /sys/class/net/ib0/mode
> [root@esekilx6301 mnt3]# dd if=/dev/zero of=test bs=1G count=9
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 38.6233 s, 250 MB/s
> [root@esekilx6301 mnt3]# dd if=/dev/zero of=test bs=1G count=9
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 37.8484 s, 255 MB/s
> [root@esekilx6301 mnt3]#
>
> ---- Connected mode, 64000 MTU
> [root@esekilx6301 mnt3]# echo connected > /sys/class/net/ib0/mode
> [root@esekilx6301 mnt3]# ifconfig ib0 mtu 64000
> [root@esekilx4055481 ramdisk]# echo connected > /sys/class/net/ib0/mode
> [root@esekilx4055481 ramdisk]# ifconfig ib0 mtu 64000
> [root@esekilx6301 mnt3]# dd if=/dev/zero of=test bs=1G count=9
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 74.5399 s, 130 MB/s
> [root@esekilx6301 mnt3]#
>
>
> Read Test, datagram mode
> ===============
> [root@esekilx6301 mnt3]# dd if=test of=/dev/null bs=1G count=9
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 30.8198 s, 314 MB/s
> [root@esekilx6301 mnt3]#
>
>
>
>
> I noted that the CPU on the server side was at 97% on writes,
> and the client was at 100% CPU on reads.
>
> I'm wondering what I can do to tune this further?
> My goal is 500 MB/s read and write.
>
>
> I got the following I/O locally on the server itself:
> [root@esekilx4055481 ramdisk]# dd if=/dev/zero of=test bs=1G count=9
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 5.39749 s, 1.8 GB/s
> [root@esekilx4055481 ramdisk]#
>
> Server XML config
> =================
> [root@esekilx4055481 ramdisk]# cat /etc/pvfs2-fs.conf
> <Defaults>
>         UnexpectedRequests 50
>         EventLogging none
>         EnableTracing no
>         LogStamp datetime
>         BMIModules bmi_tcp
>         FlowModules flowproto_multiqueue
>         PerfUpdateInterval 1000
>         ServerJobBMITimeoutSecs 30
>         ServerJobFlowTimeoutSecs 30
>         ClientJobBMITimeoutSecs 300
>         ClientJobFlowTimeoutSecs 300
>         ClientRetryLimit 5
>         ClientRetryDelayMilliSecs 2000
>         PrecreateBatchSize 0,32,512,32,32,32,0
>         PrecreateLowThreshold 0,16,256,16,16,16,0
>
>         DataStorageSpace /ramdisk/pvfs2
>         MetadataStorageSpace /ramdisk/pvfs2
>
>         LogFile /ramdisk/logs
> </Defaults>
>
> <Aliases>
>         Alias esekilx4055481 tcp://esekilx4055481:3336
> </Aliases>
>
> <Filesystem>
>         Name pvfs2-fs
>         ID 299453833
>         RootHandle 1048576
>         FileStuffing yes
>         <MetaHandleRanges>
>                 Range esekilx4055481 3-4611686018427387904
>         </MetaHandleRanges>
>         <DataHandleRanges>
>                 Range esekilx4055481 4611686018427387905-9223372036854775806
>         </DataHandleRanges>
>         <StorageHints>
>                 TroveSyncMeta yes
>                 TroveSyncData no
>                 TroveMethod alt-aio
>         </StorageHints>
> </Filesystem>
> [root@esekilx4055481 ramdisk]#
>
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
