Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-03-06 Thread Pawel Dziekonski
On Wed, 05 Mar 2008 at 11:13:14PM -0500, Talpey, Thomas wrote:
> At 08:20 AM 3/3/2008, Pawel Dziekonski wrote:
>> both nfs server and client have Mellanox MT25204 HCAs. tests were
>> done by connecting them port to port (without a switch) with a DDR
>> cable. the reported link was 20 Gbps.
>
> What kernel base is your client running also? (uname -a) There are
> some known issues with cached write throughput over NFS above 1Gb
> that we may be able to work around, but it's kernel-dependent.
>
>> currently I have a Flextronics switch that reports itself as
>> MT47396 Infiniscale-III Mellanox Technologies.
>
> Looking at your results from earlier this month, it's not at all
> clear that the nfs/rdma run was actually using nfs/rdma. The speeds
> and cpu loads were very similar to the ethernet results.
>
> Can we take this offline and look into it more? All on the client,
> I'll be interested in the exact mount command you ran, the output of
> cat /proc/mounts, the contents of dmesg/kernel logs after a run,
> and the output of nfsstat.

Hi,

thanks for answering.

I cannot provide any details right now. I will repeat the whole setup
and the benchmarks ASAP.
Pawel
-- 
Pawel Dziekonski [EMAIL PROTECTED]
Wroclaw Centre for Networking & Supercomputing, HPC Department
Politechnika Wr., pl. Grunwaldzki 9, bud. D2/101, 50-377 Wroclaw, POLAND
phone: +48 71 3202043, fax: +48 71 3225797, http://www.wcss.wroc.pl


Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-03-05 Thread Talpey, Thomas
At 08:20 AM 3/3/2008, Pawel Dziekonski wrote:
> both nfs server and client have Mellanox MT25204 HCAs. tests were done
> by connecting them port to port (without a switch) with a DDR cable.
> the reported link was 20 Gbps.

What kernel base is your client running also? (uname -a) There are some
known issues with cached write throughput over NFS above 1Gb that we
may be able to work around, but it's kernel-dependent.

> currently I have a Flextronics switch that reports itself as MT47396
> Infiniscale-III Mellanox Technologies.

Looking at your results from earlier this month, it's not at all clear that the
nfs/rdma run was actually using nfs/rdma. The speeds and cpu loads were
very similar to the ethernet results.

Can we take this offline and look into it more? All on the client, I'll
be interested in the exact mount command you ran, the output of
cat /proc/mounts, the contents of dmesg/kernel logs after a run, and
the output of nfsstat.
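
(For reference, a minimal way to capture all of that on the client right
after a run, assuming the NFS/RDMA mount point is /mnt:

  # grep /mnt /proc/mounts    # mount options actually in effect
  # dmesg | tail -n 50        # RPC/RDMA kernel messages, if any
  # nfsstat -c                # client-side NFS operation counts

If the options field of the /proc/mounts line does not mention the rdma
transport, the mount is not going over RDMA at all.)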

Tom.



Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-03-03 Thread Pawel Dziekonski
On Sun, 02 Mar 2008 at 06:48:11PM -0600, Tom Tucker wrote:
 
> On Fri, 2008-02-29 at 09:29 +0100, Sebastian Schmitzdorff wrote:
>> hi pawel,
>>
>> I was wondering if you have achieved better nfs rdma benchmark results
>> by now?
>
> Pawel:
>
> What is your network hardware setup?

hi,

both nfs server and client have Mellanox MT25204 HCAs. tests were done
by connecting them port to port (without a switch) with a DDR cable.
the reported link was 20 Gbps.

currently I have a Flextronics switch that reports itself as MT47396
Infiniscale-III Mellanox Technologies.
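
(As a sanity check on the reported 20 Gbps, the active link width and
speed can be read straight from the HCA. A quick sketch using standard
OFED tools, assuming they are installed:

  # ibstat | grep -i rate                                   # expect Rate: 20
  # ibv_devinfo -v | grep -E 'active_width|active_speed'    # expect 4X / 5.0 Gbps

4X lanes at 5.0 Gbps each is what a DDR link should report; note that
20 Gbps is the signalling rate, so usable bandwidth is lower.)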

cheers, P
-- 
Pawel Dziekonski [EMAIL PROTECTED]
Wroclaw Centre for Networking & Supercomputing, HPC Department
Politechnika Wr., pl. Grunwaldzki 9, bud. D2/101, 50-377 Wroclaw, POLAND
phone: +48 71 3202043, fax: +48 71 3225797, http://www.wcss.wroc.pl


Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-03-02 Thread Tom Tucker

> hi pawel,
>
> I was wondering if you have achieved better nfs rdma benchmark results
> by now?

Pawel:

What is your network hardware setup? 

Thanks,
Tom

 
> regards
> Sebastian
>
> Pawel Dziekonski wrote:
>> [Pawel's 2008-02-09 benchmark mail quoted in full; snipped here. See
>> the original message below.]


Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-02-29 Thread Sebastian Schmitzdorff

hi pawel,

I was wondering if you have achieved better nfs rdma benchmark results 
by now?


regards
Sebastian

Pawel Dziekonski wrote:

[Pawel's 2008-02-09 benchmark mail quoted in full; snipped here. See the
original message below.]



--

Hamburgnet, Managing Director Sebastian Schmitzdorff
http://www.hamburgnet.de
Kottwitzstrasse 49, D-20253 Hamburg
phone: 040 / 736 72 322, fax: 040 / 736 72 321



Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-02-29 Thread Pawel Dziekonski
On Fri, 29 Feb 2008 at 09:29:21AM +0100, Sebastian Schmitzdorff wrote:
> hi pawel,
>
> I was wondering if you have achieved better nfs rdma benchmark results
> by now?

no :(


-- 
Pawel Dziekonski [EMAIL PROTECTED]
Wroclaw Centre for Networking & Supercomputing, HPC Department
Politechnika Wr., pl. Grunwaldzki 9, bud. D2/101, 50-377 Wroclaw, POLAND
phone: +48 71 3202043, fax: +48 71 3225797, http://www.wcss.wroc.pl


Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-02-09 Thread Pawel Dziekonski
hi,

the saga continues. ;)

very basic benchmarks and surprising (at least for me) results - it
looks like reading is much slower than writing, and NFS/RDMA reads are
twice as slow as with classic NFS. :o

results below - comments appreciated!
regards, Pawel


both nfs server and client have 8 cores, 16 GB RAM, and Mellanox DDR
HCAs (MT25204) connected port to port (no switch).

local_hdd  - 2 SATA-2 disks in soft-raid0,
nfs_ipoeth - classic nfs over ethernet,
nfs_ipoib  - classic nfs over IPoIB,
nfs_rdma   - NFS/RDMA.

simple write of a 36 GB file with dd (both machines have 16 GB RAM):
/usr/bin/time -p dd if=/dev/zero of=/mnt/qqq bs=1M count=36000

local_hdd    sys 54.52  user 0.04  real  254.59
nfs_ipoib    sys 36.35  user 0.00  real  266.63
nfs_rdma     sys 39.03  user 0.02  real  323.77
nfs_ipoeth   sys 34.21  user 0.01  real  375.24
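
(With 16 GB of RAM on each side, part of such a write is still absorbed
by the client page cache. A variant that takes the cache out of the
picture, a sketch assuming a kernel and NFS client that accept O_DIRECT,
would be:

  /usr/bin/time -p dd if=/dev/zero of=/mnt/qqq bs=1M count=36000 oflag=direct
  /usr/bin/time -p dd if=/dev/zero of=/mnt/qqq bs=1M count=36000 conv=fsync

oflag=direct bypasses the page cache entirely, while conv=fsync keeps
the cache but includes the final flush in the elapsed time.)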

remount /mnt to clear the cache, then read the file back from the nfs
share and write it to a local disk (/scratch):
/usr/bin/time -p dd if=/mnt/qqq of=/scratch/qqq bs=1M

nfs_ipoib    sys 59.04  user 0.02  real  571.57
nfs_ipoeth   sys 58.92  user 0.02  real  606.61
nfs_rdma     sys 62.57  user 0.03  real 1296.36
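
(For reference, the remount is just an umount/mount pair. A sketch,
assuming the 10.2.2.1:/scratch export and rdma options from the
2008-02-06 message below:

  umount /mnt
  mount.nfs 10.2.2.1:/scratch /mnt -o rdma,port=2050
  /usr/bin/time -p dd if=/mnt/qqq of=/scratch/qqq bs=1M

On 2.6.16 and later kernels, echo 3 > /proc/sys/vm/drop_caches after a
sync drops the page cache without remounting.)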



results from bonnie++:

Version 1.03c        -Sequential Write- -Rewrite-- -Sequential Read- -Random Seeks-
Machine        Size       K/sec  %CP    K/sec  %CP      K/sec  %CP      /sec  %CP
local_hdd  35G:128k       93353   12    58329    6     143293    7     243.6    1
local_hdd  35G:256k       92283   11    58189    6     144202    8     172.2    2
local_hdd  35G:512k       93879   12    57715    6     144167    8     128.2    4
local_hdd 35G:1024k       93075   12    58637    6     144172    8      95.3    7
nfs_ipoeth 35G:128k       91325    7    31848    4      64299    4     170.2    1
nfs_ipoeth 35G:256k       90668    7    32036    5      64542    4     163.2    2
nfs_ipoeth 35G:512k       93348    7    31757    5      64454    4      85.7    3
nfs_ipoeth 35G:1024k      91283    7    31869    5      64241    5      51.7    4
nfs_ipoib  35G:128k       91733    7    36641    5      65839    4     178.4    2
nfs_ipoib  35G:256k       92453    7    36567    6      66682    4     166.9    3
nfs_ipoib  35G:512k       91157    7    37660    6      66318    4      86.8    3
nfs_ipoib  35G:1024k      92111    7    35786    6      66277    5      53.3    4
nfs_rdma   35G:128k       91152    8    29942    5      32147    2     187.0    1
nfs_rdma   35G:256k       89772    7    30560    5      34587    2     158.4    3
nfs_rdma   35G:512k       91290    7    29698    5      34277    2      60.9    2
nfs_rdma  35G:1024k       91336    8    29052    5      31742    2      41.5    3

                 ------Sequential Create------ -------Random Create-------
           files -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
local_hdd     16 10587  36 +++++ +++  8674  29 10727  35 +++++ +++  7015  28
local_hdd     16 11372  41 +++++ +++  8490  29 11192  43 +++++ +++  6881  27
local_hdd     16 10789  35 +++++ +++  8520  29 11468  46 +++++ +++  6651  24
local_hdd     16 10841  40 +++++ +++  8443  28 11162  41 +++++ +++  6441  22
nfs_ipoeth    16  3753   7 13390  12  3795   7  3773   8 22181  16  3635   7
nfs_ipoeth    16  3762   8 12358   7  3713   8  3753   7 20448  13  3632   6
nfs_ipoeth    16  3834   7 12697   6  3729   8  3725   9 22807  11  3673   7
nfs_ipoeth    16  3729   8 14260  10  3774   7  3744   7 25285  14  3688   7
nfs_ipoib     16  6803  17 +++++ +++  6843  15  6820  14 +++++ +++  5834  11
nfs_ipoib     16  6587  16 +++++ +++  4959   9  6832  14 +++++ +++  5608  12
nfs_ipoib     16  6820  18 +++++ +++  6636  15  6479  15 +++++ +++  5679  13
nfs_ipoib     16  6475  14 +++++ +++  6435  14  5543  11 +++++ +++  5431  11
nfs_rdma      16  7014  15 +++++ +++  6714  10  7001  14 +++++ +++  5683   8
nfs_rdma      16  7038  13 +++++ +++  6713  12  6956  11 +++++ +++  5488   8
nfs_rdma      16  7058  12 +++++ +++  6797  11  6989  14 +++++ +++  5761   9
nfs_rdma      16  7201  13 +++++ +++  6821  12  7072  15 +++++ +++  5609   9
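
(The exact bonnie++ invocation isn't shown above; a plausible
reconstruction matching the table, 35 GB working set with varying chunk
size, 16*1024 files, per-character tests skipped, would be:

  bonnie++ -d /mnt -s 35g:128k -n 16 -f -u root

repeated with -s 35g:256k, 35g:512k and 35g:1024k. The -f flag skips the
slow per-character phases, and +++++ entries mean a phase finished too
quickly for bonnie++ to report a meaningful figure.)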


-- 
Pawel Dziekonski [EMAIL PROTECTED]
Wroclaw Centre for Networking & Supercomputing, HPC Department
Politechnika Wr., pl. Grunwaldzki 9, bud. D2/101, 50-377 Wroclaw, POLAND
phone: +48 71 3202043, fax: +48 71 3225797, http://www.wcss.wroc.pl


Re: [nfs-rdma-devel] [ofa-general] Status of NFS-RDMA ? (fwd)

2008-02-06 Thread Pawel Dziekonski
On Wed, 06 Feb 2008 at 11:54:51AM -0600, Tom Tucker wrote:
> Pawel:
>
> On Wed, 2008-02-06 at 12:19 -0500, James Lentini wrote:
>> -- Forwarded message --
>>   cat /proc/fs/nfsd/portlist
>>
>> # cat /proc/fs/nfsd/portlist
>> tcp 0.0.0.0, port=2049
>> udp 0.0.0.0, port=2049
>
> From the output of the portlist file, I can tell that you have a patch
> that I have since removed from the tree. The syntax for creating a
> listener with this patch is different from James' README. The syntax
> with that patch is as follows:
>
> echo rdma 2 0.0.0.0 2050 > /proc/fs/nfsd/portlist

and it works!!!

server:

# echo rdma 2 0.0.0.0 2050 > /proc/fs/nfsd/portlist
# cat /proc/fs/nfsd/portlist
rdma 0.0.0.0, port=2050
tcp 0.0.0.0, port=2049
udp 0.0.0.0, port=2049

client:

# mount.nfs 10.2.2.1:/scratch /mnt -i -o rdma,port=2050 -v
mount.nfs: timeout set for Wed Feb  6 19:30:18 2008
mount.nfs: text-based options: 'rdma,port=2050,addr=10.2.2.1'
10.2.2.1:/scratch on /mnt type nfs (rdma,port=2050)
# ls -la /mnt
total 28
drwxr-xr-x   3 root root  4096 Feb  6 12:39 ./
drwxr-xr-x  24 root root  4096 Feb  6 13:33 ../
drwx--   2 root root 16384 Jan 25 16:29 lost+found/
-rw-r--r--   1 root root 0 Feb  6 12:39 qqq

thanks!!

I'm going to start performance tests now - I will report results.
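
(Before the performance runs it is worth confirming that traffic really
flows over RDMA rather than over a TCP fallback. A quick sketch,
assuming the mount above:

  grep /mnt /proc/mounts        # client: options should show the rdma transport
  cat /proc/fs/nfsd/portlist    # server: the rdma listener should still be listed
  perfquery                     # HCA port counters; compare before/after a big dd

A large dd should visibly advance the port data counters; note perfquery
reports PortXmitData/PortRcvData in 4-byte units.)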

cheers, P

-- 
Pawel Dziekonski [EMAIL PROTECTED]
Wroclaw Centre for Networking & Supercomputing, HPC Department
Politechnika Wr., pl. Grunwaldzki 9, bud. D2/101, 50-377 Wroclaw, POLAND
phone: +48 71 3202043, fax: +48 71 3225797, http://www.wcss.wroc.pl