Re: openiscsi 10gbe network

2009-11-30 Thread Ulrich Windl
On 26 Nov 2009 at 11:06, Chris K. wrote:

 I thought of posting the statistics for all cores but chose the sum
 instead; here are all the details:
 
 Client :
 Tasks:  98 total,   2 running,  96 sleeping,   0 stopped,   0 zombie
 Cpu0  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu1  :  0.0%us,  1.3%sy,  0.0%ni,  0.0%id, 98.7%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu3  :  0.0%us,  0.3%sy,  0.0%ni,  0.0%id, 99.3%wa,  0.0%hi,  0.3%si,  0.0%st

The 99.x% wait clearly shows what's going on.

 
 SAN :
 Tasks: 221 total,   1 running, 220 sleeping,   0 stopped,   0 zombie
 Cpu0  :  0.3%us,  2.3%sy,  0.0%ni, 92.5%id,  4.9%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu1  :  0.3%us,  0.9%sy,  0.0%ni,  0.0%id, 98.7%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu2  :  0.3%us,  1.9%sy,  0.0%ni, 68.5%id, 28.3%wa,  0.0%hi,  1.0%si,  0.0%st
 Cpu3  :  0.3%us,  0.6%sy,  0.0%ni, 65.1%id, 24.1%wa,  0.0%hi,  9.8%si,  0.0%st
 Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 36.2%id, 62.6%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu5  :  0.7%us,  0.3%sy,  0.0%ni, 69.4%id, 29.3%wa,  0.3%hi,  0.0%si,  0.0%st
 Cpu6  :  0.0%us,  1.0%sy,  0.0%ni, 82.1%id, 16.6%wa,  0.0%hi,  0.3%si,  0.0%st
 Cpu7  :  1.2%us,  0.9%sy,  0.0%ni, 86.6%id, 11.2%wa,  0.0%hi,  0.0%si,  0.0%st

It seems clear that your bottleneck is I/O.

Regards,
Ulrich





Re: openiscsi 10gbe network

2009-11-27 Thread Chris K.
I thought of posting the statistics for all cores but chose the sum
instead; here are all the details:

Client :
Tasks:  98 total,   2 running,  96 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  1.3%sy,  0.0%ni,  0.0%id, 98.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni,  0.0%id, 99.3%wa,  0.0%hi,  0.3%si,  0.0%st

SAN :
Tasks: 221 total,   1 running, 220 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.3%us,  2.3%sy,  0.0%ni, 92.5%id,  4.9%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.3%us,  0.9%sy,  0.0%ni,  0.0%id, 98.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.3%us,  1.9%sy,  0.0%ni, 68.5%id, 28.3%wa,  0.0%hi,  1.0%si,  0.0%st
Cpu3  :  0.3%us,  0.6%sy,  0.0%ni, 65.1%id, 24.1%wa,  0.0%hi,  9.8%si,  0.0%st
Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 36.2%id, 62.6%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.7%us,  0.3%sy,  0.0%ni, 69.4%id, 29.3%wa,  0.3%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  1.0%sy,  0.0%ni, 82.1%id, 16.6%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu7  :  1.2%us,  0.9%sy,  0.0%ni, 86.6%id, 11.2%wa,  0.0%hi,  0.0%si,  0.0%st


- Here are some more statistics with bonnie++ on the iscsi drive:

Version 1.93c   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
BLIZZARD     4G   411  99 73792  22 70240  17  1035  99 178117 23  7469 186
Latency         19616us   1010ms    836ms     9074us    189ms    2537us
Version 1.93c   --Sequential Create-- Random Create
BLIZZARD        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16  3043  28 +++++ +++  4010  29  3914  32 +++++ +++  3034  23
Latency         24691us    446us    8549us    14497us    87us     23648us
1.93c,1.93c,BLIZZARD,1,1259236182,4G,,411,99,73792,22,70240,17,1035,99,178117,23,7469,186,16,3043,28,+++++,+++,4010,29,3914,32,+++++,+++,3034,23,19616us,1010ms,836ms,9074us,189ms,2537us,24691us,446us,8549us,14497us,87us,23648us
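
The last line is bonnie++'s machine-readable CSV record; the
bon_csv2html tool that ships with bonnie++ should turn such a line
into an HTML table, e.g. with the output above saved to a
hypothetical file bonnie.out:

  tail -1 bonnie.out | bon_csv2html > bonnie.html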

Chris K.

On Wed, Nov 25, 2009 at 7:18 PM, Mike Christie micha...@cs.wisc.edu wrote:
 Boaz Harrosh wrote:

 On 11/24/2009 06:07 PM, Chris K. wrote:

 Hello,
    I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.

 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.

 That is the iscsi-target machine, right?
 What is the SW environment of the initiator box?

 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).


 What iscsi target are you using?

 Mike, is it still best to use no-op-io-scheduler on initiator?


 Sometimes.

 Chris, try doing

 echo noop > /sys/block/sdXYZ/queue/scheduler

 Then rerun your tests.

 For your tests you might want something that can do more IO. If you
 can, try disktest or fio, or even do multiple dds at the same time.

 Also what is the output of

 iscsiadm -m session -P 3






Re: openiscsi 10gbe network

2009-11-27 Thread Chris K.

I thought of posting the individual core statistics but opted for the
sum; here are all the details during the dd transfer:

Client :
top - 05:33:59 up 5 days, 17:03,  2 users,  load average: 0.46, 0.10, 0.03
Tasks:  98 total,   2 running,  96 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us, 19.0%sy,  0.0%ni,  0.0%id, 81.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,100.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  2.0%sy,  0.0%ni,  0.0%id, 92.0%wa,  0.3%hi,  5.7%si,  0.0%st

SAN :

Tasks: 219 total,   1 running, 218 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.6%us,  2.9%sy,  0.0%ni, 88.0%id,  7.6%wa,  0.0%hi,  0.9%si,  0.0%st
Cpu1  :  1.3%us,  1.6%sy,  0.0%ni, 85.9%id,  9.8%wa,  0.0%hi,  1.3%si,  0.0%st
Cpu2  :  0.3%us,  3.1%sy,  0.0%ni, 87.0%id,  8.7%wa,  0.3%hi,  0.6%si,  0.0%st
Cpu3  :  0.6%us,  1.3%sy,  0.0%ni, 90.3%id,  7.8%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.6%us,  2.2%sy,  0.0%ni, 90.2%id,  6.6%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu5  :  0.6%us,  4.4%sy,  0.0%ni, 86.3%id,  8.4%wa,  0.3%hi,  0.0%si,  0.0%st
Cpu6  :  0.9%us,  4.3%sy,  0.0%ni, 86.5%id,  7.3%wa,  0.0%hi,  0.9%si,  0.0%st
Cpu7  :  0.3%us,  6.2%sy,  0.0%ni, 84.0%id,  8.6%wa,  0.0%hi,  0.9%si,  0.0%st
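
A side note on those numbers: the client sits largely in iowait while
the SAN stays mostly idle, which usually means the initiator is
waiting on round trips rather than on disks. One thing worth trying is
a deeper command queue per session; a sketch for /etc/iscsi/iscsid.conf
(the values are guesses to experiment with, applied before the next
login):

node.session.cmds_max = 1024
node.session.queue_depth = 128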


Chris K.

On Nov 26, 2:55 am, Ulrich Windl ulrich.wi...@rz.uni-regensburg.de wrote:
 On 25 Nov 2009 at 14:15, Chris K. wrote:

  Here are the cpu values :
  Cpu(s):  0.0%us,  8.7%sy,  0.0%ni, 25.0%id, 64.0%wa,  0.4%hi,

 A note: I don't know how well open-iscsi uses multiple threads, but
 looking at individual CPUs may be interesting, as the above is only an
 average for multiple CPUs. Press '1' in top to switch to individual
 CPU display. Hope you don't have too many cores ;-)

 Here's an example of the two different displays:

 Cpu(s): 23.0%us,  1.2%sy,  0.0%ni, 73.8%id,  1.9%wa,  0.0%hi,  0.2%si,  0.0%st

 Cpu0  :  4.2%us,  0.5%sy,  0.1%ni, 89.2%id,  5.6%wa,  0.1%hi,  0.3%si,  0.0%st
 Cpu1  :  4.8%us,  0.5%sy,  0.1%ni, 94.0%id,  0.6%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu2  :  7.9%us,  0.7%sy,  0.0%ni, 90.7%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu3  :  8.6%us,  0.7%sy,  0.0%ni, 90.2%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st

 Have fun!
 Ulrich





Re: openiscsi 10gbe network

2009-11-25 Thread Pasi Kärkkäinen
On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
 Hello,
 I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.
 
 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.
 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).


What block size are you using with dd? 
Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768
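
To take the initiator's page cache out of that measurement, the same
read can be repeated with direct I/O (/dev/foo again standing in for
the iscsi disk):

dd if=/dev/foo of=/dev/null bs=1024k count=32768 iflag=direct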

How's the CPU usage on both the target and the initiator when you run
that? Is there iowait?

Did you try with nullio LUN from the target?

-- Pasi





Re: openiscsi 10gbe network

2009-11-25 Thread Chris K.
The dd command I am running is:
time dd if=/dev/zero bs=1024k of=/mnt/iscsi/10gfile.txt count=10240
My fs is xfs; these are the parameters used to format the drive:
mkfs.xfs -d agcount=8 -l internal,size=128m -n size=8k -i size=2048 /dev/sdb1 -f

Here are the top values:
Cpu(s):  0.0%us,  6.1%sy,  0.0%ni, 25.0%id, 67.2%wa,  0.1%hi,  1.7%si,  0.0%st

I have not tried a nullio LUN from the target. I'm not sure how to go
about it, actually...
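
For what it's worth, if the target really is IET/iscsitarget, a nullio
LUN is normally declared in /etc/ietd.conf; an untested sketch with a
made-up target name:

Target iqn.2009-11.com.example:nullio
    Lun 0 Sectors=20971520,Type=nullio

A nullio LUN discards writes and returns zeros on reads, so it
exercises the network and the iscsi stack with the disks taken out of
the picture.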

Thanks for your help !

On Nov 25, 5:04 am, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
  Hello,
      I'm writing in regards to the performance with open-iscsi on a
  10gbe network. On your website you posted performance results
  indicating you reached read and write speeds of 450 MegaBytes per
  second.

  In our environment we use Myricom dual channel 10gbe network cards on
  a gentoo linux system connected via fiber to a 10gbe interfaced SAN
  with a raid 0 volume mounted with 4 15000rpm SAS drives.
  Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
  know that the network interfaces can stream data at 822 MB/s (results
  obtained with netperf). We know that local read performance on the
  disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
  get speeds in this range; however, when we connect a volume via the
  iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
  (best result, obtained with bonnie++ and dd).

 What block size are you using with dd?
 Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768

 How's the CPU usage on both the target and the initiator when you run
 that? Is there iowait?

 Did you try with nullio LUN from the target?

 -- Pasi





Re: openiscsi 10gbe network

2009-11-25 Thread Chris K.
Thank you for your response. The SAN is a 10gbe Nimbus running what I
believe to be iscsitarget (http://iscsitarget.sourceforge.net/) as its
target server.
The switch is a Cisco Nexus 5010 set to jumbo frames and flow control.
Through tcp/ip performance tests run in conjunction with Cisco, we
have verified that this works. Furthermore, using netcat and dd
together we have achieved speeds around 200 MB/s. That is far from the
822 MB/s shown in our netperf testing and in Cisco's performance
tests, but it is well above what we are getting with iscsi at 94 MB/s,
which is effectively gigabit speed, not 10gbe speed.
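
For reference, a netcat test of that sort is usually run along these
lines (host name and port are placeholders, and flag syntax varies
between netcat flavours):

nc -l -p 5000 > /dev/null                                # on the SAN
dd if=/dev/zero bs=1024k count=10240 | nc san-host 5000  # on the client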

I am not familiar with the noop I/O scheduler; where exactly is it
set, and what are its implications?

Thank you once again for your help.

On Wed, Nov 25, 2009 at 4:11 AM, Boaz Harrosh bharr...@panasas.com wrote:
 On 11/24/2009 06:07 PM, Chris K. wrote:
 Hello,
     I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.

 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.

 That is the iscsi-target machine, right?
 What is the SW environment of the initiator box?

 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).


 What iscsi target are you using?

 Mike, is it still best to use no-op-io-scheduler on initiator?

 Boaz
 We were wondering if you would have any recommendations in terms of
 configuring the initiator or perhaps the linux system to achieve
 higher throughput.
 We have also set the interfaces on both ends to jumbo frames (mtu
 9000). We have also modified sysctl parameters to look as follows:

 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.ipv4.tcp_rmem = 4096 87380 16777216
 net.ipv4.tcp_wmem = 4096 65536 16777216
 net.core.netdev_max_backlog = 25

 Any help would greatly be appreciated,
 Thank you for your time and  your work.







Re: openiscsi 10gbe network

2009-11-25 Thread Chris K.

Here is the dd command:
time dd if=/dev/zero bs=1024k of=/mnt/iscsi/10gfile.txt count=10240

Here are the cpu values:
Cpu(s):  0.0%us,  8.7%sy,  0.0%ni, 25.0%id, 64.0%wa,  0.4%hi,  1.9%si,  0.0%st - Client
Cpu(s):  0.6%us,  2.8%sy,  0.0%ni, 86.4%id,  9.7%wa,  0.0%hi,  0.4%si,  0.0%st - SAN
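
A useful companion to those averages is per-device output from iostat
(in the sysstat package), run while the dd is going; sdb is an assumed
device name here:

iostat -x 1 /dev/sdb

The await and %util columns show whether the iscsi disk itself is
saturated or merely waiting on the network.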

I have not tried the nullio LUN from the target; I'm not sure how to
go about it.

Thank you for your help.


On Nov 25, 5:04 am, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
  Hello,
      I'm writing in regards to the performance with open-iscsi on a
  10gbe network. On your website you posted performance results
  indicating you reached read and write speeds of 450 MegaBytes per
  second.

  In our environment we use Myricom dual channel 10gbe network cards on
  a gentoo linux system connected via fiber to a 10gbe interfaced SAN
  with a raid 0 volume mounted with 4 15000rpm SAS drives.
  Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
  know that the network interfaces can stream data at 822 MB/s (results
  obtained with netperf). We know that local read performance on the
  disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
  get speeds in this range; however, when we connect a volume via the
  iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
  (best result, obtained with bonnie++ and dd).

 What block size are you using with dd?
 Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768

 How's the CPU usage on both the target and the initiator when you run
 that? Is there iowait?

 Did you try with nullio LUN from the target?

 -- Pasi





Re: openiscsi 10gbe network

2009-11-25 Thread Mike Christie
Boaz Harrosh wrote:
 On 11/24/2009 06:07 PM, Chris K. wrote:
 Hello,
 I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.

 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.
 
 That is the iscsi-target machine, right?
 What is the SW environment of the initiator box?
 
 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).

 
 What iscsi target are you using?
 
 Mike, is it still best to use no-op-io-scheduler on initiator?
 

Sometimes.

Chris, try doing

echo noop > /sys/block/sdXYZ/queue/scheduler

Then rerun your tests.
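
The active scheduler can be double-checked with

cat /sys/block/sdXYZ/queue/scheduler

which lists the available schedulers with the current one in brackets,
something like: [noop] anticipatory deadline cfq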

For your tests you might want something that can do more IO. If you
can, try disktest or fio, or even do multiple dds at the same time.
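
For example, a fio run along these lines keeps 32 requests in flight
instead of dd's one at a time (the device name and the numbers are
placeholders to experiment with):

fio --name=seqread --filename=/dev/sdXYZ --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based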

Also what is the output of

iscsiadm -m session -P 3
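
In that output the negotiated parameters are the part to check,
MaxRecvDataSegmentLength, MaxBurstLength and FirstBurstLength in
particular. If they negotiated low they can be raised on the node
record and the session logged in again; a sketch, with the target name
and portal as placeholders:

iscsiadm -m node -T iqn.example:target -p 192.168.0.1 \
    -o update -n node.conn[0].iscsi.MaxRecvDataSegmentLength -v 262144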





Re: openiscsi 10gbe network

2009-11-25 Thread Ulrich Windl
On 25 Nov 2009 at 14:15, Chris K. wrote:

 Here are the cpu values :
 Cpu(s):  0.0%us,  8.7%sy,  0.0%ni, 25.0%id, 64.0%wa,  0.4%hi,

A note: I don't know how well open-iscsi uses multiple threads, but
looking at individual CPUs may be interesting, as the above is only an
average for multiple CPUs. Press '1' in top to switch to individual
CPU display. Hope you don't have too many cores ;-)

Here's an example of the two different displays:

Cpu(s): 23.0%us,  1.2%sy,  0.0%ni, 73.8%id,  1.9%wa,  0.0%hi,  0.2%si,  0.0%st

Cpu0  :  4.2%us,  0.5%sy,  0.1%ni, 89.2%id,  5.6%wa,  0.1%hi,  0.3%si,  0.0%st
Cpu1  :  4.8%us,  0.5%sy,  0.1%ni, 94.0%id,  0.6%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  7.9%us,  0.7%sy,  0.0%ni, 90.7%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  8.6%us,  0.7%sy,  0.0%ni, 90.2%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st

Have fun!
Ulrich
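
If interactive top is awkward to capture, mpstat from the sysstat
package prints the same per-CPU breakdown non-interactively, e.g. one
sample per second for all CPUs:

mpstat -P ALL 1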
