Re: iSCSI overhead

2008-12-23 Thread Jerome Martin
I think Rudolph is just asking why disk I/O at 50MB/s translates to 80
to 100MB/s on the wire, i.e. 60 to 100% overhead in the protocol vs. the
actual data being transferred.
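As a quick sanity check of that arithmetic (using the throughput figures from the thread, not new measurements):

```python
# Apparent protocol overhead if 50 MB/s of payload shows up as
# 80-100 MB/s of wire traffic (figures quoted from the thread):
payload = 50  # MB/s of useful data read by dd
for wire in (80, 100):  # MB/s reported by dstat on the target
    overhead = (wire - payload) / payload
    print(f"{wire} MB/s on the wire -> {overhead:.0%} apparent overhead")
```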

On Tue, Dec 23, 2008 at 3:37 PM, Bart Van Assche
wrote:

>
> On Tue, Dec 23, 2008 at 3:00 PM,   wrote:
> > Hmmm maybe you misunderstood me - a performance improvement would be nice
> but that was not the point of my mail: I'm just wondering why there seems to
> be such a big overhead on the network while doing synchronous reads and
> writes to the iSCSI device.
>
> It's possible that I misunderstood you. I should also have made more
> clear what I had in mind, namely that the readahead settings on the
> initiator system have a significant impact on random I/O performance.
> But apparently you were running a sequential I/O test?
>
> Bart.
>
> >
>

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: iSCSI overhead

2008-12-23 Thread Bart Van Assche

On Tue, Dec 23, 2008 at 3:00 PM,   wrote:
> Hmmm maybe you misunderstood me - a performance improvement would be nice but 
> that was not the point of my mail: I'm just wondering why there seems to be 
> such a big overhead on the network while doing synchronous reads and writes to 
> the iSCSI device.

It's possible that I misunderstood you. I should also have made more
clear what I had in mind, namely that the readahead settings on the
initiator system have a significant impact on random I/O performance.
But apparently you were running a sequential I/O test?

Bart.




Re: iSCSI overhead

2008-12-23 Thread rb

- "Bart Van Assche"  wrote:

> On Tue, Dec 23, 2008 at 12:21 PM,   wrote:
> > my storage backend system is capable of handling writes at around 120MB/sec.
> > It's running the iSCSI Enterprise Target under Ubuntu hardy and the open-iscsi
> > initiators are Debian etch systems, all connected via gigabit LAN. Yesterday I
> > did some load tests and it looks like I won't get any further than 50MB/sec
> > (read and write) on the client side. This is enough for my purpose, but what
> > struck me during the tests is that dstat showed network traffic of roughly
> > 80-100MB/sec coming in on the target side. Is this a problem of dstat, or does
> > iSCSI really add 100% transportation overhead? I haven't done any tweaks to
> > this side of the setup, and the tests were done via dd (writing/reading from/to
> > /dev/zero and /dev/null with the fsync option set).
> 
> Please check the readahead settings, as is e.g. explained in this
> thread:
> http://groups.google.com/group/open-iscsi/browse_thread/thread/37741fb3b3eca1e4/3f1fb18a136ff00f?lnk=gst&q=readahead#3f1fb18a136ff00f.

Hmmm maybe you misunderstood me - a performance improvement would be nice but 
that was not the point of my mail: I'm just wondering why there seems to be 
such a big overhead on the network while doing synchronous reads and writes to 
the iSCSI device.

Read-ahead on the target is set to 16384 (on the raw disk). The SAN 
side is rather complex, involving 3ware RAID-10, DRBD replication and LVM on 
top to slice volumes for use with IET. The write performance (locally mounted 
LVM volume) of one SAN box in DRBD-disconnected mode is ~150MB/sec and goes 
down to 120MB/sec in connected mode (which is fine with me). The read-ahead on 
the initiator was set to the Linux default (I think 256 KB?).
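For reference, read-ahead on a block device can be inspected and changed with blockdev; the device name below is just a placeholder for the iSCSI disk on the initiator:

```shell
# Show the current read-ahead in 512-byte sectors (/dev/sdb is an example):
blockdev --getra /dev/sdb

# Set read-ahead to 16384 sectors (8 MiB):
blockdev --setra 16384 /dev/sdb
```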

The following is the output of dstat during the execution of "dd 
if=/mnt/testfile of=/dev/null".
/mnt/testfile was previously generated with data read from /dev/zero (2GB)

----total-cpu-usage---- -dsk/total- --net/eth0- --net/eth1- --net/eth2- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send: recv  send: recv  send|  in   out | int   csw
  3   9  57  28   0   3|  54M 8192B| 431B  484B: 112M  580k:   0     0 |   0     0 |5310  1380
  1  12  57  26   0   3|  51M   52k| 192B  484B:   0     0 :   0     0 |   0     0 |5107  1322
  1  12  59  25   1   4|  54M   16k| 251B  484B: 106M  565k:   0     0 |   0     0 |5380  1376
  2  15  60  20   1   3|  56M   20k| 192B  500B:   0     0 :   0     0 |   0     0 |5434  1411
  1  13  59  23   0   5|  53M    0 | 251B  484B: 112M  580k:   0     0 |   0     0 |5257  1373
  2  12  59  24   0   4|  56M 8192B| 192B  484B:   0     0 :   0     0 |   0     0 |5494  1422
  3   9  57  27   1   3|  52M    0 | 251B  484B: 112M  578k:   0     0 |   0     0 |5147  1330
  2   8  56  32   0   3|  49M   12k| 192B  484B:   0     0 :   0     0 |   0     0 |4791  1286
  2  11  57  26   0   2|  52M    0 | 251B  484B: 102M  540k:   0     0 |   0     0 |5032  1321
  3  11  58  26   1   2|  52M    0 | 192B  484B:   0     0 :   0     0 |   0     0 |5139  1319
  2  12  61  21   0   3|  54M    0 | 311B  484B: 105M  546k:   0     0 |   0     0 |5272  1372
  3   9  59  25   0   4|  57M    0 | 192B  484B:   0     0 :   0     0 |   0     0 |5585  1483
  2   9  55  31   1   3|  51M   12k| 251B  484B: 112M  582k:   0     0 |   0     0 |4986  1318
  2  12  60  24   0   2|  53M    0 | 192B  500B:   0     0 :   0     0 |   0     0 |5153  1329
  3  11  59  23   0   3|  54M    0 | 251B  484B: 105M  559k:   0     0 |   0     0 |5250  1360

While copying this I think I noticed what the problem with these statistics 
is :) Every line shows the system reading ~50MB from the local disk, but only 
every second line shows network traffic coming in - that would explain the 
"overhead" I thought I was seeing here last night. BTW, the initiator system is 
a Dual Opteron (2x 2GHz) and the MTU on the interface is set to 9000.
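That reading checks out numerically: averaging the alternating eth1 "recv" samples over the whole window (values transcribed from the dstat output above, in MiB/s) lands right at the disk read rate, so there is no 100% overhead after all:

```python
# eth1 recv samples per second from the dstat log above (MiB/s);
# every second sample is 0, apparently a dstat sampling artifact
recv = [112, 0, 106, 0, 112, 0, 112, 0, 102, 0, 105, 0, 112, 0]
avg = sum(recv) / len(recv)
print(f"average network input: {avg:.0f} MiB/s")  # close to the ~50 MB/s disk reads
```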

> 
> Bart.
> 
> 




Re: iSCSI overhead

2008-12-23 Thread Bart Van Assche

On Tue, Dec 23, 2008 at 12:21 PM,   wrote:
> my storage backend system is capable of handling writes at around 120MB/sec.
> It's running the iSCSI Enterprise Target under Ubuntu hardy and the open-iscsi
> initiators are Debian etch systems, all connected via gigabit LAN. Yesterday I
> did some load tests and it looks like I won't get any further than 50MB/sec
> (read and write) on the client side. This is enough for my purpose, but what
> struck me during the tests is that dstat showed network traffic of roughly
> 80-100MB/sec coming in on the target side. Is this a problem of dstat, or does
> iSCSI really add 100% transportation overhead? I haven't done any tweaks to
> this side of the setup, and the tests were done via dd (writing/reading from/to
> /dev/zero and /dev/null with the fsync option set).

Please check the readahead settings, as is e.g. explained in this
thread: 
http://groups.google.com/group/open-iscsi/browse_thread/thread/37741fb3b3eca1e4/3f1fb18a136ff00f?lnk=gst&q=readahead#3f1fb18a136ff00f.

Bart.




iSCSI overhead

2008-12-23 Thread rb

Hey Folks,

my storage backend system is capable of handling writes at around 120MB/sec. 
It's running the iSCSI Enterprise Target under Ubuntu hardy and the open-iscsi 
initiators are Debian etch systems, all connected via gigabit LAN. Yesterday I 
did some load tests and it looks like I won't get any further than 50MB/sec 
(read and write) on the client side. This is enough for my purpose, but what 
struck me during the tests is that dstat showed network traffic of roughly 
80-100MB/sec coming in on the target side. Is this a problem of dstat, or does 
iSCSI really add 100% transportation overhead? I haven't done any tweaks to 
this side of the setup, and the tests were done via dd (writing/reading from/to 
/dev/zero and /dev/null with the fsync option set).
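For comparison, the overhead that iSCSI over TCP/IP actually adds on the wire is small. A rough per-frame estimate using typical header sizes (standard 1500-byte MTU assumed; exact numbers depend on negotiated options and PDU sizes, and the 48-byte iSCSI basic header per PDU adds a little more):

```python
# Rough per-frame wire overhead for TCP/IPv4 over Ethernet:
mtu = 1500
tcp_payload = mtu - 20 - 20        # minus IPv4 and TCP headers
on_wire = mtu + 14 + 4 + 20        # Ethernet header, FCS, preamble + interframe gap
overhead = 1 - tcp_payload / on_wire
print(f"per-frame overhead: {overhead:.1%}")  # around 5%, nowhere near 100%
```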

With kind regards

Rudolph Bott
-- 
Megabit Informationstechnik GmbH
Karstr.25  41068 Moenchengladbach  Tel:02161/30898-0  Fax:-18
AG MG HRB 10141, GF: Dipl.-Ing. Thomas Tillig, Michael Benten 
