I thought of posting the statistics for all cores but chose the sum
instead; here are the full details:
Client:
Tasks: 98 total, 2 running, 96 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 1.3%sy, 0.0%ni, 0.0%id, 98.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.0%us, 0.3%sy, 0.0%ni, 0.0%id, 99.3%wa, 0.0%hi, 0.3%si, 0.0%st
SAN:
Tasks: 221 total, 1 running, 220 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.3%us, 2.3%sy, 0.0%ni, 92.5%id, 4.9%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.3%us, 0.9%sy, 0.0%ni, 0.0%id, 98.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.3%us, 1.9%sy, 0.0%ni, 68.5%id, 28.3%wa, 0.0%hi, 1.0%si, 0.0%st
Cpu3 : 0.3%us, 0.6%sy, 0.0%ni, 65.1%id, 24.1%wa, 0.0%hi, 9.8%si, 0.0%st
Cpu4 : 0.0%us, 1.3%sy, 0.0%ni, 36.2%id, 62.6%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 : 0.7%us, 0.3%sy, 0.0%ni, 69.4%id, 29.3%wa, 0.3%hi, 0.0%si, 0.0%st
Cpu6 : 0.0%us, 1.0%sy, 0.0%ni, 82.1%id, 16.6%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu7 : 1.2%us, 0.9%sy, 0.0%ni, 86.6%id, 11.2%wa, 0.0%hi, 0.0%si, 0.0%st
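
The striking figure in both listings is %wa, the time each CPU spends waiting on I/O, which top derives from the iowait column of /proc/stat. As a rough sketch, the same per-CPU figure can be pulled directly from /proc/stat (note this is cumulative since boot, so it will not match top's interval-based numbers exactly):

```shell
# Per-CPU iowait as a fraction of all jiffies accumulated since boot
# (top computes the same ratio over its sampling interval instead).
awk '/^cpu[0-9]/ {
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    # field layout: cpuN user nice system idle iowait irq softirq steal ...
    printf "%s iowait %.1f%%\n", $1, 100 * $6 / total
}' /proc/stat
```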
- Here are some more statistics from bonnie++ on the iSCSI drive:
Version 1.93c ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
BLIZZARD 4G 411 99 73792 22 70240 17 1035 99 178117 23 7469 186
Latency 19616us 1010ms 836ms 9074us 189ms 2537us
Version 1.93c ------Sequential Create------ --------Random Create--------
BLIZZARD -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 3043 28 +++++ +++ 4010 29 3914 32 +++++ +++ 3034 23
Latency 24691us 446us 8549us 14497us 87us 23648us
1.93c,1.93c,BLIZZARD,1,1259236182,4G,,411,99,73792,22,70240,17,1035,99,178117,23,7469,186,16,,,,,3043,28,+++++,+++,4010,29,3914,32,+++++,+++,3034,23,19616us,1010ms,836ms,9074us,189ms,2537us,24691us,446us,8549us,14497us,87us,23648us
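
The trailing comma-separated line is bonnie++'s machine-readable summary of the same run. A quick awk sketch to pull the headline throughput figures out of it (the field positions assume the 1.93 CSV layout shown above):

```shell
# bonnie++ machine-readable summary line from the run above;
# field positions assume the 1.93 CSV layout.
line='1.93c,1.93c,BLIZZARD,1,1259236182,4G,,411,99,73792,22,70240,17,1035,99,178117,23,7469,186,16,,,,,3043,28,+++++,+++,4010,29,3914,32,+++++,+++,3034,23,19616us,1010ms,836ms,9074us,189ms,2537us,24691us,446us,8549us,14497us,87us,23648us'

echo "$line" | awk -F, '{
    printf "seq write : %.1f MB/s\n", $10 / 1024   # block sequential output
    printf "rewrite   : %.1f MB/s\n", $12 / 1024
    printf "seq read  : %.1f MB/s\n", $16 / 1024   # block sequential input
    printf "seeks     : %s /sec\n",   $18
}'
# → seq write : 72.1 MB/s
#   rewrite   : 68.6 MB/s
#   seq read  : 173.9 MB/s
#   seeks     : 7469 /sec
```

So even the best sequential read, about 174 MB/s, is well below the 480 MB/s the disks do locally.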
Chris K.
On Wed, Nov 25, 2009 at 7:18 PM, Mike Christie <[email protected]> wrote:
> Boaz Harrosh wrote:
>>
>> On 11/24/2009 06:07 PM, Chris K. wrote:
>>>
>>> Hello,
>>> I'm writing in regards to the performance with open-iscsi on a
>>> 10gbe network. On your website you posted performance results
>>> indicating you reached read and write speeds of 450 MegaBytes per
>>> second.
>>>
>>> In our environment we use Myricom dual-channel 10GbE network cards on
>>> a Gentoo Linux system, connected via fiber to a 10GbE-interfaced SAN
>>> with a mounted RAID 0 volume of four 15,000 RPM SAS drives.
>>
>> That is the iscsi-target machine, right?
>> What is the SW environment of the initiator box?
>>
>>> Unfortunately, the maximum speed we are achieving is 94 MB/s. We
>>> know that the network interfaces can stream data at 822 MB/s (results
>>> obtained with netperf), and that local read performance on the
>>> disks is 480 MB/s. When using netcat or a direct TCP/IP connection we
>>> get speeds in that range; however, when we connect a volume via the
>>> iSCSI protocol using the open-iscsi initiator we drop to 94 MB/s
>>> (best result, obtained with bonnie++ and dd).
>>>
>>
>> What iscsi target are you using?
>>
>> Mike, is it still best to use no-op-io-scheduler on initiator?
>>
>
> Sometimes.
>
> Chris, try doing
>
> echo noop > /sys/block/sdXYZ/queue/scheduler
>
> Then rerun your tests.
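
A quick way to confirm the change took: the scheduler file lists every available elevator with the active one in brackets. A sed sketch against a sample string (on the real system, cat /sys/block/sdXYZ/queue/scheduler, with sdXYZ as the placeholder above, and pipe it through the same sed):

```shell
# The active elevator appears in brackets. A sample string stands in
# for the sysfs file contents here.
line='[noop] anticipatory deadline cfq'
echo "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p'
# → noop
```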
>
> For your tests you might want something that can do more IO. If you
> can, try disktest or fio, or even run multiple dds at the same time.
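
As a rough sketch of the multiple-dd idea: several parallel readers at different offsets keep more I/O outstanding than a single stream. A scratch file stands in for the iSCSI device below; substitute the real block device (e.g. if=/dev/sdXYZ, a placeholder) for an actual measurement.

```shell
# Several dds reading disjoint 4 MB regions in parallel; a 16 MB scratch
# file is used here so the sketch is self-contained.
target=$(mktemp)
dd if=/dev/zero of="$target" bs=1M count=16 2>/dev/null
for off in 0 4 8 12; do
    dd if="$target" of=/dev/null bs=1M count=4 skip="$off" 2>/dev/null &
done
wait
echo "all readers finished"
rm -f "$target"
```

fio can drive similar concurrent load in one invocation, e.g. with --rw=read --bs=1M --direct=1 --numjobs=4.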
>
> Also what is the output of
>
> iscsiadm -m session -P 3
>