Erez Zilber wrote:
> Hi,
> 
> I'm running some performance tests with open-iscsi (iops & throughput).
> With open-iscsi over TCP, I see very low numbers:
> 
>     * iops: READ - 20000, WRITE - 13000
>     * throughput: READ - 185, WRITE - 185
> 
> In open-iscsi.org, I see much higher numbers. Did anyone measure
> open-iscsi performance lately? Can you share your numbers? Which

Yeah, I have been running tests for a while. What test program are you 
using, what IO sizes, and which IO scheduler, kernel, and NIC module?

And what units are the throughput values in? Is that 185 KB/s?

With smaller IOs I get really bad IOPS numbers. We just talked about this 
on the list. For throughput, though, with larger IOs I get the following 
(these are from some tests I did while testing what I was putting into git):

disktest -PT -T30 -h1 -K32 -B256k -ID /dev/sdb -D 0:100
| 2007/11/18-12:54:17 | STAT  | 4176 | v1.2.8 | /dev/sdb | Write throughput: 117615274.7B/s (112.17MB/s), IOPS 454.0/s.

disktest -PT -T30 -h1 -K32 -B256k -ID /dev/sdb
| 2007/11/18-12:49:58 | STAT  | 3749 | v1.2.8 | /dev/sdb | Read throughput: 96521420.8B/s (92.05MB/s), IOPS 374.6/s.

Normally for reads I also get 112 MB/s. For some reason, with the home 
setup where I took those numbers, I am getting really bad read numbers. 
I did a patch to switch the read path to always use a thread, and then 
the throughput went back to 112. Not sure if you are hitting that 
problem, but I also noticed that with some targets and workloads I need 
to switch to the noop IO scheduler instead of cfq, or throughput drops 
to 3-12 MB/s even with large IOs like the above.
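
If you want to try noop, the scheduler is set per block device through sysfs (sdb here is just the device from my runs above; substitute your iSCSI disk):

# show the current scheduler, the active one is in brackets
cat /sys/block/sdb/queue/scheduler
# switch to noop for this device
echo noop > /sys/block/sdb/queue/scheduler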

This is using the default values in iscsid.conf.
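
For reference, the iscsid.conf settings that usually matter most for throughput are the ones below. The values shown are only what a typical iscsid.conf of this era ships with, so check your own file rather than taking them as exact defaults:

node.session.cmds_max = 128
node.session.queue_depth = 32
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.HeaderDigest = None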
