Re: iscsi performance via 10 Gig / Equallogic PS6010

2010-06-06 Thread Pasi Kärkkäinen
On Wed, May 26, 2010 at 10:32:58AM -0700, Taylor wrote:
> I'm curious what kind of performance numbers people can get from their
> iscsi setup, specifically via 10 Gig.
> 
> We are running with Linux servers connected to Dell Equallogic 10 Gig
> arrays on Suse.
> 
> Recently we were running under SLES 11, and with multipath were seeing
> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> were getting a large number of iscsi connection errors.  We are using
> 10 Gig NICs with jumbo frames.
> 
> We reimaged the server to OpenSuse, same hardware and configs
> otherwise, and since then we are getting about half, or 1.2 to 1.3
> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> had any iscsi connection errors.
> 
> What are other people seeing?  Doesn't need to be an equallogic, just
> any 10 Gig connection to an iscsi array and single host throughput
> numbers.
> 

Some Equallogic PS6010 10 Gbit numbers here..

NOTE! In this thread Vladislav said he has no problems reaching 10 Gbit
line rate in his environment using sequential IO with dd.
He was using 10 SAS disks in RAID-0, while in this Equallogic test
I *ONLY* have 8 SAS disks in RAID-10, so basically I get the write performance
of only 4 spindles. Vladislav had 2.5x more spindles/disk-performance available!

(Yeah, it's stupid to run a 10 Gbit performance test with only 8 disks in use,
and in RAID-10, but unfortunately I don't have other 10 Gbit equipment atm.)

Initiator is standard CentOS 5.5, using an Intel 10 Gbit NIC, with no
configuration tweaks done.
Should I adjust something? The queue depth of open-iscsi? NIC settings?
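
For example, I could try bumping the open-iscsi defaults along these lines
(just a sketch; the values below are guesses, not tested recommendations):

  # /etc/iscsi/iscsid.conf
  node.session.cmds_max = 1024        # default is 128
  node.session.queue_depth = 128      # default is 32

  # push the new queue depth to an already-discovered node and re-login
  # iscsiadm -m node -T <target-iqn> -o update -n node.session.queue_depth -v 128
  # iscsiadm -m node -T <target-iqn> --logout
  # iscsiadm -m node -T <target-iqn> --login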


dmesg:

scsi9 : iSCSI Initiator over TCP/IP
eth5: no IPv6 routers present
  Vendor: EQLOGIC   Model: 100E-00   Rev: 4.3
  Type:   Direct-Access  ANSI SCSI revision: 05
SCSI device sdd: 134246400 512-byte hdwr sectors (68734 MB)
sdd: Write Protect is off
sdd: Mode Sense: ad 00 00 00
SCSI device sdd: drive cache: write through
SCSI device sdd: 134246400 512-byte hdwr sectors (68734 MB)
sdd: Write Protect is off
sdd: Mode Sense: ad 00 00 00
SCSI device sdd: drive cache: write through
 sdd: unknown partition table
sd 9:0:0:0: Attached scsi disk sdd
sd 9:0:0:0: Attached scsi generic sg6 type 0


[r...@dellr710 ~]# cat /sys/block/sdd/device/vendor
EQLOGIC
[r...@dellr710 ~]# cat /sys/block/sdd/device/model
100E-00

[r...@dellr710 ~]# cat /proc/partitions
major minor  #blocks  name

   8     0  285474816 sda
   8     1     248976 sda1
   8     2  285218010 sda2
 253     0   33554432 dm-0
 253     1   14352384 dm-1
   8    48   67123200 sdd

[r...@dellr710 ~]# cat /sys/block/sdd/queue/scheduler
noop anticipatory deadline [cfq]



So here we go, numbers using *default* settings:


write tests:


# for bs in 512 4k 8k 16k 32k 64k 128k 256k 512k 1024k; do echo "bs: $bs" && \
    dd if=/dev/zero of=/dev/sdd bs=$bs count=32768 oflag=direct && sync; done
bs: 512
32768+0 records in
32768+0 records out
16777216 bytes (17 MB) copied, 12.25 seconds, 1.4 MB/s
bs: 4k
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 11.8131 seconds, 11.4 MB/s
bs: 8k
32768+0 records in
32768+0 records out
268435456 bytes (268 MB) copied, 14.3359 seconds, 18.7 MB/s
bs: 16k
32768+0 records in
32768+0 records out
536870912 bytes (537 MB) copied, 19.7916 seconds, 27.1 MB/s
bs: 32k
32768+0 records in
32768+0 records out
1073741824 bytes (1.1 GB) copied, 19.9889 seconds, 53.7 MB/s
bs: 64k
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 28.4471 seconds, 75.5 MB/s
bs: 128k
32768+0 records in
32768+0 records out
4294967296 bytes (4.3 GB) copied, 46.6343 seconds, 92.1 MB/s
bs: 256k
32768+0 records in
32768+0 records out
8589934592 bytes (8.6 GB) copied, 84.692 seconds, 101 MB/s
bs: 512k
32768+0 records in
32768+0 records out
17179869184 bytes (17 GB) copied, 168.305 seconds, 102 MB/s
bs: 1024k
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 216.441 seconds, 159 MB/s



iostat during bs=1024k write test:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    2.25   10.61    0.00   87.14

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0
sdd             299.01         0.00    306186.14          0     309248


read tests:
---

# for bs in 512 4k 8k 16k 32k 64k 128k 256k 512k 1024k; do echo "bs: $bs" && \
    dd if=/dev/sdd bs=$bs of=/dev/zero count=32768 iflag=direct && sync; done
bs: 512
32768+0 records in
32768+0 records out
16777216 bytes (17 MB) copied, 4.1097 seconds, 4.1 MB/s
bs: 4k
32768+0 records in
32768+0 records out
134217728 bytes (134 M

Re: iscsi performance via 10 Gig

2010-05-28 Thread Pasi Kärkkäinen
On Fri, May 28, 2010 at 09:54:32AM -0700, Taylor wrote:
> Ulrich, I'll check on the fragmenting.  When you say IRQ assignments,
> are you just talking about cat /proc/interrupts?
> 
> 
> My tests were just doing dd from /dev/zero to create large files on 4
> separate mount points of disk from the Equallogic.
> 
> We have two 10 Gig Equallogics, each with 15K 600 GB SAS drives.
> Believe they are configured as RAID 50.
>

You may want to try RAID 10 instead if you're concerned about IO performance.

> With Equallogics, data is supposed to be striped over the number of arrays
> in the storage group, so if we were to add another array, some background
> process would stripe existing data evenly over 48 disks.  I am not
> concerned about the number of spindles or RAID config at this point.
> 

For optimal access to striped volumes you need an EqualLogic-specific
multipath plugin, so the initiator can send IO directly to the array holding
the requested blocks without going through redirect sequences.

I don't think there's an EqualLogic multipath plugin for Linux yet..
AFAIK they're working on creating one. Windows already has one.
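
In the meantime a plain dm-multipath setup should at least spread IO over
both ports. A minimal /etc/multipath.conf sketch (untested here; the
vendor/product strings are what the array reports in dmesg, the rest are
generic round-robin settings, so verify against your own setup):

  devices {
      device {
          vendor                "EQLOGIC"
          product               "100E-00"
          path_grouping_policy  multibus
          path_selector         "round-robin 0"
          path_checker          tur
          rr_min_io             10
          failback              immediate
          features              "1 queue_if_no_path"
      }
  }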

-- Pasi

> 
> 
> On May 27, 2:11 am, Vladislav Bolkhovitin  wrote:
> > Boaz Harrosh, on 05/26/2010 10:58 PM wrote:
> >
> >
> >
> >
> >
> > > On 05/26/2010 09:52 PM, Vladislav Bolkhovitin wrote:
> > >> Boaz Harrosh, on 05/26/2010 10:45 PM wrote:
> > >>> On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:
> >  Taylor, on 05/26/2010 09:32 PM wrote:
> > > I'm curious what kind of performance numbers people can get from their
> > > iscsi setup, specifically via 10 Gig.
> >
> > > We are running with Linux servers connected to Dell Equallogic 10 Gig
> > > arrays on Suse.
> >
> > > Recently we were running under SLES 11, and with multipath were seeing
> > > about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> > > were getting a large number of iscsi connection errors.  We are using
> > > 10 Gig NICs with jumbo frames.
> >
> > > We reimaged the server to OpenSuse, same hardware and configs
> > > otherwise, and since then we are getting about half, or 1.2 to 1.3
> > > Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> > > had any iscsi connection errors.
> >
> > > What are other people seeing?  Doesn't need to be an equallogic, just
> > > any 10 Gig connection to an iscsi array and single host throughput
> > > numbers.
> >  ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE
> >  link. On writes even with a single stream, i.e. something like a single
> >  dd writing data to a single device.
> >
> > >>> Off topic question:
> > >>> That's a fast disk. A sata HD? the best I got for single sata was like
> > >>> 90 MB/s. Did you mean a RAM device of sorts.
> > >> The single stream data were both from a SAS RAID and RAMFS. The
> > >> multi-stream data were from RAMFS, because I don't have any reports
> > >> about any tests of iSCSI-SCST on fast enough SSDs.
> >
> > > Right thanks. So the SAS RAID had what? like 12-15 spindles?
> >
> > If I remember correctly, it was 10 spindles each capable of 150+MB/s.
> > The RAID was MD RAID0.
> >
> > Vlad
> 




Re: iscsi performance via 10 Gig

2010-05-28 Thread Taylor
Ulrich, I'll check on the fragmenting.  When you say IRQ assignments,
are you just talking about cat /proc/interrupts?


My tests were just doing dd from /dev/zero to create large files on 4
separate mount points of disk from the Equallogic.

We have two 10 Gig Equallogics, each with 15K 600 GB SAS drives.
Believe they are configured as RAID 50.
With Equallogics, data is supposed to be striped over the number of arrays
in the storage group, so if we were to add another array, some background
process would stripe existing data evenly over 48 disks.  I am not
concerned about the number of spindles or RAID config at this point.



On May 27, 2:11 am, Vladislav Bolkhovitin  wrote:
> Boaz Harrosh, on 05/26/2010 10:58 PM wrote:
>
>
>
>
>
> > On 05/26/2010 09:52 PM, Vladislav Bolkhovitin wrote:
> >> Boaz Harrosh, on 05/26/2010 10:45 PM wrote:
> >>> On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:
>  Taylor, on 05/26/2010 09:32 PM wrote:
> > I'm curious what kind of performance numbers people can get from their
> > iscsi setup, specifically via 10 Gig.
>
> > We are running with Linux servers connected to Dell Equallogic 10 Gig
> > arrays on Suse.
>
> > Recently we were running under SLES 11, and with multipath were seeing
> > about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> > were getting a large number of iscsi connection errors.  We are using
> > 10 Gig NICs with jumbo frames.
>
> > We reimaged the server to OpenSuse, same hardware and configs
> > otherwise, and since then we are getting about half, or 1.2 to 1.3
> > Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> > had any iscsi connection errors.
>
> > What are other people seeing?  Doesn't need to be an equallogic, just
> > any 10 Gig connection to an iscsi array and single host throughput
> > numbers.
>  ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE
>  link. On writes even with a single stream, i.e. something like a single
>  dd writing data to a single device.
>
> >>> Off topic question:
> >>> That's a fast disk. A sata HD? the best I got for single sata was like
> >>> 90 MB/s. Did you mean a RAM device of sorts.
> >> The single stream data were both from a SAS RAID and RAMFS. The
> >> multi-stream data were from RAMFS, because I don't have any reports
> >> about any tests of iSCSI-SCST on fast enough SSDs.
>
> > Right thanks. So the SAS RAID had what? like 12-15 spindles?
>
> If I remember correctly, it was 10 spindles each capable of 150+MB/s.
> The RAID was MD RAID0.
>
> Vlad




Re: iscsi performance via 10 Gig

2010-05-27 Thread Vladislav Bolkhovitin

Boaz Harrosh, on 05/26/2010 10:58 PM wrote:

On 05/26/2010 09:52 PM, Vladislav Bolkhovitin wrote:

Boaz Harrosh, on 05/26/2010 10:45 PM wrote:

On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:

Taylor, on 05/26/2010 09:32 PM wrote:

I'm curious what kind of performance numbers people can get from their
iscsi setup, specifically via 10 Gig.

We are running with Linux servers connected to Dell Equallogic 10 Gig
arrays on Suse.

Recently we were running under SLES 11, and with multipath were seeing
about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
were getting a large number of iscsi connection errors.  We are using
10 Gig NICs with jumbo frames.

We reimaged the server to OpenSuse, same hardware and configs
otherwise, and since then we are getting about half, or 1.2 to 1.3
Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
had any iscsi connection errors.

What are other people seeing?  Doesn't need to be an equallogic, just
any 10 Gig connection to an iscsi array and single host throughput
numbers.
ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE 
link. On writes even with a single stream, i.e. something like a single 
dd writing data to a single device.



Off topic question:
That's a fast disk. A sata HD? the best I got for single sata was like
90 MB/s. Did you mean a RAM device of sorts.
The single stream data were both from a SAS RAID and RAMFS. The 
multi-stream data were from RAMFS, because I don't have any reports 
about any tests of iSCSI-SCST on fast enough SSDs.




Right thanks. So the SAS RAID had what? like 12-15 spindles?


If I remember correctly, it was 10 spindles each capable of 150+MB/s. 
The RAID was MD RAID0.


Vlad




Re: iscsi performance via 10 Gig

2010-05-26 Thread Ulrich Windl
On 26 May 2010 at 10:32, Taylor wrote:

> I'm curious what kind of performance numbers people can get from their
> iscsi setup, specifically via 10 Gig.

I can imagine that with 10Gb NICs the offload features and the assignment
of IRQs may become important. Usually "procinfo" gives you a quick overview
of which IRQs are assigned to which driver.
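
For example (just the commands I would start with; "eth5" is only an example
interface name, and the IRQ number is a placeholder):

  # grep eth5 /proc/interrupts              # which CPUs service the NIC queues
  # ethtool -k eth5                         # offload settings (TSO/GSO/LRO etc.)
  # cat /proc/irq/<irq>/smp_affinity        # current CPU mask for that IRQ
  # echo 2 > /proc/irq/<irq>/smp_affinity   # pin it to CPU1, if that helps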

> 
> We are running with Linux servers connected to Dell Equallogic 10 Gig
> arrays on Suse.
> 
> Recently we were running under SLES 11, and with multipath were seeing
> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> were getting a large number of iscsi connection errors.  We are using
> 10 Gig NICs with jumbo frames.

Did you verify that those pass unfragmented? Just an idea.
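
One quick check (a sketch; 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP
headers, and <array-ip> is a placeholder):

  # ping -M do -s 8972 -c 3 <array-ip>

If that fails while smaller sizes go through, something in the path is not
passing jumbo frames unfragmented.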

> 
> We reimaged the server to OpenSuse, same hardware and configs
> otherwise, and since then we are getting about half, or 1.2 to 1.3
> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> had any iscsi connection errors.

Did you compare IRQ assignments to SLES?

> 
> What are other people seeing?  Doesn't need to be an equallogic, just
> any 10 Gig connection to an iscsi array and single host throughput
> numbers.

Actually I know little about 10Gb Ethernet.

Regards,
Ulrich




Re: iscsi performance via 10 Gig

2010-05-26 Thread Joe Landman

Boaz Harrosh wrote:


We reimaged the server to OpenSuse, same hardware and configs
otherwise, and since then we are getting about half, or 1.2 to 1.3
Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
had any iscsi connection errors.

What are other people seeing?  Doesn't need to be an equallogic, just
any 10 Gig connection to an iscsi array and single host throughput
numbers.
ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE 
link. On writes even with a single stream, i.e. something like a single 
dd writing data to a single device.




Off topic question:
That's a fast disk. A sata HD? the best I got for single sata was like
90 MB/s. Did you mean a RAM device of sorts.


[not a commercial, just a "thumbs up"* for iSCSI-SCST]

We measure (sustained) typically 500+ MB/s for our iSCSI over 10GbE 
using SCST for simple copies, IOMeter type loads, etc.  When we push 
things a bit, we can (and do) saturate the link.  Our back-end arrays 
sustain 1.8+ GB/s reads and 1.4+ GB/s writes for files much larger than 
RAM.



Thanks Boaz



Joe

* "thumbs up" is a colloquialism for a recommendation.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615




Re: iscsi performance via 10 Gig

2010-05-26 Thread Boaz Harrosh
On 05/26/2010 09:52 PM, Vladislav Bolkhovitin wrote:
> Boaz Harrosh, on 05/26/2010 10:45 PM wrote:
>> On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:
>>> Taylor, on 05/26/2010 09:32 PM wrote:
 I'm curious what kind of performance numbers people can get from their
 iscsi setup, specifically via 10 Gig.

 We are running with Linux servers connected to Dell Equallogic 10 Gig
 arrays on Suse.

 Recently we were running under SLES 11, and with multipath were seeing
 about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
 were getting a large number of iscsi connection errors.  We are using
 10 Gig NICs with jumbo frames.

 We reimaged the server to OpenSuse, same hardware and configs
 otherwise, and since then we are getting about half, or 1.2 to 1.3
 Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
 had any iscsi connection errors.

 What are other people seeing?  Doesn't need to be an equallogic, just
 any 10 Gig connection to an iscsi array and single host throughput
 numbers.
>>> ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE 
>>> link. On writes even with a single stream, i.e. something like a single 
>>> dd writing data to a single device.
>>>
>>
>> Off topic question:
>> That's a fast disk. A sata HD? the best I got for single sata was like
>> 90 MB/s. Did you mean a RAM device of sorts.
> 
> The single stream data were both from a SAS RAID and RAMFS. The 
> multi-stream data were from RAMFS, because I don't have any reports 
> about any tests of iSCSI-SCST on fast enough SSDs.
> 

Right thanks. So the SAS RAID had what? like 12-15 spindles?

> Vlad
> 

Boaz




Re: iscsi performance via 10 Gig

2010-05-26 Thread Vladislav Bolkhovitin

Taylor, on 05/26/2010 09:32 PM wrote:

I'm curious what kind of performance numbers people can get from their
iscsi setup, specifically via 10 Gig.

We are running with Linux servers connected to Dell Equallogic 10 Gig
arrays on Suse.

Recently we were running under SLES 11, and with multipath were seeing
about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
were getting a large number of iscsi connection errors.  We are using
10 Gig NICs with jumbo frames.

We reimaged the server to OpenSuse, same hardware and configs
otherwise, and since then we are getting about half, or 1.2 to 1.3
Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
had any iscsi connection errors.

What are other people seeing?  Doesn't need to be an equallogic, just
any 10 Gig connection to an iscsi array and single host throughput
numbers.


ISCSI-SCST/open-iscsi on decent hardware can fully saturate a 10GbE
link, on writes even with a single stream, i.e. something like a single
dd writing data to a single device.


Vlad






Re: iscsi performance via 10 Gig

2010-05-26 Thread Vladislav Bolkhovitin

Boaz Harrosh, on 05/26/2010 10:45 PM wrote:

On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:

Taylor, on 05/26/2010 09:32 PM wrote:

I'm curious what kind of performance numbers people can get from their
iscsi setup, specifically via 10 Gig.

We are running with Linux servers connected to Dell Equallogic 10 Gig
arrays on Suse.

Recently we were running under SLES 11, and with multipath were seeing
about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
were getting a large number of iscsi connection errors.  We are using
10 Gig NICs with jumbo frames.

We reimaged the server to OpenSuse, same hardware and configs
otherwise, and since then we are getting about half, or 1.2 to 1.3
Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
had any iscsi connection errors.

What are other people seeing?  Doesn't need to be an equallogic, just
any 10 Gig connection to an iscsi array and single host throughput
numbers.
ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE 
link. On writes even with a single stream, i.e. something like a single 
dd writing data to a single device.




Off topic question:
That's a fast disk. A sata HD? the best I got for single sata was like
90 MB/s. Did you mean a RAM device of sorts.


The single stream data were both from a SAS RAID and RAMFS. The 
multi-stream data were from RAMFS, because I don't have any reports 
about any tests of iSCSI-SCST on fast enough SSDs.


Vlad




Re: iscsi performance via 10 Gig

2010-05-26 Thread Boaz Harrosh
On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:
> Taylor, on 05/26/2010 09:32 PM wrote:
>> I'm curious what kind of performance numbers people can get from their
>> iscsi setup, specifically via 10 Gig.
>>
>> We are running with Linux servers connected to Dell Equallogic 10 Gig
>> arrays on Suse.
>>
>> Recently we were running under SLES 11, and with multipath were seeing
>> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
>> were getting a large number of iscsi connection errors.  We are using
>> 10 Gig NICs with jumbo frames.
>>
>> We reimaged the server to OpenSuse, same hardware and configs
>> otherwise, and since then we are getting about half, or 1.2 to 1.3
>> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
>> had any iscsi connection errors.
>>
>> What are other people seeing?  Doesn't need to be an equallogic, just
>> any 10 Gig connection to an iscsi array and single host throughput
>> numbers.
> 
> ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE 
> link. On writes even with a single stream, i.e. something like a single 
> dd writing data to a single device.
> 

Off topic question:
That's a fast disk. A SATA HD? The best I got for a single SATA was like
90 MB/s. Did you mean a RAM device of sorts?

> Vlad
> 

Thanks Boaz




Re: iscsi performance via 10 Gig

2010-05-26 Thread Pasi Kärkkäinen
On Wed, May 26, 2010 at 10:32:58AM -0700, Taylor wrote:
> I'm curious what kind of performance numbers people can get from their
> iscsi setup, specifically via 10 Gig.
> 
> We are running with Linux servers connected to Dell Equallogic 10 Gig
> arrays on Suse.
> 
> Recently we were running under SLES 11, and with multipath were seeing
> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> were getting a large number of iscsi connection errors.  We are using
> 10 Gig NICs with jumbo frames.
> 
> We reimaged the server to OpenSuse, same hardware and configs
> otherwise, and since then we are getting about half, or 1.2 to 1.3
> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> had any iscsi connection errors.
> 
> What are other people seeing?  Doesn't need to be an equallogic, just
> any 10 Gig connection to an iscsi array and single host throughput
> numbers.
> 

What's your Equallogic model? How many disks, and how fast (rpm)?

How are you measuring the performance?
Are you interested in sequential throughput using large blocks,
or in random IO using small blocks (max IOPS)?
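
For example, something along these lines with fio would show both ends of
the spectrum (just a sketch; /dev/sdX and the runtimes are placeholders):

  # sequential throughput, large blocks
  fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=1M \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

  # random read IOPS, small blocks
  fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based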

-- Pasi




Re: iscsi performance via 10 Gig

2010-05-26 Thread Boaz Harrosh
On 05/26/2010 08:32 PM, Taylor wrote:
> I'm curious what kind of performance numbers people can get from their
> iscsi setup, specifically via 10 Gig.
> 
> We are running with Linux servers connected to Dell Equallogic 10 Gig
> arrays on Suse.
> 
> Recently we were running under SLES 11, and with multipath were seeing
> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> were getting a large number of iscsi connection errors.  We are using
> 10 Gig NICs with jumbo frames.
> 
> We reimaged the server to OpenSuse, same hardware and configs
> otherwise, and since then we are getting about half, or 1.2 to 1.3
> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> had any iscsi connection errors.
> 
> What are other people seeing?  Doesn't need to be an equallogic, just
> any 10 Gig connection to an iscsi array and single host throughput
> numbers.
> 

In this mail, for example, Tomo reached 850 MB/s:
http://lists.wpkg.org/pipermail/stgt/2010-May/003722.html

Boaz

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.