Hi,

Well, the targets are NetApp filers: one FAS2020 and the other a FAS3020.

The test system is directly attached to the filer, using either a bnx2 card
or an Intel one.

The latency is calculated by bonnie++ 1.95 on an ext2 file system; the
bandwidth figures come from interface monitoring with cacti or bmon, and
also from bonnie++.
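
Roughly a run like this, in case it helps to reproduce (the mount point and
size below are placeholders, not the exact command used):

  bonnie++ -d /mnt/iscsi-test -s 4096 -n 0 -u root

(-d points at the iSCSI-backed ext2 mount, -s is the test file size in MB,
-n 0 skips the small-file creation tests, -u is the user to run as.)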

iperf gives 924 Mbit/s.
Using a 4k dd read on an NFS file mounted from the same filer, I got 96 MB/s
over an un-optimized path (no jumbo frames, going through a few switches and
a router).
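
Those two numbers came from commands along these lines (the address and file
path are placeholders, not the exact ones used):

  iperf -s                                      # on the host at the other end of the link
  iperf -c 10.0.0.1 -t 30                       # on the test system, ~924 Mbit/s
  dd if=/mnt/nfs/testfile of=/dev/null bs=4k    # ~96 MB/s over the un-optimized path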

The device blocksize is 4k, as is the FS blocksize.
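
For what it's worth, both can be confirmed with something like the following
(the device name is a placeholder):

  blockdev --getbsz /dev/sdb
  tune2fs -l /dev/sdb | grep 'Block size'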

2009/6/4 Pasi Kärkkäinen <pa...@iki.fi>

>
> On Thu, Jun 04, 2009 at 04:03:28PM +0200, benoit plessis wrote:
> > Hi,
> >
> > What do you think of (real) iSCSI HBA like Qlogic cards under linux ?
> > We are using open-iscsi now with bnx2 cards and the performance is far
> > from decent (50/100 Mbps at best); some comparative tests on an Intel
> > e1000 card show better results (500/600 Mbps), but still far from
> > gigabit saturation, and still with high latency.
> >
>
> What target are you using? "nullio" mode or memdisk is good for
> benchmarking, if either of those is possible on your target.
>
> How did you measure the latency? Are you using direct crossover cables, or
> switches?
>
> How do your NICs perform with, for example, FTP? How about iperf?
>
> How did you get those 50/100 and 500/600 Mbit numbers? What benchmark did
> you use? What kind of blocksize?
>
> -- Pasi
>
>

