Vlad,
Thanks for the suggestion. Looking at vmstat, my CSw/s rate is fairly constant around 280K when scst_threads=1 (per Vu's suggestion) and rises to ~330-340K CSw/s when scst_threads is set to 8. I'm currently doing 512B writes, which gives me about a 4:1 ratio of context switches to IOPs with 1 SCST thread (70K IOPs) and around 4.5:1 with 8 SCST threads (75K IOPs). You say those numbers could be overkill - do you know of a way to bring them down? I'm very interested in trying Vu's other suggestions (multiple initiators and multiple QPs), but my other initiator node has been too busy all weekend to run tests on.
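For what it's worth, the ratio math above is just CSw/s divided by IOPs; here's a quick illustrative sketch using the numbers I reported (nothing SCST-specific, and cs_per_cmd is just a throwaway helper name):

---------------------------------
# Illustrative only: derive context switches per command from a
# sustained vmstat "cs" rate and the benchmark's reported IOPs.
def cs_per_cmd(cs_per_sec, iops):
    return float(cs_per_sec) / iops

print(cs_per_cmd(280000, 70000))   # ~4.0 with scst_threads=1
print(cs_per_cmd(335000, 75000))   # ~4.5 with scst_threads=8
---------------------------------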

Debug, tracing, and the like were all turned off in the SCST Makefiles.

-Cameron

Vladislav Bolkhovitin wrote:
Cameron Harr wrote:
I was able to get the latest scst code working with Vu's standalone ib_srpt and the kernel IB modules, and dropped my ib_srpt thread count to 2. However, I still get about the same IOP performance on the target, although interrupts on the "busy" CPU have gone up to around 140K. Interesting, but now I'm at a bit of a loss as to where the bottleneck could be. I figured it was interrupts, but if the CPU is handling more of them now, perhaps the problem is elsewhere?

How many context switches per second do you have during your test on the target?

Some time ago on the scst-devel mailing list there was a thread about an observation that the SRP target driver produces 10 context switches per command. See http://sourceforge.net/mailarchive/message.php?msg_id=e2e108260802070110q1fa084a1j54945d06c16c94f2%40mail.gmail.com

If that is the case for you as well, it would explain your issue very well. 10 CS/cmd is definite overkill; it should be 1 or, at most, 2 CS/cmd.
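If you want to sanity-check vmstat's number, something like the following quick sketch (just sampling the kernel's ctxt counter from /proc/stat; illustrative only, not part of SCST) should report the same rate:

---------------------------------
#!/usr/bin/env python
# Illustrative only: sample the kernel's context-switch counter in
# /proc/stat twice and report switches per second, as a cross-check
# of vmstat's "cs" column.
import time

def read_ctxt():
    # /proc/stat contains a line like "ctxt 123456789"
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

interval = 5.0
before = read_ctxt()
time.sleep(interval)
after = read_ctxt()
print("%.0f context switches/sec" % ((after - before) / interval))
---------------------------------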

BTW, I assume you aren't using a debug SCST build, are you?

Vlad

Cameron

Cameron Harr wrote:
Cameron Harr wrote:
Additionally, I found that I can load the newer scst code if I use the kernel-supplied modules and the standalone srpt-1.0.0 package that I think you provide, Vu. I was about to try that, along with dropping a module param for ib_srpt (I had been using a thread count of 32, which gave me better performance in an earlier test). I'll report back on this.
Not much luck using the newer scst code and the default kernel modules (running CentOS 5.2). If I use the default kernel modules on the initiator, I can't get them to see anything (the OFED SM package doesn't see any devices to run on). When using the regular OFED stack on the initiator, the target dies as soon as I try to connect from the initiator:
---------------------------------
ib_srpt: Host login i_port_id=0x0:0x2c90300026053 t_port_id=0x2c90300026046:0x2c90300026046 it_iu_len=996
Oct 3 13:44:23 test05 kernel: i[4127]: scst: scst_mgmt_thread:5187: ***CRITICAL ERROR*** session ffff8107f3222b88 is in scst_sess_shut_list, but in unknown shut phase 0
BUG at /usr/src/scst.tot/src/scst_targ.c:5188
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at /usr/src/scst.tot/src/scst_targ.c:5188
invalid opcode: 0000 [1] SMP
last sysfs file: /devices/pci0000:00/0000:00:00.0/class
CPU 2
Modules linked in: ib_srpt(U) ib_cm ib_sa scst_vdisk(U) scst(U) fio_driver(PU) fio_port(PU) mlx4_ib ib_mad ib_core ipv6 xfrm_nalgo crypto_api autofs4 hidp rfcomm l2cap bluetooth sunrpc nls_utf8 hfsplus dm_mirror dm_multipath dm_mod video sbs backlight i2c_ec button battery asus_acpi acpi_memhotplug ac parport_pc lp parport i2c_i801 i5000_edac i2c_core edac_mc pcspkr shpchp mlx4_core e1000e ata_piix libata sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Pid: 4127, comm: scsi_tgt_mgmt Tainted: P      2.6.18-92.1.13.el5 #1
RIP: 0010:[<ffffffff88488a56>] [<ffffffff88488a56>] :scst:scst_mgmt_thread+0x3ff/0x577
---------------------------------
