performance of open-iscsi
Good day. I'm trying to get rid of a bottleneck in a SAN environment. After some tests I've come to the conclusion that the bottleneck is in open-iscsi or IET. Here is a simple test to check it:

1) Set up a relatively fast array of disks (in my case, about 20 SATA drives in RAID 10).
2) Set up IET in blockio mode.
3) Discover/login to it locally (no network, no switches, just lo).
4) Run fio with the following config:

    [test]
    blocksize=4k
    filename=/dev/sdal  #iscsi disk
    rw=randwrite
    direct=1
    buffered=0
    ioengine=libaio
    iodepth=32

What I see: sdal (the open-iscsi disk) is at 100% utilization, while all other disks are below 50% (about 35-45%). Changing filename from the iSCSI disk to the raid device (which is exported by IET) raises performance (in my case, with 20 SATA disks, from 4.5k IOPS to 5.4k IOPS).

Here are my configs:

ietd.conf:

    Target iqn.2012-06.test5:testing
        Lun 1 Path=/dev/md100,Type=blockio,IOMode=wt,ScsiId=tst

iscsid.conf (default Debian config):

    iscsid.startup = /sbin/iscsid
    node.startup = manual
    node.session.timeo.replacement_timeout = 120
    node.conn[0].timeo.login_timeout = 15
    node.conn[0].timeo.logout_timeout = 15
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5
    node.session.err_timeo.abort_timeout = 15
    node.session.err_timeo.lu_reset_timeout = 30
    node.session.err_timeo.tgt_reset_timeout = 30
    node.session.initial_login_retry_max = 8
    node.session.cmds_max = 128
    node.session.queue_depth = 32
    node.session.xmit_thread_priority = -20
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.session.iscsi.FirstBurstLength = 262144
    node.session.iscsi.MaxBurstLength = 16776192
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
    node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
    discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
    node.session.iscsi.FastAbort = Yes

Can someone comment on those results? Is there any way to reduce the iSCSI overhead? Thanks.

-- You received this message because you are subscribed to the Google Groups open-iscsi group.
To post to this group, send email to open-iscsi@googlegroups.com. To unsubscribe from this group, send email to open-iscsi+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/open-iscsi?hl=en.
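For anyone wanting to reproduce the loopback setup described above, the discovery/login steps would look roughly like this (a sketch, not from the thread; the target IQN matches the ietd.conf given, and the resulting /dev/sdX name will differ per machine):

```shell
# Discover targets exported by the local IET instance. Using 127.0.0.1
# keeps the whole data path on loopback, so no NICs or switches are involved.
iscsiadm -m discovery -t sendtargets -p 127.0.0.1

# Log in to the target defined in ietd.conf.
iscsiadm -m node -T iqn.2012-06.test5:testing -p 127.0.0.1 -l

# Check the kernel log / SCSI bus to find the new disk the session created.
lsscsi
```

After login, point the fio job's filename at the new iSCSI disk and compare against a run pointed directly at /dev/md100.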
Newbie question: why is portal setting ignored?
Hi all, sorry for the newbie question, but what am I doing wrong? I have an Openfiler NAS with two cards and two subnets (private 172.16.1.* and public 192.168.49.*). So from my other machine:

    [root@edlrac1 rules.d]# iscsiadm -m discovery -t sendtargets -p 172.16.1.3
    172.16.1.3:3260,1 iqn.2006-01.int.internal:edlrac.fra1
    192.168.49.105:3260,1 iqn.2006-01.int.internal:edlrac.fra1
    172.16.1.3:3260,1 iqn.2006-01.int.internal:edlrac.data1
    192.168.49.105:3260,1 iqn.2006-01.int.internal:edlrac.data1
    172.16.1.3:3260,1 iqn.2006-01.int.internal:edlrac.crs1
    192.168.49.105:3260,1 iqn.2006-01.int.internal:edlrac.crs1
    [root@edlrac1 rules.d]# iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.crs1 -p 172.16.1.3 -l
    [root@edlrac1 rules.d]# iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.data1 -p 172.16.1.3 -l
    [root@edlrac1 rules.d]# iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.fra1 -p 172.16.1.3 -l
    [root@edlrac1 rules.d]# iscsiadm -m node --login
    Logging in to [iface: default, target: iqn.2006-01.int.webmedia:edlrac.data1, portal: 192.168.49.105,3260] (multiple)
    Logging in to [iface: default, target: iqn.2006-01.int.webmedia:edlrac.crs1, portal: 192.168.49.105,3260] (multiple)
    Logging in to [iface: default, target: iqn.2006-01.int.webmedia:edlrac.fra1, portal: 192.168.49.105,3260] (multiple)

Why is iscsiadm using 192.168.49.105 and not 172.16.1.3?

    [root@edlrac1 rules.d]# iscsiadm --version
    iscsiadm version 2.0-872.33.el6
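One common workaround (a sketch, not an answer from the thread): SendTargets discovery records a node entry for every portal the target advertises, and a bare `iscsiadm -m node --login` walks all of them. Deleting the node records for the public portal leaves only the private path:

```shell
# Remove the recorded node entries that point at the public portal.
# After this, "iscsiadm -m node --login" only has the 172.16.1.3 records left.
iscsiadm -m node -p 192.168.49.105:3260 -o delete

# Verify which target/portal records remain in the node database.
iscsiadm -m node
```

The deletion only affects the local node database; re-running discovery against the target will recreate the entries.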
Re: [Iscsitarget-devel] performance of open-iscsi
On 18.06.2012 17:38, Ross S. W. Walker wrote:
> On Jun 18, 2012, at 7:02 AM, George Shuklin <george.shuk...@gmail.com> wrote:
>> Good day. I'm trying to get rid of a bottleneck in a SAN environment. [...] Changing filename from the iSCSI disk to the raid device (which is exported by IET) raises performance (in my case, with 20 SATA disks, from 4.5k IOPS to 5.4k IOPS).
>
> I don't quite understand this, do you mean the performance of going direct to the native raid was 5400 IOPS?
>
> You could try disabling rx/tx checksums on the loopback if enabled.

Ok, sorry, this is not related to open-iscsi. I did some more testing and found the following:

    direct test:        5.4k IOPS
    scst/vdisk_blockio: 5.3k IOPS
    iet:                4.5k IOPS

So this is definitely a problem in IET, not in open-iscsi.
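To put those numbers in perspective, the gap between the direct and IET paths works out to roughly a sixth of the available IOPS (a quick back-of-the-envelope calculation):

```shell
# IOPS figures quoted above.
direct=5400
iet=4500

# Integer percentage of IOPS lost when going through IET:
# (5400 - 4500) * 100 / 5400 = 16 (truncated).
overhead=$(( (direct - iet) * 100 / direct ))
echo "IET overhead: ${overhead}%"   # prints: IET overhead: 16%
```

scst/vdisk_blockio's 5.3k IOPS, by the same arithmetic, costs under 2%, which supports the conclusion that the overhead is specific to IET.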
Re: [Iscsitarget-devel] performance of open-iscsi
On Jun 18, 2012, at 7:02 AM, George Shuklin <george.shuk...@gmail.com> wrote:
> Good day. I'm trying to get rid of a bottleneck in a SAN environment. After some tests I've come to the conclusion that the bottleneck is in open-iscsi or IET. [...] Changing filename from the iSCSI disk to the raid device (which is exported by IET) raises performance (in my case, with 20 SATA disks, from 4.5k IOPS to 5.4k IOPS).

I don't quite understand this, do you mean the performance of going direct to the native raid was 5400 IOPS?

You could try disabling rx/tx checksums on the loopback if enabled.

-Ross
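The checksum suggestion above can be checked with ethtool (a sketch; on many kernels checksumming on lo is already off, in which case this is a no-op):

```shell
# Show the current offload settings on the loopback interface.
ethtool -k lo

# Disable rx/tx checksum offload if it is reported as "on".
ethtool -K lo rx off tx off
```

Re-run the fio test afterwards to see whether the loopback checksum work was contributing to the gap.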
iSCSI Performance Benchmarking (Raw throughput)
Greetings All,

I am evaluating an open-iscsi + IET SAN solution. Can someone point me to benchmarking tools I should use, and to existing benchmark reports? I am familiar with iometer/iozone (I want to re-run them and compare with some existing benchmark reports). It would also be a great help if someone could suggest any industry-recognized tool I should investigate for this benchmarking.

Best Regards
-vincy
Re: Newbie question: why is portal setting ignored?
On 06/18/2012 02:40 PM, Meska wrote:
> Hi all, sorry for the newbie question, but what am I doing wrong? I have an Openfiler NAS with two cards and two subnets (private 172.16.1.* and public 192.168.49.*). So from my other machine:
>
> [root@edlrac1 rules.d]# iscsiadm -m discovery -t sendtargets -p 172.16.1.3
> 172.16.1.3:3260,1 iqn.2006-01.int.internal:edlrac.fra1
> 192.168.49.105:3260,1 iqn.2006-01.int.internal:edlrac.fra1
> 172.16.1.3:3260,1 iqn.2006-01.int.internal:edlrac.data1
> 192.168.49.105:3260,1 iqn.2006-01.int.internal:edlrac.data1
> 172.16.1.3:3260,1 iqn.2006-01.int.internal:edlrac.crs1
> 192.168.49.105:3260,1 iqn.2006-01.int.internal:edlrac.crs1

After you run the discovery command, could you do:

    iscsiadm -m node -P 1

Could you also tar up the contents of /var/lib/iscsi and send it? iscsiadm basically looks in those dirs for the targets that were found with the discovery command.

> [root@edlrac1 rules.d]# iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.crs1 -p 172.16.1.3 -l
> [root@edlrac1 rules.d]# iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.data1 -p 172.16.1.3 -l
> [root@edlrac1 rules.d]# iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.fra1 -p 172.16.1.3 -l
> [root@edlrac1 rules.d]# iscsiadm -m node --login
> Logging in to [iface: default, target: iqn.2006-01.int.webmedia:edlrac.data1, portal: 192.168.49.105,3260] (multiple)
> Logging in to [iface: default, target: iqn.2006-01.int.webmedia:edlrac.crs1, portal: 192.168.49.105,3260] (multiple)
> Logging in to [iface: default, target: iqn.2006-01.int.webmedia:edlrac.fra1, portal: 192.168.49.105,3260] (multiple)

Would you run all these commands with debugging on and send the output? To do this, add -d 8 to the end of the command.

> Why is iscsiadm using 192.168.49.105 and not 172.16.1.3?
> [root@edlrac1 rules.d]# iscsiadm --version
> iscsiadm version 2.0-872.33.el6

Is this RHEL or CentOS?
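The requested debug run might look like this (a sketch of the commands Mike asked for; the debug output goes to stderr, so capture both streams):

```shell
# Show the recorded node/portal entries after discovery.
iscsiadm -m node -P 1

# Re-run one of the logins with verbose debugging (level 8), capturing output.
iscsiadm -m node -T iqn.2006-01.int.internal:edlrac.crs1 -p 172.16.1.3 -l -d 8 > login-debug.txt 2>&1

# Bundle the node database for the mailing list.
tar czf iscsi-nodes.tgz /var/lib/iscsi
```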
Re: iSCSI Performance Benchmarking (Raw throughput)
On 06/19/2012 01:25 PM, vincent Ferrer wrote:
> Greetings All, I am evaluating an open-iscsi + IET SAN solution. Can someone point me to benchmarking tools I should use, and to existing benchmark reports? I am familiar with iometer/iozone. It would also be a great help if someone could suggest any industry-recognized tool I should investigate for this benchmarking.

I think fio is a popular tool.
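For a raw-throughput benchmark with fio, a minimal job file might look like this (a sketch; /dev/sdX, the block size, and the run time are placeholders to adjust for your setup):

```shell
# Write a sequential-read throughput job file; run it with "fio seqread.fio".
cat > seqread.fio <<'EOF'
[seqread]
blocksize=1M
filename=/dev/sdX
rw=read
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based=1
EOF
```

For IOPS rather than bandwidth, drop blocksize to 4k and switch rw to randread or randwrite, as in the job file from the "performance of open-iscsi" thread.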
Re: iSCSI Performance Benchmarking (Raw throughput)
Thanks Mike,

Can you state what more information fio can provide that iozone cannot?

On Tuesday, June 19, 2012 1:09:57 PM UTC-7, Mike Christie wrote:
> On 06/19/2012 01:25 PM, vincent Ferrer wrote:
>> Greetings All, I am evaluating an open-iscsi + IET SAN solution. Can someone point me to benchmarking tools I should use, and to existing benchmark reports? [...]
>
> I think fio is a popular tool.