Re: Antw: Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-28 Thread Ulrich Windl
 Mike Christie micha...@cs.wisc.edu wrote on 27.08.2014 at 23:49 in
message 53fe5276.2060...@cs.wisc.edu:
 On 08/27/2014 02:24 AM, Ulrich Windl wrote:
 Learner Study learner.st...@gmail.com wrote on 27.08.2014 at 02:13 in
 message
 CAP8+hKW=HApS+=vxeaaibtbbd7yzndu4squt+84se99aglc...@mail.gmail.com:
 Hi Mike,

 Thanks for suggestions

 I think you meant,

 echo 1 > /sys/block/sdX/device/delete

 I don't see /sys/block/sdX/device/remove in my setup.
 
 I'm not sure: Is it echo offline > /sys/block/sdX/device/state, echo scsi 
 remove-single-device ${host} ${channel} ${id} ${lun} > /proc/scsi/scsi, or 
 echo 1 > /sys/class/scsi_device/${host}:${channel}:${id}:${lun}/device/delete?
 
 
 To delete a device just do
 
 echo 1 > /sys/block/sdX/device/delete

I think the confusing thing is that you don't see a delete in  
/sys/block/sdX/device.

 
 You can also do it through proc if it is enabled for your kernel.
 
 No need to offline the device before deleting. The scsi layer will
 handle the device state transitions.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 open-iscsi group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to open-iscsi+unsubscr...@googlegroups.com.
 To post to this group, send email to open-iscsi@googlegroups.com.
 Visit this group at http://groups.google.com/group/open-iscsi.
 For more options, visit https://groups.google.com/d/optout.





Re: Antw: Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-28 Thread Mike Christie
On 08/28/2014 12:59 AM, Ulrich Windl wrote:
  To delete a device just do
  
  echo 1 > /sys/block/sdX/device/delete
 I think the confusing thing is that you don't see a delete in  
 /sys/block/sdX/device.

Not sure what you mean. I do:

ls  /sys/block/sda/device/
block   evt_media_change  max_sectors  rescanstate
bsg generic   modalias rev   subsystem
delete  iocounterbits modelscsi_device   timeout
device_blocked  iodone_cntpowerscsi_disk type
dh_stateioerr_cnt queue_depth  scsi_generic  uevent
driver  iorequest_cnt queue_type   scsi_levelvendor



Re: Antw: Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-28 Thread Mike Christie
On 08/28/2014 11:29 AM, Mike Christie wrote:
 On 08/28/2014 12:59 AM, Ulrich Windl wrote:
 To delete a device just do

 echo 1 > /sys/block/sdX/device/delete
 I think the confusing thing is that you don't see a delete in  
 /sys/block/sdX/device.
 
 Not sure what you mean. I do:
 
 ls  /sys/block/sda/device/
 block   evt_media_change  max_sectors  rescanstate
 bsg generic   modalias rev   subsystem
 delete  iocounterbits modelscsi_device   timeout
 device_blocked  iodone_cntpowerscsi_disk type
 dh_stateioerr_cnt queue_depth  scsi_generic  uevent
 driver  iorequest_cnt queue_type   scsi_levelvendor
 

Ah, I see. I think depending on the kernel config options used the
device symlink might not even be there.



Antw: Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-27 Thread Ulrich Windl
 Learner Study learner.st...@gmail.com wrote on 27.08.2014 at 02:13 in
message
CAP8+hKW=HApS+=vxeaaibtbbd7yzndu4squt+84se99aglc...@mail.gmail.com:
 Hi Mike,
 
 Thanks for suggestions
 
 I think you meant,
 
 echo 1 > /sys/block/sdX/device/delete
 
 I don't see /sys/block/sdX/device/remove in my setup.

I'm not sure: Is it echo offline > /sys/block/sdX/device/state, echo scsi 
remove-single-device ${host} ${channel} ${id} ${lun} > /proc/scsi/scsi, or 
echo 1 > /sys/class/scsi_device/${host}:${channel}:${id}:${lun}/device/delete?

 
 How do the following fio options look?
 
 [default]
 rw=read
 size=4g
 bs=1m
 ioengine=libaio
 direct=1
 numjobs=1
 filename=/dev/sda
 runtime=360
 iodepth=256
 
 Thanks for your time!
 
 
 On Tue, Aug 26, 2014 at 4:49 PM, Michael Christie micha...@cs.wisc.edu 
 wrote:

 On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:

 Another related observation and some questions;

 I am using open iscsi on init with IET on trgt over a single 10gbps link

 There are three ip aliases on each side

 I have 3 ramdisks exported by IET to init

 I do  iscsi login 3 times, once using each underlying ip address and notice 
 that each iscsi session sees all 3 disks.

 Is it possible to restrict such that each init only sees one separate disk?


 There is no iscsi initiator or target setting for this. The default is to 
 show all paths (each /dev/sdx is a path to the same device).

 You would have to manually delete some paths by doing

 echo 1 > /sys/block/sdX/device/remove

 When I run fio on each mounted disk, I see that only two underlying tcp 
 sessions are being used - that limits the perf.
 Any ideas on how to overcome this?

 How are you matching sessions with devices? It should just be a matter of 
 running fio on the right devices. If you run:

 iscsiadm -m session -P 3

 you can see how the sdXs match up with sessions/connections. If you run fio 
 to a /dev/sdX from each session, you should be seeing IO to all 3 sessions.





 Thanks!


 Sent from my iPhone

 On Aug 26, 2014, at 12:53 PM, Mark Lehrer m...@knm.org wrote:

 On Tue, 26 Aug 2014 08:58:46 -0400 Alvin Starr al...@iplink.net wrote:

 I am trying to achieve 10Gbps in my single initiator/single target
 env. (open-iscsi and IET)

 On a semi-related note, are there any good guides out there to
 tuning Linux for maximum single-socket performance?  On my 40 gigabit

 You are likely getting hit by the bandwidth-delay product.
 Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product 
 and http://www.kehlet.cx/articles/99.html 

 Thanks that helped get my netcat transfer up over 500MB/sec using IPoIB. 
 Unfortunately that is still only about 10% of the available bandwidth.

 I'll keep on tweaking and see how far I can take it.

 Thanks,
 Mark



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-27 Thread Mark Lehrer

On Tue, 26 Aug 2014 13:05:11 -0700
 Learner learner.st...@gmail.com wrote:

How many iSCSI and underlying TCP sessions are you using? If multiple,
please check if all TCP sessions are being used.

Btw, what tuning did you perform to fix the TCP BDP issue?


I'm just doing netcat tests to/from /dev/shm at the moment.

I wouldn't consider it fixed necessarily, but the info from this link was 
useful:


http://www.kehlet.cx/articles/99.html

Thanks,
Mark



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-27 Thread Learner
I had applied the tuning to my 10G link but didn't see much impact. Actually, 
for me TCP is already at line rate with 2-3 threads, but iSCSI/fio reads are only 
around 5.5 Gbps with 3-4 fio threads. Perhaps the bottleneck is somewhere else...

Thanks!

Sent from my iPhone

On Aug 27, 2014, at 8:25 AM, Mark Lehrer m...@knm.org wrote:

 On Tue, 26 Aug 2014 13:05:11 -0700
 Learner learner.st...@gmail.com wrote:
 How many iSCSI and underlying TCP sessions are you using? If multiple,
 please check if all TCP sessions are being used.
 
 Btw, what tuning did you perform to fix the TCP BDP issue?
 
 I'm just doing netcat tests to/from /dev/shm at the moment.
 
 I wouldn't consider it fixed necessarily, but the info from this link was 
 useful:
 
 http://www.kehlet.cx/articles/99.html
 
 Thanks,
 Mark
 


Re: Antw: Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-27 Thread Mike Christie
On 08/27/2014 02:24 AM, Ulrich Windl wrote:
 Learner Study learner.st...@gmail.com wrote on 27.08.2014 at 02:13 in
 message
 CAP8+hKW=HApS+=vxeaaibtbbd7yzndu4squt+84se99aglc...@mail.gmail.com:
 Hi Mike,

 Thanks for suggestions

 I think you meant,

 echo 1 > /sys/block/sdX/device/delete

 I don't see /sys/block/sdX/device/remove in my setup.
 
 I'm not sure: Is it echo offline > /sys/block/sdX/device/state, echo scsi 
 remove-single-device ${host} ${channel} ${id} ${lun} > /proc/scsi/scsi, or 
 echo 1 > /sys/class/scsi_device/${host}:${channel}:${id}:${lun}/device/delete?
 

To delete a device just do

echo 1 > /sys/block/sdX/device/delete

You can also do it through proc if it is enabled for your kernel.

No need to offline the device before deleting. The scsi layer will
handle the device state transitions.
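
For anyone following along, the delete step can be wrapped in a small sketch. The device name sdc and the DRY_RUN switch are illustrative additions, not part of any tool; the dry-run mode exists only so the snippet can be tried without actually dropping a path:

```shell
#!/bin/sh
# Sketch: drop one SCSI path (e.g. a redundant iSCSI path) via sysfs.
# DRY_RUN=1 prints the action instead of performing it; doing it for
# real requires root and removes the sdX node immediately.
delete_scsi_path() {
    dev="$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would run: echo 1 > /sys/block/${dev}/device/delete"
    else
        echo 1 > "/sys/block/${dev}/device/delete"
    fi
}

DRY_RUN=1 delete_scsi_path sdc
```

A later `iscsiadm -m session --rescan` (or a rescan via sysfs) would bring the path back if needed.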



Antw: Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Ulrich Windl
 Mark Lehrer m...@knm.org wrote on 25.08.2014 at 20:58 in message
ximss-10382...@knm.org:
  I am trying to achieve 10Gbps in my single initiator/single target
 env. (open-iscsi and IET)
 
 On a semi-related note, are there any good guides out there to tuning Linux 
 for maximum single-socket performance?  On my 40 gigabit setup, I seem to 

Hi!

You are referring to network sockets, not CPU sockets, I guess. Have you 
tried larger packets (if you can control the LAN)? I don't know if open-iscsi 
can do IPv6, but from what I read IPv6 could give better TCP performance. Have 
you checked the interrupt assignments for the NIC? I guess your card is PCIe and 
uses one lane? Have you tried (for comparison) just doing a netcat to/from 
/dev/zero? You have to analyze the hardware, the network stack and iSCSI 
separately, I guess. iSCSI cannot do any better than the network stack, and 
the network stack cannot do better than the hardware.

Regards,
Ulrich


 hit a wall around 3 gigabits when doing a single TCP socket.  To go far 
 above that I need to do multipath, initiator-side RAID, or RDMA.
 
 Thanks,
 Mark
 


Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Alvin Starr

You are likely getting hit by the bandwidth-delay product.
Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
and http://www.kehlet.cx/articles/99.html
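
The standard remedy in those references is raising the kernel's TCP buffer ceilings so the congestion window can cover bandwidth x RTT. An illustrative /etc/sysctl.conf fragment (the 16 MB values are examples only, covering roughly 10 Gbps x 13 ms; size them to your own link and apply with sysctl -p):

```ini
# Example TCP buffer limits for a high bandwidth-delay-product path.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```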



On 08/25/2014 02:58 PM, Mark Lehrer wrote:

I am trying to achieve 10Gbps in my single initiator/single target
env. (open-iscsi and IET)


On a semi-related note, are there any good guides out there to tuning
Linux for maximum single-socket performance?  On my 40 gigabit setup,
I seem to hit a wall around 3 gigabits when doing a single TCP
socket.  To go far above that I need to do multipath, initiator-side
RAID, or RDMA.

Thanks,
Mark





--
Alvin Starr   ||   voice: (416)585-9971x690
Interlink Connectivity||   fax:   (416)585-9974
al...@iplink.net  ||



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread learner.study
iperf TCP performance is at line rate in both directions using 3 threads.

However, I can only get 700MB/s writes and 570MB/s reads with iSCSI.

Thanks for any pointers!

On Tuesday, August 26, 2014 1:11:59 PM UTC-7, learner.study wrote:

 Another related observation and some questions; 

 I am using open iscsi on init with IET on trgt over a single 10gbps link 

 There are three ip aliases on each side 

 I have 3 ramdisks exported by IET to init 

 I do  iscsi login 3 times, once using each underlying ip address and 
 notice that each iscsi session sees all 3 disks. 

  Is it possible to restrict such that each init only sees one separate 
 disk? 

 When I run fio on each mounted disk, I see that only two underlying tcp 
 sessions are being used - that limits the perf. 
 Any ideas on how to overcome this? 

 Thanks! 


 Sent from my iPhone 

 On Aug 26, 2014, at 12:53 PM, Mark Lehrer m...@knm.org wrote: 

  On Tue, 26 Aug 2014 08:58:46 -0400 Alvin Starr al...@iplink.net wrote: 
  
  I am trying to achieve 10Gbps in my single initiator/single target 
  env. (open-iscsi and IET) 
  
  On a semi-related note, are there any good guides out there to 
  tuning Linux for maximum single-socket performance?  On my 40 gigabit 
  
  You are likely getting hit by the bandwidth-delay product. 
  Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product 
  and http://www.kehlet.cx/articles/99.html 
  
  Thanks that helped get my netcat transfer up over 500MB/sec using IPoIB. 
 Unfortunately that is still only about 10% of the available bandwidth. 
  
  I'll keep on tweaking and see how far I can take it. 
  
  Thanks, 
  Mark 
  


Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Alvin Starr
I have a couple of iSCSI links running on 1G, so not in your range of hardware 
and demand at all.

I ran an ISP for about 20 years and got bitten by the BDP a number of 
times, so when someone describes the problem I know what to look for.





On 08/26/2014 04:05 PM, Learner wrote:

How many iSCSI and underlying TCP sessions are you using? If multiple, please 
check if all TCP sessions are being used.

Btw, what tuning did you perform to fix the TCP BDP issue?


Thanks

Sent from my iPhone

On Aug 26, 2014, at 12:53 PM, Mark Lehrer m...@knm.org wrote:


On Tue, 26 Aug 2014 08:58:46 -0400 Alvin Starr al...@iplink.net wrote:

I am trying to achieve 10Gbps in my single initiator/single target
env. (open-iscsi and IET)

On a semi-related note, are there any good guides out there to
tuning Linux for maximum single-socket performance?  On my 40 gigabit

You are likely getting hit by the bandwidth-delay product.
Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
and http://www.kehlet.cx/articles/99.html

Thanks that helped get my netcat transfer up over 500MB/sec using IPoIB. 
Unfortunately that is still only about 10% of the available bandwidth.

I'll keep on tweaking and see how far I can take it.

Thanks,
Mark




--
Alvin Starr.



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Michael Christie

On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:

 Another related observation and some questions;
 
 I am using open iscsi on init with IET on trgt over a single 10gbps link
 
 There are three ip aliases on each side
 
 I have 3 ramdisks exported by IET to init
 
 I do  iscsi login 3 times, once using each underlying ip address and notice 
 that each iscsi session sees all 3 disks.
 
 Is it possible to restrict such that each init only sees one separate disk?
 

There is no iscsi initiator or target setting for this. The default is to show 
all paths (each /dev/sdx is a path to the same device).

You would have to manually delete some paths by doing

echo 1 > /sys/block/sdX/device/remove

 When I run fio on each mounted disk, I see that only two underlying tcp 
 sessions are being used - that limits the perf.
 Any ideas on how to overcome this?

How are you matching sessions with devices? It should just be a matter of 
running fio on the right devices. If you run:

iscsiadm -m session -P 3

you can see how the sdXs match up with sessions/connections. If you run fio to 
a /dev/sdX from each session, you should be seeing IO to all 3 sessions.
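
For illustration, the target-to-sdX pairing can be pulled out of that output with a little awk. The here-doc below is an assumed, trimmed sample of the -P 3 format; on a live system feed the real iscsiadm output through the same awk instead:

```shell
# Print "target sdX" pairs from (sample) `iscsiadm -m session -P 3` output.
awk '/^Target:/           { tgt = $2 }
     /Attached scsi disk/ { print tgt, $4 }' <<'EOF'
Target: iqn.2014-08.example:disk1
	Current Portal: 10.0.0.1:3260,1
		Attached scsi disk sdb	State: running
Target: iqn.2014-08.example:disk2
	Current Portal: 10.0.0.2:3260,1
		Attached scsi disk sdc	State: running
EOF
```

Running one fio instance per printed sdX should then put IO on every session.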




 
 Thanks!
 
 
 Sent from my iPhone
 
 On Aug 26, 2014, at 12:53 PM, Mark Lehrer m...@knm.org wrote:
 
 On Tue, 26 Aug 2014 08:58:46 -0400 Alvin Starr al...@iplink.net wrote:
 
 I am trying to achieve 10Gbps in my single initiator/single target
 env. (open-iscsi and IET)
 
 On a semi-related note, are there any good guides out there to
 tuning Linux for maximum single-socket performance?  On my 40 gigabit
 
 You are likely getting hit by the bandwidth-delay product.
 Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
 and http://www.kehlet.cx/articles/99.html
 
 Thanks that helped get my netcat transfer up over 500MB/sec using IPoIB. 
 Unfortunately that is still only about 10% of the available bandwidth.
 
 I'll keep on tweaking and see how far I can take it.
 
 Thanks,
 Mark
 


Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Learner Study
Hi Mike,

Thanks for suggestions

I think you meant,

echo 1 > /sys/block/sdX/device/delete

I don't see /sys/block/sdX/device/remove in my setup.

How do the following fio options look?

[default]
rw=read
size=4g
bs=1m
ioengine=libaio
direct=1
numjobs=1
filename=/dev/sda
runtime=360
iodepth=256
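
One note on that job file: with numjobs=1 and a single filename it only exercises one device, i.e. one session. To load all three sessions at once, one job per device in a single file is one possible shape (a sketch; the sdX names are placeholders to be matched against the sessions via iscsiadm -m session -P 3 first):

```ini
# Hypothetical multi-path fio job file -- one job per iSCSI session's device.
[global]
rw=read
bs=1m
size=4g
ioengine=libaio
direct=1
iodepth=256
runtime=360

[path-sda]
filename=/dev/sda

[path-sdb]
filename=/dev/sdb

[path-sdc]
filename=/dev/sdc
```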

Thanks for your time!


On Tue, Aug 26, 2014 at 4:49 PM, Michael Christie micha...@cs.wisc.edu wrote:

 On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:

 Another related observation and some questions;

 I am using open iscsi on init with IET on trgt over a single 10gbps link

 There are three ip aliases on each side

 I have 3 ramdisks exported by IET to init

 I do  iscsi login 3 times, once using each underlying ip address and notice 
 that each iscsi session sees all 3 disks.

 Is it possible to restrict such that each init only sees one separate disk?


 There is no iscsi initiator or target setting for this. The default is to 
 show all paths (each /dev/sdx is a path to the same device).

 You would have to manually delete some paths by doing

 echo 1 > /sys/block/sdX/device/remove

 When I run fio on each mounted disk, I see that only two underlying tcp 
 sessions are being used - that limits the perf.
 Any ideas on how to overcome this?

 How are you matching sessions with devices? It should just be a matter of 
 running fio on the right devices. If you run:

 iscsiadm -m session -P 3

 you can see how the sdXs match up with sessions/connections. If you run fio 
 to a /dev/sdX from each session, you should be seeing IO to all 3 sessions.





 Thanks!


 Sent from my iPhone

 On Aug 26, 2014, at 12:53 PM, Mark Lehrer m...@knm.org wrote:

 On Tue, 26 Aug 2014 08:58:46 -0400 Alvin Starr al...@iplink.net wrote:

 I am trying to achieve 10Gbps in my single initiator/single target
 env. (open-iscsi and IET)

 On a semi-related note, are there any good guides out there to
 tuning Linux for maximum single-socket performance?  On my 40 gigabit

 You are likely getting hit by the bandwidth-delay product.
 Take a look at http://en.wikipedia.org/wiki/Bandwidth-delay_product
 and http://www.kehlet.cx/articles/99.html

 Thanks that helped get my netcat transfer up over 500MB/sec using IPoIB. 
 Unfortunately that is still only about 10% of the available bandwidth.

 I'll keep on tweaking and see how far I can take it.

 Thanks,
 Mark



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Michael Christie

On Aug 26, 2014, at 6:49 PM, Michael Christie micha...@cs.wisc.edu wrote:

 
 On Aug 26, 2014, at 3:11 PM, Learner learner.st...@gmail.com wrote:
 
 Another related observation and some questions;
 
 I am using open iscsi on init with IET on trgt over a single 10gbps link
 
 There are three ip aliases on each side
 
 I have 3 ramdisks exported by IET to init
 
 I do  iscsi login 3 times, once using each underlying ip address and notice 
 that each iscsi session sees all 3 disks.
 
 Is it possible to restrict such that each init only sees one separate disk?
 
 
 There is no iscsi initiator or target setting for this. The default is to 
 show all paths (each /dev/sdx is a path to the same device).
 
 You would have to manually delete some paths by doing
 
 echo 1 > /sys/block/sdX/device/remove
 
 When I run fio on each mounted disk, I see that only two underlying tcp 
 sessions are being used - that limits the perf.
 Any ideas on how to overcome this?
 
 How are you matching sessions with devices? It should just be a matter of 
 running fio on the right devices. If you run:
 
 iscsiadm -m session -P 3
 
 you can see how the sdXs match up with sessions/connections. If you run fio 
 to a /dev/sdX from each session, you should be seeing IO to all 3 sessions.
 

How are you determining if a session is being used or not? Are you running the 
iscsiadm -m session --stats command, watching with wireshark/tcpdump or 
something else?

If you have all three IPs on the same subnet, then it is going to be a little 
more complicated than what I described above.
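The session-to-device matching step described above can be scripted. A minimal sketch, parsing assumed sample output (on a real setup the text would come from running `iscsiadm -m session -P 3` as root; the IQN, portals, and disk names below are placeholders):

```shell
# Assumed sample of "iscsiadm -m session -P 3" output (abridged):
sample_output='Target: iqn.2014-08.test:ram1 (non-flash)
    Current Portal: 10.0.0.1:3260,1
        Attached scsi disk sdb  State: running
Target: iqn.2014-08.test:ram2 (non-flash)
    Current Portal: 10.0.0.2:3260,1
        Attached scsi disk sdc  State: running'

# Pair each portal with the sdX attached through it, so fio can be
# pointed at exactly one device per session.
echo "$sample_output" | awk '
/Current Portal:/    { portal = $3 }
/Attached scsi disk/ { print portal, "->", "/dev/" $4 }'
```

This prints one `portal -> /dev/sdX` line per session; running one fio job against each listed device should exercise every TCP connection.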



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-26 Thread Learner
I am monitoring with netstat -a, looking at the Send-Q and Recv-Q columns for the 
three iSCSI/TCP sessions.

Also checked with tcpdump.

Thanks!

Sent from my iPhone

On Aug 26, 2014, at 9:46 PM, Michael Christie micha...@cs.wisc.edu wrote:

 
 [...]
 How are you determining if a session is being used or not? Are you running 
 the iscsiadm -m session --stats command, watching with wireshark/tcpdump or 
 something else?
 
 If you have all three IPs on the same subnet, then it is going to be a little 
 more complicated than what I described above.
 


Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-25 Thread Donald Williams
I find that raising some of the default Linux network parameters helps with
throughput.


Edit /etc/sysctl.conf, then apply the settings using #sysctl -p

# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_default = 262144
net.core.rmem_default = 262144
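As context for why 16 MB maximums are a sensible ceiling on a 10 Gbps link: TCP throughput is capped by buffer size divided by round-trip time, so the buffer must at least cover the bandwidth-delay product. A quick check (the 1 ms RTT is an assumed LAN figure, not from the thread):

```shell
# Bandwidth-delay product: the minimum buffer needed to keep a
# 10 Gbps pipe full at the assumed round-trip time.
link_bits_per_sec=10000000000   # assumed 10 Gbps link
rtt_ms=1                        # assumed LAN round-trip time
bdp_bytes=$(( link_bits_per_sec / 8 * rtt_ms / 1000 ))
echo "bandwidth-delay product: $bdp_bytes bytes"
echo "16777216-byte max buffer = $(( 16777216 / bdp_bytes ))x the BDP"
```

So a 16 MB maximum leaves generous headroom for higher RTTs or retransmission backlogs; the kernel only grows buffers that far when needed.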

I also find that increasing the disk read-ahead really helps with
sequential read loads.

blockdev --setra X <device name>

 i.e. #blockdev --setra 4096 /dev/sda or /dev/mapper/mpath1


Also, some small tweaks to iscsid.conf can yield some improvements.

#/etc/iscsi/iscsid.conf

node.session.cmds_max = 1024  --- Default is 128
node.session.queue_depth = 128  --- Default is 32
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072 --- try 64K-512K
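These settings can also be pushed into an already-discovered node record with `iscsiadm -m node -o update`, rather than editing iscsid.conf and re-discovering. A dry-run sketch (the target IQN and portal are assumed placeholders; drop the leading `echo` to apply for real, as root):

```shell
# Assumed placeholders -- substitute your own target record:
target="iqn.2014-08.example:ram0"
portal="192.168.1.10:3260"

# Build one "iscsiadm -m node -o update" command per setting and
# print it (dry run). Remove the echo to execute.
for setting in \
    "node.session.cmds_max=1024" \
    "node.session.queue_depth=128" \
    "node.conn[0].iscsi.MaxRecvDataSegmentLength=131072"
do
    echo iscsiadm -m node -T "$target" -p "$portal" \
        -o update -n "${setting%%=*}" -v "${setting#*=}"
done
```

Updates made this way take effect on the next login for that node record.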





On Mon, Aug 25, 2014 at 2:58 PM, Mark Lehrer m...@knm.org wrote:

 I am trying to achieve 10 Gbps in my single initiator/single target
 env. (open-iscsi and IET)


 On a semi-related note, are there any good guides out there to tuning
 Linux for maximum single-socket performance?  On my 40 gigabit setup, I
 seem to hit a wall around 3 gigabits when doing a single TCP socket.  To go
 far above that I need to do multipath, initiator-side RAID, or RDMA.

 Thanks,
 Mark






Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-25 Thread Mike Christie
On 08/25/2014 04:40 PM, Mark Lehrer wrote:
 On Mon, 25 Aug 2014 15:48:02 -0500
  Mike Christie micha...@cs.wisc.edu wrote:
 On 08/25/2014 03:31 PM, Donald Williams wrote:
 On a semi-related note, are there any good guides out there to
 tuning  Linux for maximum single-socket performance?

 What kernel are you using? Are you doing IO to one LU or multiple?
 
 Single socket from one machine to another; my current test platform is
 using either 3.13 or 3.15.
 
 I guess the main question I'm trying to answer is:  is it reasonable to
 expect to get 2GB/sec over a single TCP socket, or should I start

Were you using 10 Gb or 40 Gb? 10, right? And you meant 2 Gb (gigabit) above
then, right? If so, then no. I can get around 5 or 6 Gb for writes using
the defaults and scst and ramdisks for the LUs.

The only thing I have had to do in some recent kernels is turn
tcp_autocorking off. I am working on a patch for that now.

How are you running fio?
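For reference, one way to drive all the paths at once (as in the three-ramdisk setup discussed elsewhere in the thread) is a fio job file with one 256K sequential-read job per session's device. A sketch; the device names sdb/sdc/sdd are assumptions, so verify the real mapping with `iscsiadm -m session -P 3` first:

```shell
# Write a fio job file: global 256K sequential-read parameters plus
# one job per path. Device names below are assumed placeholders.
cat > three-paths.fio <<'EOF'
[global]
rw=read
bs=256k
ioengine=libaio
iodepth=32
direct=1
runtime=30
time_based

[sess1]
filename=/dev/sdb

[sess2]
filename=/dev/sdc

[sess3]
filename=/dev/sdd
EOF

# Dry run: print the command rather than touching devices.
echo fio three-paths.fio   # drop the echo to run for real
```

Because fio runs all jobs in a file concurrently by default, this pushes IO down all three sessions at once.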



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-23 Thread Redwood Hyd
Thanks Mike - that helped.

On Saturday, August 23, 2014 2:41:01 AM UTC+5:30, Mike Christie wrote:


 [...]





Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-22 Thread Michael Christie

On Aug 22, 2014, at 12:07 PM, Redwood Hyd redwood...@gmail.com wrote:

 Hi All,
 I am trying to achieve 10 Gbps in my single initiator/single target env. 
 (open-iscsi and IET)
 
 I exported 3 ramdisks via 3 different IP aliases to the initiator, did three 
 iscsi logins and 3 mounts, and then ran 3 fio jobs in parallel (256K block 
 size each).
 
 Question 1) Is the above a real use case, where the same iscsi initiator does 3 
 iscsi logins to the same target (via different IP addresses)? Any pros/cons 
 with this?

This seems normal.

 
 Question 2) What are the other best ways to create parallel TCP flows 
 (because it seems open-iscsi doesn't have MC/S support)?

Multiple sessions to different portals then use dm-multipath over all those 
paths/sessions to the LU.

 Question 3) In this scenario can I use dm-multipath? Can someone suggest the 
 most common way so that at the TCP level I get multiple flows?
 

For the setup you described above: when you run

/sbin/multipath
/sbin/multipath -ll

Do you see each device having 3 paths? Did you set it up to do round robin for 
dm multipath path selection? If so, each path is going to be a different tcp 
socket connection which the iscsi initiator and dm-multipath will use to send 
IO on.

At my last job, fusion-io/sandisk, we sold a high performance target, and to 
get the highest throughput when using linux we had to create extra 
sessions/connections to avoid some bottlenecks in the linux block/scsi layer.

Above you would have a session to each target portal/ip. We would set 
node.session.nr_sessions in iscsid.conf to greater than one so each portal 
would have nr_sessions sessions/connections. When you run iscsiadm -m node -T 
target -p ip -l, iscsiadm would then create nr_sessions sessions to that portal. 
iscsiadm -m session would show the extra sessions when logged in, and multipath 
-ll should show the extra paths.

You can also just do

iscsiadm -m session -R SID -o new

to dynamically add another session/connection.
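The session arithmetic above can be sketched: with N portals and node.session.nr_sessions = M, multipath -ll should list N x M paths per LU. The portal IPs and target IQN below are assumed examples, and the login commands are printed rather than executed:

```shell
# Assumed example topology: three portals, two sessions per portal.
portals="10.0.0.1 10.0.0.2 10.0.0.3"
nr_sessions=2    # node.session.nr_sessions in /etc/iscsi/iscsid.conf

# Paths multipath -ll should report per LU = portals x nr_sessions.
n_portals=$(echo $portals | wc -w)
echo "expected paths per LU: $(( n_portals * nr_sessions ))"

# Login commands this implies (dry run; drop the echo to execute):
for p in $portals; do
    echo iscsiadm -m node -T iqn.2014-08.example:ram0 -p "$p" -l
done
```

If multipath -ll shows fewer paths than expected, a session failed to log in or the extra sessions were not created.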
