Re: FreeBSD iscsi target

2014-07-06 Thread Edward Tomasz Napierała
On 0703T1615, Craig Rodrigues wrote:
 On Tue, Jul 1, 2014 at 2:12 AM, Edward Tomasz Napierała tr...@freebsd.org
 wrote:
 
  In 10-STABLE there is a way to control access based on initiator
  name and IP address.
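
For reference, the initiator-name/address-based access control mentioned above is configured through /etc/ctl.conf; the following is a minimal sketch (all names, addresses, and paths are made-up examples; see ctl.conf(5) for the authoritative syntax):

```
# Hypothetical /etc/ctl.conf sketch: restrict a target to one
# initiator name and one initiator address (example values only).
auth-group ag0 {
        initiator-name "iqn.1994-09.org.example:initiator0"
        initiator-portal "192.0.2.10"
}

portal-group pg0 {
        listen 0.0.0.0
}

target iqn.2014-07.org.example:target0 {
        auth-group ag0
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/iscsi0
        }
}
```

After editing, the configuration can be reloaded with `service ctld reload`.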
 
 
 Edward,
 
 Out of curiosity, what kinds of interop testing do you do when you
 implement the iSCSI code in FreeBSD?

As for the target, I wrote a script to test it against both old and new
FreeBSD initiators, the Linux initiator (Open-iSCSI), and the Solaris one;
you can find it at tools/regression/iscsi/.  I also did manual testing
with Windows XP and Windows Vista.  I don't remember if I actually
succeeded in doing any testing with ESX (trying to run ESX under Fusion
is not such a good idea, it turns out), but I got a third-party report
that it worked correctly.

As for the initiator, I did manual testing against istgt, LIO (Linux)
and COMSTAR (Solaris).

 I work on FreeNAS at iXsystems, and we have
 found
 that iSCSI is a complex protocol,

No kidding :-)

 and there are interop issues, especially with VMware ESX.
 Luckily, I see that Alexander Motin has been working with you to commit
 fixes to the iSCSI code, which helps.
 
 I've rolled an experimental FreeNAS image based on FreeBSD 10 at svn
 revision r268201 if you want to give it a try:
 
 http://download.freenas.org/nightlies/10.0.0/ALPHA/20140703/

For now I'm swamped with work on autofs, but I definitely want to redo
all the testing before 10.1; the last time I did it was just before 10.0.

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org

Re: FreeBSD iscsi target

2014-07-06 Thread Craig Rodrigues
On Sun, Jul 6, 2014 at 12:08 PM, Edward Tomasz Napierała tr...@freebsd.org
wrote:


 As for the target, I wrote a script to test it against both old and new
 FreeBSD initiators, the Linux initiator (Open-iSCSI), and the Solaris
 one; you can find it at tools/regression/iscsi/. [...]

 For now I'm swamped with work on autofs, but I definitely want to redo
 all the testing before 10.1; the last time I did it was just before 10.0.


Good stuff!  When you have more time, I highly recommend that you
collaborate with iXsystems (contact d...@ixsystems.com), since they can
help you with interop testing against VMware ESX and Windows Server
environments.  It is a lot of work to set up those environments and do
the testing, but they are what is most heavily used in real-world data
centers, and a number of customers are interested in using
FreeNAS/TrueNAS (FreeBSD) as backend storage for those operating systems.

--
Craig

Re: FreeBSD iscsi target

2014-07-04 Thread Slawa Olhovchenkov
On Thu, Jul 03, 2014 at 08:39:42PM -0700, Kevin Oberman wrote:

 
  In real world Reality is quite different than it actually is.
  [...]
 
 Yep. It is really a crappy LAGG (fixed three-tuple hash... yuck!) and is
 really nothing but 4 10G Ethernet ports using a 40G PHY in the 4x10G
 form.
 [...]
 But I'm pretty sure that there is no way that this is legitimate 40G
 Ethernet.

802.3ba describes only the endpoints of the Ethernet link.
ASICs and the internal implementation details of NICs, switches, and
fabrics are out of the standard's scope.
The bottleneck can be at any point in the packet path.
The first netmap papers demonstrated that a NIC could not saturate 10G
with a single stream of 64-byte packets -- multiple transmit rings were
needed.

I think the general rule applies: a single-flow transfer can hit a
performance limitation.
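
The packet-rate point above is easy to check arithmetically: at the 64-byte minimum frame size, a 10G link carries about 14.88 Mpps (each frame also occupies 20 bytes of preamble and inter-frame gap on the wire), so a single ring or flow can become pps-limited well before it is bandwidth-limited. A minimal sketch:

```python
def line_rate_pps(link_bps: float, frame_bytes: int) -> float:
    """Maximum Ethernet packet rate for a given frame size.

    Each frame on the wire also carries 8 bytes of preamble/SFD and a
    12-byte inter-frame gap, i.e. 20 bytes of per-frame overhead.
    """
    per_frame_bits = (frame_bytes + 8 + 12) * 8
    return link_bps / per_frame_bits

# 64-byte frames on 10 Gbit/s: ~14.88 Mpps
print(round(line_rate_pps(10e9, 64)))  # -> 14880952
```

This is the 14.88 Mpps figure usually quoted for 10 GbE line rate at minimum frame size.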


Re: FreeBSD iscsi target

2014-07-04 Thread Luigi Rizzo
On Fri, Jul 4, 2014 at 12:16 PM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:

 [...]
 
 802.3ba describes only the endpoints of the Ethernet link; ASICs and
 the internal implementation details of NICs, switches, and fabrics are
 out of the standard's scope.  The first netmap papers demonstrated that
 a NIC could not saturate 10G with a single stream of 64-byte packets --
 multiple transmit rings were needed.


That was actually just a configuration issue which has since been
resolved. The 82599 can do 14.88 Mpps on a single ring
(and is the only 10G NIC I have encountered that can do so).
Besides, performance with short packets has nothing to do with the case
you were discussing, namely throughput for a single large flow.


 I think the general rule applies: a single-flow transfer can hit a
 performance limitation.


This is neither a useful rule nor is it restricted to a single flow.

Everything can underperform depending on the hw/sw configuration, but it
does not necessarily have to.

cheers
luigi

Re: FreeBSD iscsi target

2014-07-04 Thread Slawa Olhovchenkov
On Fri, Jul 04, 2014 at 12:25:35PM +0200, Luigi Rizzo wrote:

 On Fri, Jul 4, 2014 at 12:16 PM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
 
  [...]
 
 That was actually just a configuration issue which has since been
 resolved. The 82599 can do 14.88 Mpps on a single ring
 (and is the only 10G NIC I have encountered that can do so).

Thanks for the clarification.

 Besides, performance with short packets has nothing to do with the case
 you were discussing, namely throughput for a single large flow.

This was only an illustration of a hardware limitation.
Performance may be limited not only by bandwidth, but also by
interrupts/pps (per flow).

  I think the general rule applies: a single-flow transfer can hit a
  performance limitation.
 
 
 This is neither a useful rule nor is it restricted to a single flow.
 
 Everything can underperform depending on the hw/sw configuration, but
 it does not necessarily have to.

Yes. And benchmarking against an ideal hw/sw configuration and
environment is a bad idea.



Re: FreeBSD iscsi target

2014-07-03 Thread Nikolay Denev
On Thu, Jul 3, 2014 at 12:06 AM, Kevin Oberman rkober...@gmail.com wrote:
 [...]
 No, 40G Ethernet is a single channel from the interface perspective. What
 may be confusing you is that they may use lanes which, for 40G, are
 10.3125G. But, unlike the case with Etherchannel, these lanes are hidden
 from the MAC. The interface deals with a single stream and parcels it out
 over the 10G (or 25G) lanes. All 100G optical links use multiple lanes
 (4x25G or 10x10G), but 40G may use either a single 40G lane for distances
 of up to 2km or 4x10G for longer runs.

 Since, in most cases, 40G is used within a data center or to connect to
 wave gear for DWDM transmission over very long distances, most runs are
 under 2km, so a single 40G lane may be used. When 4 lanes are used, a
 ribbon cable is required to assure that all optical or copper paths are
 exactly the same length. Since the PMD is designed to know about and use
 these lanes for a single channel, the issue of packet re-ordering is not
 present and the protocol layers above the physical are unaware of how many
 lanes are used.

 Wikipedia has a fairly good discussion under the unfortunate title of 100
 Gigabit Ethernet https://en.wikipedia.org/wiki/100_Gigabit_Ethernet.
 Regardless of the title, the article covers both 40 and 100 Gigabit
 specifications, as both were specified in the same standard, 802.3ba.

 --
 R. Kevin Oberman, Network Engineer, Retired
 E-mail: rkober...@gmail.com
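
The reason the lanes described above cannot reorder packets can be illustrated: the PCS stripes blocks across the lanes in a fixed round-robin order, and the receiver reassembles them in that same order, so the MAC sees one ordered stream at the full aggregate rate. A simplified sketch (ignoring 802.3ba details such as 66-bit encoding, alignment markers, and skew compensation):

```python
LANES = 4  # e.g. 4 x 10G lanes under one 40G MAC

def stripe(blocks):
    """Distribute blocks round-robin across the lanes (transmit side)."""
    lanes = [[] for _ in range(LANES)]
    for i, block in enumerate(blocks):
        lanes[i % LANES].append(block)
    return lanes

def reassemble(lanes):
    """Rebuild the original stream in fixed lane order (receive side).

    Assumes the block count is a multiple of LANES; the real PCS keeps
    lanes filled continuously (data or idle blocks), so this holds.
    """
    total = sum(len(lane) for lane in lanes)
    return [lanes[i % LANES][i // LANES] for i in range(total)]

stream = list(range(16))
assert reassemble(stripe(stream)) == stream  # order is preserved
```

Because the striping order is fixed in hardware below the MAC, no per-flow hashing is involved and a single flow still uses all lanes.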

I found this white paper useful in understanding how this works:
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-726674.pdf

--Nikolay


Re: FreeBSD iscsi target

2014-07-03 Thread Slawa Olhovchenkov
On Thu, Jul 03, 2014 at 09:31:45AM +0100, Nikolay Denev wrote:

 [...]
 I found this white paper useful in understanding how this works :
 http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-726674.pdf

In the real world, reality is quite different than it actually is.
http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-696669.html

See "Packet Path Theory of Operation, Ingress Mode".



Re: FreeBSD iscsi target

2014-07-03 Thread Nikolay Denev
On Thu, Jul 3, 2014 at 10:13 AM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
 [...]
 In real world Reality is quite different than it actually is.
 http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-696669.html

 See Packet Path Theory of Operation. Ingress Mode.


Interesting; however, this seems like an implementation-specific detail,
not a limitation of native 40 Gbit Ethernet.
Still, it's something one must be aware of (especially when dealing with
Cisco gear :) ).

I wonder why they are not doing something like this:
http://blog.ipspace.net/2011/04/brocade-vcs-fabric-has-almost-perfect.html

--Nikolay


Re: FreeBSD iscsi target

2014-07-03 Thread Slawa Olhovchenkov
On Thu, Jul 03, 2014 at 10:35:55AM +0100, Nikolay Denev wrote:

  I found this white paper useful in understanding how this works :
  http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-726674.pdf
 
  In real world Reality is quite different than it actually is.
  http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-696669.html
 
  See Packet Path Theory of Operation. Ingress Mode.
 
 
  Interesting; however, this seems like an implementation-specific detail,
  not a limitation of native 40 Gbit Ethernet.

I saw some performance tests on Solaris with a 40G link.
In those tests performance was limited to about 10 Gbit per flow.
Maybe I can find the links to those tests again.

Some NIC implementation-specific details may also limit per-flow
performance.

 Still, it's something that one must be aware of (esp when dealing with
 Cisco gear :) )
 
 I wonder why they are not doing something like this :
 http://blog.ipspace.net/2011/04/brocade-vcs-fabric-has-almost-perfect.html
 
 --Nikolay


Re: FreeBSD iscsi target

2014-07-03 Thread Adrian Chadd
Which NIC?


-a


On 3 July 2014 03:29, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
 On Thu, Jul 03, 2014 at 10:35:55AM +0100, Nikolay Denev wrote:
  [...]
 
  I saw some performance tests on Solaris with a 40G link.
  In those tests performance was limited to about 10 Gbit per flow.
 
  Some NIC implementation-specific details may also limit per-flow
  performance.


Re: FreeBSD iscsi target

2014-07-03 Thread Slawa Olhovchenkov
On Thu, Jul 03, 2014 at 10:28:19AM -0700, Adrian Chadd wrote:

 Which NIC?

I can't find those forum posts again (the last time I found them was a
year ago).  Maybe this one: http://hardforum.com/showthread.php?t=1662769
In that case it was a Mellanox QDR ConnectX-2 InfiniBand card.


 On 3 July 2014 03:29, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
  [...]


Re: FreeBSD iscsi target

2014-07-03 Thread Craig Rodrigues
On Tue, Jul 1, 2014 at 2:12 AM, Edward Tomasz Napierała tr...@freebsd.org
wrote:

 In 10-STABLE there is a way to control access based on initiator
 name and IP address.


Edward,

Out of curiosity, what kinds of interop testing do you do when you
implement the iSCSI code in FreeBSD?  I work on FreeNAS at iXsystems, and
we have found that iSCSI is a complex protocol, and there are interop
issues, especially with VMware ESX.
Luckily, I see that Alexander Motin has been working with you to commit
fixes to the iSCSI code, which helps.

I've rolled an experimental FreeNAS image based on FreeBSD 10 at svn
revision r268201 if you want to give it a try:

http://download.freenas.org/nightlies/10.0.0/ALPHA/20140703/

--
Craig

Re: FreeBSD iscsi target

2014-07-03 Thread Kevin Oberman
On Thu, Jul 3, 2014 at 2:13 AM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:

  [...]

 In real world Reality is quite different than it actually is.

 http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/white_paper_c11-696669.html

 See Packet Path Theory of Operation. Ingress Mode.


Yep. It is really a crappy LAGG (fixed three-tuple hash... yuck!) and is
really nothing but 4 10G Ethernet ports using a 40G PHY in the 4x10G form.

Note that they don't make any claim of 802.3ba compliance. It only states
that 40 Gigabit Ethernet is now part of the IEEE 802.3ba standard. So it
is, but this device almost certainly predates the completion of the
standard, to get a product out for which there was great demand. It's a
data center product, and for the typical case of large numbers of small
flows it should do the trick. It probably does not interoperate with true
802.3ba hardware, either.

My boss at the time I retired last November was on the committee that wrote
802.3ba. He would be a good authority on whether the standard has any vague
wording that would allow this, but he retired 5 months after I did and I
have no contact information for him. But I'm pretty sure that there is no
way that this is legitimate 40G Ethernet.
-- 
R. Kevin Oberman, Network Engineer, Retired
E-mail: rkober...@gmail.com

Re: FreeBSD iscsi target

2014-07-02 Thread Slawa Olhovchenkov
On Tue, Jul 01, 2014 at 10:43:08PM -0700, Kevin Oberman wrote:

 On Tue, Jul 1, 2014 at 4:13 PM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
 
  On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala wrote:
 
   Hi.  I've replied in private, but just for the record:
  
   On 0627T0927, Sreenivasa Honnur wrote:
Does the FreeBSD iSCSI target support:
1. ACL (access control lists)
  
   In 10-STABLE there is a way to control access based on initiator
   name and IP address.
  
2. iSNS
  
   No; it's one of the iSCSI features that seem to only be used
   for marketing purposes :-)
  
3. Multiple connections per session
  
   No; see above.
 
  I think this would help with 40G links.
 
 
 I assume that you are looking at transfer of large amounts of data over 40G
 links. Assuming that this is the case, yes, multiple connections per session

Yes, this is the case. As far as I know, a single transfer over a 40G
link is limited to 10G.


Re: FreeBSD iscsi target

2014-07-02 Thread Navdeep Parhar
On Wed, Jul 02, 2014 at 03:26:09PM +0400, Slawa Olhovchenkov wrote:
 On Tue, Jul 01, 2014 at 10:43:08PM -0700, Kevin Oberman wrote:
 
  On Tue, Jul 1, 2014 at 4:13 PM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
  
   On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala wrote:
  
Hi.  I've replied in private, but just for the record:
   
On 0627T0927, Sreenivasa Honnur wrote:
 Does the FreeBSD iSCSI target support:
 1. ACL (access control lists)
   
In 10-STABLE there is a way to control access based on initiator
name and IP address.
   
 2. iSNS
   
No; it's one of the iSCSI features that seem to only be used
for marketing purposes :-)
   
 3. Multiple connections per session
   
No; see above.
  
   I think this would help with 40G links.
  
  
  I assume that you are looking at transfer of large amounts of data over 40G
  links. Assuming that this is the case, yes, multiple connections per session
 
 Yes, this is the case. As far as I know, a single transfer over a 40G
 link is limited to 10G.

This is not correct.  A 40Gb link does not limit a single transfer to
10G.  For example, on FreeBSD all common bandwidth benchmarks reach
40GbE line rate with a single TCP connection at mtu 1500.  If a single
transfer were limited to 10G you'd need 4 connections to get there.

The physical signalling is over four lanes so it's easy to split a 40G
link into four separate 10G links.  But when running as a 40GbE (this is
the usual case) the hardware will combine all the lanes into a single
40G data stream, and you get to use all of the bandwidth.

Regards,
Navdeep


Re: FreeBSD iscsi target

2014-07-02 Thread Kevin Oberman
On Wed, Jul 2, 2014 at 4:26 AM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:

 On Tue, Jul 01, 2014 at 10:43:08PM -0700, Kevin Oberman wrote:

  On Tue, Jul 1, 2014 at 4:13 PM, Slawa Olhovchenkov s...@zxy.spb.ru
 wrote:
 
   On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala
 wrote:
  
Hi.  I've replied in private, but just for the record:
   
On 0627T0927, Sreenivasa Honnur wrote:
 Does the FreeBSD iSCSI target support:
 1. ACL (access control lists)
   
In 10-STABLE there is a way to control access based on initiator
name and IP address.
   
 2. iSNS
   
No; it's one of the iSCSI features that seem to only be used
for marketing purposes :-)
   
 3. Multiple connections per session
   
No; see above.
  
   I think this would help with 40G links.
  
 
  I assume that you are looking at transfer of large amounts of data over
 40G
 links. Assuming that this is the case, yes, multiple connections per
 session

 Yes, this is the case. As far as I know, a single transfer over a 40G
 link is limited to 10G.

??? No, not at all. Getting 40G performance over TCP is not easy, but there
is no 10G limitation.

I might also suggest looking at Luigi Rizzo's netmap. It is NOT a drop-in
replacement for the TCP stack, but a tool that works with many high-speed
Ethernet devices to allow very efficient bulk data transfers. You will see
lots of discussion of it on net@. It is available for both FreeBSD and
Linux. It has become very popular for this sort of thing, but it does
require software customization. Normal network operations will continue
to use the standard network stack.
-- 
R. Kevin Oberman, Network Engineer, Retired
E-mail: rkober...@gmail.com


Re: FreeBSD iscsi target

2014-07-02 Thread Slawa Olhovchenkov
On Wed, Jul 02, 2014 at 12:51:59PM -0700, Kevin Oberman wrote:

 On Wed, Jul 2, 2014 at 4:26 AM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:
 
  On Tue, Jul 01, 2014 at 10:43:08PM -0700, Kevin Oberman wrote:
 
   On Tue, Jul 1, 2014 at 4:13 PM, Slawa Olhovchenkov s...@zxy.spb.ru
  wrote:
  
On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala
  wrote:
   
 Hi.  I've replied in private, but just for the record:

 On 0627T0927, Sreenivasa Honnur wrote:
  Does the FreeBSD iSCSI target support:
  1. ACL (access control lists)

 In 10-STABLE there is a way to control access based on initiator
 name and IP address.

  2. iSNS

 No; it's one of the iSCSI features that seem to only be used
 for marketing purposes :-)

  3. Multiple connections per session

 No; see above.
   
I think this would help with 40G links.
   
  
   I assume that you are looking at transfer of large amounts of data over
  40G
   links. Assuming that this is the case, yes, multiple connections per
  session
 
  Yes, this is the case. As far as I know, a single transfer over a 40G
  link is limited to 10G.
 
 ??? No, not at all. Getting 40G performance over TCP is not easy, but there
 is no 10G limitation.

As far as I know (I may be wrong), 40G is a bundled 4x10G link.
To prevent packet reordering (when running over different links), all
packets from one session must be routed to the same link.
The same issue applies to Etherchannel.
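To make the Etherchannel analogy concrete: a link aggregation that hashes on a flow tuple pins every packet of one TCP session to a single member link, which is exactly what caps one session at a member's speed. A minimal sketch, assuming a generic tuple hash (not lagg(4)'s or any particular switch's actual algorithm):

```python
# Toy model of per-flow hashing on a 4-member bundle: every packet of
# one session picks the same member link, so a single session can never
# exceed that one link's capacity.
import hashlib

MEMBER_LINKS = 4  # e.g. four 10G members behind one logical port

def pick_link(src_ip: str, dst_ip: str, dst_port: int) -> int:
    """Hash the flow tuple to a member link index (0..MEMBER_LINKS-1)."""
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % MEMBER_LINKS

# Every packet of one iSCSI session (fixed tuple) lands on one link,
# preserving order at the cost of capping the session at 10G.
session_link = pick_link("10.0.0.1", "10.0.0.2", 3260)
print("session pinned to member link", session_link)
```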


Re: FreeBSD iscsi target

2014-07-02 Thread Kevin Oberman
On Wed, Jul 2, 2014 at 1:36 PM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:

 On Wed, Jul 02, 2014 at 12:51:59PM -0700, Kevin Oberman wrote:

  On Wed, Jul 2, 2014 at 4:26 AM, Slawa Olhovchenkov s...@zxy.spb.ru
 wrote:
 
   On Tue, Jul 01, 2014 at 10:43:08PM -0700, Kevin Oberman wrote:
  
On Tue, Jul 1, 2014 at 4:13 PM, Slawa Olhovchenkov s...@zxy.spb.ru
   wrote:
   
 On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala
   wrote:

  Hi.  I've replied in private, but just for the record:
 
  On 0627T0927, Sreenivasa Honnur wrote:
   Does the FreeBSD iSCSI target support:
   1. ACL (access control lists)
 
  In 10-STABLE there is a way to control access based on initiator
  name and IP address.
 
   2. iSNS
 
  No; it's one of the iSCSI features that seem to only be used
  for marketing purposes :-)
 
   3. Multiple connections per session
 
  No; see above.

 I think this would help with 40G links.

   
I assume that you are looking at transfer of large amounts of data
 over
   40G
links. Assuming that this is the case, yes, multiple connections per
   session
  
   Yes, this is the case. As far as I know, a single transfer over a 40G
   link is limited to 10G.
  
  ??? No, not at all. Getting 40G performance over TCP is not easy, but
 there
  is no 10G limitation.

 As I know (may be wrong) 40G is bundled 4x10G link.
 For prevent packet reordering (when run over diferrent link) all
 packets from one sessoin must be routed to same link.
 Same issuse for Etherchannel.


No, 40G Ethernet is a single channel from the interface perspective. What
may be confusing you is that they may use lanes which, for 40G, are
10.3125G. But, unlike the case with Etherchannel, these lanes are hidden
from the MAC. The interface deals with a single stream and parcels it out
over the 10G (or 25G) lanes. All 100G optical links use multiple lanes
(4x25G or 10x10G), but 40G may use either a single 40G lane for distances
of up to 2km or 4x10G for longer runs.

Since, in most cases, 40G is used within a data center or to connect to
wave gear for DWDM transmission over very long distances, most runs are
under 2km, so a single 40G lane may be used. When 4 lanes are used, a
ribbon cable is required to assure that all optical or copper paths are
exactly the same length. Since the PMD is designed to know about and use
these lanes for a single channel, the issue of packet re-ordering is not
present and the protocol layers above the physical are unaware of how many
lanes are used.
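A toy sketch of the lane mechanism described above, under the simplifying assumption that bytes stand in for the 66-bit blocks 802.3ba actually distributes: because the transmitter stripes one serial stream across the lanes in a fixed order and the receiver reads them back in the same order, no reordering can occur.

```python
# Stripe one serial stream across 4 lanes and reassemble it; the fixed
# round-robin order means the reassembled stream is identical, which is
# why the layers above the physical never see the lanes at all.
LANES = 4

def stripe(stream: bytes) -> list[bytes]:
    """Distribute the stream round-robin across LANES lanes."""
    return [stream[i::LANES] for i in range(LANES)]

def reassemble(lanes: list[bytes]) -> bytes:
    """Interleave the lanes back into the original serial stream."""
    out = bytearray()
    for i in range(max(len(lane) for lane in lanes)):
        for lane in lanes:
            if i < len(lane):
                out.append(lane[i])
    return bytes(out)

data = bytes(range(200))
assert reassemble(stripe(data)) == data  # order preserved end to end
```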

Wikipedia has a fairly good discussion under the unfortunate title of 100
Gigabit Ethernet https://en.wikipedia.org/wiki/100_Gigabit_Ethernet.
Regardless of the title, the article covers both 40 and 100 Gigabit
specifications as both were specified on the same standard, 802.3ba.

-- 
R. Kevin Oberman, Network Engineer, Retired
E-mail: rkober...@gmail.com


Re: FreeBSD iscsi target

2014-07-01 Thread Edward Tomasz Napierała
Hi.  I've replied in private, but just for the record:

On 0627T0927, Sreenivasa Honnur wrote:
 Does the FreeBSD iSCSI target support:
 1. ACL (access control lists)

In 10-STABLE there is a way to control access based on initiator
name and IP address.

 2. iSNS 

No; it's one of the iSCSI features that seem to only be used
for marketing purposes :-)

 3. Multiple connections per session

No; see above.

 4. Dynamic Lun allocation/resize

Yes.

 5. Target redirection

It's in Perforce; I'll try to get it into 11-HEAD shortly.
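For the record, the name/IP access control mentioned above is configured through auth-group directives in ctl.conf(5). A hypothetical sketch (the IQNs, network, and backing path below are invented for illustration):

```
# Only this initiator name, connecting from this network, may log in.
auth-group ag0 {
        initiator-name "iqn.1994-09.org.freebsd:client0"
        initiator-portal 10.0.0.0/24
}

target iqn.2014-07.org.example:target0 {
        auth-group ag0

        lun 0 {
                path /dev/zvol/tank/iscsi0
        }
}
```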



Re: FreeBSD iscsi target

2014-07-01 Thread Slawa Olhovchenkov
On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala wrote:

 Hi.  I've replied in private, but just for the record:
 
 On 0627T0927, Sreenivasa Honnur wrote:
  Does the FreeBSD iSCSI target support:
  1. ACL (access control lists)
 
 In 10-STABLE there is a way to control access based on initiator
 name and IP address.
 
  2. iSNS 
 
 No; it's one of the iSCSI features that seem to only be used
 for marketing purposes :-)
 
  3. Multiple connections per session
 
 No; see above.

I think this would help with 40G links.
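One way to see why: on a bundle that hashes flows to member links, extra connections (differing only in source port) can land on different members, while a single connection is pinned to one. A toy sketch assuming a trivial port-pair hash, not any real switch's algorithm:

```python
# With per-flow hashing, one connection uses one member link, but several
# connections can spread across members and use more aggregate bandwidth.
LINKS = 4  # e.g. 4x10G behind one logical 40G port

def link_for(src_port: int, dst_port: int = 3260) -> int:
    """Map a flow to a member link by hashing the port pair."""
    return (src_port ^ dst_port) % LINKS

single = {link_for(40001)}                             # one flow, one link
spread = {link_for(p) for p in (40001, 40002, 40003, 40004)}
assert len(single) == 1 and len(spread) > 1
```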



Re: FreeBSD iscsi target

2014-07-01 Thread Kevin Oberman
On Tue, Jul 1, 2014 at 4:13 PM, Slawa Olhovchenkov s...@zxy.spb.ru wrote:

 On Tue, Jul 01, 2014 at 11:12:52AM +0200, Edward Tomasz Napierala wrote:

  Hi.  I've replied in private, but just for the record:
 
  On 0627T0927, Sreenivasa Honnur wrote:
   Does the FreeBSD iSCSI target support:
   1. ACL (access control lists)
 
  In 10-STABLE there is a way to control access based on initiator
  name and IP address.
 
   2. iSNS
 
  No; it's one of the iSCSI features that seem to only be used
  for marketing purposes :-)
 
   3. Multiple connections per session
 
  No; see above.

 I think this is help for 40G links.


I assume that you are looking at transfer of large amounts of data over 40G
links. Assuming that this is the case, yes, multiple connections per session
can help you. If you have not done so, you should also look at
http://fasterdata.es.net. It is a bit linux-centric these days, but still
provides a lot of information on moving data efficiently over fast links.
It is evolving continually, but it did not discuss iSCSI last I knew.

ESnet has an Nx100G backbone and many users (most notably the LHC at CERN)
who regularly move increasingly large volumes of data, often around the
clock, so making efficient use of available bandwidth is critical. The network
researchers there have put in significant efforts in determining how to
best optimize such operations.
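On the FreeBSD side, the usual first step from that line of work is raising the socket-buffer ceilings so TCP autotuning can fill a long fat pipe; an illustrative /etc/sysctl.conf fragment (the values are examples sized for a large bandwidth-delay product, not recommendations):

```
# Illustrative only -- size these to your path's bandwidth-delay product.
kern.ipc.maxsockbuf=16777216          # ceiling for any socket buffer
net.inet.tcp.sendbuf_max=16777216     # max auto-tuned send buffer
net.inet.tcp.recvbuf_max=16777216     # max auto-tuned receive buffer
```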
-- 
R. Kevin Oberman, Network Engineer, Retired
E-mail: rkober...@gmail.com


FreeBSD iscsi target

2014-06-27 Thread Sreenivasa Honnur
Does the FreeBSD iSCSI target support:
1. ACL (access control lists)
2. iSNS 
3. Multiple connections per session
4. Dynamic Lun allocation/resize
5. Target redirection
