Re: Open-FCoE on linux-scsi

2008-01-31 Thread James Smart



Chris Leech wrote:

In thinking about how FC should be represented, it seems to me that in
order to provide good interfaces at multiple levels of functionality
we have to make sure that we have the right data structures at each
level.  At the highest level there's scsi_cmnd, then there are sequence-
based interfaces that would need some sort of a sequence structure
with a scatter gather list, and at the lowest level interfaces work
directly with FC frames.


I think the only thing that will actually talk frames will be either
an FC MAC, which we haven't seen yet, or an FCOE entity.  Consider the
latter to be the predominant case.


I'd like to talk about how we should go about representing FC frames.
Currently, our libfc code introduces an fc_frame struct but allows the
LLDD to provide an allocation function and control how the fc_frames
are allocated.  The fcoe module uses this capability to map the data
buffer of an fc_frame to that of an sk_buff.  As someone coming from a
networking background, and interested in FCoE which ends up sending
frames via an Ethernet driver, I tend to think this is overly complex
and just want to use sk_buffs directly.


If the predominant user is fcoe, then I think describing the frame in
the context of a sk_buff is fine.
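
To make that comparison concrete, here is a minimal, hypothetical sketch of
what carrying an FC frame directly in an sk_buff could look like. The 24-byte
FC header layout follows FC-FS; the struct and helper names
(fc_frame_hdr_sketch, fcoe_skb_*) and the ethertype constant are illustrative
assumptions, not code from openfc or libfc:

/*
 * Hypothetical sketch: carrying an FC frame directly in an sk_buff.
 * The 24-byte FC frame header layout follows FC-FS; the helper names
 * and the FCoE ethertype constant here are illustrative only.
 */
#include <linux/types.h>
#include <linux/string.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include <asm/byteorder.h>

#define FCOE_ETH_P_SKETCH	0x8906	/* T11 FCoE ethertype */

struct fc_frame_hdr_sketch {
	__u8	fh_r_ctl;	/* routing control */
	__u8	fh_d_id[3];	/* destination port ID */
	__u8	fh_cs_ctl;	/* class of service control */
	__u8	fh_s_id[3];	/* source port ID */
	__u8	fh_type;	/* data structure type (e.g. FCP) */
	__u8	fh_f_ctl[3];	/* frame control */
	__u8	fh_seq_id;	/* sequence ID */
	__u8	fh_df_ctl;	/* data field control */
	__be16	fh_seq_cnt;	/* sequence count */
	__be16	fh_ox_id;	/* originator exchange ID */
	__be16	fh_rx_id;	/* responder exchange ID */
	__be32	fh_parm_offset;	/* relative offset or parameter */
} __attribute__((packed));

/* View the FC header at the current data pointer of an skb. */
static inline struct fc_frame_hdr_sketch *fcoe_skb_fc_hdr(struct sk_buff *skb)
{
	return (struct fc_frame_hdr_sketch *)skb->data;
}

/*
 * Push an Ethernet header in front of an skb that already holds the FC
 * header and payload, then hand it to the net_device.  Real FCoE also
 * inserts an FCoE encapsulation header and trailer (SOF/EOF, CRC); those
 * are omitted to keep the sketch short.
 */
static int fcoe_skb_xmit(struct sk_buff *skb, struct net_device *dev,
			 const u8 *dst_mac)
{
	struct ethhdr *eh;

	eh = (struct ethhdr *)skb_push(skb, ETH_HLEN);
	memcpy(eh->h_dest, dst_mac, ETH_ALEN);
	memcpy(eh->h_source, dev->dev_addr, ETH_ALEN);
	eh->h_proto = htons(FCOE_ETH_P_SKETCH);

	skb->dev = dev;
	skb->protocol = eh->h_proto;
	return dev_queue_xmit(skb);
}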


Would SCSI/FC developers be opposed to dealing with sk_buffs for frame-
level interfaces, or do we need to keep a separate fc_frame structure
around?  I'd argue that skbs do a fine job of representing all sorts
of frame structures, that any device that supports IP over FC has to
deal with skbs in its driver anyway, and that at the frame level FC is
just another network.  But then again, I am biased as skbs seem
friendly and familiar to me as I venture further into the alien
landscape that is scsi.

- Chris


-- james s


Re: Open-FCoE on linux-scsi

2008-01-15 Thread James Smart

Love, Robert W wrote:

The interconnect layer could be split further:
SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.


This is how I see the comparison. ('/' indicates 'or')

You suggest                            Open-FCoE
SCSI-ml                                SCSI-ml
scsi_transport_fc.h                    scsi_transport_fc.h
scsi_transport_fc.c (FC core) / HBA    openfc / HBA
fcoe / HBA                             fcoe / HBA


From what I can see the layering is roughly the same with the main
difference being that we should be using more of (and putting more into)
scsi_transport_fc.h. Also we should make the FCP implementation (openfc)
fit in a bit nicer as scsi_transport_fc.c. We're going to look into
making better use of scsi_transport_fc.h as a first step.


I don't know what the distinction is between scsi_transport_fc.h and
scsi_transport_fc.c. They're one and the same - the fc transport.
One contains the data structures and api between LLD and transport,
the other (the .c) contains the code to implement the api, transport objects
and sysfs handlers.

From my point of view, the fc transport is an assist library for the FC LLDDs.
Currently, it interacts with the midlayer only around some scan and 
block/unblock
functions. Excepting a small helper function used by the LLDD, it does not get
involved in the i/o path.

So my view of the layering for a normal FC driver is:
   SCSI-ml
   LLDD - FC transport
   bus code (e.g. pci)

Right now, the assists provided in the FC transport are:
- Presentation of transport objects into the sysfs tree, and thus sysfs
  attribute handling around those objects. This effectively is the FC
  management interface.
- Remote Port Object mgmt - interaction with the midlayer. Specifically:
  - Manages the SCSI target id bindings for the remote port
  - Knows when the rport is present or not.
On new connectivity:
  Kicks off scsi scans, restarts blocked i/o.
On connectivity loss:
  Insulates the midlayer from temporary disconnects by blocking
the target/luns, and manages the timer for the allowed period of
disconnect.
  Assists in knowing when/how to terminate pending i/o after a
connectivity loss (fast fail, or wait).
  - Provides consistent error codes for i/o path and error handlers via
helpers that are used by LLDD.
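
As a rough illustration of how an LLDD consumes these assists, here is a
hedged sketch. The my_* names are hypothetical; fc_remote_port_add(),
fc_remote_port_delete() and struct fc_rport_identifiers are the existing
scsi_transport_fc API, and the role constant name varies between kernel
versions:

/*
 * Sketch of an LLDD using the existing fc transport rport assists on
 * connectivity changes.  "my_*" names are hypothetical; it assumes the
 * LLDD set dd_fcrport_size in its fc_function_template so that
 * rport->dd_data points at per-rport driver data.
 */
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

struct my_rport_priv {			/* per-rport LLDD data via dd_data */
	u32	login_state;
};

/* Called by the LLDD when it discovers (or re-discovers) a remote port. */
static struct fc_rport *my_port_online(struct Scsi_Host *shost,
				       u64 wwpn, u64 wwnn, u32 port_id)
{
	struct fc_rport_identifiers ids = {
		.port_name = wwpn,
		.node_name = wwnn,
		.port_id   = port_id,
		/* role constant names differ by kernel version */
		.roles     = FC_RPORT_ROLE_FCP_TARGET,
	};
	struct fc_rport *rport;

	/*
	 * The transport re-binds the SCSI target id, kicks off scanning
	 * and unblocks any i/o that was held during a brief disconnect.
	 */
	rport = fc_remote_port_add(shost, 0, &ids);
	if (rport) {
		struct my_rport_priv *priv = rport->dd_data;

		priv->login_state = 1;	/* logged in */
	}
	return rport;
}

/* Called on loss of connectivity; the transport runs dev_loss_tmo. */
static void my_port_offline(struct fc_rport *rport)
{
	fc_remote_port_delete(rport);
}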

Note that the above does not contain the FC login state machine, etc.
We have discussed this in the past. Given the 4 FC LLDDs we had, there was
a wide difference on who did what where. LSI did all login and FC ELS
handling in their firmware. Qlogic did the initiation of the login in the
driver, but the ELS handling in the firmware. Emulex did the ELS handling
in the driver. IBM/zfcp runs a hybrid of login/ELS handling over its pseudo
hba interface. Knowing how much time we spend constantly debugging login/ELS
handling and the fact that we have to interject adapter resource allocation
steps into the state machine, I didn't want to go to a common library until
there was a very clear and similar LLDD.  Well, you can't get much clearer
than a full software-based login/ELS state machine that FCOE needs. It makes
sense to at least try to library-ize the login/ELS handling if possible.

Here's what I have in mind for FCOE layering. Keep in mind that one of the
goals here is to support a lot of different implementations, which may range
from s/w layers on a simple Ethernet packet pusher, to more and more levels
of offload on an FCOE adapter. The goal is to create the s/w layers such that
different LLDDs can pick and choose the layer(s) (or level) they want to
integrate into. At a minimum, they should/must integrate with the base mgmt
objects.

For FC transport, we'd have the following layers or api sections :
   Layer 0: rport and vport objects   (current functionality)
   Layer 1: Port login and ELS handling
   Layer 2: Fabric login, PT2PT login, CT handling, and discovery/RSCN
   Layer 3: FCP I/O Assist
   Layer 4: FC2 - Exchange and Sequence handling
   Layer 5: FCOE encap/decap
   Layer 6: FCOE FLOGI handler

Layer 1 would work with an api to the LLDD based on a send/receive ELS interface
  coupled with a login/logout to address interface. The code within layer 1
  would make calls to layer 0 to instantiate the different objects. If layer 1
  needs to track additional rport data, it should specify dd_data on the
  rport_add call. (Note: all of the LLDDs today have their own node structure
  that is independent from the rport struct. I wish we could kill this, but for
  now, Layer 1 could do the same (but don't name it so similarly to what openfc
  did)).
  You could also specify login types, so that it knows to do FC4-specific login
  steps such as PRLIs for FCP.
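
Purely as a sketch of the shape such an api could take (none of these names
exist anywhere; they only illustrate the send/receive ELS interface coupled
with login/logout-to-address and the dd_data sizing described above):

/*
 * Hypothetical sketch only: one possible shape for the Layer 1
 * (port login / ELS) api described above.  None of these names exist
 * in the kernel; they are purely illustrative.
 */
#include <linux/types.h>

struct fc_layer1_lport;			/* opaque local port handle */

struct fc_layer1_ops {
	/* LLDD-provided: transmit a raw ELS payload to a destination FC_ID */
	int (*els_send)(struct fc_layer1_lport *lp, u32 did,
			const void *els, size_t len);

	/* LLDD-provided: allocate/free any per-login adapter resources */
	int (*login_prep)(struct fc_layer1_lport *lp, u32 did);
	void (*logout_done)(struct fc_layer1_lport *lp, u32 did);
};

struct fc_layer1_template {
	const struct fc_layer1_ops *ops;
	u32	dd_rport_size;		/* extra rport data, via dd_data */
	u32	login_types;		/* e.g. request PRLI for FCP */
};

/* Layer 1 entry points the LLDD would call: */
int fc_layer1_login(struct fc_layer1_lport *lp, u32 did);
int fc_layer1_logout(struct fc_layer1_lport *lp, u32 did);
/* Hand a received ELS frame to Layer 1 for processing. */
void fc_layer1_recv_els(struct fc_layer1_lport *lp, u32 sid,
			const void *els, size_t len);

Layer 1 would instantiate the Layer 0 objects itself, as described above.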

Layer 2 would work with an api to the LLDD based on a send/receive ELS/CT coupled
  with a fabric or pt2pt 

RE: Open-FCoE on linux-scsi

2008-01-14 Thread Love, Robert W
-Original Message-
From: Stefan Richter [mailto:[EMAIL PROTECTED]
Sent: Friday, January 04, 2008 4:10 PM
To: Dev, Vasu
Cc: FUJITA Tomonori; Love, Robert W; [EMAIL PROTECTED]; Zou, Yi; Leech,
Christopher; linux-scsi@vger.kernel.org
Subject: Re: Open-FCoE on linux-scsi

Stefan Richter wrote:
 I.e. you have SCSI command set layer -- SCSI core -- SCSI transport
 layer -- interconnect layer.

The interconnect layer could be split further:
SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.

This is how I see the comparison. ('/' indicates 'or')

You suggest                            Open-FCoE
SCSI-ml                                SCSI-ml
scsi_transport_fc.h                    scsi_transport_fc.h
scsi_transport_fc.c (FC core) / HBA    openfc / HBA
fcoe / HBA                             fcoe / HBA

From what I can see the layering is roughly the same with the main
difference being that we should be using more of (and putting more into)
scsi_transport_fc.h. Also we should make the FCP implementation (openfc)
fit in a bit nicer as scsi_transport_fc.c. We're going to look into
making better use of scsi_transport_fc.h as a first step.

I'm a little confused though; in a prior mail it seemed that you were
clubbing openfc and fcoe together, and at one point Fujita's stack
showed a libfcoe and fcoe fitting directly under scsi_transport_fc. I
think the layering is nicer at this point in the thread, where SCSI only
knows that it's using FC and the SW implementation of FCP knows the
transport. It's closer to my understanding of Open-iSCSI.

Open-iSCSI                Open-FCoE
scsi_transport_iscsi.c    scsi_transport_fc.c
iscsi_tcp.c               fcoe

I'm curious how aware you think scsi_transport_fc.h should be of FCoE?


But this would only really make sense if anybody would implement
additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also
sit
on top of Fibre Channel core.
--
Stefan Richter
-=-==--- ---= --=-=
http://arcgraph.de/sr/


Re: Open-FCoE on linux-scsi

2008-01-05 Thread Christoph Hellwig
On Sat, Jan 05, 2008 at 01:21:28AM +0100, Stefan Richter wrote:
 PS: There is already an RFC 2625 implementation in Linux, but only for
 LSIFC9xx.

There has also been one for Interphase cards, which was removed because
the driver was entirely unmaintained.  qlogic also has/had an
out-of-tree driver.

Now doing IP over FC over Ethernet sounds like a lot of useless fun :)


Re: Open-FCoE on linux-scsi

2008-01-05 Thread Vladislav Bolkhovitin

FUJITA Tomonori wrote:

What's the general opinion on this? Duplicate code vs. more kernel code?
I can see that you're already starting to clean up the code that you
ported. Does that mean the duplicate code isn't an issue to you? When we
fix bugs in the initiator they're not going to make it into your tree
unless you're diligent about watching the list.


It's hard to convince the kernel maintainers to merge something into
mainline that can be implemented in user space. I failed twice
(with two iSCSI target implementations).


Tomonori and the kernel maintainers,

In fact, almost all of the kernel can be done in user space, including
all the drivers, networking, I/O management with the block/SCSI initiator
subsystem, and the disk cache manager. But does that mean the current kernel
is bad and all of the above should be (re)done in user space instead? I
think not. Linux isn't a microkernel for very pragmatic reasons:
simplicity and performance.


1. Simplicity.

For a SCSI target, especially with a hardware target card, data come
from the kernel and are eventually served by the kernel doing the actual I/O
or getting/putting data from/to the cache. Dividing the request processing job
between user and kernel space creates unnecessary interface layer(s) and
effectively makes request processing a distributed job, with all the
complexity and reliability problems that implies. As an example, what will
currently happen in STGT if the user space part suddenly dies? Will the kernel
part gracefully recover from it? How much effort will be needed to
implement that?


Another example is the code duplication mentioned above. Is it good?
What will it bring? Or do you care only about the amount of kernel code
and not about the overall amount of code? If so, you should
(re)read what Linus Torvalds thinks about that:
http://lkml.org/lkml/2007/4/24/364 (I don't consider myself an
authority on this question).


I agree that some of the processing, which can be clearly separated, can
and should be done in user space. A good example of such an approach is
connection negotiation and management the way it's done in
open-iscsi. But I don't agree that this idea should be taken to the
extreme. It might look good, but it's impractical; it will only make
things more complicated and harder to maintain.


2. Performance.

Modern SCSI transports, e.g. InfiniBand, have link latency as low as
1(!) microsecond. For comparison, the inter-thread context switch time
on a modern system is about the same, and syscall time is about 0.1
microseconds. So only ten empty syscalls, or one context switch, add the
same latency as the link. Even 1Gbps Ethernet has less than 100
microseconds of round-trip latency.


You most likely know that the QLogic target driver for SCST allows
commands to be executed either directly from soft IRQ or from the
corresponding thread. There is a steady 5% difference in IOPS between
those modes on 512-byte reads on nullio using a 4Gbps link. So a single
additional inter-kernel-thread context switch costs 5% of IOPS.


Another source of additional latency, unavoidable with the user space
approach, is the data copy to/from the cache. With a fully kernel space
approach, the cache can be used directly, so no extra copy is needed.


So, putting code in user space, you have to accept the extra latency
it adds. Many, if not most, real-life workloads are more or less latency,
not throughput, bound, so you shouldn't be surprised that a single-stream
dd if=/dev/sdX of=/dev/null on the initiator gives disappointingly low
values. Such a benchmark is no less important and practical than all the
multithreaded, latency-insensitive benchmarks which people like running.


You may object that the backstorage's latency is a lot more than 1
microsecond, but that is true only if data are read/written from/to the
actual backstorage media, not from the cache, or even from the backstorage
device's cache. Nothing prevents a target from having 8 or even 64GB of
cache, so even most random accesses could be served by it. This is
especially important for synchronous writes.


Thus, I believe that a partial user space, partial kernel space approach
to building SCSI targets is a move in the wrong direction, because it
brings practically nothing but costs a lot.


Vlad


RE: Open-FCoE on linux-scsi

2008-01-05 Thread FUJITA Tomonori
On Fri, 4 Jan 2008 14:07:28 -0800
Dev, Vasu [EMAIL PROTECTED] wrote:

 
 
  _If_ there will indeed be dedicated FCoE HBAs in the future, the
  following stack could exist in addition to the one above:
 
- SCSI core,
  scsi_transport_fc
- FCoE HBA driver(s)
 
 Agreed. My FCoE initiator design would be something like:
 
 scsi-ml
 fcoe initiator driver
 libfcoe
 fc_transport_class (including fcoe support)
 
 And FCoE HBA LLDs work like:
 
 scsi-ml
 FCoE HBA LLDs (some of them might use libfcoe)
 fc_transport_class (including fcoe support)
 
 
 That's the way that other transport classes do, I think. For me, the
 current code tries to invent another fc class. For example, the code
 newly defines:
 
 struct fc_remote_port {
        struct list_head rp_list;       /* list under fc_virt_fab */
        struct fc_virt_fab *rp_vf;      /* virtual fabric */
        fc_wwn_t        rp_port_wwn;    /* remote port world wide name */
        fc_wwn_t        rp_node_wwn;    /* remote node world wide name */
        fc_fid_t        rp_fid;         /* F_ID for remote_port if known */
        atomic_t        rp_refcnt;      /* reference count */
        u_int           rp_disc_ver;    /* discovery instance */
        u_int           rp_io_limit;    /* limit on outstanding I/Os */
        u_int           rp_io_count;    /* count of outstanding I/Os */
        u_int           rp_fcp_parm;    /* remote FCP service parameters */
        u_int           rp_local_fcp_parm; /* local FCP service parameters */
        void            *rp_client_priv; /* HBA driver private data */
        void            *rp_fcs_priv;   /* FCS driver private data */
        struct sa_event_list *rp_events; /* event list */
        struct sa_hash_link rp_fid_hash_link;
        struct sa_hash_link rp_wwpn_hash_link;

        /*
         * For now, there's just one session per remote port.
         * Eventually, for multipathing, there will be more.
         */
        u_char          rp_sess_ready;  /* session ready to be used */
        struct fc_sess  *rp_sess;       /* session */
        void            *dns_lookup;    /* private dns lookup */
        int             dns_lookup_count; /* number of attempted lookups */
 };

 /*
  * remote ports are created and looked up by WWPN.
  */
 struct fc_remote_port *fc_remote_port_create(struct fc_virt_fab *, fc_wwn_t);
 struct fc_remote_port *fc_remote_port_lookup(struct fc_virt_fab *,
                                              fc_fid_t, fc_wwn_t wwpn);
 struct fc_remote_port *fc_remote_port_lookup_create(struct fc_virt_fab *,
                                                     fc_fid_t,
                                                     fc_wwn_t wwpn,
                                                     fc_wwn_t wwnn);
 
 
 The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
 
 The openfc is software implementation of FC services such as FC login
 and target discovery and it is already using/exploiting  existing fc
 transport class including fc_rport struct. You can see openfc using
 fc_rport in openfc_queuecommand() and using fc transport API
 fc_port_remote_add() for fc_rport.

You just call fc_remote_port_add. I don't think that reinventing the
whole rport management, like the reference counting, counts as exploiting
the existing struct fc_rport and APIs.


 The fcoe module is just a first example of possible openfc transport but
 openfc can be used with other transports or HW HBAs also.
 
 The openfc does provide generic transport interface using fcdev which is
 currently used by FCoE module.
 
 One can certainly implement partly or fully  openfc and fcoe modules in
 FCoE HBA.

As pointed out in other mails, I believe that a similar job has been done
in other transport classes using the scsi transport class infrastructure,
and FCoE needs to follow the existing examples.


Re: Open-FCoE on linux-scsi

2008-01-05 Thread FUJITA Tomonori
On Sat, 05 Jan 2008 00:41:05 +0100
Stefan Richter [EMAIL PROTECTED] wrote:

 Dev, Vasu wrote:
 [FUJITA Tomonori wrote:]
  Agreed. My FCoE initiator design would be something like:
 
  scsi-ml
  fcoe initiator driver
  libfcoe
  fc_transport_class (including fcoe support)
 
  And FCoE HBA LLDs work like:
 
  scsi-ml
  FCoE HBA LLDs (some of them might use libfcoe)
  fc_transport_class (including fcoe support)
 
 Wouldn't it make more sense to think of fc_transport_class as a FCP
 layer, sitting between scsi-ml and the various FC interconnect drivers
 (among them Openfc and maybe more FCoE drivers)?  I.e. you have SCSI
 command set layer -- SCSI core -- SCSI transport layer -- interconnect
 layer.¹

Oops, I should have depicted:

scsi-ml
fc_transport_class (including fcoe support)
FCoE HBA LLDs (some of them might use libfcoe)

As you pointed out, that's the correct layering from the perspective
of SCSI architecture. I put FCoE HBA LLDs over fc_transport_class just
because LLDs directly interact with scsi-ml to perform the main work,
queuecommand/done (as you explained in 1).


 I am not familiar with FCP/ FCoE/ FC-DA et al, but I guess the FCoE
 support in the FCP transport layer should then go to the extent of
 target discovery, login, lifetime management and representation of
 remote ports and so on as far as it pertains to FCP (the SCSI transport
 protocol, FC-4 layer) independently of the interconnect (FC-3...FC-0
 layers).²
 
 [...]
  The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
  
  The openfc is software implementation of FC services such as FC login
  and target discovery and it is already using/exploiting  existing fc
  transport class including fc_rport struct. You can see openfc using
  fc_rport in openfc_queuecommand() and using fc transport API
  fc_port_remote_add() for fc_rport.
 
 Hence, aren't there interconnect independent parts of target discovery
 and login which should be implemented in fc_transport_class?  The
 interconnect dependent parts would then live in LLD methods to be
 provided in struct fc_function_template.

Agreed. Then FCoE helper functions that aren't useful for all the FCoE
LLDs would go into libfcoe, like the iscsi class does (and the sas class
also does, I guess).


 I.e. not only make full use of the API of fc_transport_class, also think
 about changing the API _if_ necessary to become a more useful
 implementation of the interface below FC-4.
 
 ---
 ¹) The transport classes are of course not layers in such a sense that
 they would completely hide SCSI core from interconnect drivers.  They
 don't really have to; they nevertheless live at a higher level of
 abstraction than LLDs and a lower level of abstraction than SCSI core.
 
 (One obvious example that SCSI core is less hidden than it possibly
 could be can be seen by the struct fc_function_template methods having
 struct scsi_target * and struct Scsi_Host * arguments, instead of struct
 fc_xyz * arguments.)
 
 ²) I'm using the term interconnect from the SCSI perspective, not from
 the FC perspective.
 -- 
 Stefan Richter
 -=-==--- ---= --=-=
 http://arcgraph.de/sr/


Re: Open-FCoE on linux-scsi

2008-01-04 Thread Stefan Richter
On 1/3/2008 10:58 PM, Love, Robert W wrote:
[FUJITA Tomonori wrote]
I would add one TODO item, better integration with scsi_transport_fc.
If we have HW FCoE HBAs in the future, we need FCoE support in the fc
transport class (you could use its netlink mechanism for event
notification).
 
 What do you have in mind in particular? Our layers are, 
 
 SCSI
 Openfc
 FCoE
 net_device
 NIC driver
 
 So, it makes sense to me that we fit under scsi_transport_fc. I like our
 layering- we clearly have SCSI on our top edge and net_dev at our bottom
 edge. My initial reaction would be to resist merging openfc and fcoe and
 creating a scsi_transport_fcoe.h interface.

AFAIU the stack should be:

  - SCSI core,
scsi_transport_fc
  - Openfc (an FCoE implementation)
  - net_device
  - NIC driver

_If_ there will indeed be dedicated FCoE HBAs in the future, the
following stack could exist in addition to the one above:

  - SCSI core,
scsi_transport_fc
  - FCoE HBA driver(s)

-- 
Stefan Richter
-=-==--- ---= --=--
http://arcgraph.de/sr/


RE: Open-FCoE on linux-scsi

2008-01-04 Thread FUJITA Tomonori
On Thu, 3 Jan 2008 13:58:29 -0800
Love, Robert W [EMAIL PROTECTED] wrote:

 Talking about stability is a bit premature, I think. The first thing
 to do is finding a design that can be accepted into mainline.
 
 How can we get this started? We've provided our current solution, but
 need feedback to guide us in the right direction. We've received little
 quips about libsa and libcrc and now it looks like we should look at
 what we can move to userspace (see below), but that's all the feedback
 we've got so far. Can you tell us what you think about our current
 architecture? Then we could discuss your concerns... 

I think that you have got little feedback since few people have read
the code. Hopefully, this discussion gives some information.

My main concern is transport class integration. But they are just
mine. The SCSI maintainer and FC people might have different opinions.


  2) Abstractions- We consider libsa a big bug, which we're trying to
  strip down piece by piece. Vasu took out the LOG_SA code and I'm looking
  into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it, but
  that's how we're breaking it down.
 
 Agreed, libsa (and libcrc) should be removed.
 
 
  3) Target- The duplicate code of the target is too much. I want to
  integrate the target into our -upstream tree. Without doing that, fixes
  to the -upstream tree won't benefit the target and it will get into
  worse shape than it already is, unless someone is porting those patches
  to the target too. I think that ideally we'd want to reduce the target's
  profile and move it to userspace under tgt.
 
  4) Userspace/Kernel interaction- It's our belief that netlink is the
  preferred mechanism for kernel/userspace interaction. Yi has converted
  the FCoE ioctl code to netlink and is looking into openfc next.
 
 There are other options and I'm not sure that netlink is the best. I
 think that there is no general consensus about the best mechanism for
 kernel/userspace interaction. Even ioctl is still accepted into
 mainline (e.g. kvm).
 
 I expect you get an idea to use netlink from open-iscsi, but unlike
 open-iscsi, for now the FCoE code does just configuration with
 kernel/userspace interaction. open-iscsi has non-data path in user
 space so the kernel need to send variable-length data (PDUs, event,
 etc) to user space via netlink. So open-iscsi really needs netlink.
 If you have the FCoE non-data path in user space, netlink would work
 well for you.
 
  We definitely got the netlink direction from open-iscsi. Combining your
  comment that "It's hard to convince the kernel maintainers to merge
  something into mainline that can be implemented in user space" with
  "If you have the FCoE non-data path in user space, netlink would work
  well for you" makes it sound like this is an architectural change we
  should consider.

I think they are different topics (though they are related).

"It's hard to convince the kernel maintainers to merge something into
mainline that can be implemented in user space" applies to the
target driver.

You can fully implement FCoE target software in user space, right? So
if so, it's hard to push it into kernel.

The trend to push the non-data path to user space applies to the
initiator driver. Initiator drivers are expected to run in kernel
space, but the open-iscsi driver was split and the non-data part was moved
to user space. The kernel space and user space parts work
together. It's completely different from iSCSI target drivers, which can
be implemented fully in user space.


 I'm not sure how strong the trend is though. Is moving
 non data-path code to userspace a requirement? (you might have answered
 me already by saying you had 2x failed upstream attempts)

I don't know. You need to ask James.


 I would add one TODO item, better integration with scsi_transport_fc.
 If we have HW FCoE HBAs in the future, we need FCoE support in the fc
 transport class (you could use its netlink mechanism for event
 notification).
 
 What do you have in mind in particular? Our layers are, 
 
 SCSI
 Openfc
 FCoE
  net_device
 NIC driver
 
 So, it makes sense to me that we fit under scsi_transport_fc. I like our
 layering- we clearly have SCSI on our top edge and net_dev at our bottom
 edge. My initial reaction would be to resist merging openfc and fcoe and
 creating a scsi_transport_fcoe.h interface.

As I wrote in another mail, this part is the major issue for me.


 BTW, I think that the name 'openfc' is a bit strange. Surely, the
 mainline iscsi initiator driver is called 'open-iscsi' but it doesn't
 have any functions or files called 'open*'. It's just the project
 name.
 
 Understood, but open-iscsi doesn't have the layering scheme that we do.
 Since we're providing a Fibre Channel protocol processing layer that
 different transport types can register with I think the generic name is
 appropriate. Anyway, I don't think anyone here is terribly stuck on the
 name; it's not a high priority at this time.


Re: Open-FCoE on linux-scsi

2008-01-04 Thread FUJITA Tomonori
On Fri, 04 Jan 2008 12:45:45 +0100
Stefan Richter [EMAIL PROTECTED] wrote:

 On 1/3/2008 10:58 PM, Love, Robert W wrote:
 [FUJITA Tomonori wrote]
 I would add one TODO item, better integration with scsi_transport_fc.
 If we have HW FCoE HBAs in the future, we need FCoE support in the fc
 transport class (you could use its netlink mechanism for event
 notification).
  
  What do you have in mind in particular? Our layers are, 
  
  SCSI
  Openfc
  FCoE
  net_devive
  NIC driver
  
  So, it makes sense to me that we fit under scsi_transport_fc. I like our
  layering- we clearly have SCSI on our top edge and net_dev at our bottom
  edge. My initial reaction would be to resist merging openfc and fcoe and
  creating a scsi_transport_fcoe.h interface.
 
 AFAIU the stack should be:
 
   - SCSI core,
 scsi_transport_fc
   - Openfc (an FCoE implementation)
   - net_device
   - NIC driver
 
 _If_ there will indeed be dedicated FCoE HBAs in the future, the
 following stack could exist in addition to the one above:
 
   - SCSI core,
 scsi_transport_fc
   - FCoE HBA driver(s)

Agreed. My FCoE initiator design would be something like:

scsi-ml
fcoe initiator driver
libfcoe
fc_transport_class (including fcoe support)

And FCoE HBA LLDs work like:

scsi-ml
FCoE HBA LLDs (some of them might use libfcoe)
fc_transport_class (including fcoe support)


That's the way the other transport classes do it, I think. For me, the
current code tries to invent another fc class. For example, the code
newly defines:

struct fc_remote_port {
struct list_head rp_list;   /* list under fc_virt_fab */
struct fc_virt_fab *rp_vf;  /* virtual fabric */
fc_wwn_t        rp_port_wwn;    /* remote port world wide name */
fc_wwn_t        rp_node_wwn;    /* remote node world wide name */
fc_fid_t        rp_fid;         /* F_ID for remote_port if known */
atomic_t        rp_refcnt;      /* reference count */
u_int   rp_disc_ver;/* discovery instance */
u_int   rp_io_limit;/* limit on outstanding I/Os */
u_int   rp_io_count;/* count of outstanding I/Os */
u_int   rp_fcp_parm;/* remote FCP service parameters */
u_int   rp_local_fcp_parm; /* local FCP service parameters */
void*rp_client_priv; /* HBA driver private data */
void*rp_fcs_priv;   /* FCS driver private data */
struct sa_event_list *rp_events; /* event list */
struct sa_hash_link rp_fid_hash_link;
struct sa_hash_link rp_wwpn_hash_link;

/*
 * For now, there's just one session per remote port.
 * Eventually, for multipathing, there will be more.
 */
u_char  rp_sess_ready;  /* session ready to be used */
struct fc_sess  *rp_sess;   /* session */
void*dns_lookup;/* private dns lookup */
int dns_lookup_count; /* number of attempted lookups */
};

/*
 * remote ports are created and looked up by WWPN.
 */
struct fc_remote_port *fc_remote_port_create(struct fc_virt_fab *, fc_wwn_t);
struct fc_remote_port *fc_remote_port_lookup(struct fc_virt_fab *,
 fc_fid_t, fc_wwn_t wwpn);
struct fc_remote_port *fc_remote_port_lookup_create(struct fc_virt_fab *,
fc_fid_t,
fc_wwn_t wwpn,
fc_wwn_t wwnn);


The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
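
As a concrete illustration of that point, here is a hedged sketch of the
queuecommand-side pattern the existing fc_rport already supports.
starget_to_rport() and fc_remote_port_chkready() are existing
scsi_transport_fc helpers; the surrounding function is hypothetical:

/*
 * Hypothetical LLDD queuecommand fragment: using the existing fc_rport
 * instead of a private remote-port structure.  starget_to_rport() and
 * fc_remote_port_chkready() come from scsi_transport_fc; everything
 * else here is illustrative.
 */
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_transport_fc.h>

static int example_queuecommand(struct scsi_cmnd *cmd,
                                void (*done)(struct scsi_cmnd *))
{
        struct fc_rport *rport = starget_to_rport(scsi_target(cmd->device));
        int rval;

        /* Consistent transport-provided error codes for the i/o path. */
        rval = fc_remote_port_chkready(rport);
        if (rval) {
                cmd->result = rval;
                done(cmd);
                return 0;
        }

        /* ... build and send the FCP command to rport->port_id ... */
        return 0;
}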


Re: Open-FCoE on linux-scsi

2008-01-04 Thread Mike Christie

FUJITA Tomonori wrote:

Understood, but open-iscsi doesn't have the layering scheme that we do.
Since we're providing a Fibre Channel protocol processing layer that
different transport types can register with I think the generic name is
appropriate. Anyway, I don't think anyone here is terribly stuck on the
name; it's not a high priority at this time.


open-iscsi provides the proper abstraction. It can handle different
transport types, tcp and RDMA (iSER). It supports software iSCSI
drivers and HW iSCSI HBA drivers. They are done via the iscsi transport
class (and libiscsi).


I think I hinted at this offlist, but the bnx2i branch in my iscsi git
tree is the best to look at for this. The upstream stuff best fits the model
where we support only HW iscsi hba drivers (all offload) and SW iscsi
drivers (all software). The bnx2i branch modifies the class and lib so
it also supports a model in between the two, so pretty much everything
is covered.



RE: Open-FCoE on linux-scsi

2008-01-04 Thread Dev, Vasu


 _If_ there will indeed be dedicated FCoE HBAs in the future, the
 following stack could exist in addition to the one above:

   - SCSI core,
 scsi_transport_fc
   - FCoE HBA driver(s)

Agreed. My FCoE initiator design would be something like:

scsi-ml
fcoe initiator driver
libfcoe
fc_transport_class (including fcoe support)

And FCoE HBA LLDs work like:

scsi-ml
FCoE HBA LLDs (some of them might use libfcoe)
fc_transport_class (including fcoe support)


That's the way that other transport classes do, I think. For me, the
current code tries to invent another fc class. For example, the code
newly defines:

struct fc_remote_port {
        struct list_head rp_list;       /* list under fc_virt_fab */
        struct fc_virt_fab *rp_vf;      /* virtual fabric */
        fc_wwn_t        rp_port_wwn;    /* remote port world wide name */
        fc_wwn_t        rp_node_wwn;    /* remote node world wide name */
        fc_fid_t        rp_fid;         /* F_ID for remote_port if known */
        atomic_t        rp_refcnt;      /* reference count */
        u_int           rp_disc_ver;    /* discovery instance */
        u_int           rp_io_limit;    /* limit on outstanding I/Os */
        u_int           rp_io_count;    /* count of outstanding I/Os */
        u_int           rp_fcp_parm;    /* remote FCP service parameters */
        u_int           rp_local_fcp_parm; /* local FCP service parameters */
        void            *rp_client_priv; /* HBA driver private data */
        void            *rp_fcs_priv;   /* FCS driver private data */
        struct sa_event_list *rp_events; /* event list */
        struct sa_hash_link rp_fid_hash_link;
        struct sa_hash_link rp_wwpn_hash_link;

        /*
         * For now, there's just one session per remote port.
         * Eventually, for multipathing, there will be more.
         */
        u_char          rp_sess_ready;  /* session ready to be used */
        struct fc_sess  *rp_sess;       /* session */
        void            *dns_lookup;    /* private dns lookup */
        int             dns_lookup_count; /* number of attempted lookups */
};

/*
 * remote ports are created and looked up by WWPN.
 */
struct fc_remote_port *fc_remote_port_create(struct fc_virt_fab *, fc_wwn_t);
struct fc_remote_port *fc_remote_port_lookup(struct fc_virt_fab *,
                                             fc_fid_t, fc_wwn_t wwpn);
struct fc_remote_port *fc_remote_port_lookup_create(struct fc_virt_fab *,
                                                    fc_fid_t,
                                                    fc_wwn_t wwpn,
                                                    fc_wwn_t wwnn);


The FCoE LLD needs to exploit the existing struct fc_rport and APIs.

Openfc is a software implementation of FC services such as FC login
and target discovery, and it is already using/exploiting the existing fc
transport class, including the fc_rport struct. You can see openfc using
fc_rport in openfc_queuecommand() and using the fc transport API
fc_port_remote_add() for fc_rport.

The fcoe module is just a first example of a possible openfc transport, but
openfc can be used with other transports or HW HBAs also.

Openfc does provide a generic transport interface using fcdev, which is
currently used by the FCoE module.

One can certainly implement the openfc and fcoe modules partly or fully in an
FCoE HBA.


Re: Open-FCoE on linux-scsi

2008-01-04 Thread Stefan Richter
Dev, Vasu wrote:
[FUJITA Tomonori wrote:]
 Agreed. My FCoE initiator design would be something like:

 scsi-ml
 fcoe initiator driver
 libfcoe
 fc_transport_class (including fcoe support)

 And FCoE HBA LLDs work like:

 scsi-ml
 FCoE HBA LLDs (some of them might use libfcoe)
 fc_transport_class (including fcoe support)

Wouldn't it make more sense to think of fc_transport_class as a FCP
layer, sitting between scsi-ml and the various FC interconnect drivers
(among them Openfc and maybe more FCoE drivers)?  I.e. you have SCSI
command set layer -- SCSI core -- SCSI transport layer -- interconnect
layer.¹

I am not familiar with FCP/ FCoE/ FC-DA et al, but I guess the FCoE
support in the FCP transport layer should then go to the extent of
target discovery, login, lifetime management and representation of
remote ports and so on as far as it pertains to FCP (the SCSI transport
protocol, FC-4 layer) independently of the interconnect (FC-3...FC-0
layers).²

[...]
 The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
 
 The openfc is software implementation of FC services such as FC login
 and target discovery and it is already using/exploiting  existing fc
 transport class including fc_rport struct. You can see openfc using
 fc_rport in openfc_queuecommand() and using fc transport API
 fc_port_remote_add() for fc_rport.

Hence, aren't there interconnect independent parts of target discovery
and login which should be implemented in fc_transport_class?  The
interconnect dependent parts would then live in LLD methods to be
provided in struct fc_function_template.
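
For illustration, a hedged sketch of such LLD methods plugged into struct
fc_function_template. The openfcoe_* hooks are hypothetical, and the template
field names are given from scsi_transport_fc.h as best recalled; they may
differ slightly between kernel versions:

/*
 * Sketch: interconnect-dependent behaviour supplied to the fc transport
 * through struct fc_function_template.  The openfcoe_* functions are
 * hypothetical; the template field names may vary by kernel version.
 */
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

static void openfcoe_get_starget_port_id(struct scsi_target *starget)
{
	/* note the struct scsi_target * argument mentioned above */
	fc_starget_port_id(starget) = -1;	/* filled from LLD state */
}

static void openfcoe_dev_loss_tmo_callbk(struct fc_rport *rport)
{
	/* interconnect-specific cleanup when dev_loss_tmo fires */
}

static void openfcoe_terminate_rport_io(struct fc_rport *rport)
{
	/* abort outstanding i/o to this rport in the LLD */
}

static struct fc_function_template openfcoe_fc_functions = {
	.get_starget_port_id	 = openfcoe_get_starget_port_id,
	.show_starget_port_id	 = 1,
	.dev_loss_tmo_callbk	 = openfcoe_dev_loss_tmo_callbk,
	.terminate_rport_io	 = openfcoe_terminate_rport_io,
	.show_rport_dev_loss_tmo = 1,
	.dd_fcrport_size	 = sizeof(void *),	/* per-rport LLD data */
};

The filled-in template would then be registered with fc_attach_transport()
and pointed to by the Scsi_Host's transportt, as the existing FC LLDDs do.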

I.e. not only make full use of the API of fc_transport_class, also think
about changing the API _if_ necessary to become a more useful
implementation of the interface below FC-4.

---
¹) The transport classes are of course not layers in such a sense that
they would completely hide SCSI core from interconnect drivers.  They
don't really have to; they nevertheless live at a higher level of
abstraction than LLDs and a lower level of abstraction than SCSI core.

(One obvious example that SCSI core is less hidden than it possibly
could be can be seen by the struct fc_function_template methods having
struct scsi_target * and struct Scsi_Host * arguments, instead of struct
fc_xyz * arguments.)

²) I'm using the term interconnect from the SCSI perspective, not from
the FC perspective.
-- 
Stefan Richter
-=-==--- ---= --=-=
http://arcgraph.de/sr/


Re: Open-FCoE on linux-scsi

2008-01-04 Thread Stefan Richter
Stefan Richter wrote:
 The interconnect layer could be split further:
 SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
 Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.
 
 But this would only really make sense if anybody would implement
 additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also sit
 on top of Fibre Channel core.

PS: There is already an RFC 2625 implementation in Linux, but only for
LSIFC9xx.

PPS: RFC 2625 is superseded by RFC 4338.
-- 
Stefan Richter
-=-==--- ---= --=-=
http://arcgraph.de/sr/


Re: Open-FCoE on linux-scsi

2008-01-04 Thread Stefan Richter
Stefan Richter wrote:
 I.e. you have SCSI command set layer -- SCSI core -- SCSI transport
 layer -- interconnect layer.

The interconnect layer could be split further:
SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.

But this would only really make sense if anybody would implement
additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also sit
on top of Fibre Channel core.
-- 
Stefan Richter
-=-==--- ---= --=-=
http://arcgraph.de/sr/


RE: Open-FCoE on linux-scsi

2008-01-03 Thread FUJITA Tomonori
From: Love, Robert W [EMAIL PROTECTED]
Subject: RE: Open-FCoE on linux-scsi
Date: Mon, 31 Dec 2007 08:34:38 -0800

  Hello SCSI mailing list,
 
   I'd just like to introduce ourselves a bit before we get
   started. My name is Robert Love and I'm joined by a team of engineers
   including Vasu Dev, Chris Leech and Yi Zou. We are committed to
   maintaining the Open-FCoE project. Aside from Intel engineers we expect
   engineers from other companies to contribute to Open-FCoE.

   Our goal is to get the initiator code upstream. We have a lot of
   working code but recognize that we're early in this project's
   development. We're looking for direction from you, the experts, on what
   this project should grow into.
 
 I've just added a new fcoe target driver to tgt:
 
 http://stgt.berlios.de/
 
 That's great; we'll check it out as soon as everyone is back from the
 holidays.

It's still an experiment. Patches are welcome.


 The driver runs in user space unlike your target mode driver (I just
 modified your FCoE code to run it in user space).
 
 There seems to be a trend to move non-data-path code userspace, however,

Implementing an FCoE target driver in user space has no connection with a
trend to move non-data-path code to user space. It does all of the data path
in user space.

The examples of the trend to move non-data-path code userspace are
open-iscsi, multi-path, etc, I think.


 I don't like having so much duplicate code. We were going to investigate
 if we could redesign the target code to have less of a profile and just
 depend on the initiator modules instead of recompiling openfc as
 openfc_tgt.
 
 What's the general opinion on this? Duplicate code vs. more kernel code?
 I can see that you're already starting to clean up the code that you
 ported. Does that mean the duplicate code isn't an issue to you? When we
 fix bugs in the initiator they're not going to make it into your tree
 unless you're diligent about watching the list.

It's hard to convince the kernel maintainers to merge something into
mainline that can be implemented in user space. I failed twice
(with two iSCSI target implementations).

Yeah, duplication is not good, but the user space code has some
great advantages. Both approaches have their pros and cons.


 The initiator driver succeeded to log in a target, see logical units,
 and perform some I/Os. It's still very unstable but it would be
 useful for FCoE developers.
 
 
 I would like to help you push the Open-FCoE initiator to mainline
 too. What are on your todo list and what you guys working on now?
 
 We would really appreciate the help! The best way I could come up with
 to coordinate this effort was through the BZ-
 http://open-fcoe.org/bugzilla. I was going to write a BZ wiki entry to
 help new contributors, but since I haven't yet, here's the bottom line.
 Sign-up to the BZ, assign bugs to yourself from my name (I'm the default
 assignee now) and also file bugs as you find them. I don't want to
 impose much process, but this will allow all of us to know what everyone
 else is working on.
 
 The main things that I think need to be fixed are (in no particular
 order)-
 
 1) Stability- Just straight up bug fixing. This is ongoing and everyone
 is looking at bugs.

Talking about stability is a bit premature, I think. The first thing
to do is finding a design that can be accepted into mainline.


 2) Abstractions- We consider libsa a big bug, which we're trying to
 strip down piece by piece. Vasu took out the LOG_SA code and I'm looking
 into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it, but
 that's how we're breaking it down.

Agreed, libsa (and libcrc) should be removed.


 3) Target- The duplicate code of the target is too much. I want to
 integrate the target into our -upstream tree. Without doing that, fixes
 to the -upstream tree won't benefit the target and it will get into
 worse shape than it already is, unless someone is porting those patches
 to the target too. I think that ideally we'd want to reduce the target's
 profile and move it to userspace under tgt.
 
 4) Userspace/Kernel interaction- It's our belief that netlink is the
 preferred mechanism for kernel/userspace interaction. Yi has converted
 the FCoE ioctl code to netlink and is looking into openfc next.

There are other options and I'm not sure that netlink is the best. I
think that there is no general consensus about the best mechanism for
kernel/userspace interaction. Even ioctl is still accepted into
mainline (e.g. kvm).

I expect you got the idea to use netlink from open-iscsi, but unlike
open-iscsi, for now the FCoE code does just configuration over the
kernel/userspace interaction. open-iscsi has the non-data path in user
space, so the kernel needs to send variable-length data (PDUs, events,
etc.) to user space via netlink. So open-iscsi really needs netlink.
If you have the FCoE non-data path in user space, netlink would work
well for you.
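
For comparison with an ioctl, here is a hedged user-space sketch of the kind
of netlink configuration request being discussed. The protocol number,
command value and payload layout are invented for illustration and would
have to match whatever the kernel module actually registers:

/*
 * Userspace sketch of a netlink-based configuration request.
 * NETLINK_FCOE_SKETCH and the message payload are invented here;
 * a real tool would use the family the kernel side registers.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define NETLINK_FCOE_SKETCH	31	/* placeholder protocol number */
#define FCOE_CMD_CREATE		1	/* "create instance on this netdev" */

struct fcoe_nl_req {			/* invented payload */
	int  cmd;
	char ifname[16];
};

int main(void)
{
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	struct {
		struct nlmsghdr nh;
		struct fcoe_nl_req req;
	} msg;
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_FCOE_SKETCH);
	if (fd < 0) {
		perror("socket");	/* fails unless the kernel side exists */
		return 1;
	}

	memset(&msg, 0, sizeof(msg));
	msg.nh.nlmsg_len = NLMSG_LENGTH(sizeof(msg.req));
	msg.nh.nlmsg_type = FCOE_CMD_CREATE;
	msg.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	msg.req.cmd = FCOE_CMD_CREATE;
	strncpy(msg.req.ifname, "eth2", sizeof(msg.req.ifname) - 1);

	if (sendto(fd, &msg, msg.nh.nlmsg_len, 0,
		   (struct sockaddr *)&sa, sizeof(sa)) < 0)
		perror("sendto");

	close(fd);
	return 0;
}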

I would add one TODO item, better integration

RE: Open-FCoE on linux-scsi

2008-01-03 Thread Love, Robert W
From: Love, Robert W [EMAIL PROTECTED]
Subject: RE: Open-FCoE on linux-scsi
Date: Mon, 31 Dec 2007 08:34:38 -0800

  Hello SCSI mailing list,
 
   I'd just like to introduce ourselves a bit before we get
   started. My name is Robert Love and I'm joined by a team of engineers
   including Vasu Dev, Chris Leech and Yi Zou. We are committed to
   maintaining the Open-FCoE project. Aside from Intel engineers we expect
   engineers from other companies to contribute to Open-FCoE.

   Our goal is to get the initiator code upstream. We have a lot of
   working code but recognize that we're early in this project's
   development. We're looking for direction from you, the experts, on what
   this project should grow into.
 
 I've just added a new fcoe target driver to tgt:
 
 http://stgt.berlios.de/
 
 That's great; we'll check it out as soon as everyone is back from the
 holidays.

It's still an experiment. Patches are welcome.


 The driver runs in user space unlike your target mode driver (I just
 modified your FCoE code to run it in user space).
 
  There seems to be a trend to move non-data-path code userspace, however,

Implementing an FCoE target driver in user space has no connection with a
trend to move non-data-path code to user space. It does all of the data path
in user space.

The examples of the trend to move non-data-path code userspace are
open-iscsi, multi-path, etc, I think.


  I don't like having so much duplicate code. We were going to investigate
  if we could redesign the target code to have less of a profile and just
  depend on the initiator modules instead of recompiling openfc as
  openfc_tgt.

  What's the general opinion on this? Duplicate code vs. more kernel code?
  I can see that you're already starting to clean up the code that you
  ported. Does that mean the duplicate code isn't an issue to you? When we
  fix bugs in the initiator they're not going to make it into your tree
  unless you're diligent about watching the list.

It's hard to convince the kernel maintainers to merge something into
mainline that can be implemented in user space. I failed twice
(with two iSCSI target implementations).

Yeah, duplication is not good, but the user space code has some
great advantages. Both approaches have their pros and cons.


  The initiator driver succeeded to log in a target, see logical units,
  and perform some I/Os. It's still very unstable but it would be
  useful for FCoE developers.


  I would like to help you push the Open-FCoE initiator to mainline
  too. What are on your todo list and what you guys working on now?

  We would really appreciate the help! The best way I could come up with
  to coordinate this effort was through the BZ-
  http://open-fcoe.org/bugzilla. I was going to write a BZ wiki entry to
  help new contributors, but since I haven't yet, here's the bottom line.
  Sign-up to the BZ, assign bugs to yourself from my name (I'm the default
  assignee now) and also file bugs as you find them. I don't want to
  impose much process, but this will allow all of us to know what everyone
  else is working on.

 The main things that I think need to be fixed are (in no particular
 order)-

  1) Stability- Just straight up bug fixing. This is ongoing and everyone
  is looking at bugs.

Talking about stability is a bit premature, I think. The first thing
to do is finding a design that can be accepted into mainline.

How can we get this started? We've provided our current solution, but
need feedback to guide us in the right direction. We've received little
quips about libsa and libcrc and now it looks like we should look at
what we can move to userspace (see below), but that's all the feedback
we've got so far. Can you tell us what you think about our current
architecture? Then we could discuss your concerns... 



  2) Abstractions- We consider libsa a big bug, which we're trying to
  strip down piece by piece. Vasu took out the LOG_SA code and I'm looking
  into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it, but
  that's how we're breaking it down.

Agreed, libsa (and libcrc) should be removed.


  3) Target- The duplicate code of the target is too much. I want to
  integrate the target into our -upstream tree. Without doing that, fixes
  to the -upstream tree won't benefit the target and it will get into
  worse shape than it already is, unless someone is porting those patches
  to the target too. I think that ideally we'd want to reduce the target's
  profile and move it to userspace under tgt.

  4) Userspace/Kernel interaction- It's our belief that netlink is the
  preferred mechanism for kernel/userspace interaction. Yi has converted
  the FCoE ioctl code to netlink and is looking into openfc next.

There are other options and I'm not sure that netlink is the best. I
think that there is no general consensus about the best mechanism for
kernel/userspace interaction. Even ioctl is still accepted into
mainline (e.g. kvm).

I expect you got the idea to use netlink from open-iscsi, but unlike
open

Re: Open-FCoE on linux-scsi

2007-11-27 Thread FUJITA Tomonori
On Tue, 27 Nov 2007 15:40:05 -0800
Love, Robert W [EMAIL PROTECTED] wrote:

 Hello SCSI mailing list,
 
   I'd just like to introduce ourselves a bit before we get
 started. My name is Robert Love and I'm joined by a team of engineers
 including Vasu Dev, Chris Leech and Yi Zou. We are committed to
 maintaining the Open-FCoE project. Aside from Intel engineers we expect
 engineers from other companies to contribute to Open-FCoE. 
 
   Our goal is to get the initiator code upstream. We have a lot of
 working code but recognize that we're early in this project's
 development. We're looking for direction from you, the experts, on what
 this project should grow into.

Is a quick start guide to set up the initiator and target and connect them
available?


RE: Open-FCoE on linux-scsi

2007-11-27 Thread Love, Robert W
On Tue, 27 Nov 2007 15:40:05 -0800
Love, Robert W [EMAIL PROTECTED] wrote:

 Hello SCSI mailing list,

  I'd just like to introduce ourselves a bit before we get
 started. My name is Robert Love and I'm joined by a team of engineers
 including Vasu Dev, Chris Leech and Yi Zou. We are committed to
 maintaining the Open-FCoE project. Aside from Intel engineers we expect
 engineers from other companies to contribute to Open-FCoE.

  Our goal is to get the initiator code upstream. We have a lot of
 working code but recognize that we're early in this project's
 development. We're looking for direction from you, the experts, on what
 this project should grow into.

Is a quick start guide to set up the initiator and target and connect them
available?

Yeah, there's a page in our wiki. It's mentioned in the first post on
www.Open-FCoE.org. Here's a direct link-
http://www.open-fcoe.org/openfc/wiki/index.php/Quickstart. Unfortunately
it's a bit rocky to get everything working even with the quickstart; I'm
working on improving that right now.
