Re: [PATCH 2/2] RFC: The be2iscsi driver support for bsg

2010-03-23 Thread FUJITA Tomonori
On Mon, 22 Mar 2010 11:16:31 -0400
James Smart james.sm...@emulex.com wrote:

  About the implementation, I think that it's better to have the common
  library code rather than just copying the fc bsg code into iscsi.
 
 Note: I tried to library-ize the transport implementation on the first pass of
 the RFC. But it was making things more complex.  I tried to explain this,
 perhaps not very well (http://marc.info/?l=linux-scsi&m=125725904205688&w=2).

Ah, I overlooked it. I'll think about it later. Maybe Mike already has
some ideas about it. Anyway, library-izing is not urgent at all. We
can work on it later.
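For illustration, here is a minimal sketch of what such a shared bsg library might look like if the request/reply plumbing were factored out of scsi_transport_fc.c. All of the names below are hypothetical placeholders; no such library exists in the kernel at this point.

/*
 * Hypothetical sketch: the bsg request/reply plumbing that
 * scsi_transport_fc.c carries today, factored into a helper that the
 * iSCSI (and SAS) transport classes could share.  None of these names
 * exist in the kernel; they only illustrate the idea.
 */
#include <linux/blkdev.h>
#include <scsi/scsi_host.h>

struct xport_bsg_job {
	struct request	*req;		/* bsg request being serviced          */
	void		*request;	/* transport-specific request header   */
	unsigned int	request_len;
	void		*reply;		/* filled in by the LLD on completion  */
	unsigned int	reply_payload_rcv_len;
};

/* Each transport class supplies one dispatch hook for its msgcodes. */
typedef int (xport_bsg_dispatch_fn)(struct Scsi_Host *shost,
				    struct xport_bsg_job *job);

/*
 * The shared code would own queue setup, buffer mapping and job
 * completion; a transport class only registers its dispatcher.
 */
int xport_bsg_setup_queue(struct Scsi_Host *shost, const char *name,
			  xport_bsg_dispatch_fn *dispatch);
void xport_bsg_job_done(struct xport_bsg_job *job, int result);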




Re: [PATCH 2/2] RFC: The be2iscsi driver support for bsg

2010-03-22 Thread FUJITA Tomonori
On Fri, 19 Mar 2010 08:56:30 -0400
James Smart james.sm...@emulex.com wrote:

  I still want to know why vendors can't do this via the existing
  netlink interface. open-iscsi uses the netlink interface for some PDUs,
  so I guess that having a different channel for management might be a
  good idea.
 
 Separate this request into two needs:
 
 The first need is for the iscsi driver to have some kind of entry point to 
 kick off a vendor specific thing - primarily diagnostics and internal f/w and 
 flash mgmt items. Here, using the same mechanism that we had on the FC side, 
 which also supports dma payloads, made a lot of sense. I like and prefer the 
 symmetry.
 
 The second need is for common iscsi link/stack mgmt. All vendors would be
 expected to implement the same items the same way - thus the formalization of
 the api in the transport.  It also makes sense that all use of these common
 interfaces comes via the open-iscsi mgmt utilities.  Given the data set, it
 could be done by netlink or bsg.  I gave some pros/cons on the interfaces in
 (http://marc.info/?l=linux-scsi&m=124811693510903&w=2). In my mind, the main
 reason these settings ended up in bsg vs netlink is that the functionality is
 typically migrating from a vendor-specific ioctl set, which maps rather easily
 to the bsg model. Not that netlink is that much more difficult (although to
 NLA_ or not may confuse some of the contributors). And, if you already had the
 bsg infrastructure for the first need, you had to add very little to support it.
 
 Thus, the main reason they are together is one of expediency. The first had to
 be done, so it was very easy to use the same methodology for the second.
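For reference, the vendor pass-through entry point described above would look roughly like the FC side's FC_BSG_HST_VENDOR message. The sketch below is modeled on include/scsi/scsi_bsg_fc.h; the names and layout are illustrative assumptions, not what this RFC patch or scsi_bsg_iscsi.h actually defines.

/*
 * Illustrative only: a vendor pass-through header for an iSCSI bsg
 * channel, modeled on FC_BSG_HST_VENDOR in include/scsi/scsi_bsg_fc.h.
 * This is not the layout used by the RFC patch.
 */
#include <linux/types.h>

struct iscsi_bsg_host_vendor_sketch {
	__u64 vendor_id;	/* identifies which LLD may consume the command */
	__u32 vendor_cmd[0];	/* opaque payload interpreted only by that LLD  */
};

struct iscsi_bsg_request_sketch {
	__u32 msgcode;		/* an ISCSI_BSG_HST_VENDOR-style message code */
	union {
		struct iscsi_bsg_host_vendor_sketch h_vendor;
	} rqst_data;
} __attribute__((packed));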

If vendors use the common data structures via bsg, it's totally fine
by me. I see why bsg is preferable. The only thing that I care about
is managing any iSCSI HBA with iscsiadm instead of various vendor
specific utilities.

About the implementation, I think that it's better to have the common
library code rather than just copying the fc bsg code into iscsi.




Re: [PATCH 2/2] RFC: The be2iscsi driver support for bsg

2010-03-19 Thread FUJITA Tomonori
On Thu, 18 Mar 2010 16:02:52 -0500
Mike Christie micha...@cs.wisc.edu wrote:

 On 03/18/2010 08:58 AM, FUJITA Tomonori wrote:
 
  - You invent your hardware specific data structure for the simplest
 operation such as setting IP address.
 
 I think this is what Jay is not trying to do. I think the patch has some
 extra code, like the ISCSI_BSG_HST_VENDOR parts, that makes it confusing -
 it got me too. The ISCSI_BSG_HST_VENDOR code in be2iscsi looks like it
 is basically disabled (it should be removed from the formal patch when he
 sends it for merging).
 
 It looks like there is a common struct iscsi_bsg_common_format that is
 getting passed around, and then in be2iscsi the driver is using that
 info to make a be2iscsi-specific command. So scsi_transport_iscsi /
 ISCSI_SET_IP_ADDR / iscsi_bsg_common_format gets translated by be2iscsi
 to be2iscsi / OPCODE_COMMON_ISCSI_NTWK_MODIFY_IP_ADDR / be_modify_ip_addr.

Yeah, it seems you are right. But it looks like this patchset also adds
vendor-specific message support (ISCSI_BSG_HST_VENDOR)?

I still want to know why vendors can't do this via the existing
netlink interface. open-iscsi uses the netlink interface for some PDUs,
so I guess that having a different channel for management might be a
good idea.
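To make the translation described above concrete, the driver side would look something like the sketch below. The structure fields and helper names are assumptions for illustration; only the opcode constant and the general flow come from the patch and the discussion.

/*
 * Sketch of the flow: the transport class hands the LLD a generic
 * request and the LLD rewrites it as its own firmware command.  Field
 * and helper names here are illustrative assumptions, not the actual
 * be2iscsi code.
 */
static int beiscsi_handle_generic_mgmt(struct beiscsi_hba *phba,
				       struct iscsi_bsg_common_format *req)
{
	switch (req->msgcode) {
	case ISCSI_SET_IP_ADDR: {
		/* Translate the generic request into the vendor command. */
		struct be_modify_ip_addr cmd = { };

		cmd.hdr.opcode = OPCODE_COMMON_ISCSI_NTWK_MODIFY_IP_ADDR;
		memcpy(cmd.ip_addr, req->ip_addr, sizeof(cmd.ip_addr));
		return beiscsi_issue_mgmt_cmd(phba, &cmd, sizeof(cmd));
	}
	default:
		return -ENOSYS;	/* unhandled generic msgcode */
	}
}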




Re: [PATCH 2/2] RFC: The be2iscsi driver support for bsg

2010-03-18 Thread FUJITA Tomonori
On Wed, 17 Mar 2010 23:37:07 +0530
Jayamohan Kallickal jayamoh...@serverengines.com wrote:

   This patch contains the necessary changes to support
 the bsg interface
 
 Signed-off-by: Jayamohan Kallickal jayamoh...@serverengines.com
 ---
  drivers/scsi/be2iscsi/be_cmds.h  |  137 ---
  drivers/scsi/be2iscsi/be_iscsi.c |    3 +-
  drivers/scsi/be2iscsi/be_main.c  |   99 ++--
  drivers/scsi/be2iscsi/be_main.h  |    6 +-
  drivers/scsi/be2iscsi/be_mgmt.c  |  230 +-
  drivers/scsi/be2iscsi/be_mgmt.h  |  107 ++
  include/scsi/scsi_bsg_iscsi.h    |    1 +
  7 files changed, 547 insertions(+), 36 deletions(-)
 
 diff --git a/drivers/scsi/be2iscsi/be_cmds.h b/drivers/scsi/be2iscsi/be_cmds.h
 index 49fcc78..d5c14cf 100644
 --- a/drivers/scsi/be2iscsi/be_cmds.h
 +++ b/drivers/scsi/be2iscsi/be_cmds.h
 @@ -18,6 +18,7 @@
  #ifndef BEISCSI_CMDS_H
  #define BEISCSI_CMDS_H
  
 +#include <scsi/scsi_bsg_iscsi.h>
  /**
   * The driver sends configuration and managements command requests to the
   * firmware in the BE. These requests are communicated to the processor
 @@ -162,6 +163,13 @@ struct be_mcc_mailbox {
  #define OPCODE_COMMON_ISCSI_CFG_POST_SGL_PAGES   2
  #define OPCODE_COMMON_ISCSI_CFG_REMOVE_SGL_PAGES3
  #define OPCODE_COMMON_ISCSI_NTWK_GET_NIC_CONFIG  7
 +#define OPCODE_COMMON_ISCSI_NTWK_SET_VLAN14
 +#define OPCODE_COMMON_ISCSI_NTWK_CONFIGURE_STATELESS_IP_ADDR 17
 +#define OPCODE_COMMON_ISCSI_NTWK_MODIFY_IP_ADDR  21
 +#define OPCODE_COMMON_ISCSI_NTWK_GET_DEFAULT_GATEWAY 22
 +#define OPCODE_COMMON_ISCSI_NTWK_MODIFY_DEFAULT_GATEWAY 23
 +#define OPCODE_COMMON_ISCSI_NTWK_GET_ALL_IF_ID   24
 +#define OPCODE_COMMON_ISCSI_NTWK_GET_IF_INFO 25

So this patchset adds the user-kernel space interface for management
via bsg, right?

Then I have two questions.

- open-iscsi already has the user-kernel space interface for
  management via netlink. Why do you need another via bsg? IOW, why
  can't you do this via the existing netlink interface?

- You invent your own hardware-specific data structure for even the simplest
  operation, such as setting an IP address. That means every vendor
  invents its own hardware-specific data structure for management. Users
  then have to use vendor-specific management software, or the open-iscsi
  management tool (iscsiadm) needs all the vendor-specific code to
  build each vendor's data structure (and parse the response). I
  think that is exactly what open-iscsi tries to avoid: iscsiadm
  builds a generic management data structure and
  scsi_transport_iscsi.c passes it to the drivers, and the drivers
  then have to handle it. That's the way open-iscsi works now (see the
  sketch below).
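As an illustration of that split, a schematic sketch follows; the names are made up for clarity, and the real generic interface is the netlink one defined in include/scsi/iscsi_if.h.

/*
 * Schematic of the model described above: userspace (iscsiadm) fills a
 * generic, vendor-neutral request, scsi_transport_iscsi relays it, and
 * every LLD implements the same hook.  All names are illustrative; the
 * real interface is the netlink one in include/scsi/iscsi_if.h.
 */
#include <linux/types.h>

struct Scsi_Host;			/* provided by <scsi/scsi_host.h> */

enum iscsi_mgmt_op {
	ISCSI_MGMT_SET_IP_ADDR,		/* generic operations, no vendor data */
	ISCSI_MGMT_GET_IF_INFO,
};

struct iscsi_mgmt_req {
	__u32 op;			/* enum iscsi_mgmt_op                   */
	__u32 host_no;			/* which iSCSI host the request targets */
	__u8  ip_addr[16];		/* IPv4 or IPv6, vendor-neutral layout  */
};

/* Every driver implements the same hook; no vendor-specific tool needed. */
struct iscsi_mgmt_ops {
	int (*handle_mgmt)(struct Scsi_Host *shost,
			   const struct iscsi_mgmt_req *req);
};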




Re: iSCSI initiator implementation question

2009-06-18 Thread FUJITA Tomonori

On Thu, 18 Jun 2009 12:12:49 +0200
Hannes Reinecke h...@suse.de wrote:

 
 Hi all,
 
 Joachim Worringen wrote:
  On Jun 18, 11:19 am, Boaz Harrosh bharr...@panasas.com wrote:
  On 06/18/2009 10:56 AM, Joachim Worringen wrote:
 
  Greetings,
  I tried to use Open-iSCSI with a non-tcp socket type and failed
  (timeout after connection has been established).
  Looking at the source, the reason is obvious: for sending data
   (iscsi_send()), the function pointers from sock->sk are used via the
   kernel socket API. This works well with non-tcp sockets. However, for
   reading data (see callback handler iscsi_tcp_data_ready()),
  tcp_read_sock() is used instead of the related kernel socket API call.
  Sounds good. Could you test out your solution and send a patch?
  If it tests out, I don't see why not.
  
  We don't have a solution yet, just a problem...
  
  Is there a specific reason for this? iSCSI would surely benefit from
  using high-performance, non-tcp sockets if available (I'm talking
  about SuperSockets in this case, 
   see http://www.dolphinics.com/products/dolphin-supersockets.html).
  Sure sounds nice. If it is a simple and compatible change it sounds very 
  good.
  
  I haven't looked into it enough to claim it'll be a simple change. I
  figure the iscsi recv code will change completely, but would be
  simplified as we don't need to deal with the tcp buffer details etc.
  when using a simple socket recv call, but it remains to be seen how
  this affects compatibility, i.e. conc. partial or interrupted
  transfer.
  
  It is not decided whether we allocate resources for this.
  
 Well, the obvious solution here is to implement another transport module (eg
 iscsi_supersockets.c) much like it's done for iSER.
 The rest of the code should be sufficiently abstracted to handle it properly.
 
 There is a reason why it's called iscsi_tcp.c ...

Yeah, I guess that all the niche high-performance interconnect technologies
(such as Dolphin, Myrinet, etc.) for HPC support RDMA. They could do
something like iSER. They support a kind of socket interface, but the
socket interface is not optimal for them. So modifying iscsi_tcp for
them in a strange way doesn't make much sense.
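A skeleton of that direction, written from memory of the ~2.6.30-era API and deliberately incomplete (all of the session/connection callbacks that iscsi_tcp and ib_iser fill in are omitted), might look like:

/*
 * Skeleton of a separate transport module, analogous to iscsi_tcp.c or
 * ib_iser: it registers its own struct iscsi_transport with the iSCSI
 * transport class.  Only the registration boilerplate is shown; this
 * is a sketch, not a tested module.
 */
#include <linux/module.h>
#include <scsi/scsi_transport_iscsi.h>

static struct scsi_transport_template *ssocks_scsi_transport;

static struct iscsi_transport iscsi_ssocks_transport = {
	.owner	= THIS_MODULE,
	.name	= "ssocks",
	/* .create_session, .create_conn, .xmit_task, ... as in iscsi_tcp */
};

static int __init iscsi_ssocks_init(void)
{
	ssocks_scsi_transport =
		iscsi_register_transport(&iscsi_ssocks_transport);
	return ssocks_scsi_transport ? 0 : -ENOMEM;
}

static void __exit iscsi_ssocks_exit(void)
{
	iscsi_unregister_transport(&iscsi_ssocks_transport);
}

module_init(iscsi_ssocks_init);
module_exit(iscsi_ssocks_exit);
MODULE_LICENSE("GPL");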

Well, anyway, it seems that 10GbE is slowly killing all the niche
high-performance interconnect technologies.




Re: iSCSI initiator implementation question

2009-06-18 Thread FUJITA Tomonori

On Thu, 18 Jun 2009 03:53:47 -0700 (PDT)
Joachim Worringen worrin...@googlemail.com wrote:

 
 
 
 On Jun 18, 12:37 pm, FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp
 wrote:
  Yeah, I guess that all the niche high-performance interconnect technologies
  (such as Dolphin, Myrinet, etc.) for HPC support RDMA. They could do
  something like iSER. They support a kind of socket interface, but the
  socket interface is not optimal for them. So modifying iscsi_tcp for
  them in a strange way doesn't make much sense.
 
 As Hannes proposed, deriving an iscsi_ssocks.c from iscsi_tcp.c is the
 most obvious and painless solution.

 I just wonder why iscsi_tcp.c calls TCP functions directly for
 receiving data although there's a simpler way (which is used for
 sending data) for sockets.

Because it's much faster.
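Roughly, the difference is sketched below (simplified, with a placeholder for the PDU parser; the real code lives in iscsi_tcp.c/libiscsi_tcp.c): tcp_read_sock() lets the data_ready callback consume PDUs straight out of the skbs queued on the socket, in softirq context, while a generic recvmsg path would wake a thread and copy the data once more.

/*
 * Simplified sketch (not the actual iscsi_tcp code) of why the receive
 * path uses tcp_read_sock(): the ->sk_data_ready callback consumes PDU
 * bytes directly from the skbs sitting in the socket receive queue
 * instead of waking a thread that calls a generic recvmsg and copies
 * the data again.
 */
#include <linux/skbuff.h>
#include <net/sock.h>
#include <net/tcp.h>

/* Placeholder for the PDU parser (the real driver parses BHS/data here). */
static int consume_pdu_bytes(void *conn, struct sk_buff *skb,
			     unsigned int offset, size_t len)
{
	return len;
}

/* recv_actor passed to tcp_read_sock(); called for each queued skb. */
static int my_iscsi_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
			 unsigned int offset, size_t len)
{
	return consume_pdu_bytes(rd_desc->arg.data, skb, offset, len);
}

/* ->sk_data_ready replacement installed on the connection's socket. */
static void my_iscsi_data_ready(struct sock *sk, int bytes)
{
	read_descriptor_t rd_desc = {
		.arg.data = sk->sk_user_data,	/* the iSCSI connection */
		.count	  = 1,			/* let the actor decide how much */
	};

	read_lock(&sk->sk_callback_lock);
	tcp_read_sock(sk, &rd_desc, my_iscsi_recv);
	read_unlock(&sk->sk_callback_lock);
}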


  Well, anyway, it seems that 10GbE is slowly killing all the niche
  high-performance interconnect technologies.
 
 Of course Ethernet always wins conc. market share. But 10GigE
 doesn't help you if you need really low latency; it hardly gives
 you an advantage over 1GigE in this respect.

It seems that the latency difference with 10GbE is smaller than it was with
1GbE. There were lots of niche high-performance interconnect companies
in the 1GbE era, but now there are not many. Yes, the latency difference
still matters for some people, but the market is getting smaller.





Re: iSCSI initiator implementation question

2009-06-18 Thread FUJITA Tomonori

On Thu, 18 Jun 2009 13:33:42 +0200
Bart Van Assche bart.vanass...@gmail.com wrote:

 
 On Thu, Jun 18, 2009 at 1:14 PM, FUJITA
 Tomonori fujita.tomon...@lab.ntt.co.jp wrote:
  On Thu, 18 Jun 2009 03:53:47 -0700 (PDT)
  Joachim Worringen worrin...@googlemail.com wrote:
  On Jun 18, 12:37 pm, FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp
  wrote:
   Well, anyway seems that 10GbE is slowly killing all niche high
   performance interconnect technology.
 
  Of course Ethernet always wins conc. market share. But 10GigE
  doesn't help you if you need really low latency; it hardly gives
  you an advantage over 1GigE in this respect.
 
  It seems that the latency difference with 10GbE is smaller than it was with
  1GbE. There were lots of niche high-performance interconnect companies
  in the 1GbE era, but now there are not many. Yes, the latency difference
  still matters for some people, but the market is getting smaller.
 
 Numbers please. From the last numbers I have seen the latency
 difference between 10 GbE and fast non-Ethernet networks is still
 significant.

http://74.125.153.132/search?q=cache:lWqSaJ-iJ50J:www.chelsio.com/poster.html%3FsearchText%3Dcybermedia+latency+chelsio+osaka+10.49+microseconds&cd=2&hl=ja&ct=clnk&gl=jp

I think that 10GbE vendors are happy to give you lots of results.

Maybe it's still significant for you, but it's not for some of the people I
have worked with.




Re: Crash tgtd and tgtadm with more than 112 volumes.

2009-03-10 Thread FUJITA Tomonori

On Tue, 10 Mar 2009 23:23:30 +0100
Tomasz Chmielewski man...@wpkg.org wrote:

 Konrad Rzeszutek wrote:
  On Tue, Mar 10, 2009 at 02:52:41PM -0700, Ben Greear wrote:
  I wrote a script to create lots of iscsi volumes on loop devices.
 
  Seems to run fine up to 111, and then tgtd crashes and tgtadm
  gets a buffer overflow.
 
  The script I used to create the problem and a capture of the
  crash is attached.
 
  I'm using standard RPMs on Fedora 10, kernel
  Linux iSCSi-test 2.6.27.15-170.2.24.fc10.x86_64 #1 SMP Wed Feb 11 23:14:31 
  EST 2009 x86_64 x86_64 x86_64 GNU/Linux
 
  Let me know if I can provide any additional info.
  
  You are on the wrong mailing list. This is for the Open-iSCSI _initiator_. 
  The mailing
  list you want is for the Open-iSCSI _target_, which is: 
  iscsitarget-de...@lists.sourceforge.net.
 
 The address has been s...@vger.kernel.org for a year or so; the
 iscsitarget-de...@lists.sourceforge.net address is an old one which is not
 used any more.

This part is not correct.

iscsitarget-de...@lists.sourceforge.net is the mailing list for the iSCSI
Enterprise Target (IET) project, which is not the tgt project. I used to
maintain IET, but we have never used the IET mailing list for tgt.

FYI, our old mailing list is stgt-de...@lists.berlios.de.


 CC: s...@vger.kernel.org




Re: Limiting the number of ISCSI-Sessions per target

2008-10-17 Thread FUJITA Tomonori

On Fri, 17 Oct 2008 11:32:41 +0200
Dr. Volker Jaenisch [EMAIL PROTECTED] wrote:

 
 Hi Mike!
 
 Thank you for the fast reply.
 
 Mike Christie wrote:
  I am not sure if this is what you are looking for but you can control
  which sessions get made.
 If I get you right, your solution is not exactly what I want.
  Do you have multiple portals per target and so 
  we can create a session through each portal to the same target? 
  Something like this:
 
  # iscsiadm -m node
  20.15.0.12:3260,1 iqn.2001-04.com.home:meanminna
  20.15.0.100:3260,1 iqn.2001-04.com.home:meanminna
 
  And right now we create two sessions (one through each portal on the 
  target)? If so you can run
 
  iscsiadm -m node -T target -p ip:port,tpgt -o update -n node.startup -v 
  manual

 This means I can set the initiator on one specific node not to start the
 session automatically?

 It would be possible to use this approach, but it does not seem to me a
 robust solution. The handling of the manual sessions has to be driven by
 the HA system (heartbeat). In case of split brain this will give a
 session to both machines - and booom!

 I think a robust solution has to be a session-counting instance, and is
 thus best located at the target side.

 Is there really no target-side approach?

Some target implementations support such a feature, i.e. limiting the maximum
number of sessions. Ask on the mailing list of the target implementation
that you use.
