Dr. Volker Jaenisch wrote:
> Hello Mike! 
> 
> Thanks for the fast reply!
> 
> Mike Christie schrieb:
>> volkerJaenisch wrote:
>>   
>>> Aug 20 01:17:52 hades kernel: [45569.481501] iser:
>>> iser_cma_handler:event: 3, error: -110
>>>     
>> On the target side are you running tgt with iscsi and iser at the same 
>> time? Did you make sure to make it so both targets have the proper 
>> access lists?
>>   
> Is there a magical switch to give different ACLs for iser and TCP transport?
> 
> How can I separate the iscsi and the iser target ACLs? They are accessed
> via the same IP (IPoIB)?

I think you do not have to. See below.

> As I understood (poorly) the matter, the initiator open-iscsi at first
> tries to discover the target(s) via IP.
> Then if it is told to (via -t iser) it tries to negotiate via iser
> transport. If this fails it tries TCP transport.
>> Or what tgtadm commands did you use to create the iser target or did you 
>> use the config script?
>>   
> I used the following commands:
> 
> tgtadm --lld iscsi --op new --mode target --tid 1 -T
> de.inqbus.poseidon:disk1
> tgtadm --lld iscsi --op bind --mode target --tid 1 -I 10.6.0.1
> tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b
> /dev/vg1/test

Could you try a
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

This would allow any initiator to log in. If that works then you would 
want to check out the tgt README.iscsi (I do not know where debian 
installs READMEs for its packages, so I attached the upstream one for 
you; the commands should be the same) to see how to set the acls 
based on whatever you want to do. I do not remember how to set other 
values off the top of my head.

> 
> The initiator IP is 10.6.0.1.
> 
> Best Regards
> 
> Volker
> 



Preface
-------------
This shows a simple example of setting up some targets.


Starting the daemon
-------------
Try the following commands:

host:~/tgt$ su
host:~/tgt# ./usr/tgtd


Configuration
-------------
Everything is configured via the tgtadm management tool.

The following example creates a target with id 1 (the iqn is
iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz) and adds a
logical unit (backed by /dev/hdc1) with lun 1.

Let's create one target device.

host:~/tgt$ su
host:~/tgt# ./usr/tgtadm --lld iscsi --op new --mode target --tid 1 -T 
iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz

You can get the current configuration:

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
    Account information:
    ACL information:


The controller device for management (lun 0) was created
automatically. You can't remove it.

Now it's time to add a logical unit to the target:

host:~/tgt# ./usr/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 
1 -b /dev/hdc1

You can get the current configuration:

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
    ACL information:


If you don't need to configure this target any more, enable the target
to accept any initiators:

host:~/tgt# ./usr/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
    ACL information:
        ALL


Note the "ACL information" section. "ALL" means that this target accepts
any initiator. The ACL feature also provides access control based
on initiators' addresses.

First, let's remove the "ALL" entry:

host:~/tgt# ./usr/tgtadm --lld iscsi --op unbind --mode target --tid 1 -I ALL
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
    ACL information:


Here are some examples:

host:~/tgt# ./usr/tgtadm --lld iscsi --op bind --mode target --tid 1 -I 
192.168.2.29
host:~/tgt# ./usr/tgtadm --lld iscsi --op bind --mode target --tid 1 -I 
192.168.11.0/24
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
    ACL information:
        192.168.2.29
        192.168.11.0/24
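
The ACL entries above match either an exact initiator IP or a CIDR
prefix. As a rough illustration only (this is not tgt's actual code, and
the function name `acl_allows` is hypothetical), the matching logic can
be sketched in Python:

```python
# Hedged sketch of address-based ACL matching; illustration only,
# not tgt's implementation.
import ipaddress

def acl_allows(acl_entries, initiator_ip):
    """Return True if initiator_ip matches any ACL entry.

    Entries may be "ALL", a single IP address, or a CIDR network.
    """
    addr = ipaddress.ip_address(initiator_ip)
    for entry in acl_entries:
        if entry == "ALL":
            return True
        if "/" in entry:
            # CIDR entry such as "192.168.11.0/24"
            if addr in ipaddress.ip_network(entry, strict=False):
                return True
        elif addr == ipaddress.ip_address(entry):
            return True
    return False

print(acl_allows(["192.168.2.29", "192.168.11.0/24"], "192.168.11.5"))  # True
print(acl_allows(["192.168.2.29", "192.168.11.0/24"], "10.0.0.1"))      # False
```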


You can add lots of logical units:

host:~/tgt# ./usr/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 
2 -b /dev/hdd1
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
        LUN: 2
            Type: disk
            SCSI ID: deadbeaf1:2
            SCSI SN: beaf12
            Size: 55G
            Backing store: /dev/hdd1
    Account information:
    ACL information:


You can get iSCSI parameters of the target:

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target --tid 1
MaxRecvDataSegmentLength=8192
MaxXmitDataSegmentLength=8192
HeaderDigest=None
DataDigest=None
InitialR2T=Yes
MaxOutstandingR2T=1
ImmediateData=Yes
FirstBurstLength=65536
MaxBurstLength=262144
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
ErrorRecoveryLevel=0
IFMarker=No
OFMarker=No
DefaultTime2Wait=2
DefaultTime2Retain=20
OFMarkInt=Reject
IFMarkInt=Reject
MaxConnections=1


You can change iSCSI parameters like the following (e.g. set
MaxRecvDataSegmentLength to 16384):

host:~/tgt# ./usr/tgtadm --lld iscsi --mode target --op update --tid 1 --name 
MaxRecvDataSegmentLength --value 16384

You can get the iSCSI parameters again to see the change:

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target --tid 1
MaxRecvDataSegmentLength=16384
MaxXmitDataSegmentLength=8192
HeaderDigest=None
DataDigest=None
InitialR2T=Yes
MaxOutstandingR2T=1
ImmediateData=Yes
FirstBurstLength=65536
MaxBurstLength=262144
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
ErrorRecoveryLevel=0
IFMarker=No
OFMarker=No
DefaultTime2Wait=2
DefaultTime2Retain=20
OFMarkInt=Reject
IFMarkInt=Reject
MaxConnections=1


The following is another example, enabling the header digest:

host:~/tgt# ./usr/tgtadm --lld iscsi --mode target --op update --tid 1 --name 
HeaderDigest --value CRC32C

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target --tid 1
MaxRecvDataSegmentLength=16384
MaxXmitDataSegmentLength=8192
HeaderDigest=CRC32C
DataDigest=None
InitialR2T=Yes
MaxOutstandingR2T=1
ImmediateData=Yes
FirstBurstLength=65536
MaxBurstLength=262144
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
ErrorRecoveryLevel=0
IFMarker=No
OFMarker=No
DefaultTime2Wait=2
DefaultTime2Retain=20
OFMarkInt=Reject
IFMarkInt=Reject
MaxConnections=1

The target accepts CRC32C and None. Currently, there is no way to
configure a target to accept only CRC32C.
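
The header digest negotiated above is CRC32C (the Castagnoli
polynomial). As a hedged illustration of what that digest computes
(real implementations are table-driven or use hardware CRC
instructions, not this bitwise loop):

```python
# Minimal bitwise CRC-32C (Castagnoli), the digest iSCSI uses for
# HeaderDigest/DataDigest. Illustration only.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the bit-reflected Castagnoli polynomial.
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C:
print(hex(crc32c(b"123456789")))  # 0xe3069283
```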


Authentication
-------------
Let's create a new account:

host:~/tgt# ./usr/tgtadm --lld iscsi --op new --mode account --user fujita 
--password tomo
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode account
Account list:
    fujita

You can assign this account to any targets:

host:~/tgt# ./usr/tgtadm --lld iscsi --op bind --mode account --tid 1 --user 
fujita
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
        fujita
    ACL information:
        ALL


You can set up an outgoing account. First, you need to create a new
account like the previous example:

host:~/tgt# ./usr/tgtadm --lld iscsi --op new --mode account --user hoge 
--password deadbeaf
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode account
Account list:
    hoge
    fujita
host:~/tgt# ./usr/tgtadm --lld iscsi --op bind --mode account --tid 1 --user 
hoge --outgoing
host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
        fujita
        hoge (outgoing)
    ACL information:
        ALL


Initiator Information
-------------
After the target accepts initiators, the system information looks
something like the following:

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
        I_T nexus: 2
            Initiator: iqn.1987-05.com.cisco:01.4438aca09387
            Connection: 0
                IP Address: 192.168.11.5
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
    ACL information:
        ALL


One initiator (192.168.11.5) is logged in now. Let's try again:

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2001-04.com.example:storage.disk2.amiens.sys1.xyz
    System information:
        Driver: iscsi
        Status: running
    I_T nexus information:
        I_T nexus: 2
            Initiator: iqn.1987-05.com.cisco:01.4438aca09387
            Connection: 0
                IP Address: 192.168.11.5
        I_T nexus: 3
            Initiator: iqn.1991-05.com.microsoft:kernel
            Connection: 1
                IP Address: 192.168.11.133
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 55G
            Backing store: /dev/hdc1
    Account information:
    ACL information:
        ALL


Now we have two initiators. You can see the parameters that the target
and initiator negotiated (use the value that follows "I_T nexus:" as --sid):

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode session --tid 1 --sid 3
MaxRecvDataSegmentLength=8192
MaxXmitDataSegmentLength=8192
HeaderDigest=None
DataDigest=None
InitialR2T=Yes
MaxOutstandingR2T=1
ImmediateData=Yes
FirstBurstLength=65536
MaxBurstLength=262144
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
ErrorRecoveryLevel=0
IFMarker=No
OFMarker=No
DefaultTime2Wait=2
DefaultTime2Retain=20
OFMarkInt=Reject
IFMarkInt=Reject
MaxConnections=1
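
A few of the negotiation rules behind these session values can be
sketched as follows. This is a hedged illustration of the RFC 3720
result functions for a handful of keys, not tgt's negotiation code,
and the `negotiate` function is hypothetical:

```python
# Hedged sketch of iSCSI login negotiation for a few keys (RFC 3720);
# the real code handles many more keys and declarative parameters.
def negotiate(target, initiator):
    result = {}
    # Numeric keys where the smaller offer wins:
    for key in ("MaxBurstLength", "FirstBurstLength", "MaxConnections"):
        result[key] = min(target[key], initiator[key])
    # InitialR2T uses logical OR: "Yes" unless both sides offer "No".
    result["InitialR2T"] = (
        "No" if target["InitialR2T"] == initiator["InitialR2T"] == "No" else "Yes")
    # ImmediateData uses logical AND: "Yes" only if both sides offer "Yes".
    result["ImmediateData"] = (
        "Yes" if target["ImmediateData"] == initiator["ImmediateData"] == "Yes" else "No")
    return result

tgt_offer = {"MaxBurstLength": 262144, "FirstBurstLength": 65536,
             "MaxConnections": 1, "InitialR2T": "Yes", "ImmediateData": "Yes"}
ini_offer = {"MaxBurstLength": 16776192, "FirstBurstLength": 262144,
             "MaxConnections": 1, "InitialR2T": "No", "ImmediateData": "Yes"}
print(negotiate(tgt_offer, ini_offer))
```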


Shutdown
-------------
host:~/tgt# killall -9 tgtd

We will support better methods later.


iSNS
-------------
iSNS support is still experimental.

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode sys
iSNS:
    iSNS=Off
    iSNSServerIP=
    iSNSServerPort=3205
    iSNSAccessControl=Off

host:~/tgt# ./usr/tgtadm --op update --mode sys --name iSNSServerIP -v 
192.168.11.133

host:~/tgt# ./usr/tgtadm --op update --mode sys --name iSNS -v On

host:~/tgt# ./usr/tgtadm --lld iscsi --op show --mode sys
iSNS:
    iSNS=On
    iSNSServerIP=192.168.11.133
    iSNSServerPort=3205
    iSNSAccessControl=Off

Now you are ready to add targets. Now there are some limitations:

- once you enable iSNS, you can't disable it.
- you need to enable iSNS before adding targets.


iSCSI Extensions for RDMA (iSER)
================================

Copyright (C) 2007 Pete Wyckoff <[EMAIL PROTECTED]>

Background
----------

There is an IETF standards track RFC 5046 that extends the iSCSI protocol
to work on RDMA-capable networks as well as on traditional TCP/IP:

        Internet Small Computer System Interface (iSCSI) Extensions
        for Remote Direct Memory Access (RDMA), Mike Ko, October 2007.

RDMA stands for Remote Direct Memory Access, a way of accessing memory
of a remote node directly through the network without involving the
processor of that remote node.  Many network devices implement some form
of RDMA.  Two of the more popular network devices are InfiniBand (IB)
and iWARP.  IB uses its own physical and network layer, while iWARP sits
on top of TCP/IP (or SCTP).

Using these devices requires a new application programming interface
(API).  The Linux kernel has many components of the OpenFabrics software
stack, including APIs for access from user space and drivers for some
popular RDMA-capable NICs, including IB cards with the Mellanox chipset
and iWARP cards from NetEffect, Chelsio, and Ammasso.  Most Linux
distributions ship the user space libraries for device access and RDMA
connection management.


RDMA in tgtd
------------

The Linux kernel can act as a SCSI initiator on the iSER transport, but
not as a target.  tgtd is a user space target that supports multiple
transports, including iSCSI/TCP, and now iSER on RDMA devices.

The iSER code was written by researchers at the Ohio Supercomputer
Center in early 2007:

        Dennis Dalessandro <[EMAIL PROTECTED]>
        Ananth Devulapalli <[EMAIL PROTECTED]>
        Pete Wyckoff <[EMAIL PROTECTED]>

We wanted to use a faster transport to test the capabilities of an
object-based storage device (OSD) emulator we had previously written.
Our cluster has InfiniBand cards, and while running TCP/IP over IB is
possible, the performance is not nearly as good as using native IB
directly.

A report describing this implementation and some performance results
appears in IEEE conference proceedings as:

        Dennis Dalessandro, Ananth Devulapalli and Pete Wyckoff,
        iSER Storage Target for Object-based Storage Devices,
        Proceedings of MSST'07, SNAPI Workshop, San Diego, CA,
        September 2007.

and is available at:

        http://www.osc.edu/~pw/papers/iser-snapi07.pdf

Slides of the talk with more results and analysis are also available at:

        http://www.osc.edu/~pw/papers/wyckoff-iser-snapi07-talk.pdf

The code mostly lives in iscsi/iscsi_rdma.c, with a few places in
iscsi/iscsid.c that check if the transport is RDMA or not and behave
accordingly.  iSCSI already had the idea of a transport, with just the
single TCP one defined.  We added the RDMA transport and virtualized
some more functions where TCP and RDMA behave differently.


Design Issues
-------------

In general, a SCSI system includes two components, an initiator and a
target. The initiator submits commands and awaits responses.  The target
services commands from initiators and returns responses.  Data may flow
from the initiator, from the target, or both (bidirectional).  The iSER
specification requires all data transfers to be started by the target,
regardless of direction.  In a read operation, the target uses RDMA
Write to move data to the initiator, while a write operation uses RDMA
Read to fetch data from the initiator.
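
The direction rule above can be stated as a tiny table. The function
name `rdma_op_for` is hypothetical; this is just an illustration of
the rule, not code from tgtd:

```python
# In iSER the target initiates every data movement, regardless of the
# SCSI transfer direction.
def rdma_op_for(scsi_command):
    # SCSI READ: data moves target -> initiator, via RDMA Write.
    # SCSI WRITE: data moves initiator -> target, via RDMA Read.
    return {"READ": "RDMA_WRITE", "WRITE": "RDMA_READ"}[scsi_command]

print(rdma_op_for("READ"))   # RDMA_WRITE
print(rdma_op_for("WRITE"))  # RDMA_READ
```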


1. Memory registration

One of the most severe stumbling blocks in moving any application to
take advantage of RDMA features is memory registration.  Before using
RDMA, both the sending and receiving buffers must be registered with the
operating system.  This operation ensures that the underlying hardware
pages will not be modified during the transfer, and provides the
physical addresses of the buffers to the network card.  However, the
process itself is time consuming, and CPU intensive.  Previous
investigations have shown that for InfiniBand, with a nominal transfer
rate of 900 MB/s, the throughput drops to around 500 MB/s when memory
registration and deregistration are included in the critical path.

Our target implementation uses pre-registered buffers for RDMA
operations.  In general such a scheme is difficult to justify due to the
large per-connection resource requirements.  However, in this
application it may be appropriate.  Since the target always initiates
RDMA operations and never advertises RDMA buffers, it can securely use
one pool of buffers for multiple clients and can manage its memory
resources explicitly.  Also, the architecture of the code is such that
the iSCSI layer dictates incoming and outgoing buffer locations to the
storage device layer, so supplying a registered buffer is relatively
easy.
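
The pre-registered pool idea can be sketched as follows. This is a
hedged illustration of the design, not the actual iscsi_rdma.c code;
in the real target each buffer would be registered with the HCA (e.g.
via ibv_reg_mr) once at setup time:

```python
# Hedged sketch of a pre-registered RDMA buffer pool: buffers are
# "registered" once up front and recycled across connections, so no
# registration cost appears on the data path.
class BufferPool:
    def __init__(self, nbufs, bufsize):
        # Real code would register each buffer with the NIC here and
        # remember its lkey/rkey for later work requests.
        self.free = [bytearray(bufsize) for _ in range(nbufs)]

    def get(self):
        # Fast path: no registration, just pop a ready buffer.
        return self.free.pop() if self.free else None  # None: caller waits

    def put(self, buf):
        self.free.append(buf)

pool = BufferPool(nbufs=4, bufsize=8192)
buf = pool.get()      # use as an RDMA source/destination
pool.put(buf)         # recycle for the next command
```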


2. Event management

There is a mismatch between what the tgtd event framework assumes and
what the RDMA notification interface provides.  The existing TCP-based
iSCSI target code has one file descriptor per connection and it is
driven by readability or writeability of the socket.  A single poll
system call returns which sockets can be serviced, driving the TCP code
to read or write as appropriate.  The RDMA interface can be used in
accordance with this design by requesting interrupts from the network
card on work request completions.  Notifications appear on the file
descriptor that represents a completion queue to which all RDMA events
are delivered.

However, the existing sockets-based code goes beyond this and changes
the bitmask of requested events to control its code flow.  For instance,
after it finishes sending a response, it will modify the bitmask to only
look for readability.  Even if the socket is writeable, there is no data
to write, hence polling for that status is not useful.  The code also
disables new message arrival during command execution as a sort of
exclusion facility, again by modifying the bitmask.  We cannot do this
with the RDMA interface.  Hence we must maintain an active list of tasks
that have data to write and drive a progress engine to service them.
The need for progress is tracked by a counter, and the tgtd event loop
checks this counter and calls into the iSER-specific code while the
counter is still non-zero.  tgtd will block in the poll call when it
must wait
on network activity.  No dedicated thread is needed for iSER.
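
The progress-counter scheme can be sketched like this. It is a hedged
illustration of the control flow described above, with hypothetical
names (`schedule`, `iser_progress`, `event_loop_iteration`), not the
actual tgtd event code:

```python
# Hedged sketch of the progress counter: the event loop drains pending
# iSER work before blocking in poll() on the completion-queue fd.
progress_needed = 0
pending_tasks = []

def schedule(task):
    global progress_needed
    pending_tasks.append(task)
    progress_needed += 1

def iser_progress():
    # Service one pending task, e.g. post one RDMA work request.
    global progress_needed
    task = pending_tasks.pop(0)
    task()
    progress_needed -= 1

def event_loop_iteration(poll):
    # Drain outstanding work before blocking on file descriptors.
    while progress_needed:
        iser_progress()
    poll()  # block until network activity; no dedicated thread needed

done = []
schedule(lambda: done.append("rdma-write"))
schedule(lambda: done.append("send-response"))
event_loop_iteration(poll=lambda: None)
print(done)  # ['rdma-write', 'send-response']
```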


3. Padding

The iSCSI specification clearly states that all segments in the protocol
data unit (PDU) must be individually padded to four-byte boundaries.
However, the iSER specification remains mute on the subject of padding.
It is clear from an implementation perspective that padding data
segments is both unnecessary and would add considerable overhead to
implement.  (Possibly a memory copy or extra SG entry on the initiator
when sending directly from user memory.)   RDMA is used to move all
data, with byte granularity provided by the network.  The need for
padding in the TCP case was motivated by the optional marker support to
work around the limitations of the streaming mode of TCP.  IB and iWARP
are message-based networks and would never need markers.  And finally,
the Linux initiator does not add padding either.
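
The four-byte padding rule for the TCP transport is simple arithmetic;
a one-line sketch (the helper name `pad_bytes` is ours, not from the
code):

```python
# iSCSI/TCP pads each PDU segment to a 4-byte boundary; the pad is the
# distance to the next multiple of four.  iSER sends no padding at all.
def pad_bytes(segment_len: int) -> int:
    return (-segment_len) % 4

for n in (0, 1, 2, 3, 4, 5):
    print(n, pad_bytes(n))  # 0->0, 1->3, 2->2, 3->1, 4->0, 5->3
```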


Using iSER
----------

Compile tgtd with "make ISCSI=1 ISCSI_RDMA=1" to build iSCSI and iSER.
You'll need to have two libraries installed on your system:
libibverbs.so and librdmacm.so.  If they are installed in the normal
system paths (/usr/include and /usr/lib or /usr/lib64), they will be
found automatically.  Otherwise, edit CFLAGS and LIBS in usr/Makefile
near ISCSI_RDMA to specify the paths by hand, e.g., for a /usr/local
install, it should look like:

        ifneq ($(ISCSI_RDMA),)
        CFLAGS += -DISCSI_RDMA -I/usr/local/include
        TGTD_OBJS += iscsi/iscsi_rdma.o
        LIBS += -L/usr/local/lib -libverbs -lrdmacm
        endif

If these libraries are not in the normal system paths, you may
possibly also have to set, e.g., LD_LIBRARY_PATH=/usr/local/lib
in your environment to find the shared libraries at runtime.

The target will listen on all TCP interfaces (as usual), as well as all
RDMA devices.  Both use the same default iSCSI port, 3260.  Clients on
TCP or RDMA will connect to the same tgtd instance.

Start the daemon (as root):

        ./tgtd

It will send messages to syslog.  You can add "-d 9" to turn on debug
messages.

Configure the running target with one or more devices, using the tgtadm
program you just built (also as root).  Full information is in
doc/README.iscsi.  Here is a quick-start guide:

        dd if=/dev/zero bs=1k count=1 seek=1048575 of=/tmp/tid1lun1
        ./tgtadm --lld iscsi --mode target \
                 --op new --tid 1 --targetname $(hostname)
        ./tgtadm --lld iscsi --mode target \
                 --op bind --tid 1 --initiator-address ALL
        ./tgtadm --lld iscsi --mode logicalunit \
                 --op new --tid 1 --lun 1 --backing-store /tmp/tid1lun1

To make your initiator use RDMA, make sure the "ib_iser" module is
loaded in your kernel.  Then do discovery as usual, over TCP:

        iscsiadm -m discovery -t sendtargets -p $targetip

where $targetip is the ethernet address of your IPoIB device.  Discovery
traffic will use IPoIB, but login and full feature phase will use RDMA
natively.

Then do something like the following to change the transport type:

        iscsiadm -m node -p $targetip -T $targetname --op update \
            -n node.transport_name -v iser

Next, login as usual:

        iscsiadm -m node -p $targetip -T $targetname --login

And access the new block device, e.g. /dev/sdb.


Errata
------

There is a major bug in the mthca driver in linux kernels before 2.6.21.
This includes the popular rhel5 kernels, such as 2.6.18-8.1.6.el5 and
possibly later.  The critical commit is:

    608d8268be392444f825b4fc8fc7c8b509627129
    IB/mthca: Fix data corruption after FMR unmap on Sinai

If you use single-port memfree cards, SCSI read operations will
frequently result in randomly corrupted memory, leading to bad
application data or unexplainable kernel crashes.  Older kernels are
also missing some nice iSCSI changes that avoid crashes in some
situations where the target goes away.  Stock kernel.org linux
2.6.22-rc5 and 2.6.23-rc6 have been tested and are known to work.

The Linux kernel iSER initiator is currently lacking support for
bidirectional transfers, and for extended command descriptors (CDBs).
Progress toward adding this is being made, with patches frequently
appearing on the relevant mailing lists.

The Linux kernel iSER initiator uses a different header structure on its
packets than is in the iSER specification.  This is described in
an InfiniBand document and is required for that network, which only
supports Zero-Based Addressing.  If you are using a non-IB initiator
that doesn't need this header extension, it won't work with tgtd.  There
may be some way to negotiate the header format.  Using iWARP hardware
devices with the Linux kernel iSER initiator also will not work due to
its reliance on fast memory registration (FMR), an InfiniBand-only feature.

The current code sizes its per-connection resource consumption based on
negotiated parameters.  However, the Linux iSER initiator does not
support negotiation of MaxOutstandingUnexpectedPDUs, so that value is
hard-coded in the target.  Also, open-iscsi is hard-coded with a very
small value of TargetRecvDataSegmentLength, so even though the target
would be willing to accept a larger size, it cannot.  This may limit
performance of small transfers on high-speed networks:  transfers bigger
than 8 kB, but not large enough to amortize a round-trip for RDMA setup.

The data structures for connection management in the iSER code are
designed to handle multiple devices, but have never been tested with
such hardware.
