Re: [PATCH] target: Fix for hang of Ordered task in TCM

2016-06-08 Thread Nicholas A. Bellinger
On Wed, 2016-06-08 at 14:43 -0500, Bryant G. Ly wrote:



> 
> Hi Nic,
> 
> So I was testing the ibmvscsi target driver and I ran into some problems
> again with UA stuff and it looks like you didn't remove the UA checks from
> target_setup_cmd_from_cdb. Was that intentional? I thought we agreed to move
> it completely to target_execute_cmd and not have both?
> 
> Let me know.
> 

Ah yes, the version of the patch I've been using for future target-core
developments was missing the removal in target_setup_cmd_from_cdb().

Applied the proper version to target-pending/master here:

https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/commit/?id=15ce22fa491aa3bab7acd89ac40a9fc27aeed915

Thanks for the heads up!

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v2] scsi_debug: fix sleep in invalid context

2016-06-08 Thread Douglas Gilbert

On 2016-06-08 04:57 PM, Douglas Gilbert wrote:

On 2016-06-07 09:55 PM, Christoph Hellwig wrote:

+static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr,
+				  int arr_len, unsigned int off_dst)
+{
+	int act_len, n;
+	struct scsi_data_buffer *sdb = scsi_in(scp);
+	off_t skip = off_dst;


Why off_t, which is a signed type, instead of the unsigned int passed in?


Because off_t is the type of the last argument to sg_pcopy_from_buffer()
which is where the off_dst value goes. So there is potentially a size
of integer change plus signedness, IMO best expressed by defining a
new auto variable. I notice some projects have a uoff_t typedef for
offsets that make no sense when negative (like this case), but not the
Linux kernel.


+#define RL_BUCKET_ELEMS 8
+
  /* Even though each pseudo target has a REPORT LUNS "well known logical unit"
   * (W-LUN), the normal Linux scanning logic does not associate it with a
   * device (e.g. /dev/sg7). The following magic will make that association:
@@ -3285,12 +3315,14 @@ static int resp_report_luns(struct scsi_cmnd *scp,
  unsigned char select_report;
  u64 lun;
  struct scsi_lun *lun_p;
-u8 *arr;
+u8 arr[RL_BUCKET_ELEMS * sizeof(struct scsi_lun)];


just use an on-stack array of type struct scsi_lun here, e.g.:

struct scsi_lun arr[RL_BUCKET_ELEMS];

Which you can then use directly instead of lun_p later, but which can
also be passed to p_fill_from_dev_buffer as that takes a void pointer.


Can do.


When I looked at doing this, it's not that simple. The first 8 byte
"slot" is not a LUN, it's the response header, which just happens
to be 8 bytes long. That is the reason for the comment above that loop:
  /* loops rely on sizeof response header same as sizeof lun (both 8) */

So IMO: 'struct scsi_lun arr[RL_BUCKET_ELEMS];' will work but
is deceptive. IMO the v2 patch should stand, unless someone has an
alternate patch. The original patch is also fine. This is a bug
fix, not a new feature.

Doug Gilbert




Re: [RFC 0/2] nvme/loop: Add support for controllers-per-port model

2016-06-08 Thread Nicholas A. Bellinger
On Wed, 2016-06-08 at 14:14 +0200, Christoph Hellwig wrote:
> Hi Nic,
> 
> multiple ports have been on the todo list ever since we put that short
> cut in place, but your patches aren't really how this should work.
> 
> The host NQN is not available for the actual drivers - we need to be able
> to virtualize it for containers for example, that's why we need to pass
> it by pointer depending on the arguments.  Also there is no need to
> add another sysfs interface - just like for real drivers we can simply
> create the ports in configfs and assign an address to it, we just need
> to stop ignoring it.
> 

Not sure what you mean, but my patch does not propose 'to add another
sysfs interface'.

It simply follows what drivers/target/loopback/ already does for scsi,
and allocates a controller from nvmet/configfs-ng at subsystem NQN
port enable time.

I still don't see why a loopback controller on a port of an individual
subsystem NQN can't be created + deleted directly from configfs, and
operate independently of other loopback controllers on a port of a
different subsystem NQNs.

> Something like the patch below is the right way, it just needs a bit more
> fine tuning and proper test cases:

I don't see how adding an internal mutex for loopback ports, and doing
internal sequential list lookup holding the global nvmet_config_sem
whilst doing nvmet_fabric_ops->add_port() helps to scale at all..

AFAICT for loopback, neither of these is necessary.  Why can't a
loopback controller on a port for an individual subsystem NQN be created
+ deleted directly from configfs, independent of other subsystem NQN's
loopback controller ports..?

Now for rdma, just like with iscsi-target + iser with network portals,
we do need to keep an internal list of ports, given that ports can be
shared across different target endpoints.

That's the part that I'm working on next for nvmet/configfs-ng.



[PATCH 16/21] fnic: Use time64_t to represent trace timestamps

2016-06-08 Thread Deepa Dinamani
Trace timestamps use struct timespec and CURRENT_TIME, which
are not y2038 safe.
These timestamps are only part of the trace log on the machine
and are not shared with the fnic.
Replace them with the y2038-safe struct timespec64 and
ktime_get_real_ts64(), respectively.

Signed-off-by: Deepa Dinamani 
Cc: Hiral Patel 
Cc: Suma Ramars 
Cc: Brian Uchino 
Cc: "James E.J. Bottomley" 
Cc: "Martin K. Petersen" 
Cc: linux-scsi@vger.kernel.org
---
 drivers/scsi/fnic/fnic_trace.c | 4 ++--
 drivers/scsi/fnic/fnic_trace.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_trace.c b/drivers/scsi/fnic/fnic_trace.c
index 4e15c4b..5a5fa01 100644
--- a/drivers/scsi/fnic/fnic_trace.c
+++ b/drivers/scsi/fnic/fnic_trace.c
@@ -613,7 +613,7 @@ int fnic_fc_trace_set_data(u32 host_no, u8 frame_type,
fc_trace_entries.rd_idx = 0;
}
 
-   fc_buf->time_stamp = CURRENT_TIME;
+   ktime_get_real_ts64(&fc_buf->time_stamp);
fc_buf->host_no = host_no;
fc_buf->frame_type = frame_type;
 
@@ -740,7 +740,7 @@ void copy_and_format_trace_data(struct fc_trace_hdr *tdata,
 
len = *orig_len;
 
-   time_to_tm(tdata->time_stamp.tv_sec, 0, &tm);
+   time64_to_tm(tdata->time_stamp.tv_sec, 0, &tm);
 
fmt = "%02d:%02d:%04ld %02d:%02d:%02d.%09lu ns%8x   %c%8x\t";
len += snprintf(fnic_dbgfs_prt->buffer + len,
diff --git a/drivers/scsi/fnic/fnic_trace.h b/drivers/scsi/fnic/fnic_trace.h
index a8aa057..e375d0c 100644
--- a/drivers/scsi/fnic/fnic_trace.h
+++ b/drivers/scsi/fnic/fnic_trace.h
@@ -72,7 +72,7 @@ struct fnic_trace_data {
 typedef struct fnic_trace_data fnic_trace_data_t;
 
 struct fc_trace_hdr {
-   struct timespec time_stamp;
+   struct timespec64 time_stamp;
u32 host_no;
u8 frame_type;
u8 frame_len;
-- 
1.9.1



Re: NVMe over Fabrics target implementation

2016-06-08 Thread Nicholas A. Bellinger
On Wed, 2016-06-08 at 16:12 +0300, Sagi Grimberg wrote:
> >> *) Extensible to multiple types of backend drivers.
> >>
> >> nvme-target needs a way to absorb new backend drivers, that
> >> does not effect existing configfs group layout or attributes.
> >>
> >> Looking at the nvmet/configfs layout as-is, there are no multiple
> >> backend types defined, nor a way to control backend feature bits
> >> exposed to nvme namespaces at runtime.
> 
> Hey Nic,
> 
> As for different type of backends, I still don't see a big justification
> for adding the LIO backends pscsi (as it doesn't make sense),
> ramdisk (we have brd), or file (losetup).
> 

The configfs ABI should not dictate a single backend use-case.

In the target-core ecosystem today, there are just as many
people using FILEIO atop local file-systems as there are
using IBLOCK and submit_bio().

As mentioned, target_core_iblock.c has absorbed the io-cmd.c
improvements so existing scsi target drivers can benefit
too.  Plus, having interested folks focusing on a single set of
backends for FILEIO + IBLOCK means both scsi and nvme
target drivers benefit from further improvements.

As we've already got both a target backend configfs ABI and a
user ecosystem using /sys/kernel/config/target/core/, it's a
straightforward way to share common code, while still allowing
scsi and nvme to function using their own independent fabric
configfs ABI layouts.

Along with having common code and existing configfs
ABI, we also get a proper starting point for target-core
features that span across endpoints, and are defined for
both scsi and nvme.  PR APTPL immediately comes to mind.

Namely, for the ability of one backend device to interact
between both scsi target luns and nvme target namespaces,
as well as different backends across scsi and nvme exports.

> What kind of feature bits would you want to expose at runtime?

As for feature bits, basically everything currently or
in the future to be reported by ID_NS.  There is T10-PI, and another
example is copy-offload support, once the NVMe spec gets that far..

The main point is that we should be able to add new feature bits to
common code in target-core backend configfs ABI, without having to
change the individual scsi or nvme configfs ABIs.

> 
> > And that's very much intentional.  We have a very well working block
> > layer which we're going to use, no need to reinvent it.  The block
> > layer supports NVMe pass through just fine in case we'll need it,
> > as I spent the last year preparing it for that.
> >
> >> Why does it ever make sense for $SUBSYSTEM_NQN_0 with $PORT_DRIVER_FOO
> >> to block operation of $SUBSYSTEM_NQN_1 with $PORT_DRIVER_BAR..?
> >
> > Because it keeps the code simple.  If you had actually participated
> > on our development list you might have seen that until not too long
> > ago we had very fine-grained locks here.  In the end Armen convinced
> > me that it's easier to maintain if we don't bother with fine grained
> > locking outside the fast path, especially as it significantly simplifies
> > the discovery implementation.   If it ever turns out to be an
> > issue we can change it easily as the implementation is well encapsulated.
> 
> We did change that, and Nic is raising a valid point in terms of having
> a global mutex around all the ports. If the requirement of nvme
> subsystems and ports configuration is that it should happen fast enough
> and scale to the numbers that Nic is referring to, we'll need to change
> that back.
> 
> Having said that, I'm not sure this is a real hard requirement for RDMA
> and FC in the mid-term, because from what I've seen, the workloads Nic
> is referring to are more typical for iscsi/tcp where connections are
> cheaper and you need more to saturate a high-speed interconnects, so
> we'll probably see this when we have nvme over tcp working.

Yes.

Further, my objections to the proposed nvmet configfs ABI are:

  - Doesn't support multiple backend types.
  - Doesn't provide a way to control backend feature bits separate from 
fabric layout.
  - Doesn't provide a starting point between target features that span
both scsi and nvme.
  - Doesn't allow for concurrent parallel configfs create + delete 
operations of subsystem NQNs across ports and host acls.
  - Global synchronization of nvmet_fabric_ops->add_port()



Re: [PATCH] Add support for SCT Write Same

2016-06-08 Thread Martin K. Petersen
> "Shaun" == Shaun Tancheff  writes:

Shaun,

Shaun> SATA drives may support write same via SCT. This is useful for
Shaun> setting the drive contents to a specific pattern (0's).

index 428c03e..c5c8424 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -52,6 +52,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
No ATA stuff in sd.c, please.


@@ -2761,24 +2762,26 @@ static void sd_read_write_same(struct scsi_disk *sdkp, 
unsigned char *buffer)
 {
struct scsi_device *sdev = sdkp->device;
 
-   if (sdev->host->no_write_same) {
-   sdev->no_write_same = 1;
-
-   return;
-   }
-
if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY) < 0) {
-   /* too large values might cause issues with arcmsr */
-   int vpd_buf_len = 64;
-
sdev->no_report_opcodes = 1;
 
/* Disable WRITE SAME if REPORT SUPPORTED OPERATION
 * CODES is unsupported and the device has an ATA
 * Information VPD page (SAT).
 */

The above comment tells you how to enable WRITE SAME in libata's
SCSI-ATA translation.


-   if (!scsi_get_vpd_page(sdev, 0x89, buffer, vpd_buf_len))
+   if (!scsi_get_vpd_page(sdev, 0x89, buffer, SD_BUF_SIZE))

That vpd_buf_len is intentional.

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: NVMe over Fabrics target implementation

2016-06-08 Thread Nicholas A. Bellinger
On Wed, 2016-06-08 at 14:19 +0200, Christoph Hellwig wrote:
> On Tue, Jun 07, 2016 at 10:21:41PM -0700, Nicholas A. Bellinger wrote:
> > *) Extensible to multiple types of backend drivers.
> > 
> > nvme-target needs a way to absorb new backend drivers, that
> > does not effect existing configfs group layout or attributes.
> > 
> > Looking at the nvmet/configfs layout as-is, there are no multiple
> > backend types defined, nor a way to control backend feature bits
> > exposed to nvme namespaces at runtime.
> 
> And that's very much intentional.  We have a very well working block
> layer which we're going to use, no need to reinvent it.  The block
> layer supports NVMe pass through just fine in case we'll need it,
> as I spent the last year preparing it for that.
> 
> > Why does it ever make sense for $SUBSYSTEM_NQN_0 with $PORT_DRIVER_FOO
> > to block operation of $SUBSYSTEM_NQN_1 with $PORT_DRIVER_BAR..?
> 
> Because it keeps the code simple.  If you had actually participated
> on our development list you might have seen that until not too long
> ago we had very fine-grained locks here.  In the end Armen convinced
> me that it's easier to maintain if we don't bother with fine grained
> locking outside the fast path, especially as it significantly simplifies
> the discovery implementation.   If it ever turns out to be an
> issue we can change it easily as the implementation is well encapsulated.

Well, it's rather obvious from the design limitations highlighted
earlier that using a configfs model for nvmet was an after-thought.

You'll recall a prototype of nvmet using target_core_fabric_configfs.c,
which allowed nvmet to work out-of-the-box with LIO userspace, that fit
directly into target_core_pr.c, and other existing target-core features
that scsi and nvme share.

Since our discussions at LSF, I've taken the point that it makes sense
for nvmet to have its own configfs layout using common configfs
backends, which is reflected in nvmet/configfs-ng RFC from monday.

However, I completely disagree with you that scsi (specifically iscsi)
and nvme models are somehow incompatible from a target subsystem
configfs perspective.

They are not.

Point being, if you took the exact same three top level configfs groups
in the nvmet implementation using subsystem configfs symlinks into
ports and hosts, and did the same for iscsi by renaming them to:

/sys/kernel/config/iscsit/

subsystems -> iqns
ports -> network portals
hosts -> acls

You would have the exact same type of scale limitations with iscsi that
I have already highlighted.  There is nothing iscsi or nvme specific about
these limitations.

The point is that it's not a question of if this configfs model is
required for an nvme target design; it's a bad model for any configfs
consumer design to follow.

To repeat, any time there are global locks and sequential lookups
across multiple top level configfs groups, you're doing configfs
wrong.

The whole point of configfs is to allow vfs to reference count data
structures using parent/child relationships, and let configfs give you
the reference counting for free.

And getting that reference counting for free in configfs is what allows
existing target-core backends + endpoints to scale creation, operation
and deletion independent from each other.

Any type of medium or larger service provider and hosting environment
requires the ability of its target mode storage control plane to
function independent of individual tenants.

The limitations as-is of the nvmet/configfs design makes it rather
useless for these environments, and any type of upstream commitment to a
target mode configfs based ABI needs to be able to, at least, scale to
what we've already got in target_core_fabric_configfs.

Otherwise, it will eventually be thrown out for something that both fits
the needs of today's requirements, and can grow into the future while
never breaking user-space ABI compatibility.

We got the /sys/kernel/config/target/$FABRIC/ layout right the first
time back in 2008, have *never* had to break ABI compat, and we are
still able to scale to the requirements of today.

I'd like to hope we'd be able to achieve the same goals for nvmet in
2024.



Re: [PATCH] sd: remove redundant check for BLK_DEF_MAX_SECTORS

2016-06-08 Thread Martin K. Petersen
> "Long" == Long Li  writes:

Long,

Long> The problem I'm trying to solve is that, I want to have lower
Long> layer driver to setup max_sectors bigger than
Long> BLK_DEF_MAX_SECTORS.

Capping at BLK_DEF_MAX_SECTORS unless a device has explicitly reported
requirements is intentional. We have not had good experiences with
making I/O requests too big in general. So BLK_DEF_MAX_SECTORS has
deliberately been kept small. However, it was recently bumped to 1MB and
change by default.

Long> In Hyper-V, we use a 2MB max transfer I/O size; in a future version the
Long> max transfer I/O size will increase to 8MB.

But presumably you provide a BLOCK LIMITS VPD for your virtual targets?

Long> The reason why I think it may not be necessary for sd.c to setup
Long> max_sectors, it's because this value may have already been setup
Long> twice before reaching the code in sd.c: 1. When this disk device
Long> is first scanned, or re-scanned (in scsi_scan.c), where it
Long> eventually calls __scsi_init_queue(), and use the max_sectors in
Long> the scsi_host_template.  2. in slave_configure of
Long> scsi_host_template, when the lower layer driver implements this
Long> function in its template and it can change this value there.

Those cause limits to be set for the controller. We won't know the
device limits until we hit revalidate.

blk_queue_max_hw_sectors() will also clamp the R/W max at
BLK_DEF_MAX_SECTORS, though.

-- 
Martin K. Petersen  Oracle Linux Engineering


[PATCH 2/2] aacraid: remove wildcard for series 9 controllers

2016-06-08 Thread Don Brace
Depends on smartpqi driver adoption

Reviewed-by: Kevin Barnett 
Reviewed-by: Scott Teel 
Signed-off-by: Don Brace 
---
 drivers/scsi/aacraid/linit.c |2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index 79871f3..d5b26fa 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -160,7 +160,6 @@ static const struct pci_device_id aac_pci_tbl[] = {
	{ 0x9005, 0x028b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 62 }, /* Adaptec PMC Series 6 (Tupelo) */
	{ 0x9005, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 63 }, /* Adaptec PMC Series 7 (Denali) */
	{ 0x9005, 0x028d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 64 }, /* Adaptec PMC Series 8 */
-	{ 0x9005, 0x028f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 65 }, /* Adaptec PMC Series 9 */
	{ 0,}
 };
 MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
@@ -239,7 +238,6 @@ static struct aac_driver_ident aac_drivers[] = {
	{ aac_src_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 6 (Tupelo) */
	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 7 (Denali) */
	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 8 */
-	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC } /* Adaptec PMC Series 9 */
 };
 
 /**



[PATCH V1 0/2] Series short description

2016-06-08 Thread Don Brace
These patches are based on Linus's tree

- add smartpqi to kernel.org
- remove PCI IDs from aacraid driver
  - Depends on adoption of smartpqi driver

Changes since initial upload
 - Forgot to give correct ownership to the author.

---

Don Brace (1):
  aacraid: remove wildcard for series 9 controllers

Kevin Barnett (1):
  smartpqi: initial commit of Microsemi smartpqi driver


 MAINTAINERS|   11 
 drivers/scsi/Kconfig   |1 
 drivers/scsi/Makefile  |1 
 drivers/scsi/aacraid/linit.c   |2 
 drivers/scsi/smartpqi/Kconfig  |   50 
 drivers/scsi/smartpqi/Makefile |3 
 drivers/scsi/smartpqi/smartpqi.h   | 1106 
 drivers/scsi/smartpqi/smartpqi_init.c  | 6817 
 drivers/scsi/smartpqi/smartpqi_sas_transport.c |  350 +
 drivers/scsi/smartpqi/smartpqi_sis.c   |  394 +
 drivers/scsi/smartpqi/smartpqi_sis.h   |   32 
 11 files changed, 8765 insertions(+), 2 deletions(-)
 create mode 100644 drivers/scsi/smartpqi/Kconfig
 create mode 100644 drivers/scsi/smartpqi/Makefile
 create mode 100644 drivers/scsi/smartpqi/smartpqi.h
 create mode 100644 drivers/scsi/smartpqi/smartpqi_init.c
 create mode 100644 drivers/scsi/smartpqi/smartpqi_sas_transport.c
 create mode 100644 drivers/scsi/smartpqi/smartpqi_sis.c
 create mode 100644 drivers/scsi/smartpqi/smartpqi_sis.h



[PATCH V1 2/2] aacraid: remove wildcard for series 9 controllers

2016-06-08 Thread Don Brace
Depends on smartpqi driver adoption

Reviewed-by: Kevin Barnett 
Reviewed-by: Scott Teel 
Signed-off-by: Don Brace 
---
 drivers/scsi/aacraid/linit.c |2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index 79871f3..d5b26fa 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -160,7 +160,6 @@ static const struct pci_device_id aac_pci_tbl[] = {
	{ 0x9005, 0x028b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 62 }, /* Adaptec PMC Series 6 (Tupelo) */
	{ 0x9005, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 63 }, /* Adaptec PMC Series 7 (Denali) */
	{ 0x9005, 0x028d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 64 }, /* Adaptec PMC Series 8 */
-	{ 0x9005, 0x028f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 65 }, /* Adaptec PMC Series 9 */
	{ 0,}
 };
 MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
@@ -239,7 +238,6 @@ static struct aac_driver_ident aac_drivers[] = {
	{ aac_src_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 6 (Tupelo) */
	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 7 (Denali) */
	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 8 */
-	{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID", 2, AAC_QUIRK_SRC } /* Adaptec PMC Series 9 */
 };
 
 /**



Re: [PATCH] aic7xxx: fix wrong return values

2016-06-08 Thread Laurence Oberman


- Original Message -
> From: "Luis de Bethencourt" 
> To: linux-ker...@vger.kernel.org
> Cc: h...@suse.com, j...@linux.vnet.ibm.com, "martin petersen" 
> ,
> linux-scsi@vger.kernel.org, jav...@osg.samsung.com, "Luis de Bethencourt" 
> 
> Sent: Wednesday, June 8, 2016 6:23:02 PM
> Subject: [PATCH] aic7xxx: fix wrong return values
> 
> Convention of error codes says to return them as negative values.
> 
> Signed-off-by: Luis de Bethencourt 
> ---
>  drivers/scsi/aic7xxx/aic7770_osm.c |  6 +++---
>  drivers/scsi/aic7xxx/aic79xx_core.c| 24 
>  drivers/scsi/aic7xxx/aic79xx_osm.c |  8 
>  drivers/scsi/aic7xxx/aic79xx_osm_pci.c | 16 
>  drivers/scsi/aic7xxx/aic79xx_pci.c |  2 +-
>  drivers/scsi/aic7xxx/aic7xxx_core.c| 26 +-
>  drivers/scsi/aic7xxx/aic7xxx_osm.c |  8 
>  drivers/scsi/aic7xxx/aic7xxx_osm_pci.c | 12 ++--
>  drivers/scsi/aic7xxx/aic7xxx_pci.c |  4 ++--
>  9 files changed, 53 insertions(+), 53 deletions(-)
> 
> diff --git a/drivers/scsi/aic7xxx/aic7770_osm.c
> b/drivers/scsi/aic7xxx/aic7770_osm.c
> index 3d401d0..50f030d 100644
> --- a/drivers/scsi/aic7xxx/aic7770_osm.c
> +++ b/drivers/scsi/aic7xxx/aic7770_osm.c
> @@ -51,7 +51,7 @@ aic7770_map_registers(struct ahc_softc *ahc, u_int port)
>* Lock out other contenders for our i/o space.
>*/
>   if (!request_region(port, AHC_EISA_IOSIZE, "aic7xxx"))
> - return (ENOMEM);
> + return -ENOMEM;
>   ahc->tag = BUS_SPACE_PIO;
>   ahc->bsh.ioport = port;
>   return (0);
> @@ -87,10 +87,10 @@ aic7770_probe(struct device *dev)
>   sprintf(buf, "ahc_eisa:%d", eisaBase >> 12);
>   name = kstrdup(buf, GFP_ATOMIC);
>   if (name == NULL)
> - return (ENOMEM);
> + return -ENOMEM;
>   ahc = ahc_alloc(&aic7xxx_driver_template, name);
>   if (ahc == NULL)
> - return (ENOMEM);
> + return -ENOMEM;
>   error = aic7770_config(ahc, aic7770_ident_table + edev->id.driver_data,
>  eisaBase);
>   if (error != 0) {
> diff --git a/drivers/scsi/aic7xxx/aic79xx_core.c
> b/drivers/scsi/aic7xxx/aic79xx_core.c
> index 109e2c9..bf850d8 100644
> --- a/drivers/scsi/aic7xxx/aic79xx_core.c
> +++ b/drivers/scsi/aic7xxx/aic79xx_core.c
> @@ -6422,7 +6422,7 @@ ahd_init_scbdata(struct ahd_softc *ahd)
>   scb_data->maxhscbs = ahd_probe_scbs(ahd);
>   if (scb_data->maxhscbs == 0) {
>   printk("%s: No SCB space found\n", ahd_name(ahd));
> - return (ENXIO);
> + return -ENXIO;
>   }
>  
>   ahd_initialize_hscbs(ahd);
> @@ -6501,7 +6501,7 @@ ahd_init_scbdata(struct ahd_softc *ahd)
>  
>  error_exit:
>  
> - return (ENOMEM);
> + return -ENOMEM;
>  }
>  
>  static struct scb *
> @@ -7076,7 +7076,7 @@ ahd_init(struct ahd_softc *ahd)
>   ahd->stack_size = ahd_probe_stack_size(ahd);
>   ahd->saved_stack = kmalloc(ahd->stack_size * sizeof(uint16_t), 
> GFP_ATOMIC);
>   if (ahd->saved_stack == NULL)
> - return (ENOMEM);
> + return -ENOMEM;
>  
>   /*
>* Verify that the compiler hasn't over-aggressively
> @@ -7115,7 +7115,7 @@ ahd_init(struct ahd_softc *ahd)
>  /*maxsegsz*/AHD_MAXTRANSFER_SIZE,
>  /*flags*/BUS_DMA_ALLOCNOW,
>  &ahd->buffer_dmat) != 0) {
> - return (ENOMEM);
> + return -ENOMEM;
>   }
>  #endif
>  
> @@ -7143,7 +7143,7 @@ ahd_init(struct ahd_softc *ahd)
>  /*nsegments*/1,
>  /*maxsegsz*/BUS_SPACE_MAXSIZE_32BIT,
>  /*flags*/0, &ahd->shared_data_dmat) != 0) {
> - return (ENOMEM);
> + return -ENOMEM;
>   }
>  
>   ahd->init_level++;
> @@ -7153,7 +7153,7 @@ ahd_init(struct ahd_softc *ahd)
>  (void **)&ahd->shared_data_map.vaddr,
>BUS_DMA_NOWAIT,
>  &ahd->shared_data_map.dmamap) != 0) {
> - return (ENOMEM);
> + return -ENOMEM;
>   }
>  
>   ahd->init_level++;
> @@ -7194,7 +7194,7 @@ ahd_init(struct ahd_softc *ahd)
>  
>   /* Allocate SCB data now that buffer_dmat is initialized */
>   if (ahd_init_scbdata(ahd) != 0)
> - return (ENOMEM);
> + return -ENOMEM;
>  
>   if ((ahd->flags & AHD_INITIATORROLE) == 0)
>   ahd->flags &= ~AHD_RESET_BUS_A;
> @@ -7644,7 +7644,7 @@ ahd_default_config(struct ahd_softc *ahd)
>   if (ahd_alloc_tstate(ahd, ahd->our_id, 'A') == NULL) {
>   printk("%s: unable to allocate ahd_tmode_tstate.  "
>  "Failing attach\n", ahd_name(ahd));
> - return (ENOMEM);
> + return -ENOMEM;
>   

[PATCH 0/2] initial submit of Microsemi smartpqi driver

2016-06-08 Thread Don Brace
These patches are based on Linus's tree

- New smartpqi driver
- removal of smartpqi PCI IDs from aacraid driver
  - this patch depends on adoption of smartpqi driver

---

Don Brace (2):
  smartpqi: initial commit of Microsemi smartpqi driver
  aacraid: remove wildcard for series 9 controllers


 MAINTAINERS|   11 
 drivers/scsi/Kconfig   |1 
 drivers/scsi/Makefile  |1 
 drivers/scsi/aacraid/linit.c   |2 
 drivers/scsi/smartpqi/Kconfig  |   50 
 drivers/scsi/smartpqi/Makefile |3 
 drivers/scsi/smartpqi/smartpqi.h   | 1106 
 drivers/scsi/smartpqi/smartpqi_init.c  | 6817 
 drivers/scsi/smartpqi/smartpqi_sas_transport.c |  350 +
 drivers/scsi/smartpqi/smartpqi_sis.c   |  394 +
 drivers/scsi/smartpqi/smartpqi_sis.h   |   32 
 11 files changed, 8765 insertions(+), 2 deletions(-)
 create mode 100644 drivers/scsi/smartpqi/Kconfig
 create mode 100644 drivers/scsi/smartpqi/Makefile
 create mode 100644 drivers/scsi/smartpqi/smartpqi.h
 create mode 100644 drivers/scsi/smartpqi/smartpqi_init.c
 create mode 100644 drivers/scsi/smartpqi/smartpqi_sas_transport.c
 create mode 100644 drivers/scsi/smartpqi/smartpqi_sis.c
 create mode 100644 drivers/scsi/smartpqi/smartpqi_sis.h



Re: [PATCH v3 0/6] Introduce pci_(request|release)_(mem|io)_regions

2016-06-08 Thread Jeff Kirsher
On Wed, 2016-06-08 at 09:28 +0200, Johannes Thumshirn wrote:
> On Tue, Jun 07, 2016 at 09:44:00AM +0200, Johannes Thumshirn wrote:
> > The first patch in this series introduces the following 4 helper
> functions to
> > the PCI core:
> > 
> > * pci_request_mem_regions()
> > * pci_request_io_regions()
> > * pci_release_mem_regions()
> > * pci_release_io_regions()
> > 
> > which encapsulate the request and release of a PCI device's memory or
> > I/O bars.
> > 
> > The subsequent patches convert the drivers, which use the
> > pci_request_selected_regions(pdev, 
> >   pci_select_bars(pdev, IORESOURCE_MEM), name); 
> > and similar pattern to use the new interface.
> > 
> > This was suggested by Christoph Hellwig in
> > http://lists.infradead.org/pipermail/linux-nvme/2016-May/004570.html and
> > tested on kernel v4.6 with NVMe.
> > 
> 
> Btw, as I've seen already Jeff applying the Intel Ethernet patch to his
> tree, I think this should go via the PCI tree as the build dependency is
> the PCI patch.

Bjorn should pick up the entire series; I just applied the Intel patch (and
dependent patches) so we could touch test it.



[PATCH] aic7xxx: fix wrong return values

2016-06-08 Thread Luis de Bethencourt
Convention of error codes says to return them as negative values.

Signed-off-by: Luis de Bethencourt 
---
 drivers/scsi/aic7xxx/aic7770_osm.c |  6 +++---
 drivers/scsi/aic7xxx/aic79xx_core.c| 24 
 drivers/scsi/aic7xxx/aic79xx_osm.c |  8 
 drivers/scsi/aic7xxx/aic79xx_osm_pci.c | 16 
 drivers/scsi/aic7xxx/aic79xx_pci.c |  2 +-
 drivers/scsi/aic7xxx/aic7xxx_core.c| 26 +-
 drivers/scsi/aic7xxx/aic7xxx_osm.c |  8 
 drivers/scsi/aic7xxx/aic7xxx_osm_pci.c | 12 ++--
 drivers/scsi/aic7xxx/aic7xxx_pci.c |  4 ++--
 9 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/drivers/scsi/aic7xxx/aic7770_osm.c 
b/drivers/scsi/aic7xxx/aic7770_osm.c
index 3d401d0..50f030d 100644
--- a/drivers/scsi/aic7xxx/aic7770_osm.c
+++ b/drivers/scsi/aic7xxx/aic7770_osm.c
@@ -51,7 +51,7 @@ aic7770_map_registers(struct ahc_softc *ahc, u_int port)
 * Lock out other contenders for our i/o space.
 */
if (!request_region(port, AHC_EISA_IOSIZE, "aic7xxx"))
-   return (ENOMEM);
+   return -ENOMEM;
ahc->tag = BUS_SPACE_PIO;
ahc->bsh.ioport = port;
return (0);
@@ -87,10 +87,10 @@ aic7770_probe(struct device *dev)
sprintf(buf, "ahc_eisa:%d", eisaBase >> 12);
name = kstrdup(buf, GFP_ATOMIC);
if (name == NULL)
-   return (ENOMEM);
+   return -ENOMEM;
ahc = ahc_alloc(&aic7xxx_driver_template, name);
if (ahc == NULL)
-   return (ENOMEM);
+   return -ENOMEM;
error = aic7770_config(ahc, aic7770_ident_table + edev->id.driver_data,
   eisaBase);
if (error != 0) {
diff --git a/drivers/scsi/aic7xxx/aic79xx_core.c 
b/drivers/scsi/aic7xxx/aic79xx_core.c
index 109e2c9..bf850d8 100644
--- a/drivers/scsi/aic7xxx/aic79xx_core.c
+++ b/drivers/scsi/aic7xxx/aic79xx_core.c
@@ -6422,7 +6422,7 @@ ahd_init_scbdata(struct ahd_softc *ahd)
scb_data->maxhscbs = ahd_probe_scbs(ahd);
if (scb_data->maxhscbs == 0) {
printk("%s: No SCB space found\n", ahd_name(ahd));
-   return (ENXIO);
+   return -ENXIO;
}
 
ahd_initialize_hscbs(ahd);
@@ -6501,7 +6501,7 @@ ahd_init_scbdata(struct ahd_softc *ahd)
 
 error_exit:
 
-   return (ENOMEM);
+   return -ENOMEM;
 }
 
 static struct scb *
@@ -7076,7 +7076,7 @@ ahd_init(struct ahd_softc *ahd)
ahd->stack_size = ahd_probe_stack_size(ahd);
ahd->saved_stack = kmalloc(ahd->stack_size * sizeof(uint16_t), 
GFP_ATOMIC);
if (ahd->saved_stack == NULL)
-   return (ENOMEM);
+   return -ENOMEM;
 
/*
 * Verify that the compiler hasn't over-aggressively
@@ -7115,7 +7115,7 @@ ahd_init(struct ahd_softc *ahd)
   /*maxsegsz*/AHD_MAXTRANSFER_SIZE,
   /*flags*/BUS_DMA_ALLOCNOW,
   &ahd->buffer_dmat) != 0) {
-   return (ENOMEM);
+   return -ENOMEM;
}
 #endif
 
@@ -7143,7 +7143,7 @@ ahd_init(struct ahd_softc *ahd)
   /*nsegments*/1,
   /*maxsegsz*/BUS_SPACE_MAXSIZE_32BIT,
   /*flags*/0, &ahd->shared_data_dmat) != 0) {
-   return (ENOMEM);
+   return -ENOMEM;
}
 
ahd->init_level++;
@@ -7153,7 +7153,7 @@ ahd_init(struct ahd_softc *ahd)
 (void **)&ahd->shared_data_map.vaddr,
 BUS_DMA_NOWAIT,
 &ahd->shared_data_map.dmamap) != 0) {
-   return (ENOMEM);
+   return -ENOMEM;
}
 
ahd->init_level++;
@@ -7194,7 +7194,7 @@ ahd_init(struct ahd_softc *ahd)
 
/* Allocate SCB data now that buffer_dmat is initialized */
if (ahd_init_scbdata(ahd) != 0)
-   return (ENOMEM);
+   return -ENOMEM;
 
if ((ahd->flags & AHD_INITIATORROLE) == 0)
ahd->flags &= ~AHD_RESET_BUS_A;
@@ -7644,7 +7644,7 @@ ahd_default_config(struct ahd_softc *ahd)
if (ahd_alloc_tstate(ahd, ahd->our_id, 'A') == NULL) {
printk("%s: unable to allocate ahd_tmode_tstate.  "
   "Failing attach\n", ahd_name(ahd));
-   return (ENOMEM);
+   return -ENOMEM;
}
 
for (targ = 0; targ < AHD_NUM_TARGETS; targ++) {
@@ -7723,7 +7723,7 @@ ahd_parse_cfgdata(struct ahd_softc *ahd, struct 
seeprom_config *sc)
if (ahd_alloc_tstate(ahd, ahd->our_id, 'A') == NULL) {
printk("%s: unable to allocate ahd_tmode_tstate.  "
   "Failing attach\n", ahd_name(ahd));
-   return (ENOMEM);
+   return -ENOMEM;
}
 
for (targ = 0; targ < max_targ; targ++) {
@@ -7967,7 +7967,7 @@ ahd_suspend(struct 

[PATCH] target: Fix for hang of Ordered task in TCM

2016-06-08 Thread Bryant G. Ly
From: Nicholas Bellinger 

If a command with a Simple task attribute is failed due to a Unit
Attention, then a subsequent command with an Ordered task attribute
will hang forever.  The reason for this is that the Unit Attention
status is checked for in target_setup_cmd_from_cdb, before the call
to target_execute_cmd, which calls target_handle_task_attr, which
in turn increments dev->simple_cmds.

However, transport_generic_request_failure still calls
transport_complete_task_attr, which will decrement dev->simple_cmds.
In this case, simple_cmds is now -1.  So when a command with the
Ordered task attribute is sent, target_handle_task_attr sees that
dev->simple_cmds is not 0, so it decides it can't execute the
command until all the (nonexistent) Simple commands have completed.

Reported-by: Michael Cyr 
Signed-off-by: Nicholas Bellinger 
Signed-off-by: Bryant G. Ly 
---
 drivers/target/target_core_internal.h  |  1 +
 drivers/target/target_core_sbc.c   |  2 +-
 drivers/target/target_core_transport.c | 62 +++---
 include/target/target_core_fabric.h|  1 -
 4 files changed, 37 insertions(+), 29 deletions(-)

diff --git a/drivers/target/target_core_internal.h 
b/drivers/target/target_core_internal.h
index fc91e85..e2c970a 100644
--- a/drivers/target/target_core_internal.h
+++ b/drivers/target/target_core_internal.h
@@ -146,6 +146,7 @@ sense_reason_t  target_cmd_size_check(struct se_cmd 
*cmd, unsigned int size);
 void   target_qf_do_work(struct work_struct *work);
 bool   target_check_wce(struct se_device *dev);
 bool   target_check_fua(struct se_device *dev);
+void   __target_execute_cmd(struct se_cmd *, bool);
 
 /* target_core_stat.c */
 void   target_stat_setup_dev_default_groups(struct se_device *);
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index a9057aa..04f616b 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -602,7 +602,7 @@ static sense_reason_t compare_and_write_callback(struct 
se_cmd *cmd, bool succes
cmd->transport_state |= CMD_T_ACTIVE|CMD_T_BUSY|CMD_T_SENT;
spin_unlock_irq(&cmd->t_state_lock);
 
-   __target_execute_cmd(cmd);
+   __target_execute_cmd(cmd, false);
 
kfree(buf);
return ret;
diff --git a/drivers/target/target_core_transport.c 
b/drivers/target/target_core_transport.c
index e887635..7c4ea39 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1303,23 +1303,6 @@ target_setup_cmd_from_cdb(struct se_cmd *cmd, unsigned 
char *cdb)
 
trace_target_sequencer_start(cmd);
 
-   /*
-* Check for an existing UNIT ATTENTION condition
-*/
-   ret = target_scsi3_ua_check(cmd);
-   if (ret)
-   return ret;
-
-   ret = target_alua_state_check(cmd);
-   if (ret)
-   return ret;
-
-   ret = target_check_reservation(cmd);
-   if (ret) {
-   cmd->scsi_status = SAM_STAT_RESERVATION_CONFLICT;
-   return ret;
-   }
-
ret = dev->transport->parse_cdb(cmd);
if (ret == TCM_UNSUPPORTED_SCSI_OPCODE)
pr_warn_ratelimited("%s/%s: Unsupported SCSI Opcode 0x%02x, 
sending CHECK_CONDITION.\n",
@@ -1762,20 +1745,45 @@ queue_full:
 }
 EXPORT_SYMBOL(transport_generic_request_failure);
 
-void __target_execute_cmd(struct se_cmd *cmd)
+void __target_execute_cmd(struct se_cmd *cmd, bool do_checks)
 {
sense_reason_t ret;
 
-   if (cmd->execute_cmd) {
-   ret = cmd->execute_cmd(cmd);
-   if (ret) {
-   spin_lock_irq(&cmd->t_state_lock);
-   cmd->transport_state &= ~(CMD_T_BUSY|CMD_T_SENT);
-   spin_unlock_irq(&cmd->t_state_lock);
+   if (!cmd->execute_cmd) {
+   ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+   goto err;
+   }
+   if (do_checks) {
+   /*
+* Check for an existing UNIT ATTENTION condition after
+* target_handle_task_attr() has done SAM task attr
+* checking, and possibly have already deferred execution
+* out to target_restart_delayed_cmds() context.
+*/
+   ret = target_scsi3_ua_check(cmd);
+   if (ret)
+   goto err;
+
+   ret = target_alua_state_check(cmd);
+   if (ret)
+   goto err;
 
-   transport_generic_request_failure(cmd, ret);
+   ret = target_check_reservation(cmd);
+   if (ret) {
+   cmd->scsi_status = SAM_STAT_RESERVATION_CONFLICT;
+   goto err;
}
}
+
+   ret = cmd->execute_cmd(cmd);
+   if (!ret)
+   return;
+err:
+   spin_lock_irq(&cmd->t_state_lock);

Re: NVMe over Fabrics target implementation

2016-06-08 Thread Christoph Hellwig
On Wed, Jun 08, 2016 at 04:12:27PM +0300, Sagi Grimberg wrote:
>> Because it keeps the code simple.  If you had actually participated
>> on our development list you might have seen that until not too long
>> ago we had very fine-grained locks here.  In the end Armen convinced
>> me that it's easier to maintain if we don't bother with fine grained
>> locking outside the fast path, especially as it significantly simplifies
>> the discovery implementation.   If it ever turns out to be an
>> issue we can change it easily as the implementation is well encapsulated.
>
> We did change that, and Nic is raising a valid point in terms of having
> a global mutex around all the ports. If the requirement of nvme
> subsystems and ports configuration is that it should happen fast enough
> and scale to the numbers that Nic is referring to, we'll need to change
> that back.
>
> Having said that, I'm not sure this is a real hard requirement for RDMA
> and FC in the mid-term, because from what I've seen, the workloads Nic
> is referring to are more typical for iscsi/tcp where connections are
> cheaper and you need more to saturate high-speed interconnects, so
> we'll probably see this when we have nvme over tcp working.

I'm not really worried about connection establishment - that can be
changed to RCU locking really easily.  I'm a bit more worried about
the case where a driver would block long in ->add_port.  But let's
worry about that if an actual user comes up.  The last thing we need
in a new driver is lots of complexity for hypothetical use cases,
I'm much more interested in having the driver simple, testable and
actually tested than optimizing for something.

That is to say the priorities here are very different from Nic's goals
for the target code.


Re: NVMe over Fabrics target implementation

2016-06-08 Thread Sagi Grimberg



*) Extensible to multiple types of backend drivers.

nvme-target needs a way to absorb new backend drivers, that
does not affect existing configfs group layout or attributes.

Looking at the nvmet/configfs layout as-is, there are no multiple
backend types defined, nor a way to control backend feature bits
exposed to nvme namespaces at runtime.


Hey Nic,

As for different type of backends, I still don't see a big justification
for adding the LIO backends pscsi (as it doesn't make sense),
ramdisk (we have brd), or file (losetup).

What kind of feature bits would you want to expose at runtime?


And that's very much intentional.  We have a very well working block
layer which we're going to use, no need to reinvent it.  The block
layer supports NVMe pass through just fine in case we'll need it,
as I spent the last year preparing it for that.


Why does it ever make sense for $SUBSYSTEM_NQN_0 with $PORT_DRIVER_FOO
to block operation of $SUBSYSTEM_NQN_1 with $PORT_DRIVER_BAR..?


Because it keeps the code simple.  If you had actually participated
on our development list you might have seen that until not too long
ago we had very fine-grained locks here.  In the end Armen convinced
me that it's easier to maintain if we don't bother with fine grained
locking outside the fast path, especially as it significantly simplifies
the discovery implementation.   If it ever turns out to be an
issue we can change it easily as the implementation is well encapsulated.


We did change that, and Nic is raising a valid point in terms of having
a global mutex around all the ports. If the requirement of nvme
subsystems and ports configuration is that it should happen fast enough
and scale to the numbers that Nic is referring to, we'll need to change
that back.

Having said that, I'm not sure this is a real hard requirement for RDMA
and FC in the mid-term, because from what I've seen, the workloads Nic
is referring to are more typical for iscsi/tcp where connections are
cheaper and you need more to saturate high-speed interconnects, so
we'll probably see this when we have nvme over tcp working.


Re: NVMe over Fabrics target implementation

2016-06-08 Thread Christoph Hellwig
On Tue, Jun 07, 2016 at 10:21:41PM -0700, Nicholas A. Bellinger wrote:
> *) Extensible to multiple types of backend drivers.
> 
> nvme-target needs a way to absorb new backend drivers, that
> does not affect existing configfs group layout or attributes.
> 
> Looking at the nvmet/configfs layout as-is, there are no multiple
> backend types defined, nor a way to control backend feature bits
> exposed to nvme namespaces at runtime.

And that's very much intentional.  We have a very well working block
layer which we're going to use, no need to reinvent it.  The block
layer supports NVMe pass through just fine in case we'll need it,
as I spent the last year preparing it for that.

> Why does it ever make sense for $SUBSYSTEM_NQN_0 with $PORT_DRIVER_FOO
> to block operation of $SUBSYSTEM_NQN_1 with $PORT_DRIVER_BAR..?

Because it keeps the code simple.  If you had actually participated
on our development list you might have seen that until not too long
ago we had very fine-grained locks here.  In the end Armen convinced
me that it's easier to maintain if we don't bother with fine grained
locking outside the fast path, especially as it significantly simplifies
the discovery implementation.   If it ever turns out to be an
issue we can change it easily as the implementation is well encapsulated.


Re: [RFC 0/2] nvme/loop: Add support for controllers-per-port model

2016-06-08 Thread Christoph Hellwig
Hi Nic,

multiple ports have been on the todo list ever since we put that short
cut in place, but your patches aren't really how this should work.

The host NQN is not available for the actual drivers - we need to be able
to virtualize it for containers for example, that's why we need to pass
it by pointer depending on the arguments.  Also there is no need to
add another sysfs interface - just like for real drivers we can simply
create the ports in configfs and assign an address to it, we just need
to stop ignoring it.

Something like the patch below is the right way, it just needs a bit more
fine tuning and proper test cases:

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 94e7829..ecd2c4c 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -57,6 +57,7 @@ struct nvme_loop_ctrl {
struct blk_mq_tag_set   tag_set;
struct nvme_loop_iodasync_event_iod;
struct nvme_ctrlctrl;
+   struct nvme_loop_port   *port;
 
struct nvmet_ctrl   *target_ctrl;
struct work_struct  delete_work;
@@ -74,11 +75,17 @@ struct nvme_loop_queue {
struct nvme_loop_ctrl   *ctrl;
 };
 
-static struct nvmet_port *nvmet_loop_port;
-
 static LIST_HEAD(nvme_loop_ctrl_list);
 static DEFINE_MUTEX(nvme_loop_ctrl_mutex);
 
+struct nvme_loop_port {
+   struct list_headentry;
+   struct nvmet_port   *port;
+};
+
+static LIST_HEAD(nvme_loop_ports);
+static DEFINE_SPINLOCK(nvme_loop_port_lock);
+
 static void nvme_loop_queue_response(struct nvmet_req *nvme_req);
 static void nvme_loop_delete_ctrl(struct nvmet_ctrl *ctrl);
 
@@ -172,7 +179,7 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
return ret;
 
iod->cmd.common.flags |= NVME_CMD_SGL_METABUF;
-   iod->req.port = nvmet_loop_port;
+   iod->req.port = queue->ctrl->port->port;
if (!nvmet_req_init(&iod->req, &queue->nvme_cq,
&queue->nvme_sq, &nvme_loop_ops)) {
nvme_cleanup_cmd(req);
@@ -210,6 +217,7 @@ static void nvme_loop_submit_async_event(struct nvme_ctrl 
*arg, int aer_idx)
iod->cmd.common.opcode = nvme_admin_async_event;
iod->cmd.common.command_id = NVME_LOOP_AQ_BLKMQ_DEPTH;
iod->cmd.common.flags |= NVME_CMD_SGL_METABUF;
+   iod->req.port = queue->ctrl->port->port;
 
if (!nvmet_req_init(&iod->req, &queue->nvme_cq, &queue->nvme_sq,
&nvme_loop_ops)) {
@@ -597,6 +605,20 @@ out_destroy_queues:
return ret;
 }
 
+static struct nvme_loop_port *
+__nvme_loop_find_port(const char *traddr)
+{
+   struct nvme_loop_port *p;
+
+   list_for_each_entry(p, &nvme_loop_ports, entry) {
+   if (!strncmp(traddr, p->port->disc_addr.traddr,
+   NVMF_TRADDR_SIZE))
+   return p;
+   }
+
+   return NULL;
+}
+
 static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
struct nvmf_ctrl_options *opts)
 {
@@ -610,6 +632,17 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct 
device *dev,
ctrl->ctrl.opts = opts;
INIT_LIST_HEAD(&ctrl->list);

+   spin_lock(&nvme_loop_port_lock);
+   ctrl->port = __nvme_loop_find_port(opts->traddr);
+   spin_unlock(&nvme_loop_port_lock);
+   if (!ctrl->port) {
+   ret = -EINVAL;
+   dev_warn(ctrl->ctrl.device,
+"could not find port at addr %s\n",
+opts->traddr);
+   goto out_put_ctrl;
+   }
+
INIT_WORK(&ctrl->delete_work, nvme_loop_del_ctrl_work);
INIT_WORK(&ctrl->reset_work, nvme_loop_reset_ctrl_work);
 
@@ -684,27 +717,40 @@ out_put_ctrl:
 
 static int nvme_loop_add_port(struct nvmet_port *port)
 {
-   /*
-* XXX: disalow adding more than one port so
-* there is no connection rejections when a
-* a subsystem is assigned to a port for which
-* loop doesn't have a pointer.
-* This scenario would be possible if we allowed
-* more than one port to be added and a subsystem
-* was assigned to a port other than nvmet_loop_port.
-*/
+   struct nvme_loop_port *p, *old;
+
+   p = kmalloc(sizeof(*p), GFP_KERNEL);
+   if (!p)
+   return -ENOMEM;
+
+   spin_lock(&nvme_loop_port_lock);
+   old = __nvme_loop_find_port(port->disc_addr.traddr);
+   if (old)
+   goto out_dup;
 
-   if (nvmet_loop_port)
-   return -EPERM;
+   p->port = port;
+   list_add_tail_rcu(&p->entry, &nvme_loop_ports);
+   port->priv = p;
+   spin_unlock(&nvme_loop_port_lock);
 
-   nvmet_loop_port = port;
return 0;
+out_dup:
+   spin_unlock(&nvme_loop_port_lock);
+   kfree(p);
+   pr_err("duplicate port name: %s\n", port->disc_addr.traddr);
+   return -EINVAL;
 }
 
 static void nvme_loop_remove_port(struct nvmet_port *port)
 {
-   if (port == nvmet_loop_port)
-   nvmet_loop_port = NULL;
+   struct 

Re: [PATCH v3 0/6] Introduce pci_(request|release)_(mem|io)_regions

2016-06-08 Thread Johannes Thumshirn
On Tue, Jun 07, 2016 at 09:44:00AM +0200, Johannes Thumshirn wrote:
> The first patch in this series introduces the following 4 helper functions to
> the PCI core:
> 
> * pci_request_mem_regions()
> * pci_request_io_regions()
> * pci_release_mem_regions()
> * pci_release_io_regions()
> 
> which encapsulate the request and release of a PCI device's memory or I/O
> bars.
> 
> The subsequent patches convert the drivers, which use the
> pci_request_selected_regions(pdev, 
>   pci_select_bars(pdev, IORESOURCE_MEM), name); 
> and similar pattern to use the new interface.
> 
> This was suggested by Christoph Hellwig in
> http://lists.infradead.org/pipermail/linux-nvme/2016-May/004570.html and
> tested on kernel v4.6 with NVMe.
> 

Btw, as I've already seen Jeff apply the Intel Ethernet patch to his
tree, I think this should go via the PCI tree as the build dependency is
the PCI patch.

Thanks,
Johannes
-- 
Johannes Thumshirn  Storage
jthumsh...@suse.de+49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


Re: [PATCH v2] scsi_debug: fix sleep in invalid context

2016-06-08 Thread Douglas Gilbert

On 2016-06-07 09:55 PM, Christoph Hellwig wrote:

+static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr,
+ int arr_len, unsigned int off_dst)
+{
+   int act_len, n;
+   struct scsi_data_buffer *sdb = scsi_in(scp);
+   off_t skip = off_dst;


Why off_t, which is a signed value, instead of the unsigned int passed in?


Because off_t is the type of the last argument to sg_pcopy_from_buffer()
which is where the off_dst value goes. So there is potentially a size
of integer change plus signedness, IMO best expressed by defining a
new auto variable. I notice some projects have a uoff_t typedef for
offsets that make no sense when negative (like this case), but not the
Linux kernel.


+#define RL_BUCKET_ELEMS 8
+
  /* Even though each pseudo target has a REPORT LUNS "well known logical unit"
   * (W-LUN), the normal Linux scanning logic does not associate it with a
   * device (e.g. /dev/sg7). The following magic will make that association:
@@ -3285,12 +3315,14 @@ static int resp_report_luns(struct scsi_cmnd *scp,
unsigned char select_report;
u64 lun;
struct scsi_lun *lun_p;
-   u8 *arr;
+   u8 arr[RL_BUCKET_ELEMS * sizeof(struct scsi_lun)];


just use an on-stack array of type struct scsi_lun here, e.g.:

struct scsi_lun arr[RL_BUCKET_ELEMS];

Which you can then use directly instead of lun_p later, but which can
also be passed p_fill_from_dev_buffer as that takes a void pointer.


Can do.

Doug Gilbert
