Re: unexpected scsi timeout

2007-07-25 Thread Vasily Averin
Albert Lee wrote:
 Vasily Averin wrote:
 I've noticed that some SCSI commands for a DVD drive attached to pata_via
 finish successfully without any delay but report a TIMEOUT condition. This
 happens because the ATA_ERR bit is set in the status register. As a result,
 for each command the Error Handler thread is awakened, requests the sense
 buffer, and goes back to sleep.
 Need more info.  Please post boot dmesg and the result of 'lspci -nn'
 and 'hdparm -I /dev/srX' and when such errors occur.
 
 Your log looks ok. It's normal for TEST_UNIT_READY to return ATA_ERR when
 no disc is inside and for libata EH to be triggered to request sense.

It seems a bit strange to me; IMHO other SCSI drivers request the sense buffer
without EH thread assistance. We now know that ATA_ERR can be returned; it is
not an error but one of the expected responses. Why can't we request sense
without the EH? I would like to understand whether this is an implementation
drawback or whether I have missed something.

Thank you,
Vasily Averin

-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: unexpected scsi timeout

2007-07-25 Thread Tejun Heo
Vasily Averin wrote:
 Albert Lee wrote:
 Vasily Averin wrote:
 I've noticed that some scsi commands for DVD-drive attached to pata_via
 successfully finishes without any delays but reports about TIMEOUT 
 condition. It
 happens because of ATA_ERR bit is set in status register. As result for 
 each
 command  Error Handler thread awakened, requests sense buffer and go to 
 sleep again.
 Need more info.  Please post boot dmesg and the result of 'lspci -nn'
 and 'hdparm -I /dev/srX' and when such errors occur.
 Your log looks ok. It's normal for TEST_UNIT_READY to return ATA_ERR when no 
 disc
 inside and libata EH triggered to request sense.
 
 It's a bit strange for me, IMHO other scsi drivers requests sense buffer 
 without
 EH thread assistance.
 Currently we know that ATA_ERR can be returned; it is not error, but one of
 expected responses. Why we cannot request sense without EH? I would like to
 understand is it implementation drawback or I missed something probably?

That was a design choice.  It's easier to implement that way.

-- 
tejun


Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread Benny Halevy
James Bottomley wrote:
 On Tue, 2007-07-24 at 17:01 +0300, Benny Halevy wrote:
 FUJITA Tomonori wrote:
 I should have asked: was the approach of using a separate buffer for
 sglists, instead of putting the sglists and the parameters in one
 buffer, completely rejected?
 I think that James should be asked this question.
 My understanding was that he preferred allocating the sgtable
 header along with the scatterlist array.
 
 All I really cared about was insulating the drivers from future changes
 in this area.  It strikes me that for chained sglist implementations,
 this can all become a block layer responsibility, since more than SCSI
 will want to make use of it.

agreed.

 
 Just remember, though that whatever is picked has to work in both memory
 constrained embedded systems as well as high end clusters.  It seems to
 me (but this isn't a mandate) that a single tunable sg element chunk
 size will accomplish this the best (as in get rid of the entire SCSI
 sglist sizing machinery) .

maybe :)
I'm not as familiar as you are with all the different uses of linux.
IMO, having a tunable is worse for the administrator and I'd prefer
an automatic mechanism that will dynamically adapt the allocation
size(s) to the actual use, very much like the one you have today.

 
 However, I'm perfectly happy to go with whatever the empirical evidence
 says is best .. and hopefully, now we don't have to pick this once and
 for all time ... we can alter it if whatever is chosen proves to be
 suboptimal.

I agree.  This isn't a catholic marriage :)
We'll run some performance experiments comparing the sgtable chaining
implementation vs. a scsi_data_buff implementation pointing
at a possibly chained sglist and let's see if we can measure
any difference.  We'll send results as soon as we have them.

 
 There are pro's and con's either way.  In my opinion separating
 the headers is better for mapping buffers that have a power of 2
 #pages (which seems to be the typical case) since when you're
 losing one entry in the sgtable for the header you'd waste a lot
 more when you just cross the bucket boundary (e.g. for 64 pages
 you need to allocate from the 64-to-127 bucket rather than the
 33-to-64 bucket).  Separated, one sgtable header structure
 can just be embedded in struct scsi_cmnd for uni-directional transfers
 (wasting some space when transferring no data, but saving space and
 cycles in the common case vs. allocating it from a separate memory pool)
 and the one for bidi read buffers can be allocated separately just for
 bidi commands.
 
 This is all opinion ... could someone actually run some performance
 tests?
 
 James
 
 



Re: [PATCH AB1/5] SCSI: SG pools allocation cleanup.

2007-07-25 Thread Boaz Harrosh
I have a very serious and stupid bug in this patch
which did not show up in my tests. Please forgive me.
Below is a diff of the bug. In reply to this mail
I will resend two revised patches: AB1, and A2, which is also affected.

Sorry
Boaz

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index c065de5..29adcc6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1649,7 +1649,7 @@ int __init scsi_init_queue(void)
 
	for (i = 0, size = 8; i < SG_MEMPOOL_NR; i++, size <<= 1) {
		struct scsi_host_sg_pool *sgp = scsi_sg_pools + i;
-		sgp->size = size;
+		sgp->size = (i != SG_MEMPOOL_NR-1) ? size :
						SCSI_MAX_SG_SEGMENTS;
		sgp->slab = kmem_cache_create(sg_names[i],
				sgp->size*sizeof(struct scatterlist),
				0, 0, NULL);



Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread FUJITA Tomonori
From: Benny Halevy [EMAIL PROTECTED]
Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO 
sg-chaining
Date: Wed, 25 Jul 2007 11:26:44 +0300

  However, I'm perfectly happy to go with whatever the empirical evidence
  says is best .. and hopefully, now we don't have to pick this once and
  for all time ... we can alter it if whatever is chosen proves to be
  suboptimal.
 
 I agree.  This isn't a catholic marriage :)
 We'll run some performance experiments comparing the sgtable chaining
 implementation vs. a scsi_data_buff implementation pointing
 at a possibly chained sglist and let's see if we can measure
 any difference.  We'll send results as soon as we have them.

I did some tests with your sgtable patchset and the approach of using a
separate buffer for sglists. As expected, there was no performance
difference with small I/Os. I've not tried very large I/Os, which might
show some difference.

The patchset using a separate buffer for sglists is available:

git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git 
simple-sgtable


Can you clean up your patchset and upload somewhere?


[PATCH A2/5 ver2] SCSI: scsi_sgtable implementation

2007-07-25 Thread Boaz Harrosh

  As proposed by James Bottomley, all I/O members of struct scsi_cmnd,
  plus the resid member, need to be duplicated for bidirectional
  transfers, and can be allocated together with the sg-list they point
  to. This way, when bidi arrives, the whole structure can be duplicated
  with minimal change to the code, and with no extra baggage when bidi is
  not used. The result is a new mechanism called scsi_sgtable.

  scsi_cmnd.h
  - define a new scsi_sgtable structure that will hold IO descriptors + the
actual scattergather array.
  - Hold a pointer to the scsi_sgtable in scsi_cmnd.
  - Deprecate old, now unnecessary, IO members of scsi_cmnd. These members are
kept for compatibility with unconverted drivers, still lurking around in
the code tree. Last patch in the series removes them completely.
  - Modify data accessors to now use new members of scsi_sgtable.

  scsi_lib.c
  - scsi_lib is converted to use the new scsi_sgtable instead of the old
    members and sg-arrays.
  - scsi_{alloc,free}_sgtable() API has changed. This will break scsi_stgt
which will need to be converted to new implementation.
  - Special code is inserted to initialize the old compatibility members from
the new structures. This code will be removed.

 Signed-off-by: Boaz Harrosh [EMAIL PROTECTED]
---
 drivers/scsi/scsi_lib.c  |  116 +++---
 include/scsi/scsi_cmnd.h |   40 ++--
 2 files changed, 82 insertions(+), 74 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 694bffa..899b7df 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -35,16 +35,17 @@
 /*
  * Should fit within a single page.
  */
-enum { SCSI_MAX_SG_SEGMENTS = (PAGE_SIZE / sizeof(struct scatterlist)) };
+enum { SCSI_MAX_SG_SEGMENTS = ((PAGE_SIZE - sizeof(struct scsi_sgtable)) /
+   sizeof(struct scatterlist)) };
 
 enum { SG_MEMPOOL_NR =
-	(SCSI_MAX_SG_SEGMENTS >= 8) +
-	(SCSI_MAX_SG_SEGMENTS >= 16) +
-	(SCSI_MAX_SG_SEGMENTS >= 32) +
-	(SCSI_MAX_SG_SEGMENTS >= 64) +
-	(SCSI_MAX_SG_SEGMENTS >= 128) +
-	(SCSI_MAX_SG_SEGMENTS >= 256) +
-	(SCSI_MAX_SG_SEGMENTS >= 512)
+	(SCSI_MAX_SG_SEGMENTS > 8) +
+	(SCSI_MAX_SG_SEGMENTS > 16) +
+	(SCSI_MAX_SG_SEGMENTS > 32) +
+	(SCSI_MAX_SG_SEGMENTS > 64) +
+	(SCSI_MAX_SG_SEGMENTS > 128) +
+	(SCSI_MAX_SG_SEGMENTS > 256) +
+	(SCSI_MAX_SG_SEGMENTS > 512)
 };
 
 struct scsi_host_sg_pool {
@@ -54,7 +55,10 @@ struct scsi_host_sg_pool {
 };
 static struct scsi_host_sg_pool scsi_sg_pools[SG_MEMPOOL_NR];
 
-
+static inline unsigned scsi_pool_size(int pool)
+{
+   return scsi_sg_pools[pool].size;
+}
 
 static void scsi_run_queue(struct request_queue *q);
 
@@ -699,7 +703,7 @@ static unsigned scsi_sgtable_index(unsigned nents)
int i, size;
 
 	for (i = 0, size = 8; i < SG_MEMPOOL_NR-1; i++, size <<= 1)
-		if (size >= nents)
+		if (size > nents)
 			return i;
 
 	if (SCSI_MAX_SG_SEGMENTS >= nents)
@@ -710,26 +714,26 @@ static unsigned scsi_sgtable_index(unsigned nents)
return -1;
 }
 
-struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
+struct scsi_sgtable *scsi_alloc_sgtable(int sg_count, gfp_t gfp_mask)
 {
-	unsigned int pool = scsi_sgtable_index(cmd->use_sg);
-	struct scatterlist *sgl;
+	unsigned int pool = scsi_sgtable_index(sg_count);
+	struct scsi_sgtable *sgt;
 
-	sgl = mempool_alloc(scsi_sg_pools[pool].pool, gfp_mask);
-	if (unlikely(!sgl))
+	sgt = mempool_alloc(scsi_sg_pools[pool].pool, gfp_mask);
+	if (unlikely(!sgt))
 		return NULL;
 
-	cmd->sg_pool = pool;
-	return sgl;
+	memset(sgt, 0, SG_TABLE_SIZEOF(scsi_pool_size(pool)));
+	sgt->sg_count = sg_count;
+	sgt->sg_pool = pool;
+	return sgt;
 }
-
 EXPORT_SYMBOL(scsi_alloc_sgtable);
 
-void scsi_free_sgtable(struct scsi_cmnd *cmd)
+void scsi_free_sgtable(struct scsi_sgtable *sgt)
 {
-	mempool_free(cmd->request_buffer, scsi_sg_pools[cmd->sg_pool].pool);
+	mempool_free(sgt, scsi_sg_pools[sgt->sg_pool].pool);
 }
-
 EXPORT_SYMBOL(scsi_free_sgtable);
 
 /*
@@ -751,13 +755,12 @@ EXPORT_SYMBOL(scsi_free_sgtable);
  */
 static void scsi_release_buffers(struct scsi_cmnd *cmd)
 {
-	if (cmd->use_sg)
-		scsi_free_sgtable(cmd);
+	if (cmd->sgtable)
+		scsi_free_sgtable(cmd->sgtable);
 
-	/*
-	 * Zero these out.  They now point to freed memory, and it is
-	 * dangerous to hang onto the pointers.
-	 */
+	cmd->sgtable = NULL;
+
+	/* FIXME: make code backward compatible with old system */
 	cmd->request_buffer = NULL;
 	cmd->request_bufflen = 0;
 	cmd->use_sg = 0;
@@ -794,7 +797,7 @@ static void scsi_release_buffers(struct scsi_cmnd *cmd)
 void scsi_io_completion(struct scsi_cmnd *cmd, unsigned 

[PATCH AB1/5 ver2] SCSI: SG pools allocation cleanup.

2007-07-25 Thread Boaz Harrosh

  - The code automatically calculates at compile time the maximum-size
    sg-array that will fit in a memory page, and allocates pools of
    power-of-2 sizes up to that maximum.
  - Split scsi_alloc_sgtable() so that a helper, scsi_sgtable_index(),
    returns the index of the pool for a given sg_count.
  - Remove the now-unused SCSI_MAX_PHYS_SEGMENTS.
  - Rename sglist_len to sg_pool, which is what it always was.
  - Some extra prints at scsi_init_queue(). These prints will be removed
    once everything stabilizes.

  Now that the arrays are automatically calculated to fit in a page, what
  about ARCHs that have a very big page size? I have, just for demonstration,
  calculated up to 512 entries. But I suspect that other kernel subsystems
  are bounded to 256 or 128; is that true? Should I allow more or less
  than 512 here?

some common numbers:
Arch                      | SCSI_MAX_SG_SEGMENTS | sizeof(struct scatterlist)
--------------------------|----------------------|---------------------------
x86_64                    | 128                  | 32
i386 CONFIG_HIGHMEM64G=y  | 205                  | 20
i386 other                | 256                  | 16

  Could someone give example numbers for an ARCH with a big page size?

Signed-off-by: Boaz Harrosh [EMAIL PROTECTED]
---
 drivers/scsi/scsi_lib.c  |  143 +-
 include/scsi/scsi.h  |7 --
 include/scsi/scsi_cmnd.h |   19 +-
 3 files changed, 80 insertions(+), 89 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 5edadfe..694bffa 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -30,40 +30,31 @@
 #include "scsi_priv.h"
 #include "scsi_logging.h"
 
-
-#define SG_MEMPOOL_NR		ARRAY_SIZE(scsi_sg_pools)
 #define SG_MEMPOOL_SIZE	2
 
+/*
+ * Should fit within a single page.
+ */
+enum { SCSI_MAX_SG_SEGMENTS = (PAGE_SIZE / sizeof(struct scatterlist)) };
+
+enum { SG_MEMPOOL_NR =
+	(SCSI_MAX_SG_SEGMENTS >= 8) +
+	(SCSI_MAX_SG_SEGMENTS >= 16) +
+	(SCSI_MAX_SG_SEGMENTS >= 32) +
+	(SCSI_MAX_SG_SEGMENTS >= 64) +
+	(SCSI_MAX_SG_SEGMENTS >= 128) +
+	(SCSI_MAX_SG_SEGMENTS >= 256) +
+	(SCSI_MAX_SG_SEGMENTS >= 512)
+};
+
+
 struct scsi_host_sg_pool {
-	size_t		size;
-	char		*name;
+	unsigned	size;
 	struct kmem_cache	*slab;
 	mempool_t	*pool;
 };
+static struct scsi_host_sg_pool scsi_sg_pools[SG_MEMPOOL_NR];
+
 
-#if (SCSI_MAX_PHYS_SEGMENTS < 32)
-#error SCSI_MAX_PHYS_SEGMENTS is too small
-#endif
-
-#define SP(x) { x, "sgpool-" #x }
-static struct scsi_host_sg_pool scsi_sg_pools[] = {
-	SP(8),
-	SP(16),
-	SP(32),
-#if (SCSI_MAX_PHYS_SEGMENTS > 32)
-	SP(64),
-#if (SCSI_MAX_PHYS_SEGMENTS > 64)
-	SP(128),
-#if (SCSI_MAX_PHYS_SEGMENTS > 128)
-	SP(256),
-#if (SCSI_MAX_PHYS_SEGMENTS > 256)
-#error SCSI_MAX_PHYS_SEGMENTS is too large
-#endif
-#endif
-#endif
-#endif
-};
-#undef SP
 
 static void scsi_run_queue(struct request_queue *q);
 
@@ -703,44 +694,32 @@ static struct scsi_cmnd *scsi_end_request(struct 
scsi_cmnd *cmd, int uptodate,
return NULL;
 }
 
+static unsigned scsi_sgtable_index(unsigned nents)
+{
+	int i, size;
+
+	for (i = 0, size = 8; i < SG_MEMPOOL_NR-1; i++, size <<= 1)
+		if (size >= nents)
+			return i;
+
+	if (SCSI_MAX_SG_SEGMENTS >= nents)
+		return SG_MEMPOOL_NR-1;
+
+	printk(KERN_ERR "scsi: bad segment count=%d\n", nents);
+	BUG();
+	return -1;
+}
+
 struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 {
-	struct scsi_host_sg_pool *sgp;
+	unsigned int pool = scsi_sgtable_index(cmd->use_sg);
 	struct scatterlist *sgl;
 
-	BUG_ON(!cmd->use_sg);
-
-	switch (cmd->use_sg) {
-	case 1 ... 8:
-		cmd->sglist_len = 0;
-		break;
-	case 9 ... 16:
-		cmd->sglist_len = 1;
-		break;
-	case 17 ... 32:
-		cmd->sglist_len = 2;
-		break;
-#if (SCSI_MAX_PHYS_SEGMENTS > 32)
-	case 33 ... 64:
-		cmd->sglist_len = 3;
-		break;
-#if (SCSI_MAX_PHYS_SEGMENTS > 64)
-	case 65 ... 128:
-		cmd->sglist_len = 4;
-		break;
-#if (SCSI_MAX_PHYS_SEGMENTS > 128)
-	case 129 ... 256:
-		cmd->sglist_len = 5;
-		break;
-#endif
-#endif
-#endif
-	default:
+	sgl = mempool_alloc(scsi_sg_pools[pool].pool, gfp_mask);
+	if (unlikely(!sgl))
 		return NULL;
-	}
 
-	sgp = scsi_sg_pools + cmd->sglist_len;
-	sgl = mempool_alloc(sgp->pool, gfp_mask);
+	cmd->sg_pool = pool;
 	return sgl;
 }
 
@@ -748,13 +727,7 @@ EXPORT_SYMBOL(scsi_alloc_sgtable);
 
 void scsi_free_sgtable(struct scsi_cmnd *cmd)
 {
-	struct scatterlist *sgl = cmd->request_buffer;
-	struct 

[PATCH trivial] include linux/mutex.h from scsi_transport_iscsi.h

2007-07-25 Thread Michael S. Tsirkin
scsi/scsi_transport_iscsi.h uses struct mutex, so while
linux/mutex.h seems to be pulled in indirectly
by one of the headers it includes, the right thing
is to include linux/mutex.h directly.

Signed-off-by: Michael S. Tsirkin [EMAIL PROTECTED]

---

diff --git a/include/scsi/scsi_transport_iscsi.h 
b/include/scsi/scsi_transport_iscsi.h
index 706c0cd..7530e98 100644
--- a/include/scsi/scsi_transport_iscsi.h
+++ b/include/scsi/scsi_transport_iscsi.h
@@ -24,6 +24,7 @@
 #define SCSI_TRANSPORT_ISCSI_H
 
 #include <linux/device.h>
+#include <linux/mutex.h>
 #include <scsi/iscsi_if.h>
 
 struct scsi_transport_template;


-- 
MST


Re: unexpected scsi timeout

2007-07-25 Thread James Bottomley
On Wed, 2007-07-25 at 16:42 +0900, Tejun Heo wrote:
 Vasily Averin wrote:
  Albert Lee wrote:
  Vasily Averin wrote:
  I've noticed that some scsi commands for DVD-drive attached to pata_via
  successfully finishes without any delays but reports about TIMEOUT 
  condition. It
  happens because of ATA_ERR bit is set in status register. As result for 
  each
  command  Error Handler thread awakened, requests sense buffer and go to 
  sleep again.
  Need more info.  Please post boot dmesg and the result of 'lspci -nn'
  and 'hdparm -I /dev/srX' and when such errors occur.
  Your log looks ok. It's normal for TEST_UNIT_READY to return ATA_ERR when 
  no disc
  inside and libata EH triggered to request sense.
  
  It's a bit strange for me, IMHO other scsi drivers requests sense buffer 
  without
  EH thread assistance.
  Currently we know that ATA_ERR can be returned; it is not error, but one of
  expected responses. Why we cannot request sense without EH? I would like to
  understand is it implementation drawback or I missed something probably?
 
 That was a design choice.  It's easier to implement that way.

And just so we're clear what SCSI allows:

On ordinary SCSI devices, when the device goes into a check condition
state, it won't accept any more commands until it sees a request sense.
For SCSI devices this can be a problem (because there are several
thousand sense conditions, some of which correspond to "everything's
alright"), so a large number of SCSI drivers implement auto request sense
emulation, which means that in the driver, as soon as they see the check
condition, they immediately send a REQUEST SENSE command to pick up the
sense code (minimising the time the device is blocked).

For drivers that don't want to implement this (and we have a few in
SCSI) the alternative mechanism is to have the eh thread collect the
sense data.  This is the route libata has chosen.

James




Re: [PATCH trivial] include linux/mutex.h from scsi_transport_iscsi.h

2007-07-25 Thread Mike Christie

Michael S. Tsirkin wrote:

scsi/scsi_transport_iscsi.h uses struct mutex, so while
linux/mutex.h seems to be pulled in indirectly
by one of the headers it includes, the right thing
is to include linux/mutex.h directly.



Is that part about always including the header directly right? If so 
then were you going to include list.h too, and were you going to fix up 
some of the other iscsi code?



Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread Boaz Harrosh
FUJITA Tomonori wrote:
 From: Benny Halevy [EMAIL PROTECTED]
 Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
 IO sg-chaining
 Date: Wed, 25 Jul 2007 11:26:44 +0300
 
 However, I'm perfectly happy to go with whatever the empirical evidence
 says is best .. and hopefully, now we don't have to pick this once and
 for all time ... we can alter it if whatever is chosen proves to be
 suboptimal.
 I agree.  This isn't a catholic marriage :)
 We'll run some performance experiments comparing the sgtable chaining
 implementation vs. a scsi_data_buff implementation pointing
 at a possibly chained sglist and let's see if we can measure
 any difference.  We'll send results as soon as we have them.
 
 I did some tests with your sgtable patchset and the approach to use
 separate buffer for sglists. As expected, there was no performance
 difference with small I/Os. I've not tried very large I/Os, which
 might give some difference.
 
 The patchset to use separate buffer for sglists is available:
 
 git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git 
 simple-sgtable
 
 
 Can you clean up your patchset and upload somewhere?

Sorry, sure. Here is the complete scsi_sgtable work over linux-block:
http://www.bhalevy.com/open-osd/download/scsi_sgtable/linux-block
 
Here is just scsi_sgtable, no chaining, over scsi-misc + more
drivers:
http://www.bhalevy.com/open-osd/download/scsi_sgtable/scsi-misc


Next week I will try to mount lots of scsi_debug devices and
use large parallel IO to try and find a difference. I will
test Jens's sglist-arch tree against the above sglist-arch+scsi_sgtable.

I have lots of reservations about Tomo's last patches. For me
they are a regression. They use 3 allocations per command instead
of 2. They use an extra pointer and an extra global slab pool. And
for what? For grouping some scsi_cmnd members in a substructure.
If we want to go the pointer way, keeping our extra scatterlist
and our base-2 count on most ARCHs, then we can just use the
scsi_data_buff embedded inside scsi_cmnd.

The second scsi_data_buff for bidi can come either from an extra
slab pool as in Tomo's patch - bidi can pay - or, in scsi.c at
scsi_setup_command_freelist(), the code can inspect Tomo's
QUEUE_FLAG_BIDI flag on the request_queue and then allocate a
bigger scsi_cmnd in the free list.

I have coded that approach and it is very simple:
http://www.bhalevy.com/open-osd/download/scsi_data_buff

They are over Jens's sglist-arch branch.
I have revised all scsi-ml places and it all compiles,
but it is totally untested.

I will add this branch to the above tests, but I suspect
they are identical in every way to the current code.


For review here is the main scsi_data_buff patch:

--
From: Boaz Harrosh [EMAIL PROTECTED]
Date: Wed, 25 Jul 2007 20:19:14 +0300
Subject: [PATCH] SCSI: scsi_data_buff

  In preparation for bidi we abstract all IO members of scsi_cmnd
  that will need to duplicate into a substructure.
  - Group all IO members of scsi_cmnd into a scsi_data_buff
structure.
  - Adjust accessors to new members.
  - scsi_{alloc,free}_sgtable receive a scsi_data_buff instead of
scsi_cmnd. And work on it. (Supporting chaining like before)
  - Adjust scsi_init_io() and  scsi_release_buffers() for above
change.
  - Fix other parts of scsi_lib to members migration. Use accessors
where appropriate.

 Signed-off-by: Boaz Harrosh [EMAIL PROTECTED]
---
 drivers/scsi/scsi_lib.c  |   68 +++--
 include/scsi/scsi_cmnd.h |   34 +++---
 2 files changed, 46 insertions(+), 56 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index d62b184..2b8a865 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -714,16 +714,14 @@ static unsigned scsi_sgtable_index(unsigned nents)
return -1;
 }
 
-struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
+struct scatterlist *scsi_alloc_sgtable(struct scsi_data_buff *sdb, gfp_t 
gfp_mask)
 {
struct scsi_host_sg_pool *sgp;
struct scatterlist *sgl, *prev, *ret;
unsigned int index;
int this, left;
 
-	BUG_ON(!cmd->use_sg);
-
-	left = cmd->use_sg;
+	left = sdb->sg_count;
ret = prev = NULL;
do {
this = left;
@@ -747,7 +745,7 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd 
*cmd, gfp_t gfp_mask)
 * first loop through, set initial index and return value
 */
if (!ret) {
-		cmd->sg_pool = index;
+		sdb->sg_pool = index;
ret = sgl;
}
 
@@ -769,10 +767,10 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd 
*cmd, gfp_t gfp_mask)
} while (left);
 
/*
-	 * ->use_sg may get modified after dma mapping has potentially
+	 * sdb->sg_count may get modified after blk_rq_map_sg() potentially
  

Re: [PATCH trivial] include linux/mutex.h from scsi_transport_iscsi.h

2007-07-25 Thread Michael S. Tsirkin
 Quoting Mike Christie [EMAIL PROTECTED]:
 Subject: Re: [PATCH trivial] include linux/mutex.h from scsi_transport_iscsi.h
 
 Michael S. Tsirkin wrote:
 scsi/scsi_transport_iscsi.h uses struct mutex, so while
 linux/mutex.h seems to be pulled in indirectly
 by one of the headers it includes, the right thing
 is to include linux/mutex.h directly.
 
 
 Is that part about always including the header directly right?

Think so. Analogous patches of mine have been accepted in various
subsystems. See e.g. f8916c11a4dc4cb2367e9bee1788f4e0f1b4eabc.

 If so 
 then were you going to include list.h too,

Makes sense. I'll repost.

 and were you going to fix up 
 some of the other iscsi code?

Not at the moment.
The reason I noticed this is because I'm doing some other project.
I'll post patches for other files if/when I notice any issues.

-- 
MST


Re: [PATCH] sgtable over sglist (Re: [RFC 4/8] scsi-ml: scsi_sgtable implementation)

2007-07-25 Thread Boaz Harrosh
Comments about this patch are embedded inline below.

FUJITA Tomonori wrote:
 
 I've attached the sgtable patch for review. It's against the
 sglist-arch branch in Jens' tree.
 
 ---
 From: FUJITA Tomonori [EMAIL PROTECTED]
 Subject: [PATCH] move all the I/O descriptors to a new scsi_sgtable structure
 
 based on Boaz Harrosh [EMAIL PROTECTED] scsi_sgtable patch.
 
 Signed-off-by: FUJITA Tomonori [EMAIL PROTECTED]
 ---
  drivers/scsi/scsi_lib.c  |   92 +++--
  include/scsi/scsi_cmnd.h |   39 +--
  2 files changed, 82 insertions(+), 49 deletions(-)
 
 diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
 index 5fb1048..2fa1852 100644
 --- a/drivers/scsi/scsi_lib.c
 +++ b/drivers/scsi/scsi_lib.c
 @@ -52,6 +52,8 @@ static struct scsi_host_sg_pool scsi_sg_pools[] = {
  };
  #undef SP
  
 +static struct kmem_cache *scsi_sgtable_cache;
 +
One more slab pool to do regular IO

  static void scsi_run_queue(struct request_queue *q);
  
  /*
 @@ -731,16 +733,27 @@ static inline unsigned int scsi_sgtable_index(unsigned 
 short nents)
   return index;
  }
  
 -struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 +struct scsi_sgtable *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask,
 +					int sg_count)
  {
  	struct scsi_host_sg_pool *sgp;
  	struct scatterlist *sgl, *prev, *ret;
 +	struct scsi_sgtable *sgt;
  	unsigned int index;
  	int this, left;
  
 -	BUG_ON(!cmd->use_sg);
 +	sgt = kmem_cache_zalloc(scsi_sgtable_cache, gfp_mask);
 +	if (!sgt)
 +		return NULL;
One more allocation that can fail for every I/O, even when we
already have a scsi_cmnd.

 +
 +	/*
 +	 * don't allow subsequent mempool allocs to sleep, it would
 +	 * violate the mempool principle.
 +	 */
 +	gfp_mask &= ~__GFP_WAIT;
 +	gfp_mask |= __GFP_HIGH;
We used to sometimes wait for that large 128-entry scatterlist full page.
Now this small allocation is probably fine, but we no longer
wait for the big allocation below.

  
 -	left = cmd->use_sg;
 +	left = sg_count;
  	ret = prev = NULL;
  	do {
  		this = left;
 @@ -764,7 +777,7 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd 
 *cmd, gfp_t gfp_mask)
* first loop through, set initial index and return value
*/
   if (!ret) {
 -		cmd->sglist_len = index;
 +		sgt->sglist_len = index;
   ret = sgl;
   }
  
 @@ -776,21 +789,18 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd 
 *cmd, gfp_t gfp_mask)
  		if (prev)
  			sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl);
  
 -		/*
 -		 * don't allow subsequent mempool allocs to sleep, it would
 -		 * violate the mempool principle.
 -		 */
 -		gfp_mask &= ~__GFP_WAIT;
 -		gfp_mask |= __GFP_HIGH;
  		prev = sgl;
  	} while (left);
  
  	/*
 -	 * ->use_sg may get modified after dma mapping has potentially
 +	 * ->sg_count may get modified after dma mapping has potentially
  	 * shrunk the number of segments, so keep a copy of it for free.
  	 */
 -	cmd->__use_sg = cmd->use_sg;
 -	return ret;
 +	sgt->sg_count = sg_count;
 +	sgt->__sg_count = sg_count;
 +	sgt->sglist = ret;
 +	cmd->sgtable = sgt;
We used to set that in scsi_init_io(), so scsi_alloc_sgtable() could
be called twice for bidi. Now we cannot. Also, the API change is not
friendly for bidi and will have to change (it used to be fine).
And if you set it here, why do you return it?

 + return sgt;
  enomem:
   if (ret) {
   /*
 @@ -809,6 +819,8 @@ enomem:
  
  		mempool_free(prev, sgp->pool);
  	}
 +	kmem_cache_free(scsi_sgtable_cache, sgt);
 +
   return NULL;
  }
  
 @@ -816,21 +828,22 @@ EXPORT_SYMBOL(scsi_alloc_sgtable);
  
  void scsi_free_sgtable(struct scsi_cmnd *cmd)
  {
 -	struct scatterlist *sgl = cmd->request_buffer;
 +	struct scsi_sgtable *sgt = cmd->sgtable;
 +	struct scatterlist *sgl = sgt->sglist;
  	struct scsi_host_sg_pool *sgp;
  
 -	BUG_ON(cmd->sglist_len >= SG_MEMPOOL_NR);
 +	BUG_ON(sgt->sglist_len >= SG_MEMPOOL_NR);
  
  	/*
  	 * if this is the biggest size sglist, check if we have
  	 * chained parts we need to free
  	 */
 -	if (cmd->__use_sg > SCSI_MAX_SG_SEGMENTS) {
 +	if (sgt->__sg_count > SCSI_MAX_SG_SEGMENTS) {
  		unsigned short this, left;
  		struct scatterlist *next;
  		unsigned int index;
  
 -		left = cmd->__use_sg - (SCSI_MAX_SG_SEGMENTS - 1);
 +		left = sgt->__sg_count - (SCSI_MAX_SG_SEGMENTS - 1);
  		next = sg_chain_ptr(&sgl[SCSI_MAX_SG_SEGMENTS - 1]);
  		while (left && next) {
  			sgl = next;
 @@ -854,11 +867,12 @@ void 

Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large IO sg-chaining

2007-07-25 Thread Boaz Harrosh
FUJITA Tomonori wrote:
 From: Benny Halevy [EMAIL PROTECTED]
 Subject: Re: [PATCHSET 0/5] Peaceful co-existence of scsi_sgtable and Large 
 IO sg-chaining
 Date: Wed, 25 Jul 2007 11:26:44 +0300
 
 However, I'm perfectly happy to go with whatever the empirical evidence
 says is best .. and hopefully, now we don't have to pick this once and
 for all time ... we can alter it if whatever is chosen proves to be
 suboptimal.
 I agree.  This isn't a catholic marriage :)
 We'll run some performance experiments comparing the sgtable chaining
 implementation vs. a scsi_data_buff implementation pointing
 at a possibly chained sglist and let's see if we can measure
 any difference.  We'll send results as soon as we have them.
 
 I did some tests with your sgtable patchset and the approach to use
 separate buffer for sglists. As expected, there was no performance
 difference with small I/Os. I've not tried very large I/Os, which
 might give some difference.
 
 The patchset to use separate buffer for sglists is available:
 
 git://git.kernel.org/pub/scm/linux/kernel/git/tomo/linux-2.6-bidi.git 
 simple-sgtable
 
 
 Can you clean up your patchset and upload somewhere?

Sorry, sure. Here is the complete scsi_sgtable work over linux-block:
http://www.bhalevy.com/open-osd/download/scsi_sgtable/linux-block
 
Here is just scsi_sgtable, no chaining, over scsi-misc + more
drivers:
http://www.bhalevy.com/open-osd/download/scsi_sgtable/scsi-misc

Next week I will try to mount lots of scsi_debug devices and
use large parallel IO to try to find a difference. I will
test Jens's sglist-arch tree against the above sglist-arch+scsi_sgtable.

I have lots of reservations about Tomo's last patches. For me
they are a regression: they use 3 allocations per command instead
of 2, they use an extra pointer, and an extra global slab pool, all
just to group some scsi_cmnd members in a substructure.
If we want to go the pointing way, keeping our extra scatterlist
and our base_2 count on most ARCHs, then we can just use the
scsi_data_buff embedded inside scsi_cmnd.

The second scsi_data_buff for bidi can come either from an extra
slab pool like in Tomo's patch (bidi can pay), or from scsi.c: at
scsi_setup_command_freelist() the code can inspect Tomo's
request_queue flag QUEUE_FLAG_BIDI and then allocate a
bigger scsi_cmnd in the free list.
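The freelist idea above can be sketched in a few lines of user-space C. This is only a model of the approach, not the actual patch: structure names and fields are illustrative stand-ins for scsi_cmnd and scsi_data_buff.

```c
#include <stddef.h>
#include <stdlib.h>
#include <assert.h>

/* Sketch of the idea: when the queue is flagged bidirectional, the
 * command freelist hands out a larger object with a second data-buff
 * appended in the same allocation, so bidi needs no extra per-command
 * allocation.  Names are illustrative, not the kernel's. */
struct data_buff {		/* stands in for scsi_data_buff */
	void *sglist;
	unsigned sg_count;
	unsigned length;
};

struct cmnd {			/* stands in for scsi_cmnd */
	unsigned char cdb[16];
	struct data_buff in;	/* embedded, always present */
	/* for bidi queues a second data_buff follows in memory */
};

static size_t cmnd_size(int queue_is_bidi)
{
	size_t sz = sizeof(struct cmnd);

	if (queue_is_bidi)
		sz += sizeof(struct data_buff);	/* room for the out-buff */
	return sz;
}

/* The freelist allocates according to the queue's bidi flag once,
 * at scsi_setup_command_freelist() time in the real proposal. */
static struct cmnd *freelist_alloc(int queue_is_bidi)
{
	return calloc(1, cmnd_size(queue_is_bidi));
}

/* The second (bidi) data_buff lives right after the base command. */
static struct data_buff *cmnd_out_buff(struct cmnd *cmd)
{
	return (struct data_buff *)(cmd + 1);
}
```

The design point is that only queues that set the bidi flag pay the extra sizeof(struct data_buff) per command; unidirectional queues keep today's footprint.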

I have coded that approach and it is very simple:
http://www.bhalevy.com/open-osd/download/scsi_data_buff

They are over Jens's sglist-arch branch.
I have revised all scsi-ml places and it all compiles,
but it is totally untested.

I will add this branch to the above tests, but I suspect that
they are identical in every way to current code.

For review here is the main scsi_data_buff patch:
--
From: Boaz Harrosh [EMAIL PROTECTED]
Date: Wed, 25 Jul 2007 20:19:14 +0300
Subject: [PATCH] SCSI: scsi_data_buff

  In preparation for bidi we abstract all IO members of scsi_cmnd
  that will need to be duplicated into a substructure.
  - Group all IO members of scsi_cmnd into a scsi_data_buff
    structure.
  - Adjust accessors to the new members.
  - scsi_{alloc,free}_sgtable receive a scsi_data_buff instead of
    a scsi_cmnd, and work on it (supporting chaining as before).
  - Adjust scsi_init_io() and scsi_release_buffers() for the above
    change.
  - Adjust other parts of scsi_lib to the member migration; use
    accessors where appropriate.

 Signed-off-by: Boaz Harrosh [EMAIL PROTECTED]
---
 drivers/scsi/scsi_lib.c  |   68 +++--
 include/scsi/scsi_cmnd.h |   34 +++---
 2 files changed, 46 insertions(+), 56 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index d62b184..2b8a865 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -714,16 +714,14 @@ static unsigned scsi_sgtable_index(unsigned nents)
 	return -1;
 }
 
-struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
+struct scatterlist *scsi_alloc_sgtable(struct scsi_data_buff *sdb, gfp_t gfp_mask)
 {
 	struct scsi_host_sg_pool *sgp;
 	struct scatterlist *sgl, *prev, *ret;
 	unsigned int index;
 	int this, left;
 
-	BUG_ON(!cmd->use_sg);
-
-	left = cmd->use_sg;
+	left = sdb->sg_count;
 	ret = prev = NULL;
 	do {
 		this = left;
@@ -747,7 +745,7 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 		 * first loop through, set initial index and return value
 		 */
 		if (!ret) {
-			cmd->sg_pool = index;
+			sdb->sg_pool = index;
 			ret = sgl;
 		}
 
@@ -769,10 +767,10 @@ struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask)
 	} while (left);
 
 	/*
-	 * ->use_sg may get modified after dma mapping has potentially
+	 * sdb->sg_count may get modified after blk_rq_map_sg() potentially
 

RE: [PATCH 3/3] mptsas: add SMP passthrough support via bsg

2007-07-25 Thread Moore, Eric
On  Tuesday, July 24, 2007 6:48 PM, FUJITA Tomonori wrote:
  I hadn't enabled bsg support in the linux kernel, that was 
 my problem.
 
 What do you mean? You might hit the bug that you can't build scsi as a
 modular. It was fixed before rc1.
 

The issue is that I'm new to BSG, and I didn't know I needed to enable
CONFIG_BLK_DEV_BSG in the kernel config.   I upgraded today to rc1, and
you're correct, I don't have to link scsi_mod into the kernel.   Don't
worry about this point, I'm squared away now.

 
   # ./sgv4-tools/smp_rep_manufacturer /sys/class/bsg/expander-0:1
   
   
   I think that James tested this with aic94xx, however, I guess that
   nobody has tested this with mptsas.
   
  
  I got garbage (I'm using the 2.6.23-git11 patch from last 
 week, before
  rc1):
  
  # ./smp_rep_manufacturer /sys/class/bsg/expander-2:0
SAS-1.1 format: 1
vendor identification: _BUS_ADD
product identification: RESS=unix:abstra
product revision level: ct=/
component vendor identification: tmp/dbus
component id: 11609
component revision level: 69
  
  With Doug Gilberts tools it works:
  
  # smp_rep_manufacturer --sa=0x500605b016b0 /dev/mptctl
  Report manufacturer response:
SAS-1.1 format: 0
vendor identification: LSILOGIC
product identification: SASx12 A.0
product revision level:
  
  
  Also, unloading and reloading the driver resulted in two expander
  entries in /sys/class/bsg.    The old entry was not deleted when I
  unloaded the driver.  When I tried to run smp_rep_manufacture on the
  old expander instance, it panicked.

With a SAS analyzer, I figured out today why the bsg version of
smp_rep_manufacture is not working.    There is a bug in
mptsas_smp_handler: the calculation of the first scatter-gather element
size for the outbound data is incorrect.   It's being set to the response
data length when it should be the request data length.

This is incorrect:

+	flagsLength |= (rsp->data_len - 4);

It should be:

+	flagsLength |= (smpreq->RequestDataLength);
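For readers following along, this bug class is easy to state in isolation: MPT-style SG elements pack control flags and a byte count into a single 32-bit word, so OR'ing in the wrong length silently changes the transfer size. A toy sketch of the packing, with a field layout invented for the example (this is not the real MPI descriptor format):

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative model of a descriptor word where the high byte holds
 * control flags and the low 24 bits hold the byte count.  The masks
 * are made up for this example. */
#define SGE_FLAGS_MASK	0xFF000000u
#define SGE_LEN_MASK	0x00FFFFFFu

/* Compose flags and byte count into one descriptor word, as the
 * "flagsLength |= ..." lines above do. */
static uint32_t sge_build(uint32_t flags, uint32_t byte_count)
{
	return (flags & SGE_FLAGS_MASK) | (byte_count & SGE_LEN_MASK);
}

/* Recover the byte count the hardware would see. */
static uint32_t sge_length(uint32_t sge)
{
	return sge & SGE_LEN_MASK;
}
```

Because the length shares the word with the flags, OR'ing in the response length instead of the request length produces a well-formed descriptor with the wrong count, which is exactly why only an analyzer made the bug visible.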


 
 I forgot to remove bsg entries. James fixed the bug. Please try
 2.6.23-rc1.

Thanks, I noticed it's fixed in rc1.


 
 But probably, the tool still don't work against an expander. Does it
 work against the Virtual SMP Port?
 

I tried this out by passing /sys/class/bsg/sas_host0, and I saw it
return Virtual SMP.  I guess this can be left in.


 Oh, I thought that LSI wants to send the smp_reply back to user space
 since Doug's smp-utils does. But If you don't want, I'll just put the
 response check code that you suggested in the previous mail.
 

On second thought, it would be nice to have iocstatus, iocloginfo, and
sasstatus available in user space; that way the application will have a
better understanding of what error occurred.   Without that info, how
will it know whether the response data it receives is valid?   For
instance, before I identified the bug in the sgel size, you were
displaying garbage.   The driver could have prevented that by returning
-ENXIO, I guess, but instead how about pushing that info up to user space?
What do you think?    Maybe there should be some translation to common
error return codes between the various SAS vendors supporting this
portal.
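One possible shape for such a cross-vendor translation layer, sketched in user-space C. The status codes here are made up for the example; the real iocstatus/loginfo values are MPT-specific, and each LLD would supply its own mapping into the common set:

```c
#include <errno.h>
#include <assert.h>

/* Hypothetical common SMP status codes that each SAS LLD would
 * translate its vendor status into before completing the bsg
 * request.  These names and values are illustrative only. */
enum smp_status {
	SMP_OK,
	SMP_NO_DEVICE,	/* target expander gone */
	SMP_BAD_FRAME,	/* malformed request/response */
	SMP_TIMEOUT,	/* no response from device */
};

/* Map the common status onto an errno, so user space gets a uniform
 * error instead of raw vendor registers, while the raw values could
 * still ride along in the bsg reply for diagnostics. */
static int smp_status_to_errno(enum smp_status st)
{
	switch (st) {
	case SMP_OK:		return 0;
	case SMP_NO_DEVICE:	return -ENXIO;
	case SMP_BAD_FRAME:	return -EIO;
	case SMP_TIMEOUT:	return -ETIMEDOUT;
	}
	return -EIO;	/* defensive default */
}
```

The point of the split is that applications get a portable errno for control flow, and anyone debugging can still read the vendor-specific status pushed up alongside it.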

Eric
-
To unsubscribe from this list: send the line unsubscribe linux-scsi in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


(resend) scsi0: Unexpected busfree while idle - Adaptec 29160N Ultra160 SCSI adapter

2007-07-25 Thread Jesper Juhl

Resending this since all I'm getting is silence and it's quite a problem for 
this box. I did a few Google searches and apparently I'm not the only 
one having this problem: I see reports going back for years, but no 
solutions.

I've got heaps of dmesg output if wanted/needed, I'll test patches, 
suggestions etc. 

Any ideas people?


On Sunday 15 July 2007 04:04:50 Jesper Juhl wrote:
 On 09/07/07, Jesper Juhl [EMAIL PROTECTED] wrote:
  I just experienced a long hang and a lot of unpleasant messages in dmesg
  while building randconfig kernels in a loop.
 
 
 It just happened again without me doing anything special, just normal
 desktop use, surfing the net, reading email etc. This time the kernel
 version was 2.6.22-g1db6178c
 
 This time the start of the messages look like this (slightly different
 from last time - quoted below):
 
 [ 8039.510026] scsi0: Unexpected busfree while idle
 [ 8039.510036] SEQADDR == 0x18
 [ 8069.492318] sr 0:0:4:0: Attempting to queue an ABORT message
 [ 8069.492324] CDB: 0x0 0x0 0x0 0x0 0x0 0x0
 [ 8069.492342] scsi0: At time of recovery, card was not paused
 [ 8069.492353]  Dump Card State Begins 
 [ 8069.492357] scsi0: Dumping Card State while idle, at SEQADDR 0x18
 [ 8069.492362] Card was paused
 [ 8069.492371] ACCUM = 0xa, SINDEX = 0x48, DINDEX = 0xe4, ARG_2 = 0x2
 [ 8069.492378] HCNT = 0x0 SCBPTR = 0xb
 [ 8069.492384] SCSIPHASE[0x0] SCSISIGI[0x0] ERROR[0x0]
 [ 8069.492406] SCSIBUSL[0x0] LASTPHASE[0x1] SCSISEQ[0x1a]
 [ 8069.492426] SBLKCTL[0xa] SCSIRATE[0x0] SEQCTL[0x10]
 [ 8069.492447] SEQ_FLAGS[0xc0] SSTAT0[0x0] SSTAT1[0x0]
 [ 8069.492467] SSTAT2[0x0] SSTAT3[0x0] SIMODE0[0x8]
 [ 8069.492487] SIMODE1[0xa4] SXFRCTL0[0x80] DFCNTRL[0x0]
 [ 8069.492507] DFSTATUS[0x89]
 [ 8069.492515] STACK: 0x0 0x164 0x179 0x17
 [ 8069.492536] SCB count = 24
 [ 8069.492540] Kernel NEXTQSCB = 12
 [ 8069.492545] Card NEXTQSCB = 17
 [ 8069.492549] QINFIFO entries: 17 18
 [ 8069.492561] Waiting Queue entries: 11:10
 [ 8069.492573] Disconnected Queue entries:
 [ 8069.492581] QOUTFIFO entries:
 [ 8069.492587] Sequencer Free SCB List: 3 1 13 14 5 6 0 17 10 19 4 8 7
 2 16 20 18 12 15 9 21 22 23 24 25
  26 27 28 29 30 31
 [ 8069.492694] Sequencer SCB Info:
 [ 8069.492698]   0 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492723] SCB_TAG[0xff]
 [ 8069.492729]   1 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492753] SCB_TAG[0xff]
 [ 8069.492759]   2 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492784] SCB_TAG[0xff]
 [ 8069.492791]   3 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492816] SCB_TAG[0xff]
 [ 8069.492822]   4 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492845] SCB_TAG[0xff]
 [ 8069.492851]   5 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492875] SCB_TAG[0xff]
 [ 8069.492880]   6 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492904] SCB_TAG[0xff]
 [ 8069.492910]   7 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492934] SCB_TAG[0xff]
 [ 8069.492940]   8 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492964] SCB_TAG[0xff]
 [ 8069.492969]   9 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.492994] SCB_TAG[0xff]
 [ 8069.493000]  10 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493024] SCB_TAG[0xff]
 [ 8069.493030]  11 SCB_CONTROL[0x40] SCB_SCSIID[0x47] SCB_LUN[0x0]
 [ 8069.493054] SCB_TAG[0xa]
 [ 8069.493060]  12 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493085] SCB_TAG[0xff]
 [ 8069.493090]  13 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493114] SCB_TAG[0xff]
 [ 8069.493120]  14 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493145] SCB_TAG[0xff]
 [ 8069.493150]  15 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493175] SCB_TAG[0xff]
 [ 8069.493181]  16 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493206] SCB_TAG[0xff]
 [ 8069.493212]  17 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493235] SCB_TAG[0xff]
 [ 8069.493241]  18 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493266] SCB_TAG[0xff]
 [ 8069.493272]  19 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493296] SCB_TAG[0xff]
 [ 8069.493302]  20 SCB_CONTROL[0xe0] SCB_SCSIID[0x67] SCB_LUN[0x0]
 [ 8069.493316] SCB_TAG[0xff]
 [ 8069.493323]  21 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493348] SCB_TAG[0xff]
 [ 8069.493354]  22 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493378] SCB_TAG[0xff]
 [ 8069.493383]  23 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493407] SCB_TAG[0xff]
 [ 8069.493413]  24 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493437] SCB_TAG[0xff]
 [ 8069.493443]  25 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493469] SCB_TAG[0xff]
 [ 8069.493475]  26 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493501] SCB_TAG[0xff]
 [ 8069.493507]  27 SCB_CONTROL[0x0] SCB_SCSIID[0xff] SCB_LUN[0xff]
 [ 8069.493531] SCB_TAG[0xff]
 [ 8069.493537]  28 SCB_CONTROL[0x0] SCB_SCSIID[0xff] 

2.6.23-rc1-mm1: SCSI_SRP_ATTRS compile error

2007-07-25 Thread Adrian Bunk
On Wed, Jul 25, 2007 at 05:36:56PM +0100, Andy Whitcroft wrote:
 Of the machines we test releases on automatically this only compiles on
 NUMA-Q and does not boot there (some PCI issue).
 
 
 ppc64 (beavis):
 
 drivers/built-in.o(.text+0xd2784): In function `.srp_rport_add':
 : undefined reference to `.scsi_tgt_it_nexus_create'
 drivers/built-in.o(.text+0xd2884): In function `.srp_rport_del':
 : undefined reference to `.scsi_tgt_it_nexus_destroy'
 make: *** [.tmp_vmlinux1] Error 1
 
 
 x86_64 (bl6-13):
 
 ERROR: scsi_tgt_it_nexus_destroy [drivers/scsi/scsi_transport_srp.ko]
 undefined!
 ERROR: scsi_tgt_it_nexus_create [drivers/scsi/scsi_transport_srp.ko]
 undefined!
 make[1]: *** [__modpost] Error 1
...

Caused-By  : git-scsi-target.patch
Workaround : enable CONFIG_SCSI_TGT

Is there any good reason why all the SCSI transport attribute options 
are user visible?

With an answer to this question I can fix this bug.
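If the answer is that the attribute classes are only ever pulled in by drivers, one possible sketch of the fix is to make the symbol non-user-visible and let it select what it links against. This is only an illustration of the shape, not the actual patch that went in:

```kconfig
# Hidden symbol: drivers that need the SRP transport class select it,
# and it in turn selects the target-mode code whose symbols
# (scsi_tgt_it_nexus_create/destroy) it references.
config SCSI_SRP_ATTRS
	tristate
	select SCSI_TGT
```

That removes both the build failure (the dependency can no longer be deselected independently) and the user-visible question nobody can sensibly answer at configuration time.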

 -apw

cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   Only a promise, Lao Er said.
   Pearl S. Buck - Dragon Seed
