Author: maks
Date: Mon Dec 24 00:39:41 2007
New Revision: 10007

Log:
add latest git firewire fixes


Added:
   dists/trunk/linux-2.6/debian/patches/bugfix/all/git-ieee1394.patch
Modified:
   dists/trunk/linux-2.6/debian/changelog
   dists/trunk/linux-2.6/debian/patches/series/1~experimental.1

Modified: dists/trunk/linux-2.6/debian/changelog
==============================================================================
--- dists/trunk/linux-2.6/debian/changelog      (original)
+++ dists/trunk/linux-2.6/debian/changelog      Mon Dec 24 00:39:41 2007
@@ -26,6 +26,7 @@
   * Reenable DABUSB as firmware is BSD licensed.
  * [hppa]: Disable OCFS2, due to build trouble.
   * topconfig: Enable delay accounting TASKSTATS. (closes: #433204)
+  * Add git-ieee1394.patch for latest firewire fixes.
 
   [ Bastian Blank ]
   * [amd64, i386]: Set kernel architecture to x86.
@@ -40,7 +41,7 @@
   [ dann frazier ]
   * [ia64]: Enable BLK_CPQ_DA
 
- -- maximilian attems <[EMAIL PROTECTED]>  Tue, 18 Dec 2007 22:50:02 +0100
+ -- maximilian attems <[EMAIL PROTECTED]>  Mon, 24 Dec 2007 01:31:44 +0100
 
 linux-2.6 (2.6.23-1~experimental.1) UNRELEASED; urgency=low
 

Added: dists/trunk/linux-2.6/debian/patches/bugfix/all/git-ieee1394.patch
==============================================================================
--- (empty file)
+++ dists/trunk/linux-2.6/debian/patches/bugfix/all/git-ieee1394.patch  Mon Dec 24 00:39:41 2007
@@ -0,0 +1,1133 @@
+GIT 948b907d3fb2d02f77aea8d5b015cc533bff8d65 git+ssh://master.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
+
+commit 
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sat Dec 22 22:14:52 2007 +0100
+
+    firewire: fw-ohci: CycleTooLong interrupt management
+    
+    The firewire-ohci driver so far lacked the ability to resume cycle
+    master duty after that condition happened, as added to ohci1394 in Linux
+    2.6.18 by commit 57fdb58fa5a140bdd52cf4c4ffc30df73676f0a5.  This ports
+    that fix to fw-ohci.
+    
+    The "cycle too long" condition has been seen in practice
+      - with IIDC cameras if a mode with packets too large for a speed is
+        chosen,
+      - sporadically when capturing DV on a VIA VT6306 card with ohci1394/
+        ieee1394/ raw1394/ dvgrab 2.
+        https://bugzilla.redhat.com/show_bug.cgi?id=415841#c7
+    (This does not fix Fedora bug 415841.)
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit f22532d403d2b91ec355c3dc008688bb87013600
+Author: Rabin Vincent <[EMAIL PROTECTED]>
+Date:   Fri Dec 21 23:02:15 2007 +0530
+
+    firewire: Fix extraction of source node id
+    
+    Fix extraction of the source node id from the packet header.
+    
+    Signed-off-by: Rabin Vincent <[EMAIL PROTECTED]>
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 47349df008fd1ebcf6512200e9e2eccb313952c4
+Author: David Moore <[EMAIL PROTECTED]>
+Date:   Wed Dec 19 15:26:38 2007 -0500
+
+    firewire: fw-ohci: Bug fixes for packet-per-buffer support
+    
+    This patch corrects a number of bugs in the current OHCI 1.0
+    packet-per-buffer support:
+    
+    1. Correctly deal with payloads that cross a page boundary.  The
+    previous version would not split the descriptor at such a boundary,
+    potentially corrupting unrelated memory.
+    
+    2. Allow user-space to specify multiple packets per struct
+    fw_cdev_iso_packet in the same way that dual-buffer allows.  This is
+    signaled by header_length being a multiple of header_size.  This
+    multiple determines the number of packets.  The payload size allocated
+    per packet is determined by dividing the total payload size by the
+    number of packets.
+    
+    3. Make sync support work properly for packet-per-buffer.
+    
+    I have tested this patch with libdc1394 by forcing my OHCI 1.1
+    controller to use the packet-per-buffer support instead of dual-buffer.
+    
+    I would greatly appreciate testing by those who have DV devices and
+    other types of iso streamers to make sure I didn't cause any
+    regressions.
+    
+    Stefan, with this patch, I'm hoping that libdc1394 will work with all
+    your OHCI 1.0 controllers now.
+    
+    The one bit of future work that remains for packet-per-buffer support is
+    the automatic compaction of short payloads that I discussed with
+    Kristian.
+    
+    Signed-off-by: David Moore <[EMAIL PROTECTED]>
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit e9f5ca46377ac60a6b7d52c6c19a1661c87c6e20
+Author: David Moore <[EMAIL PROTECTED]>
+Date:   Wed Dec 19 03:09:18 2007 -0500
+
+    firewire: fw-ohci: Fix for dualbuffer three-or-more buffers
+    
+    This patch fixes the problem where different OHCI 1.1 controllers behave
+    differently when a received iso packet straddles three or more buffers
+    when using the dual-buffer receive mode.  Two changes are made in order
+    to handle this situation:
+    
+    1. The packet sync DMA descriptor is given a non-zero header length and
+    non-zero payload length.  This is because zero-payload descriptors are
+    not discussed in the OHCI 1.1 specs and their behavior is thus
+    undefined.  Instead we use a header size just large enough for a single
+    header and a payload length of 4 bytes for this first descriptor.
+    
+    2. As we process received packets in the context's tasklet, read the
+    packet length out of the headers.  Keep track of the running total of
+    the packet length as "excess_bytes", so we can ignore any descriptors
+    where no packet starts or ends.  These descriptors may not have had
+    their first_res_count or second_res_count fields updated by the
+    controller so we cannot rely on those values.
+    
+    The main drawback of this patch is that the excess_bytes value might get
+    "out of sync" with the packet descriptors if something strange happens
+    to the DMA program.  I'm not sure if such a thing could ever happen, but I
+    appreciate any suggestions in making it more robust.
+    
+    Also, the packet-per-buffer support may need a similar fix to deal with
+    issue 1, but I haven't done any work on that yet.
+    
+    Stefan, I'm hoping that with this patch, all your OHCI 1.1 controllers
+    will work properly with an unmodified version of libdc1394.
+    
+    Signed-off-by: David Moore <[EMAIL PROTECTED]>
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 070ca2f30c2bbaeeeb740dfad01cc9a27905e6a9
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Dec 16 20:53:13 2007 +0100
+
+    ieee1394: ohci1394: don't schedule IT tasklets on IR events
+    
+    Bug noted by Pieter Palmers:  Isochronous transmit tasklets were
+    scheduled on isochronous receive events, in addition to the proper
+    isochronous receive tasklets.
+    
+    http://marc.info/?l=linux1394-devel&m=119783196222802
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 27aa95c9e41622c0d4f5c8d30b62abae0cc9ada0
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Dec 16 17:32:11 2007 +0100
+
+    firewire: fw-sbp2: remove unused misleading macro
+    
+    SBP2_MAX_SECTORS is nowhere used in fw-sbp2.
+    It merely got copied over from sbp2 where it played a role in the past.
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 332319e2852838221b2ece1389248414e060cc94
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Dec 16 17:31:26 2007 +0100
+
+    ieee1394: sbp2: raise default transfer size limit
+    
+    This patch speeds up sbp2 a little bit --- but more importantly, it
+    brings the behavior of sbp2 and fw-sbp2 closer to each other.  Like
+    fw-sbp2, sbp2 now does not limit the size of single transfers to 255
+    sectors anymore, unless told so by a blacklist flag or by module load
+    parameters.
+    
+    Only very old bridge chips have been known to need the 255 sectors
+    limit, and we have got one such chip in our hardwired blacklist.  There
+    certainly is a danger that more bridges need that limit; but I prefer to
+    have this issue present in both fw-sbp2 and sbp2 rather than just one of
+    them.
+    
+    An OXUF922 with 400GB 7200RPM disk on an S400 controller is sped up by
+    this patch from 22.9 to 23.5 MB/s according to hdparm.  The same effect
+    could be achieved before by setting a higher max_sectors module
+    parameter.  On buses which use 1394b beta mode, sbp2 and fw-sbp2 will
+    now achieve virtually the same bandwidth.  Fw-sbp2 only remains faster
+    on 1394a buses due to fw-core's gap count optimization.
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit bc2dfcc923803ab9a60e5316748e25d425a2bd08
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sat Dec 15 14:11:41 2007 +0100
+
+    ieee1394: remove unused code
+    
+    The code has been in "#if 0 - #endif" since Linux 2.6.12.
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit aa541d501d5be17ba05e8e6374371c5b376ab994
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sat Dec 15 14:04:42 2007 +0100
+
+    ieee1394: small cleanup after "nopage"
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 598c25878bf4e7de677079022c42635ebd846e62
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sat Dec 22 21:53:33 2007 +0100
+
+    Revert "firewire: fw-ohci: CycleTooLong interrupt management"
+    
+    This reverts commit 5bd0c4ef883f5e3c24ab91127de0292ebd0fa405.
+    Needs to be updated to rate-limit the kernel log message.
+
+commit 0394d46dc8485840992f6dd57e39f2336e85a6fe
+Author: Nick Piggin <[EMAIL PROTECTED]>
+Date:   Wed Dec 5 18:15:53 2007 +1100
+
+    ieee1394: nopage
+    
+    Convert ieee1394 from nopage to fault.
+    Remove redundant vma range checks (correct resource range check is retained).
+    
+    Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
+    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 5bd0c4ef883f5e3c24ab91127de0292ebd0fa405
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Dec 9 14:03:09 2007 +0100
+
+    firewire: fw-ohci: CycleTooLong interrupt management
+    
+    According to a report by Robin Theander, VIA VT6306 may sporadically
+    trip the "isochronous cycle too long" condition when capturing DV in
+    buffer-fill mode with ohci1394/ ieee1394/ raw1394/ dvgrab 2.
+    https://bugzilla.redhat.com/show_bug.cgi?id=415841#c7
+    
+    The firewire-ohci driver so far lacked the ability to resume cycle
+    master duty after that condition happened, an ability added to ohci1394
+    in Linux 2.6.18 by a patch from Jean-Baptiste Mur (commit
+    57fdb58fa5a140bdd52cf4c4ffc30df73676f0a5).  This ports this commit to
+    fw-ohci just to be sure, since this condition can potentially also
+    happen with fw-ohci.
+    
+    Alas, this does not fix the above-referenced Fedora bug 415841.
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 1526cb4169cce7b87db54c47ce0fd0c1bd7fb16a
+Author: Joe Perches <[EMAIL PROTECTED]>
+Date:   Mon Nov 19 17:48:10 2007 -0800
+
+    ieee1394: Add missing "space"
+    
+    Signed-off-by: Joe Perches <[EMAIL PROTECTED]>
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit b4be6170ef896e3a98a71e03f3514ccff264ffde
+Author: Jay Fenlason <[EMAIL PROTECTED]>
+Date:   Wed Nov 7 17:39:00 2007 -0500
+
+    firewire: fw-sbp2: quiet logout errors on device removal
+    
+    This suppresses both reply timed out and management write failed
+    errors on LOGOUT requests.
+    
+    Signed-off-by: Jay Fenlason <[EMAIL PROTECTED]>
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 8f50ff61ed0282179371cbef173b8b0aad0d1313
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Nov 4 14:59:24 2007 +0100
+
+    ieee1394: sbp2: s/g list access cosmetics
+    
+    Replace sg->length by sg_dma_len(sg).  Rename a variable for shorter
+    line lengths and eliminate some superfluous local variables.
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit fe702f621c6bdead79dd4172cd00b35ece4b88c3
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Nov 4 14:58:43 2007 +0100
+
+    ieee1394: sbp2: enable s/g chaining
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit 53e11c39606617de4fea57077891abb3870ff383
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Sun Nov 4 14:58:11 2007 +0100
+
+    firewire: fw-sbp2: enable s/g chaining
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+
+commit c3e0d276f016e052dfd87b73041a5be6dd08454d
+Author: Stefan Richter <[EMAIL PROTECTED]>
+Date:   Wed Nov 7 01:12:51 2007 +0100
+
+    firewire: fw-sbp2: refactor workq and kref handling
+    
+    This somewhat reduces the size of firewire-sbp2.ko.
+    
+    Signed-off-by: Stefan Richter <[EMAIL PROTECTED]>
+ drivers/firewire/fw-ohci.c               |  154 ++++++++++++++++--------------
+ drivers/firewire/fw-sbp2.c               |   85 ++++++++++-------
+ drivers/firewire/fw-transaction.c        |    2 +-
+ drivers/ieee1394/dma.c                   |   39 +++-----
+ drivers/ieee1394/ieee1394_transactions.c |   68 -------------
+ drivers/ieee1394/ohci1394.c              |   12 ++-
+ drivers/ieee1394/raw1394.c               |    4 +-
+ drivers/ieee1394/sbp2.c                  |   53 ++++++-----
+ drivers/ieee1394/sbp2.h                  |    1 -
+ 9 files changed, 184 insertions(+), 234 deletions(-)
+
+diff --git a/drivers/firewire/fw-ohci.c b/drivers/firewire/fw-ohci.c
+index 436a855..74d5d94 100644
+--- a/drivers/firewire/fw-ohci.c
++++ b/drivers/firewire/fw-ohci.c
+@@ -125,6 +125,7 @@ struct context {
+ struct iso_context {
+       struct fw_iso_context base;
+       struct context context;
++      int excess_bytes;
+       void *header;
+       size_t header_length;
+ };
+@@ -1078,6 +1079,13 @@ static irqreturn_t irq_handler(int irq, void *data)
+       if (unlikely(event & OHCI1394_postedWriteErr))
+               fw_error("PCI posted write error\n");
+ 
++      if (unlikely(event & OHCI1394_cycleTooLong)) {
++              if (printk_ratelimit())
++                      fw_notify("isochronous cycle too long\n");
++              reg_write(ohci, OHCI1394_LinkControlSet,
++                        OHCI1394_LinkControl_cycleMaster);
++      }
++
+       if (event & OHCI1394_cycle64Seconds) {
+               cycle_time = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
+               if ((cycle_time & 0x80000000) == 0)
+@@ -1151,8 +1159,8 @@ static int ohci_enable(struct fw_card *card, u32 *config_rom, size_t length)
+                 OHCI1394_RQPkt | OHCI1394_RSPkt |
+                 OHCI1394_reqTxComplete | OHCI1394_respTxComplete |
+                 OHCI1394_isochRx | OHCI1394_isochTx |
+-                OHCI1394_postedWriteErr | OHCI1394_cycle64Seconds |
+-                OHCI1394_masterIntEnable);
++                OHCI1394_postedWriteErr | OHCI1394_cycleTooLong |
++                OHCI1394_cycle64Seconds | OHCI1394_masterIntEnable);
+ 
+       /* Activate link_on bit and contender bit in our self ID packets.*/
+       if (ohci_update_phy_reg(card, 4, 0,
+@@ -1408,9 +1416,13 @@ static int handle_ir_dualbuffer_packet(struct context *context,
+       void *p, *end;
+       int i;
+ 
+-      if (db->first_res_count > 0 && db->second_res_count > 0)
+-              /* This descriptor isn't done yet, stop iteration. */
+-              return 0;
++      if (db->first_res_count > 0 && db->second_res_count > 0) {
++              if (ctx->excess_bytes <= le16_to_cpu(db->second_req_count)) {
++                      /* This descriptor isn't done yet, stop iteration. */
++                      return 0;
++              }
++              ctx->excess_bytes -= le16_to_cpu(db->second_req_count);
++      }
+ 
+       header_length = le16_to_cpu(db->first_req_count) -
+               le16_to_cpu(db->first_res_count);
+@@ -1429,11 +1441,15 @@ static int handle_ir_dualbuffer_packet(struct context *context,
+               *(u32 *) (ctx->header + i) = __swab32(*(u32 *) (p + 4));
+               memcpy(ctx->header + i + 4, p + 8, ctx->base.header_size - 4);
+               i += ctx->base.header_size;
++              ctx->excess_bytes +=
++                      (le32_to_cpu(*(u32 *)(p + 4)) >> 16) & 0xffff;
+               p += ctx->base.header_size + 4;
+       }
+-
+       ctx->header_length = i;
+ 
++      ctx->excess_bytes -= le16_to_cpu(db->second_req_count) -
++              le16_to_cpu(db->second_res_count);
++
+       if (le16_to_cpu(db->control) & DESCRIPTOR_IRQ_ALWAYS) {
+               ir_header = (__le32 *) (db + 1);
+               ctx->base.callback(&ctx->base,
+@@ -1452,24 +1468,24 @@ static int handle_ir_packet_per_buffer(struct context *context,
+ {
+       struct iso_context *ctx =
+               container_of(context, struct iso_context, context);
+-      struct descriptor *pd = d + 1;
++      struct descriptor *pd;
+       __le32 *ir_header;
+-      size_t header_length;
+-      void *p, *end;
+-      int i, z;
++      void *p;
++      int i;
+ 
+-      if (pd->res_count == pd->req_count)
++      for (pd = d; pd <= last; pd++) {
++              if (pd->transfer_status)
++                      break;
++      }
++      if (pd > last)
+               /* Descriptor(s) not done yet, stop iteration */
+               return 0;
+ 
+-      header_length = le16_to_cpu(d->req_count);
+-
+       i   = ctx->header_length;
+-      z   = le32_to_cpu(pd->branch_address) & 0xf;
+-      p   = d + z;
+-      end = p + header_length;
++      p   = last + 1;
+ 
+-      while (p < end && i + ctx->base.header_size <= PAGE_SIZE) {
++      if (ctx->base.header_size > 0 &&
++                      i + ctx->base.header_size <= PAGE_SIZE) {
+               /*
+                * The iso header is byteswapped to little endian by
+                * the controller, but the remaining header quadlets
+@@ -1478,14 +1494,11 @@ static int handle_ir_packet_per_buffer(struct context *context,
+                */
+               *(u32 *) (ctx->header + i) = __swab32(*(u32 *) (p + 4));
+               memcpy(ctx->header + i + 4, p + 8, ctx->base.header_size - 4);
+-              i += ctx->base.header_size;
+-              p += ctx->base.header_size + 4;
++              ctx->header_length += ctx->base.header_size;
+       }
+ 
+-      ctx->header_length = i;
+-
+-      if (le16_to_cpu(pd->control) & DESCRIPTOR_IRQ_ALWAYS) {
+-              ir_header = (__le32 *) (d + z);
++      if (le16_to_cpu(last->control) & DESCRIPTOR_IRQ_ALWAYS) {
++              ir_header = (__le32 *) p;
+               ctx->base.callback(&ctx->base,
+                                  le32_to_cpu(ir_header[0]) & 0xffff,
+                                  ctx->header_length, ctx->header,
+@@ -1493,7 +1506,6 @@ static int handle_ir_packet_per_buffer(struct context *context,
+               ctx->header_length = 0;
+       }
+ 
+-
+       return 1;
+ }
+ 
+@@ -1775,19 +1787,6 @@ ohci_queue_iso_receive_dualbuffer(struct fw_iso_context *base,
+        * packet, retransmit or terminate..
+        */
+ 
+-      if (packet->skip) {
+-              d = context_get_descriptors(&ctx->context, 2, &d_bus);
+-              if (d == NULL)
+-                      return -ENOMEM;
+-
+-              db = (struct db_descriptor *) d;
+-              db->control = cpu_to_le16(DESCRIPTOR_STATUS |
+-                                        DESCRIPTOR_BRANCH_ALWAYS |
+-                                        DESCRIPTOR_WAIT);
+-              db->first_size = cpu_to_le16(ctx->base.header_size + 4);
+-              context_append(&ctx->context, d, 2, 0);
+-      }
+-
+       p = packet;
+       z = 2;
+ 
+@@ -1815,11 +1814,18 @@ ohci_queue_iso_receive_dualbuffer(struct fw_iso_context *base,
+               db->control = cpu_to_le16(DESCRIPTOR_STATUS |
+                                         DESCRIPTOR_BRANCH_ALWAYS);
+               db->first_size = cpu_to_le16(ctx->base.header_size + 4);
+-              db->first_req_count = cpu_to_le16(header_size);
++              if (p->skip && rest == p->payload_length) {
++                      db->control |= cpu_to_le16(DESCRIPTOR_WAIT);
++                      db->first_req_count = db->first_size;
++              } else {
++                      db->first_req_count = cpu_to_le16(header_size);
++              }
+               db->first_res_count = db->first_req_count;
+               db->first_buffer = cpu_to_le32(d_bus + sizeof(*db));
+ 
+-              if (offset + rest < PAGE_SIZE)
++              if (p->skip && rest == p->payload_length)
++                      length = 4;
++              else if (offset + rest < PAGE_SIZE)
+                       length = rest;
+               else
+                       length = PAGE_SIZE - offset;
+@@ -1835,7 +1841,8 @@ ohci_queue_iso_receive_dualbuffer(struct fw_iso_context *base,
+               context_append(&ctx->context, d, z, header_z);
+               offset = (offset + length) & ~PAGE_MASK;
+               rest -= length;
+-              page++;
++              if (offset == 0)
++                      page++;
+       }
+ 
+       return 0;
+@@ -1849,67 +1856,70 @@ ohci_queue_iso_receive_packet_per_buffer(struct fw_iso_context *base,
+ {
+       struct iso_context *ctx = container_of(base, struct iso_context, base);
+       struct descriptor *d = NULL, *pd = NULL;
+-      struct fw_iso_packet *p;
++      struct fw_iso_packet *p = packet;
+       dma_addr_t d_bus, page_bus;
+       u32 z, header_z, rest;
+-      int i, page, offset, packet_count, header_size;
+-
+-      if (packet->skip) {
+-              d = context_get_descriptors(&ctx->context, 1, &d_bus);
+-              if (d == NULL)
+-                      return -ENOMEM;
+-
+-              d->control = cpu_to_le16(DESCRIPTOR_STATUS |
+-                                       DESCRIPTOR_INPUT_LAST |
+-                                       DESCRIPTOR_BRANCH_ALWAYS |
+-                                       DESCRIPTOR_WAIT);
+-              context_append(&ctx->context, d, 1, 0);
+-      }
+-
+-      /* one descriptor for header, one for payload */
+-      /* FIXME: handle cases where we need multiple desc. for payload */
+-      z = 2;
+-      p = packet;
++      int i, j, length;
++      int page, offset, packet_count, header_size, payload_per_buffer;
+ 
+       /*
+        * The OHCI controller puts the status word in the
+        * buffer too, so we need 4 extra bytes per packet.
+        */
+       packet_count = p->header_length / ctx->base.header_size;
+-      header_size  = packet_count * (ctx->base.header_size + 4);
++      header_size  = ctx->base.header_size + 4;
+ 
+       /* Get header size in number of descriptors. */
+       header_z = DIV_ROUND_UP(header_size, sizeof(*d));
+       page     = payload >> PAGE_SHIFT;
+       offset   = payload & ~PAGE_MASK;
+-      rest     = p->payload_length;
++      payload_per_buffer = p->payload_length / packet_count;
+ 
+       for (i = 0; i < packet_count; i++) {
+               /* d points to the header descriptor */
++              z = DIV_ROUND_UP(payload_per_buffer + offset, PAGE_SIZE) + 1;
+               d = context_get_descriptors(&ctx->context,
+-                                          z + header_z, &d_bus);
++                              z + header_z, &d_bus);
+               if (d == NULL)
+                       return -ENOMEM;
+ 
+-              d->control      = cpu_to_le16(DESCRIPTOR_INPUT_MORE);
++              d->control      = cpu_to_le16(DESCRIPTOR_STATUS |
++                                            DESCRIPTOR_INPUT_MORE);
++              if (p->skip && i == 0)
++                      d->control |= cpu_to_le16(DESCRIPTOR_WAIT);
+               d->req_count    = cpu_to_le16(header_size);
+               d->res_count    = d->req_count;
++              d->transfer_status = 0;
+               d->data_address = cpu_to_le32(d_bus + (z * sizeof(*d)));
+ 
+-              /* pd points to the payload descriptor */
+-              pd = d + 1;
++              rest = payload_per_buffer;
++              for (j = 1; j < z; j++) {
++                      pd = d + j;
++                      pd->control = cpu_to_le16(DESCRIPTOR_STATUS |
++                                                DESCRIPTOR_INPUT_MORE);
++
++                      if (offset + rest < PAGE_SIZE)
++                              length = rest;
++                      else
++                              length = PAGE_SIZE - offset;
++                      pd->req_count = cpu_to_le16(length);
++                      pd->res_count = pd->req_count;
++                      pd->transfer_status = 0;
++
++                      page_bus = page_private(buffer->pages[page]);
++                      pd->data_address = cpu_to_le32(page_bus + offset);
++
++                      offset = (offset + length) & ~PAGE_MASK;
++                      rest -= length;
++                      if (offset == 0)
++                              page++;
++              }
+               pd->control = cpu_to_le16(DESCRIPTOR_STATUS |
+                                         DESCRIPTOR_INPUT_LAST |
+                                         DESCRIPTOR_BRANCH_ALWAYS);
+-              if (p->interrupt)
++              if (p->interrupt && i == packet_count - 1)
+                       pd->control |= cpu_to_le16(DESCRIPTOR_IRQ_ALWAYS);
+ 
+-              pd->req_count = cpu_to_le16(rest);
+-              pd->res_count = pd->req_count;
+-
+-              page_bus = page_private(buffer->pages[page]);
+-              pd->data_address = cpu_to_le32(page_bus + offset);
+-
+               context_append(&ctx->context, d, z, header_z);
+       }
+ 
+diff --git a/drivers/firewire/fw-sbp2.c b/drivers/firewire/fw-sbp2.c
+index 624ff3e..9040417 100644
+--- a/drivers/firewire/fw-sbp2.c
++++ b/drivers/firewire/fw-sbp2.c
+@@ -151,9 +151,7 @@ struct sbp2_target {
+ };
+ 
+ #define SBP2_MAX_SG_ELEMENT_LENGTH    0xf000
+-#define SBP2_MAX_SECTORS              255     /* Max sectors supported */
+ #define SBP2_ORB_TIMEOUT              2000    /* Timeout in ms */
+-
+ #define SBP2_ORB_NULL                 0x80000000
+ 
+ #define SBP2_DIRECTION_TO_MEDIA               0x0
+@@ -540,14 +538,26 @@ sbp2_send_management_orb(struct sbp2_logical_unit *lu, int node_id,
+ 
+       retval = -EIO;
+       if (sbp2_cancel_orbs(lu) == 0) {
+-              fw_error("orb reply timed out, rcode=0x%02x\n",
+-                       orb->base.rcode);
++              /*
++               * Logout requests frequently get sent to devices that aren't
++               * there any more, resulting in extraneous error messages in
++               * the logs.  Unfortunately, this means logout requests that
++               * actually fail don't get logged.
++               */
++              if (function != SBP2_LOGOUT_REQUEST)
++                      fw_error("orb reply timed out, rcode=0x%02x\n",
++                               orb->base.rcode);
+               goto out;
+       }
+ 
+       if (orb->base.rcode != RCODE_COMPLETE) {
+-              fw_error("management write failed, rcode 0x%02x\n",
+-                       orb->base.rcode);
++              /*
++               * On device removal from the bus, sometimes the logout
++               * request times out, sometimes it just fails.
++               */
++              if (function != SBP2_LOGOUT_REQUEST)
++                      fw_error("management write failed, rcode 0x%02x\n",
++                               orb->base.rcode);
+               goto out;
+       }
+ 
+@@ -628,6 +638,21 @@ static void sbp2_release_target(struct kref *kref)
+ 
+ static struct workqueue_struct *sbp2_wq;
+ 
++/*
++ * Always get the target's kref when scheduling work on one of its units.
++ * Each workqueue job is responsible for calling sbp2_target_put() upon return.
++ */
++static void sbp2_queue_work(struct sbp2_logical_unit *lu, unsigned long delay)
++{
++      if (queue_delayed_work(sbp2_wq, &lu->work, delay))
++              kref_get(&lu->tgt->kref);
++}
++
++static void sbp2_target_put(struct sbp2_target *tgt)
++{
++      kref_put(&tgt->kref, sbp2_release_target);
++}
++
+ static void sbp2_reconnect(struct work_struct *work);
+ 
+ static void sbp2_login(struct work_struct *work)
+@@ -649,16 +674,12 @@ static void sbp2_login(struct work_struct *work)
+ 
+       if (sbp2_send_management_orb(lu, node_id, generation,
+                               SBP2_LOGIN_REQUEST, lu->lun, &response) < 0) {
+-              if (lu->retries++ < 5) {
+-                      if (queue_delayed_work(sbp2_wq, &lu->work,
+-                                             DIV_ROUND_UP(HZ, 5)))
+-                              kref_get(&lu->tgt->kref);
+-              } else {
++              if (lu->retries++ < 5)
++                      sbp2_queue_work(lu, DIV_ROUND_UP(HZ, 5));
++              else
+                       fw_error("failed to login to %s LUN %04x\n",
+                                unit->device.bus_id, lu->lun);
+-              }
+-              kref_put(&lu->tgt->kref, sbp2_release_target);
+-              return;
++              goto out;
+       }
+ 
+       lu->generation        = generation;
+@@ -700,7 +721,8 @@ static void sbp2_login(struct work_struct *work)
+               lu->sdev = sdev;
+               scsi_device_put(sdev);
+       }
+-      kref_put(&lu->tgt->kref, sbp2_release_target);
++ out:
++      sbp2_target_put(lu->tgt);
+ }
+ 
+ static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry)
+@@ -865,18 +887,13 @@ static int sbp2_probe(struct device *dev)
+ 
+       get_device(&unit->device);
+ 
+-      /*
+-       * We schedule work to do the login so we can easily
+-       * reschedule retries. Always get the ref before scheduling
+-       * work.
+-       */
++      /* Do the login in a workqueue so we can easily reschedule retries. */
+       list_for_each_entry(lu, &tgt->lu_list, link)
+-              if (queue_delayed_work(sbp2_wq, &lu->work, 0))
+-                      kref_get(&tgt->kref);
++              sbp2_queue_work(lu, 0);
+       return 0;
+ 
+  fail_tgt_put:
+-      kref_put(&tgt->kref, sbp2_release_target);
++      sbp2_target_put(tgt);
+       return -ENOMEM;
+ 
+  fail_shost_put:
+@@ -889,7 +906,7 @@ static int sbp2_remove(struct device *dev)
+       struct fw_unit *unit = fw_unit(dev);
+       struct sbp2_target *tgt = unit->device.driver_data;
+ 
+-      kref_put(&tgt->kref, sbp2_release_target);
++      sbp2_target_put(tgt);
+       return 0;
+ }
+ 
+@@ -915,10 +932,8 @@ static void sbp2_reconnect(struct work_struct *work)
+                       lu->retries = 0;
+                       PREPARE_DELAYED_WORK(&lu->work, sbp2_login);
+               }
+-              if (queue_delayed_work(sbp2_wq, &lu->work, DIV_ROUND_UP(HZ, 5)))
+-                      kref_get(&lu->tgt->kref);
+-              kref_put(&lu->tgt->kref, sbp2_release_target);
+-              return;
++              sbp2_queue_work(lu, DIV_ROUND_UP(HZ, 5));
++              goto out;
+       }
+ 
+       lu->generation        = generation;
+@@ -930,8 +945,8 @@ static void sbp2_reconnect(struct work_struct *work)
+ 
+       sbp2_agent_reset(lu);
+       sbp2_cancel_orbs(lu);
+-
+-      kref_put(&lu->tgt->kref, sbp2_release_target);
++ out:
++      sbp2_target_put(lu->tgt);
+ }
+ 
+ static void sbp2_update(struct fw_unit *unit)
+@@ -947,8 +962,7 @@ static void sbp2_update(struct fw_unit *unit)
+        */
+       list_for_each_entry(lu, &tgt->lu_list, link) {
+               lu->retries = 0;
+-              if (queue_delayed_work(sbp2_wq, &lu->work, 0))
+-                      kref_get(&tgt->kref);
++              sbp2_queue_work(lu, 0);
+       }
+ }
+ 
+@@ -1103,9 +1117,9 @@ sbp2_map_scatterlist(struct sbp2_command_orb *orb, struct fw_device *device,
+        * elements larger than 65535 bytes, some IOMMUs may merge sg elements
+        * during DMA mapping, and Linux currently doesn't prevent this.
+        */
+-      for (i = 0, j = 0; i < count; i++) {
+-              sg_len = sg_dma_len(sg + i);
+-              sg_addr = sg_dma_address(sg + i);
++      for (i = 0, j = 0; i < count; i++, sg = sg_next(sg)) {
++              sg_len = sg_dma_len(sg);
++              sg_addr = sg_dma_address(sg);
+               while (sg_len) {
+                       /* FIXME: This won't get us out of the pinch. */
+                       if (unlikely(j >= ARRAY_SIZE(orb->page_table))) {
+@@ -1325,6 +1339,7 @@ static struct scsi_host_template scsi_driver_template = {
+       .this_id                = -1,
+       .sg_tablesize           = SG_ALL,
+       .use_clustering         = ENABLE_CLUSTERING,
++      .use_sg_chaining        = ENABLE_SG_CHAINING,
+       .cmd_per_lun            = 1,
+       .can_queue              = 1,
+       .sdev_attrs             = sbp2_scsi_sysfs_attrs,
+diff --git a/drivers/firewire/fw-transaction.c b/drivers/firewire/fw-transaction.c
+index c00d4a9..8018c3b 100644
+--- a/drivers/firewire/fw-transaction.c
++++ b/drivers/firewire/fw-transaction.c
+@@ -650,7 +650,7 @@ fw_core_handle_request(struct fw_card *card, struct fw_packet *p)
+                HEADER_GET_OFFSET_HIGH(p->header[1]) << 32) | p->header[2];
+       tcode       = HEADER_GET_TCODE(p->header[0]);
+       destination = HEADER_GET_DESTINATION(p->header[0]);
+-      source      = HEADER_GET_SOURCE(p->header[0]);
++      source      = HEADER_GET_SOURCE(p->header[1]);
+ 
+       spin_lock_irqsave(&address_handler_lock, flags);
+       handler = lookup_enclosing_address_handler(&address_handler_list,
+diff --git a/drivers/ieee1394/dma.c b/drivers/ieee1394/dma.c
+index 7c4eb39..73685e7 100644
+--- a/drivers/ieee1394/dma.c
++++ b/drivers/ieee1394/dma.c
+@@ -231,37 +231,24 @@ void dma_region_sync_for_device(struct dma_region *dma, unsigned long offset,
+ 
+ #ifdef CONFIG_MMU
+ 
+-/* nopage() handler for mmap access */
+-
+-static struct page *dma_region_pagefault(struct vm_area_struct *area,
+-                                       unsigned long address, int *type)
++static int dma_region_pagefault(struct vm_area_struct *vma,
++                              struct vm_fault *vmf)
+ {
+-      unsigned long offset;
+-      unsigned long kernel_virt_addr;
+-      struct page *ret = NOPAGE_SIGBUS;
+-
+-      struct dma_region *dma = (struct dma_region *)area->vm_private_data;
++      struct dma_region *dma = (struct dma_region *)vma->vm_private_data;
+ 
+       if (!dma->kvirt)
+-              goto out;
+-
+-      if ((address < (unsigned long)area->vm_start) ||
+-          (address >
+-           (unsigned long)area->vm_start + (dma->n_pages << PAGE_SHIFT)))
+-              goto out;
+-
+-      if (type)
+-              *type = VM_FAULT_MINOR;
+-      offset = address - area->vm_start;
+-      kernel_virt_addr = (unsigned long)dma->kvirt + offset;
+-      ret = vmalloc_to_page((void *)kernel_virt_addr);
+-      get_page(ret);
+-      out:
+-      return ret;
++              return VM_FAULT_SIGBUS;
++
++      if (vmf->pgoff >= dma->n_pages)
++              return VM_FAULT_SIGBUS;
++
++      vmf->page = vmalloc_to_page(dma->kvirt + (vmf->pgoff << PAGE_SHIFT));
++      get_page(vmf->page);
++      return 0;
+ }
+ 
+ static struct vm_operations_struct dma_region_vm_ops = {
+-      .nopage = dma_region_pagefault,
++      .fault = dma_region_pagefault,
+ };
+ 
+ /**
+@@ -275,7 +262,7 @@ int dma_region_mmap(struct dma_region *dma, struct file *file,
+       if (!dma->kvirt)
+               return -EINVAL;
+ 
+-      /* must be page-aligned */
++      /* must be page-aligned (XXX: comment is wrong, we could allow pgoff) */
+       if (vma->vm_pgoff != 0)
+               return -EINVAL;
+ 
+diff --git a/drivers/ieee1394/ieee1394_transactions.c b/drivers/ieee1394/ieee1394_transactions.c
+index 6779893..10c3d9f 100644
+--- a/drivers/ieee1394/ieee1394_transactions.c
++++ b/drivers/ieee1394/ieee1394_transactions.c
+@@ -570,71 +570,3 @@ int hpsb_write(struct hpsb_host *host, nodeid_t node, unsigned int generation,
+ 
+       return retval;
+ }
+-
+-#if 0
+-
+-int hpsb_lock(struct hpsb_host *host, nodeid_t node, unsigned int generation,
+-            u64 addr, int extcode, quadlet_t * data, quadlet_t arg)
+-{
+-      struct hpsb_packet *packet;
+-      int retval = 0;
+-
+-      BUG_ON(in_interrupt()); // We can't be called in an interrupt, yet
+-
+-      packet = hpsb_make_lockpacket(host, node, addr, extcode, data, arg);
+-      if (!packet)
+-              return -ENOMEM;
+-
+-      packet->generation = generation;
+-      retval = hpsb_send_packet_and_wait(packet);
+-      if (retval < 0)
+-              goto hpsb_lock_fail;
+-
+-      retval = hpsb_packet_success(packet);
+-
+-      if (retval == 0) {
+-              *data = packet->data[0];
+-      }
+-
+-      hpsb_lock_fail:
+-      hpsb_free_tlabel(packet);
+-      hpsb_free_packet(packet);
+-
+-      return retval;
+-}
+-
+-int hpsb_send_gasp(struct hpsb_host *host, int channel, unsigned int generation,
+-                 quadlet_t * buffer, size_t length, u32 specifier_id,
+-                 unsigned int version)
+-{
+-      struct hpsb_packet *packet;
+-      int retval = 0;
+-      u16 specifier_id_hi = (specifier_id & 0x00ffff00) >> 8;
+-      u8 specifier_id_lo = specifier_id & 0xff;
+-
+-      HPSB_VERBOSE("Send GASP: channel = %d, length = %Zd", channel, length);
+-
+-      length += 8;
+-
+-      packet = hpsb_make_streampacket(host, NULL, length, channel, 3, 0);
+-      if (!packet)
+-              return -ENOMEM;
+-
+-      packet->data[0] = cpu_to_be32((host->node_id << 16) | specifier_id_hi);
+-      packet->data[1] =
+-          cpu_to_be32((specifier_id_lo << 24) | (version & 0x00ffffff));
+-
+-      memcpy(&(packet->data[2]), buffer, length - 8);
+-
+-      packet->generation = generation;
+-
+-      packet->no_waiter = 1;
+-
+-      retval = hpsb_send_packet(packet);
+-      if (retval < 0)
+-              hpsb_free_packet(packet);
+-
+-      return retval;
+-}
+-
+-#endif                                /*  0  */
+diff --git a/drivers/ieee1394/ohci1394.c b/drivers/ieee1394/ohci1394.c
+index 372c5c1..969de2a 100644
+--- a/drivers/ieee1394/ohci1394.c
++++ b/drivers/ieee1394/ohci1394.c
+@@ -2126,10 +2126,14 @@ static void ohci_schedule_iso_tasklets(struct ti_ohci *ohci,
+       list_for_each_entry(t, &ohci->iso_tasklet_list, link) {
+               mask = 1 << t->context;
+ 
+-              if (t->type == OHCI_ISO_TRANSMIT && tx_event & mask)
+-                      tasklet_schedule(&t->tasklet);
+-              else if (rx_event & mask)
+-                      tasklet_schedule(&t->tasklet);
++              if (t->type == OHCI_ISO_TRANSMIT) {
++                      if (tx_event & mask)
++                              tasklet_schedule(&t->tasklet);
++              } else {
++                      /* OHCI_ISO_RECEIVE or OHCI_ISO_MULTICHANNEL_RECEIVE */
++                      if (rx_event & mask)
++                              tasklet_schedule(&t->tasklet);
++              }
+       }
+ 
+       spin_unlock_irqrestore(&ohci->iso_tasklet_list_lock, flags);
+diff --git a/drivers/ieee1394/raw1394.c b/drivers/ieee1394/raw1394.c
+index cadf047..37e7e10 100644
+--- a/drivers/ieee1394/raw1394.c
++++ b/drivers/ieee1394/raw1394.c
+@@ -858,7 +858,7 @@ static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
+       int found = 0, size = 0, rcode = -1;
+       struct arm_request_response *arm_req_resp = NULL;
+ 
+-      DBGMSG("arm_read  called by node: %X"
++      DBGMSG("arm_read  called by node: %X "
+              "addr: %4.4x %8.8x length: %Zu", nodeid,
+              (u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF),
+              length);
+@@ -1012,7 +1012,7 @@ static int arm_write(struct hpsb_host *host, int nodeid, int destid,
+       int found = 0, size = 0, rcode = -1, length_conflict = 0;
+       struct arm_request_response *arm_req_resp = NULL;
+ 
+-      DBGMSG("arm_write called by node: %X"
++      DBGMSG("arm_write called by node: %X "
+              "addr: %4.4x %8.8x length: %Zu", nodeid,
+              (u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF),
+              length);
+diff --git a/drivers/ieee1394/sbp2.c b/drivers/ieee1394/sbp2.c
+index b83d254..d2747f0 100644
+--- a/drivers/ieee1394/sbp2.c
++++ b/drivers/ieee1394/sbp2.c
+@@ -51,6 +51,7 @@
+  * Grep for inline FIXME comments below.
+  */
+ 
++#include <linux/blkdev.h>
+ #include <linux/compiler.h>
+ #include <linux/delay.h>
+ #include <linux/device.h>
+@@ -127,17 +128,21 @@ MODULE_PARM_DESC(serialize_io, "Serialize requests coming from SCSI drivers "
+                "(default = Y, faster but buggy = N)");
+ 
+ /*
+- * Bump up max_sectors if you'd like to support very large sized
+- * transfers. Please note that some older sbp2 bridge chips are broken for
+- * transfers greater or equal to 128KB.  Default is a value of 255
+- * sectors, or just under 128KB (at 512 byte sector size). I can note that
+- * the Oxsemi sbp2 chipsets have no problems supporting very large
+- * transfer sizes.
++ * Adjust max_sectors if you'd like to influence how many sectors each SCSI
++ * command can transfer at most. Please note that some older SBP-2 bridge
++ * chips are broken for transfers greater or equal to 128KB, therefore
++ * max_sectors used to be a safe 255 sectors for many years. We now have a
++ * default of 0 here which means that we let the SCSI stack choose a limit.
++ *
++ * The SBP2_WORKAROUND_128K_MAX_TRANS flag, if set either in the workarounds
++ * module parameter or in the sbp2_workarounds_table[], will override the
++ * value of max_sectors. We should use sbp2_workarounds_table[] to cover any
++ * bridge chip which becomes known to need the 255 sectors limit.
+  */
+-static int sbp2_max_sectors = SBP2_MAX_SECTORS;
++static int sbp2_max_sectors;
+ module_param_named(max_sectors, sbp2_max_sectors, int, 0444);
+ MODULE_PARM_DESC(max_sectors, "Change max sectors per I/O supported "
+-               "(default = " __stringify(SBP2_MAX_SECTORS) ")");
++               "(default = 0 = use SCSI stack's default)");
+ 
+ /*
+  * Exclusive login to sbp2 device? In most cases, the sbp2 driver should
+@@ -326,6 +331,7 @@ static struct scsi_host_template sbp2_shost_template = {
+       .this_id                 = -1,
+       .sg_tablesize            = SG_ALL,
+       .use_clustering          = ENABLE_CLUSTERING,
++      .use_sg_chaining         = ENABLE_SG_CHAINING,
+       .cmd_per_lun             = SBP2_MAX_CMDS,
+       .can_queue               = SBP2_MAX_CMDS,
+       .sdev_attrs              = sbp2_sysfs_sdev_attrs,
+@@ -1451,7 +1457,7 @@ static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
+                                    struct sbp2_fwhost_info *hi,
+                                    struct sbp2_command_info *cmd,
+                                    unsigned int scsi_use_sg,
+-                                   struct scatterlist *sgpnt,
++                                   struct scatterlist *sg,
+                                    u32 orb_direction,
+                                    enum dma_data_direction dma_dir)
+ {
+@@ -1461,12 +1467,12 @@ static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
+ 
+       /* special case if only one element (and less than 64KB in size) */
+       if ((scsi_use_sg == 1) &&
+-          (sgpnt[0].length <= SBP2_MAX_SG_ELEMENT_LENGTH)) {
++          (sg_dma_len(sg) <= SBP2_MAX_SG_ELEMENT_LENGTH)) {
+ 
+-              cmd->dma_size = sgpnt[0].length;
++              cmd->dma_size = sg_dma_len(sg);
+               cmd->dma_type = CMD_DMA_PAGE;
+               cmd->cmd_dma = dma_map_page(hi->host->device.parent,
+-                                          sg_page(&sgpnt[0]), sgpnt[0].offset,
++                                          sg_page(sg), sg->offset,
+                                           cmd->dma_size, cmd->dma_dir);
+ 
+               orb->data_descriptor_lo = cmd->cmd_dma;
+@@ -1477,11 +1483,11 @@ static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
+                                               &cmd->scatter_gather_element[0];
+               u32 sg_count, sg_len;
+               dma_addr_t sg_addr;
+-              int i, count = dma_map_sg(hi->host->device.parent, sgpnt,
++              int i, count = dma_map_sg(hi->host->device.parent, sg,
+                                         scsi_use_sg, dma_dir);
+ 
+               cmd->dma_size = scsi_use_sg;
+-              cmd->sge_buffer = sgpnt;
++              cmd->sge_buffer = sg;
+ 
+               /* use page tables (s/g) */
+               orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1);
+@@ -1489,9 +1495,9 @@ static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
+ 
+               /* loop through and fill out our SBP-2 page tables
+                * (and split up anything too large) */
+-              for (i = 0, sg_count = 0 ; i < count; i++, sgpnt++) {
+-                      sg_len = sg_dma_len(sgpnt);
+-                      sg_addr = sg_dma_address(sgpnt);
++              for (i = 0, sg_count = 0; i < count; i++, sg = sg_next(sg)) {
++                      sg_len = sg_dma_len(sg);
++                      sg_addr = sg_dma_address(sg);
+                       while (sg_len) {
+                               sg_element[sg_count].segment_base_lo = sg_addr;
+                               if (sg_len > SBP2_MAX_SG_ELEMENT_LENGTH) {
+@@ -1521,11 +1527,10 @@ static void sbp2_create_command_orb(struct sbp2_lu *lu,
+                                   unchar *scsi_cmd,
+                                   unsigned int scsi_use_sg,
+                                   unsigned int scsi_request_bufflen,
+-                                  void *scsi_request_buffer,
++                                  struct scatterlist *sg,
+                                   enum dma_data_direction dma_dir)
+ {
+       struct sbp2_fwhost_info *hi = lu->hi;
+-      struct scatterlist *sgpnt = (struct scatterlist *)scsi_request_buffer;
+       struct sbp2_command_orb *orb = &cmd->command_orb;
+       u32 orb_direction;
+ 
+@@ -1560,7 +1565,7 @@ static void sbp2_create_command_orb(struct sbp2_lu *lu,
+               orb->data_descriptor_lo = 0x0;
+               orb->misc |= ORB_SET_DIRECTION(1);
+       } else
+-              sbp2_prep_command_orb_sg(orb, hi, cmd, scsi_use_sg, sgpnt,
++              sbp2_prep_command_orb_sg(orb, hi, cmd, scsi_use_sg, sg,
+                                        orb_direction, dma_dir);
+ 
+       sbp2util_cpu_to_be32_buffer(orb, sizeof(*orb));
+@@ -1650,7 +1655,6 @@ static int sbp2_send_command(struct sbp2_lu *lu, struct scsi_cmnd *SCpnt,
+                            void (*done)(struct scsi_cmnd *))
+ {
+       unchar *scsi_cmd = (unchar *)SCpnt->cmnd;
+-      unsigned int request_bufflen = scsi_bufflen(SCpnt);
+       struct sbp2_command_info *cmd;
+ 
+       cmd = sbp2util_allocate_command_orb(lu, SCpnt, done);
+@@ -1658,7 +1662,7 @@ static int sbp2_send_command(struct sbp2_lu *lu, struct scsi_cmnd *SCpnt,
+               return -EIO;
+ 
+       sbp2_create_command_orb(lu, cmd, scsi_cmd, scsi_sg_count(SCpnt),
+-                              request_bufflen, scsi_sglist(SCpnt),
++                              scsi_bufflen(SCpnt), scsi_sglist(SCpnt),
+                               SCpnt->sc_data_direction);
+       sbp2_link_orb_command(lu, cmd);
+ 
+@@ -1981,6 +1985,8 @@ static int sbp2scsi_slave_configure(struct scsi_device *sdev)
+               sdev->skip_ms_page_8 = 1;
+       if (lu->workarounds & SBP2_WORKAROUND_FIX_CAPACITY)
+               sdev->fix_capacity = 1;
++      if (lu->workarounds & SBP2_WORKAROUND_128K_MAX_TRANS)
++              blk_queue_max_sectors(sdev->request_queue, 128 * 1024 / 512);
+       return 0;
+ }
+ 
+@@ -2087,9 +2093,6 @@ static int sbp2_module_init(void)
+               sbp2_shost_template.cmd_per_lun = 1;
+       }
+ 
+-      if (sbp2_default_workarounds & SBP2_WORKAROUND_128K_MAX_TRANS &&
+-          (sbp2_max_sectors * 512) > (128 * 1024))
+-              sbp2_max_sectors = 128 * 1024 / 512;
+       sbp2_shost_template.max_sectors = sbp2_max_sectors;
+ 
+       hpsb_register_highlevel(&sbp2_highlevel);
+diff --git a/drivers/ieee1394/sbp2.h b/drivers/ieee1394/sbp2.h
+index 333a4bb..d2ecb0d 100644
+--- a/drivers/ieee1394/sbp2.h
++++ b/drivers/ieee1394/sbp2.h
+@@ -222,7 +222,6 @@ struct sbp2_status_block {
+  */
+ 
+ #define SBP2_MAX_SG_ELEMENT_LENGTH            0xf000
+-#define SBP2_MAX_SECTORS                      255
+ /* There is no real limitation of the queue depth (i.e. length of the linked
+  * list of command ORBs) at the target. The chosen depth is merely an
+  * implementation detail of the sbp2 driver. */

Modified: dists/trunk/linux-2.6/debian/patches/series/1~experimental.1
==============================================================================
--- dists/trunk/linux-2.6/debian/patches/series/1~experimental.1        (original)
+++ dists/trunk/linux-2.6/debian/patches/series/1~experimental.1        Mon Dec 24 00:39:41 2007
@@ -36,3 +36,4 @@
 + bugfix/arm/disable-chelsio_t3.patch
 + bugfix/arm/disable-video_bt848.patch
 + bugfix/arm/disable-scsi_acard.patch
++ bugfix/all/git-ieee1394.patch

_______________________________________________
Kernel-svn-changes mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/kernel-svn-changes