The patch titled
iommu sg merging: IA64: make sba_iommu respect the segment size limits
has been removed from the -mm tree. Its filename was
iommu-sg-merging-ia64-make-sba_iommu-respect-the-segment-size-limits.patch
This patch was dropped because it was merged into mainline or a subsystem tree
The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/
------------------------------------------------------
Subject: iommu sg merging: IA64: make sba_iommu respect the segment size limits
From: FUJITA Tomonori <[EMAIL PROTECTED]>
This patch makes sba_iommu respect the segment size limit reported by
dma_get_max_seg_size() when merging sg lists.
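
For context, the limit honored here comes from the generic DMA layer: a
driver advertises its hardware's maximum segment size with
dma_set_max_seg_size(), and sba_coalesce_chunks() reads it back via
dma_get_max_seg_size().  A minimal sketch of the driver side, assuming a
hypothetical PCI driver (the probe function and constant names below are
made up for illustration and are not part of this patch):

/* Hypothetical driver code, for illustration only -- not part of this patch. */
#include <linux/dma-mapping.h>
#include <linux/pci.h>

#define MY_MAX_SEG_SIZE 65536   /* device DMAs at most 64KB per segment */

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        /*
         * Record the limit in dev->dma_parms.  dma_map_sg() paths such as
         * sba_map_sg() -> sba_coalesce_chunks() fetch it back with
         * dma_get_max_seg_size() and stop merging sg entries once a
         * coalesced segment would exceed it.
         */
        if (dma_set_max_seg_size(&pdev->dev, MY_MAX_SEG_SIZE))
                dev_warn(&pdev->dev, "failed to set DMA max segment size\n");

        return 0;
}

If a driver never sets a limit, dma_get_max_seg_size() falls back to a
64KB default, so the new check still has a sane bound to enforce.
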
Signed-off-by: FUJITA Tomonori <[EMAIL PROTECTED]>
Cc: Jeff Garzik <[EMAIL PROTECTED]>
Cc: James Bottomley <[EMAIL PROTECTED]>
Acked-by: Jens Axboe <[EMAIL PROTECTED]>
Cc: "Luck, Tony" <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
---
arch/ia64/hp/common/sba_iommu.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff -puN arch/ia64/hp/common/sba_iommu.c~iommu-sg-merging-ia64-make-sba_iommu-respect-the-segment-size-limits arch/ia64/hp/common/sba_iommu.c
--- a/arch/ia64/hp/common/sba_iommu.c~iommu-sg-merging-ia64-make-sba_iommu-respect-the-segment-size-limits
+++ a/arch/ia64/hp/common/sba_iommu.c
@@ -1265,7 +1265,7 @@ sba_fill_pdir(
  * the sglist do both.
  */
 static SBA_INLINE int
-sba_coalesce_chunks( struct ioc *ioc,
+sba_coalesce_chunks(struct ioc *ioc, struct device *dev,
         struct scatterlist *startsg,
         int nents)
 {
@@ -1275,6 +1275,7 @@ sba_coalesce_chunks( struct ioc *ioc,
         struct scatterlist *dma_sg;        /* next DMA stream head */
         unsigned long dma_offset, dma_len; /* start/len of DMA stream */
         int n_mappings = 0;
+        unsigned int max_seg_size = dma_get_max_seg_size(dev);
 
         while (nents > 0) {
                 unsigned long vaddr = (unsigned long) sba_sg_address(startsg);
@@ -1314,6 +1315,9 @@ sba_coalesce_chunks( struct ioc *ioc,
                     > DMA_CHUNK_SIZE)
                         break;
 
+                if (dma_len + startsg->length > max_seg_size)
+                        break;
+
                 /*
                 ** Then look for virtually contiguous blocks.
                 **
@@ -1441,7 +1445,7 @@ int sba_map_sg(struct device *dev, struc
         ** w/o this association, we wouldn't have coherent DMA!
         ** Access to the virtual address is what forces a two pass algorithm.
         */
-        coalesced = sba_coalesce_chunks(ioc, sglist, nents);
+        coalesced = sba_coalesce_chunks(ioc, dev, sglist, nents);
 
         /*
         ** Program the I/O Pdir
_
Patches currently in -mm which might be from [EMAIL PROTECTED] are
origin.patch