On Tue, 2011-01-18 at 09:16 +0200, Or Gerlitz wrote:
David Dillow wrote:
We're talking about different things --
max_segments(sg_tablesize)/max_sectors are
the limits as we're adding pages to the bio via bio_add_page().
blk_rq_map_sg() uses
max_segment_size as a bound on the largest
On Tue, Jan 18, 2011 at 07:25:24AM -0500, David Dillow wrote:
Also, when working with direct I/O from user space and/or under a file system,
did you really see many BIOs that could be merged? I was under the impression
that (specifically after the system has been active for some time) in most cases,
David Dillow wrote:
On Mon, 2011-01-10 at 11:58 -0800, Vu Pham wrote:
David Dillow wrote:
either. The SRP FMR mapping code is careful to mask the SG address with
the FMR page mask, so we should never ask the HCA to map a page with the
first_byte_offset != 0. Instead, we tell the target
On Tue, 2011-01-18 at 11:53 -0800, Vu Pham wrote:
Our hw/fw guys confirm that there is no problem; my suspicion was wrong.
To explain clearly how the hardware translates a remote RDMA address into a
physical address via the FMR's MTT:

  X = requested/rdma_va - MPT.start + MPT.fbo
  MTT index = X / MPT.blocksize
On Mon, 17 Jan 2011, Moni Shoua wrote:
Unlike with send/receive multicast groups, there is no indication for IPoIB
that a send-only multicast group is useless. Therefore, even a single packet
to a multicast destination leaves a multicast entry on the fabric until the
host interface is down.
On 01/12/2011 12:32 PM, Roland Dreier wrote:
Did this patch ever make it upstream? I don't see it.
Nope, guess it got dropped. It doesn't apply after the IBoE changes --
can you regenerate it?
Hey,
Did this:
http://www.mail-archive.com/linux-rdma@vger.kernel.org/msg03182.html
ever make it upstream?
No, but this patch set is still listed in patchworks as new. I just updated
the set to 2.6.37 and pushed those changes into an af_ib branch in my git tree.
- Sean
On 01/18/2011 05:14 PM, Hefty, Sean wrote:
Did this:
http://www.mail-archive.com/linux-rdma@vger.kernel.org/msg03182.html
ever make it upstream?
No, but this patch set is still listed in patchworks as new. I just updated
the set to 2.6.37 and pushed those changes into an af_ib branch in my
This allows us to guarantee the ability to submit up to 8 MB requests
based on the current value of SCSI_MAX_SG_CHAIN_SEGMENTS. While FMR will
usually condense the requests into 8 SG entries, it is imperative that
the target support external tables in case the FMR mapping fails or is
not
A persistent thorn in our side has been getting large (1 MB+) requests
from SRP on a system that has been up for any period of time. As we're
using RAID6 8+2 LUNs, we need to generate a full 1 MB IO to avoid an R/M/W
cycle on some hardware, and other hardware just likes the larger requests,
even
Add .dma_boundary to force each page into its own S/G entry to give us
worst case fragmentation.
Include scatterlist.h to pick up ARCH_HAS_SG_CHAIN for scsi.h -- a patch
to fix this properly is floating in the ether.
Fix direct IO when doing more than 1 MB IOs -- the proper patch to fix this
is floating in the ether as
This is to clean up prior to further changes.
---
drivers/infiniband/ulp/srp/ib_srp.c | 144 ++-
1 files changed, 73 insertions(+), 71 deletions(-)
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c
b/drivers/infiniband/ulp/srp/ib_srp.c
index 197e26c..060e6a8
Different configurations of target software allow differing maximum sizes of
the command IU. Making this configurable per target lets all targets on an
initiator get an optimal setting.
We deprecate srp_sg_tablesize and replace it with cmd_sg_entries in
preparation for allowing more indirect
Instead of forcing all of the S/G entries to fit in one FMR, and falling
back to indirect descriptors if that fails, allow the use of as many
FMRs as needed to map the request. This lays the groundwork for allowing
indirect descriptor tables that are larger than can fit in the command
IU, but
It is unclear what bug this fixed, so it's not clear how safe it is to
remove, or whether it was actually working around an HCA issue with FMR.
Always enable the workaround for now.
TODO: Better description
---
drivers/infiniband/ulp/srp/ib_srp.c | 17 +
1 files changed, 1
Most targets don't support indirect tables that do not fit in the
command, but FMR failures are exceedingly rare. Allow these targets to
reap the benefits of the large tables but fail in a manner that lets the
user know that the data didn't make it there.
This could/should be merged with the next
Now that we can get larger SG lists, we can take advantage of HCAs that
allow us to use larger FMR sizes. In many cases, we can use up to 512
entries, so start there and work our way down.
---
drivers/infiniband/ulp/srp/ib_srp.c | 31 +++
Hi Dave,
Now that at least one vendor is implementing full support for the SRP
indirect memory descriptor tables, we can safely expand sg_tablesize and
realize performance gains that are in many cases quite large. I don't
have vendor code that implements the full support needed for