Return fields according to the port cache rather than calling the kernel.
Signed-off-by: Matan Barak
Signed-off-by: Or Gerlitz
---
 src/mlx4.c  |  2 ++
 src/mlx4.h  |  7 +++++++
 src/verbs.c | 40 ++++++++++++++++++++++++++++++++++++--
3 files changed, 47 insertions(+), 2 deletions(-)
In order to implement IP based addressing for UD QPs, we need a way to
resolve the addresses internally.
When the provider detects an Ethernet link layer with IP based gids,
it calls a helper function in libibverbs in order to resolve the
Ethernet L2 params.
Signed-off-by: Matan Barak
Signed-off
Hi Yishai,
This is just a rebased port of the series which adds support for Ethernet L2
address resolution for UD QPs, whose L2 address handles, unlike those of
RC/UC/XRC QPs, are set from user space without going through uverbs and the
kernel IB core.
Matan and Or.
changes from V3:
- Adapt the code
Hi Roland,
This series adds support for Ethernet L2 address resolution for RoCE UD QPs,
whose L2 address handles, unlike those of RC/UC/XRC QPs, are set from user
space without going through uverbs and the kernel IB core. The code is also
compatible with old kernels that don't run IP based addressing
Add an enum describing ibv_port_cap_flags that matches the respective
kernel enum.
These values can be fetched from the port_cap_flags field returned by
ibv_query_port.
Signed-off-by: Matan Barak
Signed-off-by: Or Gerlitz
---
include/infiniband/verbs.h | 26 ++
1 files
In order to implement RoCE IP based addressing for UD QPs, without introducing
uverbs changes, we need a way to resolve the L2 Ethernet addresses from
user-space.
This is done with netlink through libnl, and it is implemented in libibverbs
so that multiple vendor provider libraries can share the code.
This is imp
On Tue, Aug 19, 2014 at 1:32 PM, Doug Ledford wrote:
> Whether or not the core network code is OK with us dropping the rtnl lock
> while manipulating the interface is the issue here. However, I did consider
> changing the mcast_mutex to a per interface lock instead. There are various
> optimizations
Whether or not the core network code is OK with us dropping the rtnl lock while
manipulating the interface is the issue here. However, I did consider changing
the mcast_mutex to a per interface lock instead. There are various optimizations
that can be made now that the locking is correct and rac
Excuse my top posting, I'm at a meeting and not using my normal mail client.
To answer your question, this is a confusing issue that I had to work through
myself.
When we call ib_sa_multicast_join, it will create a record for us and return
that. That record is not valid for us to call ib_sa_mu
On Tue, Aug 12, 2014 at 4:38 PM, Doug Ledford wrote:
> Commit a9c8ba5884 (IPoIB: Fix usage of uninitialized multicast objects)
> added a new flag MCAST_JOIN_STARTED, but was not very strict in how it
> was used. We didn't always initialize the completion struct before we
> set the flag, and we di
> From what I see srp_sq_size is controlled via
> configfs. Can you set it to 2048 just for the sake of
> confirmation this is indeed the issue?
Yes! This setting allowed the two machines to establish an SRP session.
I'll try some I/O tests to see how well it works.
Thanks,
Mark
On Tue, Aug 1
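For reference, the change that resolved the session setup can be applied through configfs; the exact attribute path below is an assumption based on the ib_srpt configfs layout, and the $PORT_GUID directory name is system specific:

```shell
# Hypothetical ib_srpt configfs path; replace $PORT_GUID with the port
# GUID directory present on your target and verify the attribute exists.
echo 2048 > /sys/kernel/config/target/srpt/$PORT_GUID/tpgt_1/attrib/srp_sq_size
```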
On Aug 18, 2014, at 8:18 AM, Bart Van Assche wrote:
> Hello,
>
> Has anyone else already tried to boot kernel 3.17-rc1 on an IB system ?
After updating to 3.17-rc1 this morning, I hit the same issue.
> The
> following call trace is triggered during boot on a system on which kernel
> 3.16 runs
On 8/19/2014 2:20 AM, Mark Lehrer wrote:
I have a client machine that is trying to establish an SRP connection,
and it is failing due to an ENOMEM memory allocation error. I traced
it down to the max_qp_sz field -- the mlx5 driver limits this to 16384 but
the request wants 32768.
I spent some time try