On Thu, Oct 29, 2015 at 4:34 AM, Doug Ledford wrote:
> Yes, I've pulled this in for 4.4. Thanks!
Doug, we want to run regression tests over the 4.4 bits. When do you expect
them to show up in your kernel.org tree? Some of them are there, but at
least 3-4 series which you said were "applied" aren't, AFAIR.
On 10/28/2015 10:25 PM, Doug Ledford wrote:
> On 09/13/2015 11:13 AM, Christoph Hellwig wrote:
>> This series shrinks the WR size by splitting out the different WR
>> types.
>>
>> Patch number one is too large for the mailing list, so if you didn't
>> get it grab it here:
>>
>>
>> http://git.i
On 10/20/2015 07:33 AM, Sagi Grimberg wrote:
> Doug, are you planning on taking this for 4.4?
>
> I think this set has converged towards inclusion.
>
> Reminder, this series goes on top of Christoph's
> wr_cleanup patches and iser bounce buffering cleanup.
Yes, I've pulled this in for 4.4. Thanks!
On 09/13/2015 11:13 AM, Christoph Hellwig wrote:
> This series shrinks the WR size by splitting out the different WR
> types.
>
> Patch number one is too large for the mailing list, so if you didn't
> get it grab it here:
>
>
> http://git.infradead.org/users/hch/rdma.git/commitdiff_plain/c7
On Wed, Oct 28, 2015 at 05:30:17PM -0400, Chuck Lever wrote:
> IBTA spec states:
>
> > MW access operations (i.e. RDMA Write, RDMA Reads, and Atomics)
> > are only allowed if the Type 2B MW is in the Valid state and the
> > QP Number (QPN) and PD of the QP performing the MW access operati
> On Oct 28, 2015, at 4:10 PM, Jason Gunthorpe
> wrote:
>
> On Wed, Oct 28, 2015 at 03:56:08PM -0400, Chuck Lever wrote:
>
>> A key question is whether connection loss guarantees that the
>> server is fenced, for all device types, from existing
>> registered MRs. After reconnect, each MR must
On Wed, Oct 28, 2015 at 03:56:08PM -0400, Chuck Lever wrote:
> A key question is whether connection loss guarantees that the
> server is fenced, for all device types, from existing
> registered MRs. After reconnect, each MR must be registered
> again before it can be accessed remotely. Is this true?
RPC/RDMA is moving towards a model where R_keys are invalidated
as part of reply handling (either the client does it in the
reply handler, or the server does it via Send With Invalidate).
This fences the RPC's memory from the server before the RPC
consumer is awoken and can access it.
There are so
On Wed, Oct 28, 2015 at 06:44:05PM +0000, Eli Cohen wrote:
> > Did you read the above thread?
> >
> > Don't set uverbs_ex_cmd_mask in drivers.
>
> But that's not how it is currently. IB core does not set any of the
> extended verbs flags for all the devices.
We agreed it would be fixed as a follow-up.
> Did you read the above thread?
>
> Don't set uverbs_ex_cmd_mask in drivers.
But that's not how it is currently. IB core does not set any of the extended
verbs flags for all the devices.
On Wed, Oct 28, 2015 at 06:18:27PM +0000, Eli Cohen wrote:
> > Same comment as last time.
> >
> > http://thread.gmane.org/gmane.linux.drivers.rdma/26110/focus=26163
> >
> > This still hasn't been fixed.
>
> Jason,
>
> The relevant IB core patches have been accepted. Can you elaborate more on
> what your comments against this are?
> Same comment as last time.
>
> http://thread.gmane.org/gmane.linux.drivers.rdma/26110/focus=26163
>
> This still hasn't been fixed.
Jason,
The relevant IB core patches have been accepted. Can you elaborate more on
what your comments against this are?
https://github.com/dledford/linux/commit/6d8
> -----Original Message-----
> From: Weiny, Ira
> Sent: Wednesday, October 28, 2015 11:40 AM
>
> What about using the gfp_mask through this stack?
Can be done.
>
> I think you need to split ib_nl_send_msg into "create message" and "send
> message". Then don't add the message to the list unle
On Wed, Oct 28, 2015 at 09:44:27AM -0400, kaike@intel.com wrote:
> ret = ib_nl_send_msg(query);
> + spin_lock_irqsave(&ib_nl_request_lock, flags);
Looks like query could be kfree'd before ib_nl_send_msg returns, e.g. by
send_handler?
> if (ret <= 0) {
> ret = -EI
On 10/22/2015 08:20 AM, Haggai Eran wrote:
> Hi Doug,
>
> I've rebased the network namespaces patches over your 4.4 tree.
Thanks, it went in cleanly this time. Applied.
> Regards,
> Haggai
>
> Changes from v6:
> - rebased over k.o/for-4.4
> - use init_net when no netdev is found (RoCE and AF_I
On Wed, Oct 28, 2015 at 02:58:35AM +0200, Eli Cohen wrote:
> Signed-off-by: Eli Cohen
> drivers/infiniband/hw/mlx5/main.c |3 ++-
> 1 files changed, 2 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/infiniband/hw/mlx5/main.c
> b/drivers/infiniband/hw/mlx5/main.c
> index f1ccd40..634d
On 10/28/2015 11:48 AM, Matan Barak wrote:
> From: Alaa Hleihel
>
> make distcheck failed because it searched for headers
> in the src directory.
> Added noinst_HEADERS to fix that.
>
> Change-Id: Ibc0949286a97ac8775156df6465e31fe301d27db
> Signed-off-by: Alaa Hleihel
> ---
> Hi Doug,
>
> This
From: Alaa Hleihel
make distcheck failed because it searched for headers
in the src directory.
Added noinst_HEADERS to fix that.
Change-Id: Ibc0949286a97ac8775156df6465e31fe301d27db
Signed-off-by: Alaa Hleihel
---
Hi Doug,
This fixes a "make distcheck" issue. make distcheck
was broken because
> > Can I add the removal of these macros to the TODO list and get this patch
> > accepted in the interim?
>
> Nope, sorry, why would I accept a known-problem patch? Would you do
> such a thing?
>
> > Many of the patches I am queueing up to submit, as well as one in this
> > series, do not app
On 10/28/2015 03:32 AM, Sagi Grimberg wrote:
Submitting a SCSI request through the SG_IO mechanism with a scatterlist
that is longer than what is supported by the SRP initiator triggers an
infinite loop. This patch series fixes that behavior.
The individual patches in this series are as follows:
On Wed, Oct 28, 2015 at 09:44:27AM -0400, kaike@intel.com wrote:
> From: Kaike Wan
>
> It was found by Saurabh Sengar that the netlink code tried to allocate
> memory with GFP_KERNEL while holding a spinlock. While it is possible
> to fix the issue by replacing GFP_KERNEL with GFP_ATOMIC, it
Refactor ib_dispatch_event into a new function in order to avoid
duplicating code in the next patch.
Signed-off-by: Matan Barak
Reviewed-by: Haggai Eran
---
drivers/infiniband/core/cache.c | 23 +++
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/drivers/infin
Hi Doug,
During the RoCE GID cache changes, we used a per-entry rwlock. This could cause
major overhead when traversing the GID table: we could spend thousands of
cycles just locking and unlocking entries. This change was requested by Doug.
In order to solve that, we moved to one table lock.
Previously, we searched the GID table twice: once to find the GID itself,
and again to find a free GID entry. This patch finds the GID and the
empty index in one traversal.
Signed-off-by: Matan Barak
Reviewed-by: Haggai Eran
---
drivers/infiniband/core/cache.
Previously, the IB GID cache used a lock per entry. This could result
in spending a lot of CPU cycles on locking and unlocking just
to find a GID. Change this in favor of one lock per
GID table.
Signed-off-by: Matan Barak
Reviewed-by: Haggai Eran
---
drivers/infiniband/core/cache.c |
From: Kaike Wan
It was found by Saurabh Sengar that the netlink code tried to allocate
memory with GFP_KERNEL while holding a spinlock. While it is possible
to fix the issue by replacing GFP_KERNEL with GFP_ATOMIC, it is better
to get rid of the spinlock while sending the packet. However, in orde
Remove the unneeded variable ret and directly return 0.
Signed-off-by: Muhammad Falak R Wani
---
drivers/staging/rdma/ipath/ipath_file_ops.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/staging/rdma/ipath/ipath_file_ops.c
b/drivers/staging/rdma/ipath/ipath_file_ops.c
i
This addresses a specific mlx4 issue where max_sge_rd
is actually smaller than max_sge (RDMA reads with max_sge
entries complete with error).
The second patch removes the explicit work-around from the
iser target code.
Changes from v1:
- Fixed driver rdma segment size to be 16 bytes
Changes
The driver now exposes sufficient limits, so we can
avoid having an mlx4-specific work-around.
Signed-off-by: Sagi Grimberg
Reviewed-by: Steve Wise
---
drivers/infiniband/ulp/isert/ib_isert.c | 13 +++--
1 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/ul
mlx4 devices (ConnectX-2, ConnectX-3) have a limitation
where RDMA read work queue entries cannot exceed 512 bytes.
An rdma_read WQE needs to fit in 512 bytes:
- wqe control segment (16 bytes)
- rdma segment (16 bytes)
- scatter elements (16 bytes each)
So max_sge_rd should be: (512 - 16 - 16) / 16 = 30
Submitting a SCSI request through the SG_IO mechanism with a scatterlist
that is longer than what is supported by the SRP initiator triggers an
infinite loop. This patch series fixes that behavior.
The individual patches in this series are as follows:
0001-IB-srp-Fix-a-spelling-error.patch
0002-
Hi Arnd,
> Since we want to make counting semaphores go away,
Why do we want to make counting semaphores go away? Completely,
or just for binary use cases?
I have a use case in the iser target code where a counting semaphore is the
best-suited synchronization mechanism.
I have a single thread handli
Hi All,
Based on the review comments, feedback, and discussion from/with Tejun,
Haggai, Doug, Jason, Liran, Sean, and the ORNL team, I have updated the design
as below.
This is a fairly robust and simple design that addresses most of the points
raised and covers current RDMA use cases.
Feel free to skip design guide
Hi,
I finally got a chance to make progress on redesigning the rdma cgroup
controller for the main use cases that we discussed in this email
chain.
I am posting an RFC, with code to follow soon, in a new email.
Parav
On Sun, Sep 20, 2015 at 4:05 PM, Haggai Eran wrote:
> On 15/09/2015 06:45, Jason Gunthorpe wrote:
>>
On Tue, Oct 27, 2015 at 9:04 PM, Leon Romanovsky wrote:
> On Tue, Oct 27, 2015 at 02:53:01PM +0200, Eran Ben Elisha wrote:
> ...
>> +enum ibv_qp_create_flags {
>> + IBV_QP_CREATE_BLOCK_SELF_MCAST_LB = 1 << 1,
>> };
>>
> I'm sure that I'm missing something important, but why did it start