Re: [for-next 7/7] IB/mlx5: Implement fragmented completion queue (CQ)

2018-02-25 Thread Santosh Shilimkar

On 2/24/2018 1:40 AM, Majd Dibbiny wrote:



On Feb 23, 2018, at 9:13 PM, Saeed Mahameed  wrote:


On Thu, 2018-02-22 at 16:04 -0800, Santosh Shilimkar wrote:
Hi Saeed


On 2/21/2018 12:13 PM, Saeed Mahameed wrote:


[...]



Jason mentioned this patch to me off-list. We were
seeing a similar issue with SRQs & QPs, so I am wondering whether
you have any plans to make a similar change for other resources
too, so that they don't rely on higher-order page allocation
for icm tables.



Hi Santosh,

Adding Majd,

Which ULP is in question? How big are the QPs/SRQs you create that
lead to this problem?

For icm tables we already allocate only order 0 pages:
see alloc_system_page() in
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c

But for kernel RDMA SRQ and QP buffers there is room for
improvement.

Majd, do you know if we have any near-future plans for this?


It’s in our plans to move all the buffers to use order-0 pages.

Santosh,

Is this RDS? Do you have a persistent failure with some configuration? Can you
please share more information?


No, the issue was seen with user verbs, and actually with the MLX4
driver. My last question was more about both the MLX4 and MLX5 drivers'
icm allocation for all the resources.

With the MLX4 driver, we have seen corruption issues with MLX4_NO_RR
while recycling resources. So we ended up switching back to round-robin
bitmap allocation, as it was before it was changed by one of
Jack's commits, 7c6d74d23 {mlx4_core: Roll back round robin bitmap
allocation commit for CQs, SRQs, and MPTs}

With the default round robin, the corruption issue went away, but its
undesired side effect of bloating the icm tables until you hit the
resource limit means more memory fragmentation. Since these resources
make use of higher-order allocations, in fragmented-memory scenarios
we see contention on the mm lock for seconds, since the compaction
layer is trying to stitch pages together, which takes time.

If these allocations didn't make use of higher-order pages, the issue
could certainly be avoided, hence the reason behind the question.

Of course, we wouldn't have ended up with this issue if 'MLX4_NO_RR'
had worked without corruption :-)

Regards,
Santosh


Re: [for-next 7/7] IB/mlx5: Implement fragmented completion queue (CQ)

2018-02-24 Thread Majd Dibbiny

> On Feb 23, 2018, at 9:13 PM, Saeed Mahameed  wrote:
> 
>> On Thu, 2018-02-22 at 16:04 -0800, Santosh Shilimkar wrote:
>> Hi Saeed
>> 
>>> On 2/21/2018 12:13 PM, Saeed Mahameed wrote:
>>> From: Yonatan Cohen 
>>> 
>>> The current implementation of create CQ requires contiguous
>>> memory; such a requirement is problematic once memory is
>>> fragmented or the system is low on memory, and it causes
>>> failures in dma_zalloc_coherent().
>>> 
>>> This patch implements a new scheme of fragmented CQs to overcome
>>> this issue by introducing a new type, 'struct mlx5_frag_buf_ctrl',
>>> to allocate fragmented buffers rather than contiguous ones.
>>> 
>>> Base the Completion Queues (CQs) on this new fragmented buffer.
>>> 
>>> It fixes the following crashes:
>>> kworker/29:0: page allocation failure: order:6, mode:0x80d0
>>> CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
>>> Workqueue: ib_cm cm_work_handler [ib_cm]
>>> Call Trace:
>>> [<>] dump_stack+0x19/0x1b
>>> [<>] warn_alloc_failed+0x110/0x180
>>> [<>] __alloc_pages_slowpath+0x6b7/0x725
>>> [<>] __alloc_pages_nodemask+0x405/0x420
>>> [<>] dma_generic_alloc_coherent+0x8f/0x140
>>> [<>] x86_swiotlb_alloc_coherent+0x21/0x50
>>> [<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
>>> [<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
>>> [<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
>>> [<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
>>> [<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
>>> [<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]
>>> 
>>> Signed-off-by: Yonatan Cohen 
>>> Reviewed-by: Tariq Toukan 
>>> Signed-off-by: Leon Romanovsky 
>>> Signed-off-by: Saeed Mahameed 
>>> ---
>> 
>> Jason mentioned this patch to me off-list. We were
>> seeing a similar issue with SRQs & QPs, so I am wondering whether
>> you have any plans to make a similar change for other resources
>> too, so that they don't rely on higher-order page allocation
>> for icm tables.
>> 
> 
> Hi Santosh,
> 
> Adding Majd,
> 
> Which ULP is in question? How big are the QPs/SRQs you create that
> lead to this problem?
> 
> For icm tables we already allocate only order 0 pages:
> see alloc_system_page() in
> drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> 
> But for kernel RDMA SRQ and QP buffers there is room for
> improvement.
> 
> Majd, do you know if we have any near-future plans for this?

It’s in our plans to move all the buffers to use order-0 pages.

Santosh,

Is this RDS? Do you have a persistent failure with some configuration? Can you
please share more information?

Thanks
> 
>> Regards,
>> Santosh


Re: [for-next 7/7] IB/mlx5: Implement fragmented completion queue (CQ)

2018-02-23 Thread Saeed Mahameed
On Thu, 2018-02-22 at 16:04 -0800, Santosh Shilimkar wrote:
> Hi Saeed
> 
> On 2/21/2018 12:13 PM, Saeed Mahameed wrote:
> > From: Yonatan Cohen 
> > 
> > The current implementation of create CQ requires contiguous
> > memory; such a requirement is problematic once memory is
> > fragmented or the system is low on memory, and it causes
> > failures in dma_zalloc_coherent().
> > 
> > This patch implements a new scheme of fragmented CQs to overcome
> > this issue by introducing a new type, 'struct mlx5_frag_buf_ctrl',
> > to allocate fragmented buffers rather than contiguous ones.
> > 
> > Base the Completion Queues (CQs) on this new fragmented buffer.
> > 
> > It fixes the following crashes:
> > kworker/29:0: page allocation failure: order:6, mode:0x80d0
> > CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
> > Workqueue: ib_cm cm_work_handler [ib_cm]
> > Call Trace:
> > [<>] dump_stack+0x19/0x1b
> > [<>] warn_alloc_failed+0x110/0x180
> > [<>] __alloc_pages_slowpath+0x6b7/0x725
> > [<>] __alloc_pages_nodemask+0x405/0x420
> > [<>] dma_generic_alloc_coherent+0x8f/0x140
> > [<>] x86_swiotlb_alloc_coherent+0x21/0x50
> > [<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
> > [<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
> > [<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
> > [<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
> > [<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
> > [<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]
> > 
> > Signed-off-by: Yonatan Cohen 
> > Reviewed-by: Tariq Toukan 
> > Signed-off-by: Leon Romanovsky 
> > Signed-off-by: Saeed Mahameed 
> > ---
> 
> Jason mentioned this patch to me off-list. We were
> seeing a similar issue with SRQs & QPs, so I am wondering whether
> you have any plans to make a similar change for other resources
> too, so that they don't rely on higher-order page allocation
> for icm tables.
> 

Hi Santosh,

Adding Majd,

Which ULP is in question? How big are the QPs/SRQs you create that
lead to this problem?

For icm tables we already allocate only order 0 pages:
see alloc_system_page() in
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c

But for kernel RDMA SRQ and QP buffers there is room for
improvement.

Majd, do you know if we have any near-future plans for this?

> Regards,
> Santosh

Re: [for-next 7/7] IB/mlx5: Implement fragmented completion queue (CQ)

2018-02-22 Thread Santosh Shilimkar

Hi Saeed

On 2/21/2018 12:13 PM, Saeed Mahameed wrote:

From: Yonatan Cohen 

The current implementation of create CQ requires contiguous
memory; such a requirement is problematic once memory is
fragmented or the system is low on memory, and it causes
failures in dma_zalloc_coherent().

This patch implements a new scheme of fragmented CQs to overcome
this issue by introducing a new type, 'struct mlx5_frag_buf_ctrl',
to allocate fragmented buffers rather than contiguous ones.

Base the Completion Queues (CQs) on this new fragmented buffer.

It fixes the following crashes:
kworker/29:0: page allocation failure: order:6, mode:0x80d0
CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
Workqueue: ib_cm cm_work_handler [ib_cm]
Call Trace:
[<>] dump_stack+0x19/0x1b
[<>] warn_alloc_failed+0x110/0x180
[<>] __alloc_pages_slowpath+0x6b7/0x725
[<>] __alloc_pages_nodemask+0x405/0x420
[<>] dma_generic_alloc_coherent+0x8f/0x140
[<>] x86_swiotlb_alloc_coherent+0x21/0x50
[<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
[<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
[<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
[<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
[<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
[<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]

Signed-off-by: Yonatan Cohen 
Reviewed-by: Tariq Toukan 
Signed-off-by: Leon Romanovsky 
Signed-off-by: Saeed Mahameed 
---

Jason mentioned this patch to me off-list. We were
seeing a similar issue with SRQs & QPs, so I am wondering whether
you have any plans to make a similar change for other resources
too, so that they don't rely on higher-order page allocation
for icm tables.

Regards,
Santosh


Re: [for-next 7/7] IB/mlx5: Implement fragmented completion queue (CQ)

2018-02-22 Thread Jason Gunthorpe
On Wed, Feb 21, 2018 at 12:13:54PM -0800, Saeed Mahameed wrote:
> From: Yonatan Cohen 
> 
> The current implementation of create CQ requires contiguous
> memory; such a requirement is problematic once memory is
> fragmented or the system is low on memory, and it causes
> failures in dma_zalloc_coherent().
> 
> This patch implements a new scheme of fragmented CQs to overcome
> this issue by introducing a new type, 'struct mlx5_frag_buf_ctrl',
> to allocate fragmented buffers rather than contiguous ones.
> 
> Base the Completion Queues (CQs) on this new fragmented buffer.
> 
> It fixes the following crashes:
> kworker/29:0: page allocation failure: order:6, mode:0x80d0
> CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
> Workqueue: ib_cm cm_work_handler [ib_cm]
> Call Trace:
> [<>] dump_stack+0x19/0x1b
> [<>] warn_alloc_failed+0x110/0x180
> [<>] __alloc_pages_slowpath+0x6b7/0x725
> [<>] __alloc_pages_nodemask+0x405/0x420
> [<>] dma_generic_alloc_coherent+0x8f/0x140
> [<>] x86_swiotlb_alloc_coherent+0x21/0x50
> [<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
> [<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
> [<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
> [<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
> [<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
> [<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]
> 
> Signed-off-by: Yonatan Cohen 
> Reviewed-by: Tariq Toukan 
> Signed-off-by: Leon Romanovsky 
> Signed-off-by: Saeed Mahameed 
>  drivers/infiniband/hw/mlx5/cq.c | 64 +++--
>  drivers/infiniband/hw/mlx5/mlx5_ib.h|  6 +--
>  drivers/net/ethernet/mellanox/mlx5/core/alloc.c | 37 +-
>  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 11 +++--
>  drivers/net/ethernet/mellanox/mlx5/core/wq.c| 18 +++
>  drivers/net/ethernet/mellanox/mlx5/core/wq.h| 22 +++--
>  include/linux/mlx5/driver.h | 51 ++--
>  7 files changed, 124 insertions(+), 85 deletions(-)

For the drivers/infiniband stuff:

Acked-by: Jason Gunthorpe 

Thanks,
Jason