Re: [EXT] [PATCH 1/2] net: qed*: Reduce RX and TX default ring count when running inside kdump kernel
Hello Igor,

On Wed, May 6, 2020 at 12:21 PM Igor Russkikh wrote:
>
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -574,13 +575,13 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
> >  #define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
> >  #define NUM_RX_BDS_MAX (RX_RING_SIZE - 1)
> >  #define NUM_RX_BDS_MIN 128
> > -#define NUM_RX_BDS_DEF ((u16)BIT(10) - 1)
> > +#define NUM_RX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : ((u16)BIT(10) - 1))
> >
> >  #define TX_RING_SIZE_POW 13
> >  #define TX_RING_SIZE ((u16)BIT(TX_RING_SIZE_POW))
> >  #define NUM_TX_BDS_MAX (TX_RING_SIZE - 1)
> >  #define NUM_TX_BDS_MIN 128
> > -#define NUM_TX_BDS_DEF NUM_TX_BDS_MAX
> > +#define NUM_TX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : NUM_TX_BDS_MAX)
>
> Hi Bhupesh,
>
> Thanks for looking into this. We are also analyzing how to reduce qed*
> memory usage even more.
>
> The patch is good, but may I suggest not introducing conditional logic
> into the defines, and instead just adding two new defines like
> NUM_[RT]X_BDS_MIN and checking for is_kdump_kernel() in the code
> explicitly?
>
> if (is_kdump_kernel()) {
>         edev->q_num_rx_buffers = NUM_RX_BDS_MIN;
>         edev->q_num_tx_buffers = NUM_TX_BDS_MIN;
> } else {
>         edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
>         edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
> }
>
> This may make the configuration logic more explicit. In future we may
> want to add more specific configs under this `if`.

Thanks for the review comments. The suggestions seem fine to me. I will
incorporate them in v2.

Regards,
Bhupesh
Re: [EXT] [PATCH 1/2] net: qed*: Reduce RX and TX default ring count when running inside kdump kernel
> #include
> +#include
> #include
> #include
> #include
> @@ -574,13 +575,13 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
> #define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
> #define NUM_RX_BDS_MAX (RX_RING_SIZE - 1)
> #define NUM_RX_BDS_MIN 128
> -#define NUM_RX_BDS_DEF ((u16)BIT(10) - 1)
> +#define NUM_RX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : ((u16)BIT(10) - 1))
>
> #define TX_RING_SIZE_POW 13
> #define TX_RING_SIZE ((u16)BIT(TX_RING_SIZE_POW))
> #define NUM_TX_BDS_MAX (TX_RING_SIZE - 1)
> #define NUM_TX_BDS_MIN 128
> -#define NUM_TX_BDS_DEF NUM_TX_BDS_MAX
> +#define NUM_TX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : NUM_TX_BDS_MAX)

Hi Bhupesh,

Thanks for looking into this. We are also analyzing how to reduce qed*
memory usage even more.

The patch is good, but may I suggest not introducing conditional logic
into the defines, and instead just adding two new defines like
NUM_[RT]X_BDS_MIN and checking for is_kdump_kernel() in the code
explicitly?

if (is_kdump_kernel()) {
        edev->q_num_rx_buffers = NUM_RX_BDS_MIN;
        edev->q_num_tx_buffers = NUM_TX_BDS_MIN;
} else {
        edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
        edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
}

This may make the configuration logic more explicit. In future we may
want to add more specific configs under this `if`.

Regards,
Igor
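[Editor's note: a minimal, compilable sketch of the explicit-check approach suggested above. The BIT() macro, the is_kdump_kernel() stub, and the qede_set_ring_defaults() helper name are illustrative stand-ins, not the kernel's actual definitions; in the real driver these come from the kernel headers and the qede probe path.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for kernel definitions, for illustration only. */
#define BIT(n)          (1UL << (n))
#define NUM_RX_BDS_MIN  128
#define NUM_RX_BDS_DEF  ((uint16_t)BIT(10) - 1)          /* 1023 */
#define NUM_TX_BDS_MAX  ((uint16_t)BIT(13) - 1)          /* 8191 */
#define NUM_TX_BDS_MIN  128
#define NUM_TX_BDS_DEF  NUM_TX_BDS_MAX

struct qede_dev {
	uint16_t q_num_rx_buffers;
	uint16_t q_num_tx_buffers;
};

/* In the kernel this is provided by <linux/crash_dump.h>. */
static bool is_kdump_kernel(void) { return false; }

/* Hypothetical helper name; the point is the explicit branch. */
static void qede_set_ring_defaults(struct qede_dev *edev)
{
	if (is_kdump_kernel()) {
		/* Kdump runs under tight memory limits: use minimum rings. */
		edev->q_num_rx_buffers = NUM_RX_BDS_MIN;
		edev->q_num_tx_buffers = NUM_TX_BDS_MIN;
	} else {
		edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
		edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
	}
}
```

Keeping the defines constant and branching once at init time also means the ring sizes stay plain compile-time values everywhere else in the driver.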
Re: [PATCH 1/2] net: qed*: Reduce RX and TX default ring count when running inside kdump kernel
Hi David,

On Wed, May 6, 2020 at 2:54 AM David Miller wrote:
>
> From: Bhupesh Sharma
> Date: Wed, 6 May 2020 00:34:40 +0530
>
> > -#define NUM_RX_BDS_DEF ((u16)BIT(10) - 1)
> > +#define NUM_RX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : ((u16)BIT(10) - 1))
>
> These parentheses are excessive and unnecessary. At the very least,
> remove the parentheses around is_kdump_kernel().

Thanks a lot for the review. Sure, will fix this in the v2.

Regards,
Bhupesh
Re: [PATCH 1/2] net: qed*: Reduce RX and TX default ring count when running inside kdump kernel
From: Bhupesh Sharma
Date: Wed, 6 May 2020 00:34:40 +0530

> -#define NUM_RX_BDS_DEF ((u16)BIT(10) - 1)
> +#define NUM_RX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : ((u16)BIT(10) - 1))

These parentheses are excessive and unnecessary. At the very least,
remove the parentheses around is_kdump_kernel().
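[Editor's note: one way the RX macro might read with the redundant parentheses dropped, per the review above. This is a sketch, not the final v2; the BIT() macro and the is_kdump_kernel() stub below are stand-ins for the kernel's definitions, and the stub is hardwired to return true so the kdump branch is exercised.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT(n)  (1UL << (n))    /* stand-in for the kernel's BIT() */

/* Stub for illustration; the kernel provides the real one. */
static bool is_kdump_kernel(void) { return true; }

/* Same ternary as the patch, without the extra parentheses:
 * no parens around the function call, and the cast/subtraction
 * need no per-operand grouping inside the ?: arms. */
#define NUM_RX_BDS_DEF \
	(is_kdump_kernel() ? (uint16_t)BIT(6) - 1 : (uint16_t)BIT(10) - 1)
```

The outermost parentheses are the only ones a function-like value macro really needs, so the whole expression stays atomic wherever it is expanded.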
[PATCH 1/2] net: qed*: Reduce RX and TX default ring count when running inside kdump kernel
Normally kdump kernel(s) run under severe memory constraints, with the
basic idea being to save the crash dump vmcore reliably when the primary
kernel panics/hangs.

Currently the qed* ethernet driver ends up consuming a lot of memory in
the kdump kernel, leading to a kdump kernel panic when one tries to save
the vmcore via ssh/nfs (thus utilizing the services of the underlying
qed* network interfaces).

An example OOM message log seen in the kdump kernel can be seen here
[1], with a crashkernel size reservation of 512M.

Using tools like memstrack (see [2]), we can track the modules taking up
the bulk of memory in the kdump kernel and organize the memory usage
output as per 'highest allocator first'. An example log for the OOM case
indicates that the qed* modules end up allocating approximately 216M of
memory, which is a large part of the total crashkernel size:

 dracut-pre-pivot[676]: Report format module_summary:
 dracut-pre-pivot[676]: Module qed using 149.6MB (2394 pages), peak allocation 149.6MB (2394 pages)
 dracut-pre-pivot[676]: Module qede using 65.3MB (1045 pages), peak allocation 65.3MB (1045 pages)

This patch reduces the default RX and TX ring count from 1024 to 64 when
running inside the kdump kernel, which leads to a significant memory
saving. An example log with the patch applied shows the reduced memory
allocation in the kdump kernel:

 dracut-pre-pivot[674]: Report format module_summary:
 dracut-pre-pivot[674]: Module qed using 141.8MB (2268 pages), peak allocation 141.8MB (2268 pages)
 <..snip..>
 dracut-pre-pivot[674]: Module qede using 4.8MB (76 pages), peak allocation 4.9MB (78 pages)

Tested crashdump vmcore save via the ssh/nfs protocol using the
underlying qed* network interface after applying this patch.

[1] OOM log:

 kworker/0:6: page allocation failure: order:6, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
 kworker/0:6 cpuset=/ mems_allowed=0
 CPU: 0 PID: 145 Comm: kworker/0:6 Not tainted 4.18.0-109.el8.aarch64 #1
 Hardware name: To be filled by O.E.M. Saber/Saber, BIOS 0ACKL025 01/18/2019
 Workqueue: events work_for_cpu_fn
 Call trace:
  dump_backtrace+0x0/0x188
  show_stack+0x24/0x30
  dump_stack+0x90/0xb4
  warn_alloc+0xf4/0x178
  __alloc_pages_nodemask+0xcac/0xd58
  alloc_pages_current+0x8c/0xf8
  kmalloc_order_trace+0x38/0x108
  qed_iov_alloc+0x40/0x248 [qed]
  qed_resc_alloc+0x224/0x518 [qed]
  qed_slowpath_start+0x254/0x928 [qed]
  __qede_probe+0xf8/0x5e0 [qede]
  qede_probe+0x68/0xd8 [qede]
  local_pci_probe+0x44/0xa8
  work_for_cpu_fn+0x20/0x30
  process_one_work+0x1ac/0x3e8
  worker_thread+0x44/0x448
  kthread+0x130/0x138
  ret_from_fork+0x10/0x18
 Cannot start slowpath
 qede: probe of :05:00.1 failed with error -12

[2] Memstrack tool: https://github.com/ryncsn/memstrack

Cc: ke...@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: Ariel Elior
Cc: gr-everest-linux...@marvell.com
Cc: Manish Chopra
Cc: David S. Miller
Signed-off-by: Bhupesh Sharma
---
 drivers/net/ethernet/qlogic/qede/qede.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
index 234c6f30effb..b55ab32ef0b3 100644
--- a/drivers/net/ethernet/qlogic/qede/qede.h
+++ b/drivers/net/ethernet/qlogic/qede/qede.h
@@ -32,6 +32,7 @@
 #ifndef _QEDE_H_
 #define _QEDE_H_
 #include
+#include
 #include
 #include
 #include
@@ -574,13 +575,13 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto,
 #define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
 #define NUM_RX_BDS_MAX (RX_RING_SIZE - 1)
 #define NUM_RX_BDS_MIN 128
-#define NUM_RX_BDS_DEF ((u16)BIT(10) - 1)
+#define NUM_RX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : ((u16)BIT(10) - 1))

 #define TX_RING_SIZE_POW 13
 #define TX_RING_SIZE ((u16)BIT(TX_RING_SIZE_POW))
 #define NUM_TX_BDS_MAX (TX_RING_SIZE - 1)
 #define NUM_TX_BDS_MIN 128
-#define NUM_TX_BDS_DEF NUM_TX_BDS_MAX
+#define NUM_TX_BDS_DEF ((is_kdump_kernel()) ? ((u16)BIT(6) - 1) : NUM_TX_BDS_MAX)

 #define QEDE_MIN_PKT_LEN 64
 #define QEDE_RX_HDR_SIZE 256
--
2.7.4