RE: [PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF add/remove

2021-01-12 Thread Long Li
> Subject: Re: [PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF > add/remove > > On Fri, 8 Jan 2021 16:53:40 -0800 Long Li wrote: > > From: Long Li > > > > This patch set fixes issues with packet loss on VF add/remove. > > These patches are for net

[PATCH v2 2/3] hv_netvsc: Wait for completion on request SWITCH_DATA_PATH

2021-01-08 Thread Long Li
From: Long Li The completion indicates if NVSP_MSG4_TYPE_SWITCH_DATA_PATH has been processed by the VSP. The traffic is steered to VF or synthetic after we receive this completion. Signed-off-by: Long Li Reported-by: kernel test robot --- Change from v1: Fixed warnings from kernel test robot
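
The wait described above is the standard kernel completion idiom. A minimal sketch of that pattern, with hypothetical names rather than the actual hv_netvsc structures:

/* Illustrative only: wait until the VSP acknowledges SWITCH_DATA_PATH. */
#include <linux/completion.h>

struct switch_datapath_ctx {
        struct completion done;         /* signalled by the completion handler */
};

static void example_switch_datapath(struct switch_datapath_ctx *ctx)
{
        init_completion(&ctx->done);

        /* ... send NVSP_MSG4_TYPE_SWITCH_DATA_PATH to the VSP here ... */

        /* block until example_on_switch_ack() runs */
        wait_for_completion(&ctx->done);

        /* only now is it safe to steer traffic to the VF or synthetic path */
}

static void example_on_switch_ack(struct switch_datapath_ctx *ctx)
{
        complete(&ctx->done);
}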

[PATCH v2 3/3] hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove

2021-01-08 Thread Long Li
From: Long Li On VF hot remove, NETDEV_GOING_DOWN is sent to notify the VF is about to go down. At this time, the VF is still sending/receiving traffic and we request the VSP to switch datapath. On completion, the datapath is switched to synthetic and we can proceed with VF hot remove. Signed
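
The hot-remove handling hooks the netdevice notifier chain. A minimal sketch of reacting to NETDEV_GOING_DOWN, with example_switch_to_synthetic() as a hypothetical stand-in for the driver's request to the VSP:

/* Illustrative only, not the hv_netvsc notifier. */
#include <linux/netdevice.h>
#include <linux/notifier.h>

static void example_switch_to_synthetic(struct net_device *vf_netdev)
{
        /* ask the VSP to move the datapath and wait for its completion */
}

static int example_netdev_event(struct notifier_block *nb,
                                unsigned long event, void *ptr)
{
        struct net_device *ndev = netdev_notifier_info_to_dev(ptr);

        switch (event) {
        case NETDEV_GOING_DOWN:
                /* the VF is still passing traffic; move the datapath first */
                example_switch_to_synthetic(ndev);
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block example_nb = {
        .notifier_call = example_netdev_event,
};
/* registered elsewhere with register_netdevice_notifier(&example_nb) */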

[PATCH v2 1/3] hv_netvsc: Check VF datapath when sending traffic to VF

2021-01-08 Thread Long Li
From: Long Li The driver needs to check if the datapath has been switched to VF before sending traffic to VF. Signed-off-by: Long Li Reviewed-by: Haiyang Zhang --- drivers/net/hyperv/netvsc_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/hyperv
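
The check gates transmit on two conditions: a VF netdev is registered and the VSP has acknowledged the switch to the VF datapath. A minimal sketch with illustrative field names (not the exact hv_netvsc context structure):

/* Illustrative only; caller is assumed to hold rcu_read_lock(). */
#include <linux/netdevice.h>
#include <linux/rcupdate.h>

struct example_net_ctx {
        struct net_device __rcu *vf_netdev;
        bool data_path_is_vf;           /* set once SWITCH_DATA_PATH completes */
};

static bool example_can_send_to_vf(struct example_net_ctx *ctx)
{
        struct net_device *vf_netdev = rcu_dereference(ctx->vf_netdev);

        /* a registered VF alone is not enough; the datapath must
         * already have been switched to it */
        return vf_netdev && READ_ONCE(ctx->data_path_is_vf);
}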

[PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF add/remove

2021-01-08 Thread Long Li
From: Long Li This patch set fixes issues with packet loss on VF add/remove. Long Li (3): hv_netvsc: Check VF datapath when sending traffic to VF hv_netvsc: Wait for completion on request SWITCH_DATA_PATH hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove drivers/net/hyperv/netvsc.c

RE: [PATCH 2/3] hv_netvsc: Wait for completion on request NVSP_MSG4_TYPE_SWITCH_DATA_PATH

2021-01-07 Thread Long Li
> > git remote add linux-review > > > https://github.com/0day- > >

RE: [PATCH 2/3] hv_netvsc: Wait for completion on request NVSP_MSG4_TYPE_SWITCH_DATA_PATH

2021-01-06 Thread Long Li
review > https://github.com/0day-ci/linux >

[PATCH 2/3] hv_netvsc: Wait for completion on request NVSP_MSG4_TYPE_SWITCH_DATA_PATH

2021-01-05 Thread Long Li
From: Long Li The completion indicates if NVSP_MSG4_TYPE_SWITCH_DATA_PATH has been processed by the VSP. The traffic is steered to VF or synthetic after we receive this completion. Signed-off-by: Long Li --- drivers/net/hyperv/netvsc.c | 34 +++-- drivers/net

[PATCH 3/3] hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove

2021-01-05 Thread Long Li
From: Long Li On VF hot remove, NETDEV_GOING_DOWN is sent to notify the VF is about to go down. At this time, the VF is still sending/receiving traffic and we request the VSP to switch datapath. On completion, the datapath is switched to synthetic and we can proceed with VF hot remove. Signed

[PATCH 1/3] hv_netvsc: Check VF datapath when sending traffic to VF

2021-01-05 Thread Long Li
From: Long Li The driver needs to check if the datapath has been switched to VF before sending traffic to VF. Signed-off-by: Long Li --- drivers/net/hyperv/netvsc_drv.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv

[PATCH v1] mm/migrate: fix comment spelling

2020-10-24 Thread Long Li
The word in the comment is misspelled, it should be "include". Signed-off-by: Long Li --- mm/migrate.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/migrate.c b/mm/migrate.c index 5ca5842df5db..d79640ab8aa1 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1694

[PATCH v4] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

2020-07-02 Thread Long Li
rder to make the code clear, the warning message is put in one place. Signed-off-by: Long Li --- changes in V4: -Change the check function name to kmalloc_check_flags() -Put the flags check into the kmalloc_check_flags() changes in V3: -Put the warning message in one place -update the change
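
Per the change log above, v4 moves the check into a helper named kmalloc_check_flags(). A sketch of what such a helper could look like (illustrative, not the merged code):

/* Warn about and strip flags that the slab allocator must never see,
 * before the request falls through to the page allocator. */
#include <linux/gfp.h>
#include <linux/bug.h>
#include <linux/printk.h>

static inline gfp_t kmalloc_check_flags(gfp_t flags)
{
        if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
                gfp_t invalid = flags & GFP_SLAB_BUG_MASK;

                flags &= ~GFP_SLAB_BUG_MASK;
                pr_warn("Unexpected gfp: %#x. Fixing up to gfp: %#x. Fix your code!\n",
                        invalid, flags);
                WARN_ON_ONCE(1);
        }
        return flags;
}

/* kmalloc_order() would then do:
 *      flags = kmalloc_check_flags(flags);
 *      page  = alloc_pages(flags, order);
 */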

[PATCH v3] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

2020-07-01 Thread Long Li
rder to make the code clear, the warning message is put in one place. Signed-off-by: Long Li --- changes in V3: -Put the warning message in one place -update the change log to be clear mm/slab.c| 10 +++--- mm/slab.h| 1 + mm/slab_common.c | 17 + mm/sl

[PATCH v2] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

2020-06-30 Thread Long Li
to a virtual address, kmalloc_order() will return NULL and the page has been allocated. After modification, GFP_SLAB_BUG_MASK has been checked before allocating pages, refer to the new_slab(). Signed-off-by: Long Li --- Changes in v2: - patch is rebased againest "[PATCH] mm: Free unused

[PATCH v1] mm:free unused pages in kmalloc_order

2020-06-26 Thread Long Li
NULL, the pages have not been released. Usually driver developers will not use the GFP_HIGHUSER flag to allocate memory in kmalloc, but I think this memory leak should still be fixed. This is the first time I have posted a patch, so there may be something wrong. Signed-off-by: Long Li
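
The leak described here happens when the page is allocated but cannot be mapped to a virtual address, so page_address() returns NULL and the page is silently dropped. A minimal sketch of the fix idea (illustrative, not the exact patch):

/* Illustrative only: free the page instead of leaking it when it
 * cannot be mapped (e.g. a highmem page on 32-bit). */
#include <linux/gfp.h>
#include <linux/mm.h>

static void *example_kmalloc_order(gfp_t flags, unsigned int order)
{
        struct page *page;
        void *ptr;

        page = alloc_pages(flags, order);
        if (!page)
                return NULL;

        ptr = page_address(page);
        if (!ptr) {
                /* previously the page was simply dropped here */
                __free_pages(page, order);
                return NULL;
        }
        return ptr;
}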

RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-09-23 Thread Long Li
>Subject: RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue >and CPU queue > >>Subject: Re: [Patch v4] storvsc: setup 1:1 mapping between hardware >>queue and CPU queue >> >>On Fri, Sep 06, 2019 at 10:24:20AM -0700, lon...@linuxonhyperv.com wrote:

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-23 Thread Long Li
>Thanks for the clarification. > >The problem with what Ming is proposing in my mind (and its an existing >problem that exists today), is that nvme is taking precedence over anything >else until it absolutely cannot hog the cpu in hardirq. > >In the thread Ming referenced a case where today if the

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-20 Thread Long Li
> >> Long, does this patch make any difference? > > > > Sagi, > > > > Sorry it took a while to bring my system back online. > > > > With the patch, the IOPS is about the same drop with the 1st patch. I think > the excessive context switches are causing the drop in IOPS. > > > > The following are

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-17 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > >Hey Ming, > Ok, so the real problem is per-cpu bounded tasks. I share Thomas opinion about a NAPI like approach. >>> >>> We already have that, its irq_poll, but it seems that for this >>> use-case, we get

RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-09-06 Thread Long Li
>Subject: Re: [Patch v4] storvsc: setup 1:1 mapping between hardware queue >and CPU queue > >On Fri, Sep 06, 2019 at 10:24:20AM -0700, lon...@linuxonhyperv.com wrote: >>From: Long Li >> >>storvsc doesn't use a dedicated hardware queue for a given CPU queue.
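
The change gives each CPU queue its own hardware queue so completions come back on the submitting CPU. A rough sketch of a 1:1-style blk-mq queue map (illustrative only; the exact storvsc implementation is not reproduced here):

/* Illustrative 1:1 mapping of CPUs to hardware queues. */
#include <linux/blk-mq.h>
#include <linux/cpumask.h>

static void example_map_queues_1to1(struct blk_mq_queue_map *qmap)
{
        unsigned int cpu;

        for_each_possible_cpu(cpu)
                qmap->mq_map[cpu] = qmap->queue_offset + cpu % qmap->nr_queues;
}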

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-06 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > >On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote: >> When one IRQ flood happens on one CPU: >> >> 1) softirq handling on this CPU can't make progress >> >> 2) kernel thread bound to this CPU can't make progress

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-05 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > > >On 06/09/2019 03:22, Long Li wrote: >[ ... ] >> > >> Tracing shows that the CPU was in either hardirq or softirq all the >> time before warnings. During tests, the system was un

RE: [Patch v3] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-09-05 Thread Long Li
>Subject: RE: [Patch v3] storvsc: setup 1:1 mapping between hardware queue >and CPU queue > >From: Long Li Sent: Thursday, September 5, 2019 3:55 >PM >> >> storvsc doesn't use a dedicated hardware queue for a given CPU queue. >> When issuing I/O, it sele

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-09-05 Thread Long Li
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism > > >Hi Ming, > >On 05/09/2019 11:06, Ming Lei wrote: >> On Wed, Sep 04, 2019 at 07:31:48PM +0200, Daniel Lezcano wrote: >>> Hi, >>> >>> On 04/09/2019 19:07, Bart Van Assche wrote: On 9/3/19 12:50 AM, Daniel Lezcano

RE: [PATCH 1/4] softirq: implement IRQ flood detection mechanism

2019-08-29 Thread Long Li
interval to evaluate if IRQ flood >>>is triggered. The Exponential Weighted Moving Average(EWMA) is used to >>>compute CPU average interrupt interval. >>> >>>Cc: Long Li >>>Cc: Ingo Molnar , >>>Cc: Peter Zijlstra >>>Cc: Keith Bu
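
The quoted change log tracks the average interrupt interval per CPU with an Exponential Weighted Moving Average and declares a flood when the average drops below a threshold. A small standalone model of that computation (the weight and threshold values are arbitrary, chosen only for illustration):

#include <stdint.h>
#include <stdio.h>

#define EWMA_SHIFT      3               /* new samples weighted 1/8 (arbitrary) */
#define FLOOD_THRESH_NS 2000ULL         /* "flood" if avg interval < 2 us (arbitrary) */

/* roughly ewma + (sample - ewma) / 2^EWMA_SHIFT, using integer shifts */
static uint64_t ewma_update(uint64_t ewma, uint64_t sample_ns)
{
        return ewma - (ewma >> EWMA_SHIFT) + (sample_ns >> EWMA_SHIFT);
}

int main(void)
{
        uint64_t avg = 100000;          /* start from a 100 us average */
        uint64_t intervals[] = { 1500, 1200, 900, 800, 700 };

        for (unsigned int i = 0; i < sizeof(intervals) / sizeof(intervals[0]); i++) {
                avg = ewma_update(avg, intervals[i]);
                printf("avg interval %llu ns, flood=%d\n",
                       (unsigned long long)avg, avg < FLOOD_THRESH_NS);
        }
        return 0;
}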

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-23 Thread Long Li
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>> Sagi, Here are the test results. Benchmark command: fio --bs=4k --ioengine=libaio --iodepth=64 --

RE: [Patch v2] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-08-22 Thread Long Li
>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware >>>queue and CPU queue >>> >>>>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware >>>>>>queue and CPU queue >>>>

RE: [Patch v2] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-08-22 Thread Long Li
>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware >>>queue and CPU queue >>> >>>From: Long Li Sent: Thursday, August 22, 2019 >>>1:42 PM >>>> >>>> storvsc doesn't use a dedicated hardware queue for a given

RE: [PATCH] storvsc: setup 1:1 mapping between hardware queue and CPU queue

2019-08-22 Thread Long Li
>>>Subject: Re: [PATCH] storvsc: setup 1:1 mapping between hardware queue >>>and CPU queue >>> >>>On Tue, Aug 20, 2019 at 3:36 AM wrote: >>>> >>>> From: Long Li >>>> >>>> storvsc doesn't use a dedicated h

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-21 Thread Long Li
>>>Subject: RE: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>>>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on >>>CPU >>>>>>with flooded interrupts >>

RE: [PATCH 0/3] fix interrupt swamp in NVMe

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe >>> >>>On Wed, Aug 21, 2019 at 07:47:44AM +, Long Li wrote: >>>> >>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe >>>> >>> >>>> >>

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>> >>>> From: Long Li >>>> >>>> When a NVMe hardware queue is mapped to several CPU queues, it is >>>> po

RE: [PATCH 3/3] nvme: complete request in work queue on CPU with flooded interrupts

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU >>>with flooded interrupts >>> >>>On Mon, Aug 19, 2019 at 11:14:29PM -0700, lon...@linuxonhyperv.com >>>wrote: >>>> From: Long Li >>>> >>>>
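
The idea in this patch is to move completion work out of hard-interrupt context when the local CPU is saturated by interrupts. A minimal sketch, with hypothetical stand-ins for both the flood check and the NVMe completion path:

/* Illustrative only; not the nvme-pci code. */
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct example_cq {
        struct work_struct complete_work;       /* set up with INIT_WORK() */
};

static bool example_irq_flood_detected(void)
{
        return false;   /* stand-in for the flood-detection mechanism */
}

static void example_complete_all(struct example_cq *cq)
{
        /* drain the completion queue and complete the block requests */
}

static void example_complete_work(struct work_struct *work)
{
        example_complete_all(container_of(work, struct example_cq, complete_work));
}

static void example_irq_handler(struct example_cq *cq)
{
        if (example_irq_flood_detected())
                /* let a kernel worker finish the completions so softirqs and
                 * threads bound to this CPU can make progress again */
                queue_work(system_wq, &cq->complete_work);
        else
                example_complete_all(cq);
}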

RE: [PATCH 1/3] sched: define a function to report the number of context switches on a CPU

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 1/3] sched: define a function to report the number of >>>context switches on a CPU >>> >>>On Mon, Aug 19, 2019 at 11:14:27PM -0700, lon...@linuxonhyperv.com >>>wrote: >>>> From: Long Li >>>> >>>
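
The kernel already exposes a system-wide nr_context_switches(); this patch adds a per-CPU variant so callers (such as the NVMe patch above) can tell whether tasks bound to a flooded CPU still make progress. A plausible shape, treating the scheduler internals (cpu_rq(), rq->nr_switches) as an assumption rather than quoting the patch:

/* Sketch, scheduler-internal style (would live next to kernel/sched/core.c). */
u64 nr_context_switches_cpu(int cpu)
{
        return cpu_rq(cpu)->nr_switches;
}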

RE: [PATCH 0/3] fix interrupt swamp in NVMe

2019-08-21 Thread Long Li
>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe >>> >>>On 20/08/2019 09:25, Ming Lei wrote: >>>> On Tue, Aug 20, 2019 at 2:14 PM wrote: >>>>> >>>>> From: Long Li >>>>> >>>>> This patch set

RE: [Patch (resend) 5/5] cifs: Call MID callback before destroying transport

2019-05-13 Thread Long Li
>>>-Original Message- >>>From: Pavel Shilovsky >>>Sent: Thursday, May 9, 2019 11:01 AM >>>To: Long Li >>>Cc: Steve French ; linux-cifs >>c...@vger.kernel.org>; samba-technical ; >>>Kernel Mailing List >>>Sub

RE: [PATCH] x86/hyper-v: implement EOI assist

2019-04-15 Thread Long Li
;>hypervisor." >>>> >>>> Implement the optimization in Linux. >>>> >>> >>>Simon, Long, >>> >>>did you get a chance to run some tests with this? I have run some tests on Azure L80s_v2. With 10 NVMe disks on raid0 and formatted to EXT4, I'm getting 2.6m max IOPS with the patch, compared to 2.55m IOPS before. The VM has been running stable. Thank you! Tested-by: Long Li >>> >>> -- >>>Vitaly

[PATCH] cifs: smbd: take an array of requests when sending upper layer data

2019-04-15 Thread Long Li
From: Long Li To support compounding, __smb_send_rqst() now sends an array of requests to the transport layer. Change smbd_send() to take an array of requests, and send them in as few packets as possible. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 55

[Patch (resend) 2/5] cifs: smbd: Return EINTR when interrupted

2019-04-05 Thread Long Li
From: Long Li When packets are waiting for outbound I/O and interrupted, return the proper error code to user process. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 7259427

[Patch (resend) 1/5] cifs: smbd: Don't destroy transport on RDMA disconnect

2019-04-05 Thread Long Li
From: Long Li Now upper layer is handling the transport shutdown and reconnect, remove the code that handling transport shutdown on RDMA disconnect. Signed-off-by: Long Li --- fs/cifs/cifs_debug.c | 8 ++-- fs/cifs/smbdirect.c | 120 +++ fs

[Patch (resend) 3/5] cifs: smbd: Indicate to retry on transport sending failure

2019-04-05 Thread Long Li
From: Long Li Failure to send a packet doesn't mean it's a permanent failure, so it can't be returned to the user process. This I/O should be retried or failed based on server packet response and transport health. This logic is handled by the upper layer. Give this decision to the upper layer. Signed-off

[Patch (resend) 4/5] cifs: smbd: Retry on memory registration failure

2019-04-05 Thread Long Li
From: Long Li Memory registration failure doesn't mean this I/O has failed, it means the transport is hitting I/O error or needs reconnect. This error is not from the server. Indicate this error to upper layer, and let upper layer decide how to reconnect and proceed with this I/O. Signed-off

[Patch (resend) 5/5] cifs: Call MID callback before destroying transport

2019-04-05 Thread Long Li
From: Long Li When transport is being destroyed, it's possible that some processes may hold memory registrations that need to be deregistered. Call the MID callbacks first so nobody is using transport resources, and the transport can be destroyed. Signed-off-by: Long Li --- fs/cifs/connect.c | 36

[Patch v2 2/2] CIFS: Fix an issue with re-sending rdata when transport returning -EAGAIN

2019-03-15 Thread Long Li
From: Long Li When sending a rdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Change in v2: adjust_credits before re-sending Signed-off-by: Long Li --- fs/cifs/file.c | 71

[Patch v2 1/2] CIFS: Fix an issue with re-sending wdata when transport returning -EAGAIN

2019-03-15 Thread Long Li
From: Long Li When sending a wdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Change in v2: adjust_credits before re-sending Signed-off-by: Long Li --- fs/cifs/file.c | 77
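
Per the v2 note, credits are adjusted before the wdata is re-sent, because a reconnected session may be operating with a different credit state. A minimal sketch of the retry loop, with hypothetical helpers standing in for the fs/cifs credit and send paths:

/* Illustrative only. */
#include <linux/errno.h>

struct example_wdata;
int example_adjust_credits(struct example_wdata *wdata);        /* hypothetical */
int example_send(struct example_wdata *wdata);                  /* hypothetical */

static int example_resend_wdata(struct example_wdata *wdata)
{
        int rc;

        do {
                /* refresh credits for the (possibly reconnected) session */
                rc = example_adjust_credits(wdata);
                if (rc)
                        break;

                rc = example_send(wdata);
        } while (rc == -EAGAIN);

        return rc;
}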

[PATCH 1/2] CIFS: Fix a bug with re-sending wdata when transport returning -EAGAIN

2019-03-01 Thread Long Li
From: Long Li When sending a wdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Signed-off-by: Long Li --- fs/cifs/file.c | 61 +- 1 file changed, 31 insertions(+), 30

[PATCH 2/2] CIFS: Fix a bug with re-sending rdata when transport returning -EAGAIN

2019-03-01 Thread Long Li
From: Long Li When sending a rdata, transport may return -EAGAIN. In this case we should re-obtain credits because the session may have been reconnected. Signed-off-by: Long Li --- fs/cifs/file.c | 51 +- 1 file changed, 26 insertions(+), 25

[PATCH] CIFS: use the correct length when pinning memory for direct I/O for write

2018-12-16 Thread Long Li
From: Long Li The current code attempts to pin memory using the largest possible wsize based on the current SMB credits. This doesn't cause a kernel oops but it is not optimal, as we may pin more pages than actually needed. Fix this by only pinning what is needed for doing this write I/O
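
The fix derives the number of pinned pages from the actual I/O length rather than from the maximum wsize. A small standalone sketch of that arithmetic (assuming 4 KiB pages):

#include <stddef.h>

#define EX_PAGE_SIZE 4096UL
#define EX_PAGE_MASK (~(EX_PAGE_SIZE - 1))

/* Pages needed to cover 'len' bytes starting at user address 'addr':
 * account for the offset into the first page, then round up. */
static size_t example_pages_needed(unsigned long addr, size_t len)
{
        unsigned long first = addr & EX_PAGE_MASK;
        unsigned long last = (addr + len - 1) & EX_PAGE_MASK;

        return len ? (last - first) / EX_PAGE_SIZE + 1 : 0;
}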

[PATCH] CIFS: return correct errors when pinning memory failed for direct I/O

2018-12-16 Thread Long Li
From: Long Li When pinning memory failed, we should return the correct error code and rewind the SMB credits. Reported-by: Murphy Zhou Signed-off-by: Long Li Cc: sta...@vger.kernel.org Cc: Murphy Zhou --- fs/cifs/file.c | 8 +++- 1 file changed, 7 insertions(+), 1 deletion(-) diff

[PATCH] CIFS: Avoid returning EBUSY to upper layer VFS

2018-12-05 Thread Long Li
From: Long Li EBUSY is not handled by VFS, and will be passed to user-mode. This is not correct as we need to wait for more credits. This patch also fixes a bug where rsize or wsize is used uninitialized when the call to server->ops->wait_mtu_credits() fails. Reported-by: Dan Carpenter

RE: [Patch v4 1/3] CIFS: Add support for direct I/O read

2018-11-29 Thread Long Li
> Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read > > Wed, 28 Nov 2018 at 15:43, Long Li : > > > > > Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read > > > > > > Hi Long, > > > > > > Please find my c

RE: [Patch v4 2/3] CIFS: Add support for direct I/O write

2018-11-29 Thread Long Li
> Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write > > Wed, 28 Nov 2018 at 18:20, Long Li : > > > > > Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write > > > > > > Wed, 31 Oct 2018 at 15:2

RE: [Patch v4 2/3] CIFS: Add support for direct I/O write

2018-11-28 Thread Long Li
> Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write > > Wed, 31 Oct 2018 at 15:26, Long Li : > > > > From: Long Li > > > > With direct I/O write, user supplied buffers are pinned to the memory > > and data are transferred directly f

RE: [Patch v4 1/3] CIFS: Add support for direct I/O read

2018-11-28 Thread Long Li
> Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read > > Hi Long, > > Please find my comments below. > > > Wed, 31 Oct 2018 at 15:14, Long Li : > > > > From: Long Li > > > > With direct I/O read, we transfer the data directly

RE: [Patch v4] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-06 Thread Long Li
> Subject: Re: [Patch v4] genirq/matrix: Choose CPU for managed IRQs based > on how many of them are allocated > > Long, > > On Tue, 6 Nov 2018, Long Li wrote: > > > From: Long Li > > > > On a large system with multiple devices of the same class (

[tip:irq/core] genirq/matrix: Improve target CPU selection for managed interrupts.

2018-11-06 Thread tip-bot for Long Li
Commit-ID: e8da8794a7fd9eef1ec9a07f0d4897c68581c72b Gitweb: https://git.kernel.org/tip/e8da8794a7fd9eef1ec9a07f0d4897c68581c72b Author: Long Li AuthorDate: Tue, 6 Nov 2018 04:00:00 + Committer: Thomas Gleixner CommitDate: Tue, 6 Nov 2018 23:20:13 +0100 genirq/matrix: Improve

[Patch v4] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-05 Thread Long Li
From: Long Li On a large system with multiple devices of the same class (e.g. NVMe disks, using managed IRQs), the kernel tends to concentrate their IRQs on several CPUs. The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned CPU tends to be the first several CPUs
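
The selection boils down to scanning the CPUs allowed for the interrupt and picking the one with the fewest managed IRQs already allocated. A standalone model of that choice (the kernel walks the per-CPU irq_matrix maps; here they are a plain array):

#include <stddef.h>

/* Return the allowed CPU with the smallest managed-IRQ allocation count. */
static int example_pick_cpu(const unsigned int *managed_allocated,
                            const int *allowed, size_t nr_allowed)
{
        int best = -1;
        unsigned int best_cnt = 0;

        for (size_t i = 0; i < nr_allowed; i++) {
                int cpu = allowed[i];

                if (best < 0 || managed_allocated[cpu] < best_cnt) {
                        best = cpu;
                        best_cnt = managed_allocated[cpu];
                }
        }
        return best;
}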

[tip:irq/core] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-05 Thread tip-bot for Long Li
Commit-ID: b82592199032bf7c778f861b936287e37ebc9f62 Gitweb: https://git.kernel.org/tip/b82592199032bf7c778f861b936287e37ebc9f62 Author: Long Li AuthorDate: Fri, 2 Nov 2018 18:02:48 + Committer: Thomas Gleixner CommitDate: Mon, 5 Nov 2018 12:16:26 +0100 genirq/affinity: Spread IRQs

RE: [PATCH v3] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-03 Thread Long Li
> Subject: Re: [PATCH v3] genirq/matrix: Choose CPU for managed IRQs based > on how many of them are allocated > > On Sat, 3 Nov 2018, Thomas Gleixner wrote: > > On Fri, 2 Nov 2018, Long Li wrote: > > > /** > > > * irq_matrix_assign_system - Assign system w

[Patch v2] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-02 Thread Long Li
From: Long Li On systems with a large number of NUMA nodes, there may be more NUMA nodes than the number of MSI/MSI-X interrupts that the device requests. The current code always picks up the NUMA nodes starting from node 0, up to the number of interrupts requested. This may leave some later
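
A rough standalone model of the intended spreading: distribute the requested vectors evenly over all nodes instead of piling them onto the first ones (illustrative only, not the kernel/irq/affinity.c change itself):

#include <stdio.h>

/* Every node gets nvec/nnodes vectors and the first nvec%nnodes nodes
 * get one extra, so no node is left without consideration. */
static void example_spread_vectors(unsigned int nvec, unsigned int nnodes,
                                   unsigned int *per_node)
{
        unsigned int base = nvec / nnodes;
        unsigned int extra = nvec % nnodes;

        for (unsigned int n = 0; n < nnodes; n++)
                per_node[n] = base + (n < extra ? 1 : 0);
}

int main(void)
{
        unsigned int per_node[8];

        example_spread_vectors(10, 8, per_node);        /* 10 vectors, 8 NUMA nodes */
        for (unsigned int n = 0; n < 8; n++)
                printf("node %u: %u vectors\n", n, per_node[n]);
        return 0;
}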

RE: [PATCH] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-02 Thread Long Li
> Subject: RE: [PATCH] genirq/affinity: Spread IRQs to all available NUMA > nodes > > From: Long Li Sent: Thursday, November 1, 2018 > 4:52 PM > > > > --- a/kernel/irq/affinity.c > > +++ b/kernel/irq/affinity.c > > @@ -117,12 +117,13 @@ static in

[PATCH v3] genirq/matrix: Choose CPU for managed IRQs based on how many of them are allocated

2018-11-01 Thread Long Li
From: Long Li On a large system with multiple devices of the same class (e.g. NVMe disks, using managed IRQs), the kernel tends to concentrate their IRQs on several CPUs. The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned CPU tends to be the first several CPUs

[PATCH] genirq/affinity: Spread IRQs to all available NUMA nodes

2018-11-01 Thread Long Li
From: Long Li On systems with a large number of NUMA nodes, there may be more NUMA nodes than the number of MSI/MSI-X interrupts that the device requests. The current code always picks up the NUMA nodes starting from node 0, up to the number of interrupts requested. This may leave some later

RE: [Patch v2] genirq/matrix: Choose CPU for assigning interrupts based on allocated IRQs

2018-11-01 Thread Long Li
> Subject: Re: [Patch v2] genirq/matrix: Choose CPU for assigning interrupts > based on allocated IRQs > > Long, > > On Thu, 1 Nov 2018, Long Li wrote: > > On a large system with multiple devices of the same class (e.g. NVMe > > disks, using managed IRQs), t

[Patch v2] genirq/matrix: Choose CPU for assigning interrupts based on allocated IRQs

2018-10-31 Thread Long Li
From: Long Li On a large system with multiple devices of the same class (e.g. NVMe disks, using managed IRQs), the kernel tends to concentrate their IRQs on several CPUs. The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned CPU tends to be the first several CPUs

[Patch v4 1/3] CIFS: Add support for direct I/O read

2018-10-31 Thread Long Li
From: Long Li With direct I/O read, we transfer the data directly from transport layer to the user data buffer. Change in v3: add support for kernel AIO Change in v4: Refactor common read code to __cifs_readv for direct and non-direct I/O. Retry on direct I/O failure. Signed-off-by: Long Li

[Patch v4 2/3] CIFS: Add support for direct I/O write

2018-10-31 Thread Long Li
From: Long Li With direct I/O write, user supplied buffers are pinned to the memory and data are transferred directly from user buffers to the transport layer. Change in v3: add support for kernel AIO Change in v4: Refactor common write code to __cifs_writev for direct and non-direct I/O

[Patch v4 3/3] CIFS: Add direct I/O functions to file_operations

2018-10-31 Thread Long Li
From: Long Li With direct read/write functions implemented, add them to file_operations. Direct I/O is used under two conditions: 1. When mounting with "cache=none", CIFS uses direct I/O for all user file data transfer. 2. When opening a file with O_DIRECT, CIFS uses direct I/O fo
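
A sketch of how direct read/write entry points plug into a file_operations table; the wiring below is illustrative and the function names follow this series rather than the verified merged code:

#include <linux/fs.h>

/* implemented by the series (signatures assumed) */
extern ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to);
extern ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from);

static const struct file_operations example_cifs_file_direct_ops = {
        .read_iter      = cifs_direct_readv,
        .write_iter     = cifs_direct_writev,
        /* remaining members (open, release, llseek, ...) as in the
         * existing CIFS file_operations tables */
};

Which table a file ends up with would be decided at mount/open time, per the two conditions quoted above.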

RE: [PATCH] Choose CPU based on allocated IRQs

2018-10-30 Thread Long Li
> Subject: Re: [PATCH] Choose CPU based on allocated IRQs > > Long, > > On Tue, 23 Oct 2018, Long Li wrote: > > thanks for this patch. > > A trivial formal thing ahead. The subject line > >[PATCH] Choose CPU based on allocated IRQs > > is lacking a

[PATCH] Choose CPU based on allocated IRQs

2018-10-22 Thread Long Li
From: Long Li In irq_matrix, "available" is set when IRQs are allocated earlier in the IRQ assigning process. Later, when IRQs are activated those values are not good indicators of what CPU to choose to assign to this IRQ. Change to choose CPU for an IRQ based on how many IRQs a

RE: [PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read

2018-09-24 Thread Long Li
> Subject: Re: [PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read > > Thu, 20 Sep 2018 at 14:22, Long Li : > > > > From: Long Li > > > > With direct I/O read, we transfer the data directly from transport > > layer to the user data bu

[PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read

2018-09-20 Thread Long Li
From: Long Li With direct I/O read, we transfer the data directly from transport layer to the user data buffer. Change in v3: add support for kernel AIO Signed-off-by: Long Li --- fs/cifs/cifsfs.h | 1 + fs/cifs/cifsglob.h | 5 ++ fs/cifs/file.c | 210

[PATCH V3 (resend) 4/7] CIFS: Add support for direct I/O write

2018-09-20 Thread Long Li
From: Long Li With direct I/O write, user supplied buffers are pinned to the memory and data are transferred directly from user buffers to the transport layer. Change in v3: add support for kernel AIO Signed-off-by: Long Li --- fs/cifs/cifsfs.h | 1 + fs/cifs/file.c | 196

[PATCH V3 (resend) 2/7] CIFS: SMBD: Do not call ib_dereg_mr on invalidated memory registration

2018-09-20 Thread Long Li
From: Long Li It is not necessary to deregister a memory registration after it has been successfully invalidated. Signed-off-by: Long Li --- fs/cifs/smbdirect.c | 38 +++--- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/fs/cifs/smbdirect.c b

[PATCH V3 (resend) 5/7] CIFS: Add direct I/O functions to file_operations

2018-09-20 Thread Long Li
From: Long Li With direct read/write functions implemented, add them to file_operations. Direct I/O is used under two conditions: 1. When mounting with "cache=none", CIFS uses direct I/O for all user file data transfer. 2. When opening a file with O_DIRECT, CIFS uses direct I/O fo
