> Subject: Re: [PATCH v2 0/3] hv_netvsc: Prevent packet loss during VF
> add/remove
>
> On Fri, 8 Jan 2021 16:53:40 -0800 Long Li wrote:
> > From: Long Li
> >
> > This patch set fixes issues with packet loss on VF add/remove.
>
> These patches are for net
From: Long Li
The completion indicates whether NVSP_MSG4_TYPE_SWITCH_DATA_PATH has been
processed by the VSP. Traffic is steered to the VF or the synthetic path after
we receive this completion.
Signed-off-by: Long Li
Reported-by: kernel test robot
---
Change from v1:
Fixed warnings from kernel test robot
From: Long Li
On VF hot remove, NETDEV_GOING_DOWN is sent to notify that the VF is about to
go down. At this time, the VF is still sending/receiving traffic, and we
request the VSP to switch the datapath.
On completion, the datapath is switched to synthetic and we can proceed
with VF hot remove.
Signed
From: Long Li
The driver needs to check if the datapath has been switched to VF before
sending traffic to VF.
Signed-off-by: Long Li
Reviewed-by: Haiyang Zhang
---
drivers/net/hyperv/netvsc_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/hyperv
From: Long Li
This patch set fixes issues with packet loss on VF add/remove.
Long Li (3):
hv_netvsc: Check VF datapath when sending traffic to VF
hv_netvsc: Wait for completion on request SWITCH_DATA_PATH
hv_netvsc: Process NETDEV_GOING_DOWN on VF hot remove
drivers/net/hyperv/netvsc.c
The word in the comment is misspelled, it should be "include".
Signed-off-by: Long Li
---
mm/migrate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..d79640ab8aa1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1694
In order to make the code clear, the warning
message is put in one place.
Signed-off-by: Long Li
---
changes in V4:
-Change the check function name to kmalloc_check_flags()
-Put the flags check into the kmalloc_check_flags()
changes in V3:
-Put the warning message in one place
-update the change log to be clear
In order to make the code clear, the warning
message is put in one place.
Signed-off-by: Long Li
---
changes in V3:
-Put the warning message in one place
-update the change log to be clear
mm/slab.c| 10 +++---
mm/slab.h| 1 +
mm/slab_common.c | 17 +
mm/sl
to a virtual address, kmalloc_order() will return
NULL even though the page has been allocated.
After modification, GFP_SLAB_BUG_MASK has been checked before
allocating pages, refer to the new_slab().
Signed-off-by: Long Li
---
Changes in v2:
- patch is rebased againest "[PATCH] mm: Free unused
NULL, the pages have not
been released. Usually driver developers will not use the
GFP_HIGHUSER flag to allocate memory with kmalloc, but I think this
memory leak should still be fixed. This is the
first time I have posted a patch, so there may be something wrong.
Signed-off-by: Long Li
>Subject: RE: [Patch v4] storvsc: setup 1:1 mapping between hardware queue
>and CPU queue
>
>>Subject: Re: [Patch v4] storvsc: setup 1:1 mapping between hardware
>>queue and CPU queue
>>
>>On Fri, Sep 06, 2019 at 10:24:20AM -0700, lon...@linuxonhyperv.com wrote:
>Thanks for the clarification.
>
>The problem with what Ming is proposing in my mind (and its an existing
>problem that exists today), is that nvme is taking precedence over anything
>else until it absolutely cannot hog the cpu in hardirq.
>
>In the thread Ming referenced a case where today if the
> >> Long, does this patch make any difference?
> >
> > Sagi,
> >
> > Sorry it took a while to bring my system back online.
> >
> > With the patch, the IOPS shows about the same drop as with the 1st patch. I think
> > the excessive context switches are causing the drop in IOPS.
> >
> > The following are
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
>
>Hey Ming,
>
Ok, so the real problem is per-cpu bounded tasks.
I share Thomas opinion about a NAPI like approach.
>>>
>>> We already have that, its irq_poll, but it seems that for this
>>> use-case, we get
>Subject: Re: [Patch v4] storvsc: setup 1:1 mapping between hardware queue
>and CPU queue
>
>On Fri, Sep 06, 2019 at 10:24:20AM -0700, lon...@linuxonhyperv.com wrote:
>>From: Long Li
>>
>>storvsc doesn't use a dedicated hardware queue for a given CPU queue.
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
>
>On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
>> When one IRQ flood happens on one CPU:
>>
>> 1) softirq handling on this CPU can't make progress
>>
>> 2) kernel thread bound to this CPU can't make progress
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
>
>
>On 06/09/2019 03:22, Long Li wrote:
>[ ... ]
>>
>
>> Tracing shows that the CPU was in either hardirq or softirq all the
>> time before warnings. During tests, the system was un
>Subject: RE: [Patch v3] storvsc: setup 1:1 mapping between hardware queue
>and CPU queue
>
>From: Long Li Sent: Thursday, September 5, 2019 3:55
>PM
>>
>> storvsc doesn't use a dedicated hardware queue for a given CPU queue.
>> When issuing I/O, it sele
>Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
>
>
>Hi Ming,
>
>On 05/09/2019 11:06, Ming Lei wrote:
>> On Wed, Sep 04, 2019 at 07:31:48PM +0200, Daniel Lezcano wrote:
>>> Hi,
>>>
>>> On 04/09/2019 19:07, Bart Van Assche wrote:
On 9/3/19 12:50 AM, Daniel Lezcano
interval to evaluate if IRQ flood
>>>is triggered. The Exponential Weighted Moving Average(EWMA) is used to
>>>compute CPU average interrupt interval.
>>>
>>>Cc: Long Li
>>>Cc: Ingo Molnar ,
>>>Cc: Peter Zijlstra
>>>Cc: Keith Bu
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU
>>>with flooded interrupts
>>>
>>>
Sagi,
Here are the test results.
Benchmark command:
fio --bs=4k --ioengine=libaio --iodepth=64
--
>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware
>>>queue and CPU queue
>>>
>>>>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware
>>>>>>queue and CPU queue
>>>>
>>>Subject: RE: [Patch v2] storvsc: setup 1:1 mapping between hardware
>>>queue and CPU queue
>>>
>>>From: Long Li Sent: Thursday, August 22, 2019
>>>1:42 PM
>>>>
>>>> storvsc doesn't use a dedicated hardware queue for a given
>>>Subject: Re: [PATCH] storvsc: setup 1:1 mapping between hardware queue
>>>and CPU queue
>>>
>>>On Tue, Aug 20, 2019 at 3:36 AM wrote:
>>>>
>>>> From: Long Li
>>>>
>>>> storvsc doesn't use a dedicated h
>>>Subject: RE: [PATCH 3/3] nvme: complete request in work queue on CPU
>>>with flooded interrupts
>>>
>>>>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on
>>>CPU
>>>>>>with flooded interrupts
>>
>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
>>>
>>>On Wed, Aug 21, 2019 at 07:47:44AM +, Long Li wrote:
>>>> >>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
>>>> >>>
>>>> >>
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU
>>>with flooded interrupts
>>>
>>>
>>>> From: Long Li
>>>>
>>>> When a NVMe hardware queue is mapped to several CPU queues, it is
>>>> po
>>>Subject: Re: [PATCH 3/3] nvme: complete request in work queue on CPU
>>>with flooded interrupts
>>>
>>>On Mon, Aug 19, 2019 at 11:14:29PM -0700, lon...@linuxonhyperv.com
>>>wrote:
>>>> From: Long Li
>>>>
>>>>
>>>Subject: Re: [PATCH 1/3] sched: define a function to report the number of
>>>context switches on a CPU
>>>
>>>On Mon, Aug 19, 2019 at 11:14:27PM -0700, lon...@linuxonhyperv.com
>>>wrote:
>>>> From: Long Li
>>>>
>>>
>>>Subject: Re: [PATCH 0/3] fix interrupt swamp in NVMe
>>>
>>>On 20/08/2019 09:25, Ming Lei wrote:
>>>> On Tue, Aug 20, 2019 at 2:14 PM wrote:
>>>>>
>>>>> From: Long Li
>>>>>
>>>>> This patch set
>>>-Original Message-
>>>From: Pavel Shilovsky
>>>Sent: Thursday, May 9, 2019 11:01 AM
>>>To: Long Li
>>>Cc: Steve French ; linux-cifs >>c...@vger.kernel.org>; samba-technical ;
>>>Kernel Mailing List
>>>Sub
>>>hypervisor."
>>>>
>>>> Implement the optimization in Linux.
>>>>
>>>
>>>Simon, Long,
>>>
>>>did you get a chance to run some tests with this?
I have run some tests on Azure L80s_v2.
With 10 NVMe disks on raid0 and formatted to EXT4, I'm getting 2.6m max IOPS
with the patch, compared to 2.55m IOPS before.
The VM has been running stable. Thank you!
Tested-by: Long Li
>>>
>>>--
>>>Vitaly
From: Long Li
To support compounding, __smb_send_rqst() now sends an array of requests to
the transport layer.
Change smbd_send() to take an array of requests, and send them in as few
packets as possible.
Signed-off-by: Long Li
---
fs/cifs/smbdirect.c | 55
From: Long Li
When packets are waiting for outbound I/O and are interrupted, return the
proper error code to the user process.
Signed-off-by: Long Li
---
fs/cifs/smbdirect.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 7259427
From: Long Li
Now that the upper layer handles transport shutdown and reconnect, remove
the code handling transport shutdown on RDMA disconnect.
Signed-off-by: Long Li
---
fs/cifs/cifs_debug.c | 8 ++--
fs/cifs/smbdirect.c | 120 +++
fs
From: Long Li
Failure to send a packet doesn't mean it's a permanent failure, so it can't be
returned to the user process. This I/O should be retried or failed based on
the server's packet response and transport health; that logic is handled by
the upper layer.
Give this decision to the upper layer.
Signed-off
From: Long Li
Memory registration failure doesn't mean this I/O has failed; it means the
transport is hitting an I/O error or needs to reconnect. This error is not from
the server.
Indicate this error to the upper layer, and let the upper layer decide how to
reconnect and proceed with this I/O.
Signed-off
From: Long Li
When the transport is being destroyed, it's possible that some processes may
hold memory registrations that need to be deregistered.
Deregister them first so nobody is using transport resources, and the
transport can be destroyed.
Signed-off-by: Long Li
---
fs/cifs/connect.c | 36
From: Long Li
When sending a rdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Change in v2: adjust_credits before re-sending
Signed-off-by: Long Li
---
fs/cifs/file.c | 71
From: Long Li
When sending a wdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Change in v2: adjust_credits before re-sending
Signed-off-by: Long Li
---
fs/cifs/file.c | 77
From: Long Li
When sending a wdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Signed-off-by: Long Li
---
fs/cifs/file.c | 61 +-
1 file changed, 31 insertions(+), 30
From: Long Li
When sending a rdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Signed-off-by: Long Li
---
fs/cifs/file.c | 51 +-
1 file changed, 26 insertions(+), 25
From: Long Li
The current code attempts to pin memory using the largest possible wsize
based on the current SMB credits. This doesn't cause a kernel oops, but it is
not optimal as we may pin more pages than actually needed.
Fix this by pinning only what is needed for this write I/O
From: Long Li
When pinning memory failed, we should return the correct error code and
rewind the SMB credits.
Reported-by: Murphy Zhou
Signed-off-by: Long Li
Cc: sta...@vger.kernel.org
Cc: Murphy Zhou
---
fs/cifs/file.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff
From: Long Li
EBUSY is not handled by VFS, and will be passed to user-mode. This is not
correct as we need to wait for more credits.
This patch also fixes a bug where rsize or wsize is used uninitialized when
the call to server->ops->wait_mtu_credits() fails.
Reported-by: Dan Carpenter
> Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read
>
> Wed, 28 Nov 2018 at 15:43, Long Li :
> >
> > > Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read
> > >
> > > Hi Long,
> > >
> > > Please find my c
> Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write
>
> Wed, 28 Nov 2018 at 18:20, Long Li :
> >
> > > Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write
> > >
> > > Wed, 31 Oct 2018 at 15:2
> Subject: Re: [Patch v4 2/3] CIFS: Add support for direct I/O write
>
> Wed, 31 Oct 2018 at 15:26, Long Li :
> >
> > From: Long Li
> >
> > With direct I/O write, user supplied buffers are pinned to the memory
> > and data are transferred directly f
> Subject: Re: [Patch v4 1/3] CIFS: Add support for direct I/O read
>
> Hi Long,
>
> Please find my comments below.
>
>
> Wed, 31 Oct 2018 at 15:14, Long Li :
> >
> > From: Long Li
> >
> > With direct I/O read, we transfer the data directly
> Subject: Re: [Patch v4] genirq/matrix: Choose CPU for managed IRQs based
> on how many of them are allocated
>
> Long,
>
> On Tue, 6 Nov 2018, Long Li wrote:
>
> > From: Long Li
> >
> > On a large system with multiple devices of the same class (
Commit-ID: e8da8794a7fd9eef1ec9a07f0d4897c68581c72b
Gitweb: https://git.kernel.org/tip/e8da8794a7fd9eef1ec9a07f0d4897c68581c72b
Author: Long Li
AuthorDate: Tue, 6 Nov 2018 04:00:00 +
Committer: Thomas Gleixner
CommitDate: Tue, 6 Nov 2018 23:20:13 +0100
genirq/matrix: Improve
From: Long Li
On a large system with multiple devices of the same class (e.g. NVMe disks,
using managed IRQs), the kernel tends to concentrate their IRQs on several
CPUs.
The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned
CPUs tend to be the first several CPUs
Commit-ID: b82592199032bf7c778f861b936287e37ebc9f62
Gitweb: https://git.kernel.org/tip/b82592199032bf7c778f861b936287e37ebc9f62
Author: Long Li
AuthorDate: Fri, 2 Nov 2018 18:02:48 +
Committer: Thomas Gleixner
CommitDate: Mon, 5 Nov 2018 12:16:26 +0100
genirq/affinity: Spread IRQs
> Subject: Re: [PATCH v3] genirq/matrix: Choose CPU for managed IRQs based
> on how many of them are allocated
>
> On Sat, 3 Nov 2018, Thomas Gleixner wrote:
> > On Fri, 2 Nov 2018, Long Li wrote:
> > > /**
> > > * irq_matrix_assign_system - Assign system w
From: Long Li
On systems with a large number of NUMA nodes, there may be more NUMA nodes than
the number of MSI/MSI-X interrupts that the device requests. The current code
always picks NUMA nodes starting from node 0, up to the number of
interrupts requested. This may leave some later
> Subject: RE: [PATCH] genirq/affinity: Spread IRQs to all available NUMA
> nodes
>
> From: Long Li Sent: Thursday, November 1, 2018
> 4:52 PM
> >
> > --- a/kernel/irq/affinity.c
> > +++ b/kernel/irq/affinity.c
> > @@ -117,12 +117,13 @@ static in
From: Long Li
On a large system with multiple devices of the same class (e.g. NVMe disks,
using managed IRQs), the kernel tends to concentrate their IRQs on several
CPUs.
The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned
CPUs tend to be the first several CPUs
From: Long Li
On systems with a large number of NUMA nodes, there may be more NUMA nodes than
the number of MSI/MSI-X interrupts that the device requests. The current code
always picks NUMA nodes starting from node 0, up to the number of
interrupts requested. This may leave some later
> Subject: Re: [Patch v2] genirq/matrix: Choose CPU for assigning interrupts
> based on allocated IRQs
>
> Long,
>
> On Thu, 1 Nov 2018, Long Li wrote:
> > On a large system with multiple devices of the same class (e.g. NVMe
> > disks, using managed IRQs), t
From: Long Li
On a large system with multiple devices of the same class (e.g. NVMe disks,
using managed IRQs), the kernel tends to concentrate their IRQs on several
CPUs.
The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned
CPUs tend to be the first several CPUs
From: Long Li
With direct I/O read, we transfer the data directly from transport layer to
the user data buffer.
Change in v3: add support for kernel AIO
Change in v4:
Refactor common read code to __cifs_readv for direct and non-direct I/O.
Retry on direct I/O failure.
Signed-off-by: Long Li
From: Long Li
With direct I/O write, user supplied buffers are pinned to the memory and data
are transferred directly from user buffers to the transport layer.
Change in v3: add support for kernel AIO
Change in v4:
Refactor common write code to __cifs_writev for direct and non-direct I/O
From: Long Li
With direct read/write functions implemented, add them to file_operations.
Direct I/O is used under two conditions:
1. When mounting with "cache=none", CIFS uses direct I/O for all user file
data transfer.
2. When opening a file with O_DIRECT, CIFS uses direct I/O fo
> Subject: Re: [PATCH] Choose CPU based on allocated IRQs
>
> Long,
>
> On Tue, 23 Oct 2018, Long Li wrote:
>
> thanks for this patch.
>
> A trivial formal thing ahead. The subject line
>
>[PATCH] Choose CPU based on allocated IRQs
>
> is lacking a
From: Long Li
In irq_matrix, "available" is set when IRQs are allocated earlier in the IRQ
assigning process.
Later, when IRQs are activated, those values are not good indicators of which
CPU to choose to assign to this IRQ.
Change to choose CPU for an IRQ based on how many IRQs a
> Subject: Re: [PATCH V3 (resend) 3/7] CIFS: Add support for direct I/O read
>
> Thu, 20 Sep 2018 at 14:22, Long Li :
> >
> > From: Long Li
> >
> > With direct I/O read, we transfer the data directly from transport
> > layer to the user data bu
From: Long Li
With direct I/O read, we transfer the data directly from transport layer to
the user data buffer.
Change in v3: add support for kernel AIO
Signed-off-by: Long Li
---
fs/cifs/cifsfs.h | 1 +
fs/cifs/cifsglob.h | 5 ++
fs/cifs/file.c | 210
From: Long Li
With direct I/O write, user supplied buffers are pinned to the memory and data
are transferred directly from user buffers to the transport layer.
Change in v3: add support for kernel AIO
Signed-off-by: Long Li
---
fs/cifs/cifsfs.h | 1 +
fs/cifs/file.c | 196
From: Long Li
It is not necessary to deregister a memory registration after it has been
successfully invalidated.
Signed-off-by: Long Li
---
fs/cifs/smbdirect.c | 38 +++---
1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/fs/cifs/smbdirect.c b
From: Long Li
With direct read/write functions implemented, add them to file_operations.
Direct I/O is used under two conditions:
1. When mounting with "cache=none", CIFS uses direct I/O for all user file
data transfer.
2. When opening a file with O_DIRECT, CIFS uses direct I/O fo