Narayan Lal b44...@freescale.com
---
drivers/crypto/caam/ctrl.c | 114 ++-
drivers/crypto/caam/intern.h | 9 ++--
drivers/crypto/caam/regs.h | 38 +++
3 files changed, 81 insertions(+), 80 deletions(-)
diff --git a/drivers/crypto/caam/ctrl.c
On 1/7/21 4:33 AM, Vitaly Kuznetsov wrote:
> Sean Christopherson writes:
>
>> On Wed, Jan 06, 2021, Vitaly Kuznetsov wrote:
>>> Looking back, I don't quite understand why we wanted to account ticks
>>> between vmexit and exiting guest context as 'guest' in the first place;
>>> to my
On 10/4/20 7:14 PM, Frederic Weisbecker wrote:
> On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote:
>> On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote:
>>> ---
>>> ---
>>> On Wed, Jul 22,
On 10/1/20 11:49 AM, Frederic Weisbecker wrote:
> On Mon, Sep 28, 2020 at 02:35:25PM -0400, Nitesh Narayan Lal wrote:
>> Nitesh Narayan Lal (4):
>> sched/isolation: API to get number of housekeeping CPUs
>> sched/isolation: Extend nohz_full to isolate managed IRQs
On 9/17/20 2:23 PM, Jesse Brandeburg wrote:
> Nitesh Narayan Lal wrote:
>
>> In a realtime environment, it is essential to isolate unwanted IRQs from
>> isolated CPUs to prevent latency overheads. Creating MSIX vectors only
>> based on the online CPUs could lead to a
On 9/17/20 2:18 PM, Jesse Brandeburg wrote:
> Nitesh Narayan Lal wrote:
>
>> Introduce a new API num_housekeeping_cpus(), that can be used to retrieve
>> the number of housekeeping CPUs by reading an atomic variable
>> __num_housekeeping_cpus. This variable is set
On 9/17/20 4:11 PM, Bjorn Helgaas wrote:
> [+cc Ingo, Peter, Juri, Vincent (scheduler maintainers)]
>
> s/hosekeeping/housekeeping/ (in subject)
>
> On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote:
>> Introduce a new API num_housekeeping_cpus(), that ca
kernel.org/patchwork/patch/1256308/
Nitesh Narayan Lal (3):
sched/isolation: API to get num of hosekeeping CPUs
i40e: limit msix vectors based on housekeeping CPUs
PCI: Limit pci_alloc_irq_vectors as per housekeeping CPUs
drivers/net/ethernet/intel/i40e/i40e_main.c | 3 ++-
include/l
run into failures while moving these IRQs to housekeeping CPUs due to the
per-CPU vector limit.
Signed-off-by: Nitesh Narayan Lal
---
include/linux/pci.h | 16
1 file changed, 16 insertions(+)
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 835530605c0d..750ba927d963 100644
vectors only based on available
housekeeping CPUs by using num_housekeeping_cpus().
Signed-off-by: Nitesh Narayan Lal
---
drivers/net/ethernet/intel/i40e/i40e_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c
b/drivers/net
limit.
If there are no isolated CPUs specified then the API returns the number
of all online CPUs.
Signed-off-by: Nitesh Narayan Lal
---
include/linux/sched/isolation.h | 7 +++
kernel/sched/isolation.c| 23 +++
2 files changed, 30 insertions(+)
diff --git
On 9/10/20 3:22 PM, Marcelo Tosatti wrote:
> On Wed, Sep 09, 2020 at 11:08:18AM -0400, Nitesh Narayan Lal wrote:
>> This patch limits the pci_alloc_irq_vectors max vectors that is passed on
>> by the caller based on the available housekeeping CPUs by only using the
>&g
Hi,
Last year I reported an issue of "suspicious RCU usage" [1] with the debug
kernel which was fixed with the patch:
87fa7f3e98 "x86/kvm: Move context tracking where it belongs"
Recently I have come across a possible regression because of this
patch in the cpuacct.stats system time.
With
not be required
within vcpu_enter_guest anymore.
Conflicts:
arch/x86/kvm/svm.c
Signed-off-by: Nitesh Narayan Lal
---
arch/x86/kvm/svm/svm.c | 9 +
arch/x86/kvm/x86.c | 11 ---
2 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch
On 1/5/21 7:47 PM, Sean Christopherson wrote:
> +tglx
>
> On Tue, Jan 05, 2021, Nitesh Narayan Lal wrote:
>> This reverts commit d7a08882a0a4b4e176691331ee3f492996579534.
>>
>> After the introduction of the patch:
>>
>> 87fa7f3e9: x86/kvm: Move contex
On 10/23/20 4:58 AM, Peter Zijlstra wrote:
> On Thu, Oct 22, 2020 at 01:47:14PM -0400, Nitesh Narayan Lal wrote:
>
>> Hi Peter,
>>
>> So based on the suggestions from you and Thomas, I think something like the
>> following should do the job within
On 10/23/20 9:25 AM, Peter Zijlstra wrote:
> On Mon, Sep 28, 2020 at 02:35:27PM -0400, Nitesh Narayan Lal wrote:
>> Extend nohz_full feature set to include isolation from managed IRQS. This
> So you say it's for managed-irqs, the feature is actually called
> MANAGED_IRQ, but,
On 10/23/20 9:29 AM, Frederic Weisbecker wrote:
> On Fri, Oct 23, 2020 at 03:25:05PM +0200, Peter Zijlstra wrote:
>> On Mon, Sep 28, 2020 at 02:35:27PM -0400, Nitesh Narayan Lal wrote:
>>> Extend nohz_full feature set to include isolation from managed IRQS. This
>> So
On 10/20/20 3:30 AM, Peter Zijlstra wrote:
> On Mon, Oct 19, 2020 at 11:00:05AM -0300, Marcelo Tosatti wrote:
>>> So I think it is important to figure out what that driver really wants
>>> in the nohz_full case. If it wants to retain N interrupts per CPU, and
>>> only reduce the number of CPUs,
On 10/20/20 9:41 AM, Peter Zijlstra wrote:
> On Tue, Oct 20, 2020 at 09:00:01AM -0400, Nitesh Narayan Lal wrote:
>> On 10/20/20 3:30 AM, Peter Zijlstra wrote:
>>> On Mon, Oct 19, 2020 at 11:00:05AM -0300, Marcelo Tosatti wrote:
>>>>> So I think it is importa
On 10/20/20 10:16 AM, Thomas Gleixner wrote:
> On Mon, Sep 28 2020 at 14:35, Nitesh Narayan Lal wrote:
>>
>> +hk_cpus = housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ);
>> +
>> +/*
>> + * If we have isolated CPUs for use by real-time tasks, to kee
On 10/21/20 4:25 PM, Thomas Gleixner wrote:
> On Tue, Oct 20 2020 at 20:07, Thomas Gleixner wrote:
>> On Tue, Oct 20 2020 at 12:18, Nitesh Narayan Lal wrote:
>>> However, IMHO we would still need a logic to prevent the devices from
>>> creating excess vectors.
>>
On 10/20/20 10:39 AM, Nitesh Narayan Lal wrote:
> On 10/20/20 9:41 AM, Peter Zijlstra wrote:
>> On Tue, Oct 20, 2020 at 09:00:01AM -0400, Nitesh Narayan Lal wrote:
>>> On 10/20/20 3:30 AM, Peter Zijlstra wrote:
>>>> On Mon, Oct 19, 2020 at 11:00:05AM -0300, Marc
On 10/23/20 5:00 PM, Thomas Gleixner wrote:
> On Fri, Oct 23 2020 at 09:10, Nitesh Narayan Lal wrote:
>> On 10/23/20 4:58 AM, Peter Zijlstra wrote:
>>> On Thu, Oct 22, 2020 at 01:47:14PM -0400, Nitesh Narayan Lal wrote:
>>> So shouldn't we then fix the drivers /
On 10/26/20 5:50 PM, Thomas Gleixner wrote:
> On Mon, Oct 26 2020 at 14:11, Jacob Keller wrote:
>> On 10/26/2020 1:11 PM, Thomas Gleixner wrote:
>>> On Mon, Oct 26 2020 at 12:21, Jacob Keller wrote:
Are there drivers which use more than one interrupt per queue? I know
drivers have
On 8/12/19 4:04 PM, Nitesh Narayan Lal wrote:
> On 8/12/19 2:47 PM, Alexander Duyck wrote:
>> On Mon, Aug 12, 2019 at 6:13 AM Nitesh Narayan Lal wrote:
>>> This patch introduces the core infrastructure for free page reporting in
>>> virtual environments. It enables t
On 7/12/19 12:22 PM, Alexander Duyck wrote:
> On Thu, Jul 11, 2019 at 6:13 PM Nitesh Narayan Lal wrote:
>>
>> On 7/11/19 7:20 PM, Alexander Duyck wrote:
>>> On Thu, Jul 11, 2019 at 10:58 AM Nitesh Narayan Lal
>>> wrote:
>>>> On 7/10/19 5:56 PM
On 8/7/19 6:42 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> In order to pave the way for free page reporting in virtualized
> environments we will need a way to get pages out of the free lists and
> identify those pages after they have been returned. To accomplish this,
> this patch
On 8/14/19 12:11 PM, Alexander Duyck wrote:
> On Wed, Aug 14, 2019 at 8:49 AM Nitesh Narayan Lal wrote:
>>
>> On 8/12/19 2:47 PM, Alexander Duyck wrote:
>>> On Mon, Aug 12, 2019 at 6:13 AM Nitesh Narayan Lal
>>> wrote:
>>>> This patch introduces th
On 8/15/19 9:15 AM, Nitesh Narayan Lal wrote:
> On 8/14/19 12:11 PM, Alexander Duyck wrote:
>> On Wed, Aug 14, 2019 at 8:49 AM Nitesh Narayan Lal wrote:
>>> On 8/12/19 2:47 PM, Alexander Duyck wrote:
>>>> On Mon, Aug 12, 2019 at 6:13 AM Nitesh Narayan Lal
On 8/15/19 7:00 PM, Alexander Duyck wrote:
> On Thu, Aug 15, 2019 at 12:23 PM Nitesh Narayan Lal wrote:
[...]
>>>>>>> +}
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * __page_reporting_enqueue - tracks the freed page in
On 5/30/19 5:53 PM, Alexander Duyck wrote:
> This series provides an asynchronous means of hinting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as
On 7/24/19 12:54 PM, Alexander Duyck wrote:
> This series provides an asynchronous means of hinting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as
On 7/24/19 3:02 PM, Michael S. Tsirkin wrote:
> On Wed, Jul 24, 2019 at 10:05:14AM -0700, Alexander Duyck wrote:
>> From: Alexander Duyck
>>
>> Add support for the page hinting feature provided by virtio-balloon.
>> Hinting differs from the regular balloon functionality in that it is
>> much
On 9/24/20 8:09 AM, Frederic Weisbecker wrote:
> On Thu, Sep 24, 2020 at 10:40:29AM +0200, pet...@infradead.org wrote:
>> On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
>>> Introduce a new API hk_num_online_cpus(), that can be used to
>>> re
On 9/24/20 8:11 AM, Frederic Weisbecker wrote:
> On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
>> Introduce a new API hk_num_online_cpus(), that can be used to
>> retrieve the number of online housekeeping CPUs that are meant to handle
>> managed IRQ
On 9/24/20 8:46 AM, Peter Zijlstra wrote:
>
> FWIW, cross-posting to moderated lists is annoying. I don't know why we
> allow them in MAINTAINERS :-(
Yeah, it sends out an acknowledgment for every email.
I had to include it because sending the patches to it apparently allows them
to get tested
On 9/24/20 4:45 PM, Bjorn Helgaas wrote:
> Possible subject:
>
> PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
Will switch to this.
>
> On Wed, Sep 23, 2020 at 02:11:26PM -0400, Nitesh Narayan Lal wrote:
>> This patch limits the pci_alloc_irq_vecto
On 9/24/20 4:47 PM, Bjorn Helgaas wrote:
> On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
>> Introduce a new API hk_num_online_cpus(), that can be used to
>> retrieve the number of online housekeeping CPUs that are meant to handle
>> managed IRQ
On 9/24/20 6:59 PM, Bjorn Helgaas wrote:
> On Thu, Sep 24, 2020 at 05:39:07PM -0400, Nitesh Narayan Lal wrote:
>> On 9/24/20 4:45 PM, Bjorn Helgaas wrote:
>>> Possible subject:
>>>
>>> PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
>> Will s
in
such environments.
This patch prevents i40e from creating vectors based only on online CPUs by
retrieving the online housekeeping CPUs that are designated to perform
managed IRQ jobs.
Signed-off-by: Nitesh Narayan Lal
Reviewed-by: Marcelo Tosatti
Acked-by: Jesse Brandeburg
---
drivers/net
Extend the nohz_full feature set to include isolation from managed IRQs. This
is required specifically for setups that use only nohz_full and still
require isolation to maintain lower latency for the listed CPUs.
Suggested-by: Frederic Weisbecker
Signed-off-by: Nitesh Narayan Lal
---
kernel
() to determine the number of
MSIX vectors to create. In real-time environments, to minimize interruptions
to isolated CPUs, all device-specific IRQ vectors are often moved to the
housekeeping CPUs; having excess vectors could cause the housekeeping CPUs
to run out of IRQ vectors.
Signed-off-by: Nitesh Narayan Lal
0200909150818.313699-1-nit...@redhat.com/
Nitesh Narayan Lal (4):
sched/isolation: API to get number of housekeeping CPUs
sched/isolation: Extend nohz_full to isolate managed IRQs
i40e: Limit msix vectors to housekeeping CPUs
PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
driver
isolated CPUs, limit the number of vectors allocated by
pci_alloc_irq_vectors() to the minimum number required by the driver, or
to one per housekeeping CPU if that is larger.
Signed-off-by: Nitesh Narayan Lal
---
include/linux/pci.h | 17 +
1 file changed, 17 insertions(+)
diff
On 9/25/20 4:23 PM, Bjorn Helgaas wrote:
> On Fri, Sep 25, 2020 at 02:26:54PM -0400, Nitesh Narayan Lal wrote:
>> If we have isolated CPUs dedicated for use by real-time tasks, we try to
>> move IRQs to housekeeping CPUs from the userspace to reduce latency
>> overhead
On 9/25/20 5:38 PM, Nitesh Narayan Lal wrote:
> On 9/25/20 4:23 PM, Bjorn Helgaas wrote:
[...]
>>> + /*
>>> +* If we have isolated CPUs for use by real-time tasks, to keep the
>>> +* latency overhead to a minimum, device-specific IRQ vectors are moved
On 9/21/20 6:58 PM, Frederic Weisbecker wrote:
> On Thu, Sep 17, 2020 at 11:23:59AM -0700, Jesse Brandeburg wrote:
>> Nitesh Narayan Lal wrote:
>>
>>> In a realtime environment, it is essential to isolate unwanted IRQs from
>>> isolated CPUs to prevent latency
On 9/21/20 7:40 PM, Frederic Weisbecker wrote:
> On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote:
>> +/*
>> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs.
>> + *
>> + * This function returns the number of available hou
On 9/22/20 5:54 AM, Frederic Weisbecker wrote:
> On Mon, Sep 21, 2020 at 11:08:20PM -0400, Nitesh Narayan Lal wrote:
>> On 9/21/20 6:58 PM, Frederic Weisbecker wrote:
>>> On Thu, Sep 17, 2020 at 11:23:59AM -0700, Jesse Brandeburg wrote:
>>>> Nitesh Narayan Lal w
On 9/22/20 6:08 AM, Frederic Weisbecker wrote:
> On Mon, Sep 21, 2020 at 11:16:51PM -0400, Nitesh Narayan Lal wrote:
>> On 9/21/20 7:40 PM, Frederic Weisbecker wrote:
>>> On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote:
>>>> +/*
>>&
On 9/10/20 3:31 PM, Nitesh Narayan Lal wrote:
> On 9/10/20 3:22 PM, Marcelo Tosatti wrote:
>> On Wed, Sep 09, 2020 at 11:08:18AM -0400, Nitesh Narayan Lal wrote:
>>> This patch limits the pci_alloc_irq_vectors max vectors that is passed on
>>> by the caller based on
On 9/22/20 4:44 PM, Frederic Weisbecker wrote:
> On Tue, Sep 22, 2020 at 09:34:02AM -0400, Nitesh Narayan Lal wrote:
>> On 9/22/20 5:54 AM, Frederic Weisbecker wrote:
>>> But I don't also want to push toward a complicated solution to handle CPU
>>> hotplug
>&g
On 9/22/20 4:58 PM, Frederic Weisbecker wrote:
> On Tue, Sep 22, 2020 at 09:50:55AM -0400, Nitesh Narayan Lal wrote:
>> On 9/22/20 6:08 AM, Frederic Weisbecker wrote:
>> TBH I don't have a very strong case here at the moment.
>> But still, IMHO, this will force the user to h
On 9/22/20 5:26 PM, Andrew Lunn wrote:
>> Subject: Re: [RFC][Patch v1 1/3] sched/isolation: API to get num of
>> hosekeeping CPUs
> Hosekeeping? Are these CPUs out gardening in the weeds?
Bjorn has already highlighted the typo, so I will be fixing it in the next
version.
Do you find the commit
only based on online CPUs by
using hk_num_online_cpus() instead.
Signed-off-by: Nitesh Narayan Lal
Reviewed-by: Marcelo Tosatti
Acked-by: Jesse Brandeburg
---
drivers/net/ethernet/intel/i40e/i40e_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet
[1] https://lore.kernel.org/lkml/20200922095440.GA5217@lenoir/
[2] https://lore.kernel.org/lkml/20200909150818.313699-1-nit...@redhat.com/
Nitesh Narayan Lal (4):
sched/isolation: API to get housekeeping online CPUs
sched/isolation: Extend nohz_full to isolate managed IRQs
i40e: limit msix vec
HK_FLAG_MANAGED_IRQ to derive cpumask for pinning various jobs/IRQs do not
enqueue anything on the CPUs listed under nohz_full.
Suggested-by: Frederic Weisbecker
Signed-off-by: Nitesh Narayan Lal
---
kernel/sched/isolation.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched
.
In an RT environment with many isolated but few housekeeping CPUs, this
was leading to a situation where an attempt to move all of the vectors
corresponding to isolated CPUs to housekeeping CPUs was failing due to
the per-CPU vector limit.
Signed-off-by: Nitesh Narayan Lal
---
include/linux/sched
isolated CPUs to
keep the latency overhead to a minimum. If the number of housekeeping CPUs
is significantly lower than that of the isolated CPUs, we can run into
failures while moving these IRQs to housekeeping CPUs due to the per-CPU
vector limit.
Signed-off-by: Nitesh Narayan Lal
---
include/linux
in
such environments.
This patch prevents i40e from creating vectors based only on online CPUs by
retrieving the online housekeeping CPUs that are designated to perform
managed IRQ jobs.
Signed-off-by: Nitesh Narayan Lal
Reviewed-by: Marcelo Tosatti
Acked-by: Jesse Brandeburg
---
drivers/net
edhat.com/
[4] https://lore.kernel.org/lkml/20200909150818.313699-1-nit...@redhat.com/
Nitesh Narayan Lal (4):
sched/isolation: API to get number of housekeeping CPUs
sched/isolation: Extend nohz_full to isolate managed IRQs
i40e: Limit msix vectors to housekeeping CPUs
PCI: Limit pci_alloc_irq_vecto
Extend the nohz_full feature set to include isolation from managed IRQs. This
is required specifically for setups that use only nohz_full and still
require isolation to maintain lower latency for the listed CPUs.
Suggested-by: Frederic Weisbecker
Signed-off-by: Nitesh Narayan Lal
---
kernel
isolated CPUs, limit the number of vectors allocated by
pci_alloc_irq_vectors() to the minimum number required by the driver, or
to one per housekeeping CPU if that is larger.
Signed-off-by: Nitesh Narayan Lal
---
drivers/pci/msi.c | 18 ++
1 file changed, 18 insertions(+)
diff
() to determine the number of
MSIX vectors to create. In real-time environments, to minimize interruptions
to isolated CPUs, all device-specific IRQ vectors are often moved to the
housekeeping CPUs; having excess vectors could cause the housekeeping CPUs
to run out of IRQ vectors.
Signed-off-by: Nitesh Narayan Lal
On 1/27/21 8:09 AM, Marcelo Tosatti wrote:
> On Wed, Jan 27, 2021 at 12:36:30PM +, Robin Murphy wrote:
>> On 2021-01-27 12:19, Marcelo Tosatti wrote:
>>> On Wed, Jan 27, 2021 at 11:57:16AM +, Robin Murphy wrote:
>>>> Hi,
>>>>
>>>> On
On 1/28/21 11:59 AM, Marcelo Tosatti wrote:
> On Thu, Jan 28, 2021 at 05:02:41PM +0100, Thomas Gleixner wrote:
>> On Wed, Jan 27 2021 at 09:19, Marcelo Tosatti wrote:
>>> On Wed, Jan 27, 2021 at 11:57:16AM +, Robin Murphy wrote:
> + hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>
On 9/11/19 8:42 AM, David Hildenbrand wrote:
> On 11.09.19 14:25, Michal Hocko wrote:
>> On Wed 11-09-19 14:19:41, Michal Hocko wrote:
>>> On Wed 11-09-19 08:08:38, Michael S. Tsirkin wrote:
On Wed, Sep 11, 2019 at 01:36:19PM +0200, Michal Hocko wrote:
> On Tue 10-09-19 14:23:40,
On 9/11/19 8:54 AM, Michal Hocko wrote:
> On Wed 11-09-19 14:42:41, David Hildenbrand wrote:
>> On 11.09.19 14:25, Michal Hocko wrote:
>>> On Wed 11-09-19 14:19:41, Michal Hocko wrote:
On Wed 11-09-19 08:08:38, Michael S. Tsirkin wrote:
> On Wed, Sep 11, 2019 at 01:36:19PM +0200, Michal
On 9/11/19 9:20 AM, Michal Hocko wrote:
> On Wed 11-09-19 15:03:39, David Hildenbrand wrote:
>> On 11.09.19 14:54, Michal Hocko wrote:
>>> On Wed 11-09-19 14:42:41, David Hildenbrand wrote:
On 11.09.19 14:25, Michal Hocko wrote:
> On Wed 11-09-19 14:19:41, Michal Hocko wrote:
>> On
s required for page hinting are minimal.
[1] https://lkml.org/lkml/2019/6/19/926
Nitesh Narayan Lal (2):
mm: page_hinting: core infrastructure
virtio-balloon: page_hinting: reporting to the host
drivers/virtio/Kconfig | 1 +
drivers/virtio/virtio_balloon.c | 91 +-
i
Signed-off-by: Nitesh Narayan Lal
---
include/linux/page_hinting.h | 45 +++
mm/Kconfig | 6 +
mm/Makefile | 1 +
mm/page_alloc.c | 18 +--
mm/page_hinting.c| 250 +++
5 files changed, 312 insert
ing the page_hinting_flag which is a virtio-balloon parameter.
Signed-off-by: Nitesh Narayan Lal
---
drivers/virtio/Kconfig | 1 +
drivers/virtio/virtio_balloon.c | 91 -
include/uapi/linux/virtio_balloon.h | 11
3 files changed, 102 insertions(+), 1 delet
Enables QEMU to perform madvise free on the memory range reported
by the vm.
Signed-off-by: Nitesh Narayan Lal
---
hw/virtio/trace-events| 1 +
hw/virtio/virtio-balloon.c| 59 +++
include/hw/virtio/virtio-balloon.h| 2
On 7/11/19 4:49 AM, Cornelia Huck wrote:
> On Wed, 10 Jul 2019 15:53:03 -0400
> Nitesh Narayan Lal wrote:
>
>
> $SUBJECT: s/baloon/balloon/
>
>> Enables QEMU to perform madvise free on the memory range reported
>> by the vm.
> [No comments on the actual functio
On 7/10/19 7:40 PM, Alexander Duyck wrote:
> On Wed, Jul 10, 2019 at 12:52 PM Nitesh Narayan Lal wrote:
>
> The results up here were redundant with what is below so I am just
> dropping them. I would suggest only including one set of results in
> any future cover page as
On 7/10/19 4:19 PM, Dave Hansen wrote:
> On 7/10/19 12:51 PM, Nitesh Narayan Lal wrote:
>> This patch series proposes an efficient mechanism for reporting free memory
>> from a guest to its hypervisor. It especially enables guests with no page
>> cache
>>
On 7/10/19 4:45 PM, Dave Hansen wrote:
> On 7/10/19 12:51 PM, Nitesh Narayan Lal wrote:
>> +struct zone_free_area {
>> +unsigned long *bitmap;
>> +unsigned long base_pfn;
>> +unsigned long end_pfn;
>> +atomic_t free_pages;
>> +unsigned l
On 7/10/19 4:17 PM, Alexander Duyck wrote:
> On Wed, Jul 10, 2019 at 12:53 PM Nitesh Narayan Lal wrote:
>> Enables QEMU to perform madvise free on the memory range reported
>> by the vm.
>>
>> Signed-off-by: Nitesh Narayan Lal
>> ---
>> hw/virtio/t
On 7/11/19 10:58 AM, Alexander Duyck wrote:
> On Thu, Jul 11, 2019 at 4:31 AM Nitesh Narayan Lal wrote:
>>
>> On 7/10/19 7:40 PM, Alexander Duyck wrote:
>>> On Wed, Jul 10, 2019 at 12:52 PM Nitesh Narayan Lal
>>> wrote:
>>>
>>> The result
On 7/11/19 11:08 AM, Alexander Duyck wrote:
> On Thu, Jul 11, 2019 at 8:04 AM Nitesh Narayan Lal wrote:
>>
>> On 7/11/19 10:58 AM, Alexander Duyck wrote:
>>> On Thu, Jul 11, 2019 at 4:31 AM Nitesh Narayan Lal
>>> wrote:
>>>> On 7/10/19 7:40 PM
On 7/10/19 4:45 PM, Dave Hansen wrote:
> On 7/10/19 12:51 PM, Nitesh Narayan Lal wrote:
>> +struct zone_free_area {
>> +unsigned long *bitmap;
>> +unsigned long base_pfn;
>> +unsigned long end_pfn;
>> +atomic_t free_pages;
>> +unsigned l
On 7/11/19 11:25 AM, Nitesh Narayan Lal wrote:
> On 7/10/19 4:45 PM, Dave Hansen wrote:
>> On 7/10/19 12:51 PM, Nitesh Narayan Lal wrote:
>>> +struct zone_free_area {
>>> + unsigned long *bitmap;
>>> + unsigned long base_pfn;
>>> + unsig
On 7/11/19 12:22 PM, Dave Hansen wrote:
> On 7/11/19 8:25 AM, Nitesh Narayan Lal wrote:
>> On 7/10/19 4:45 PM, Dave Hansen wrote:
>>> On 7/10/19 12:51 PM, Nitesh Narayan Lal wrote:
>>>> +struct zone_free_area {
>>>> + unsigned long *bitmap;
>>>
On 7/11/19 12:45 PM, Dave Hansen wrote:
> On 7/11/19 9:36 AM, Nitesh Narayan Lal wrote:
>>>>>> +struct zone_free_area {
>>>>>> +unsigned long *bitmap;
>>>>>> +unsigned long base_pfn;
>>>>&
On 7/10/19 5:56 PM, Alexander Duyck wrote:
> On Wed, Jul 10, 2019 at 12:52 PM Nitesh Narayan Lal wrote:
>> This patch introduces the core infrastructure for free page hinting in
>> virtual environments. It enables the kernel to track the free pages which
>> can be reported
On 7/11/19 2:55 PM, Michael S. Tsirkin wrote:
> On Wed, Jul 10, 2019 at 03:53:03PM -0400, Nitesh Narayan Lal wrote:
>> Enables QEMU to perform madvise free on the memory range reported
>> by the vm.
>>
>> Signed-off-by: Nitesh Narayan Lal
> Missing second "l
On 7/11/19 7:20 PM, Alexander Duyck wrote:
> On Thu, Jul 11, 2019 at 10:58 AM Nitesh Narayan Lal wrote:
>>
>> On 7/10/19 5:56 PM, Alexander Duyck wrote:
>>> On Wed, Jul 10, 2019 at 12:52 PM Nitesh Narayan Lal
>>> wrote:
>>>> This patch introduces
On 7/12/19 12:22 PM, Alexander Duyck wrote:
> On Thu, Jul 11, 2019 at 6:13 PM Nitesh Narayan Lal wrote:
>>
>> On 7/11/19 7:20 PM, Alexander Duyck wrote:
>>> On Thu, Jul 11, 2019 at 10:58 AM Nitesh Narayan Lal
>>> wrote:
>>>> On 7/10/19 5:56 PM
On 7/24/19 3:47 PM, Michael S. Tsirkin wrote:
> On Wed, Jul 10, 2019 at 03:51:58PM -0400, Nitesh Narayan Lal wrote:
>> Enables the kernel to negotiate VIRTIO_BALLOON_F_HINTING feature with the
>> host. If it is available and page_hinting_flag is set to true, page_hinting
>&g
On 7/24/19 3:56 PM, David Hildenbrand wrote:
> On 24.07.19 21:47, Michael S. Tsirkin wrote:
>> On Wed, Jul 10, 2019 at 03:51:58PM -0400, Nitesh Narayan Lal wrote:
>>> Enables the kernel to negotiate VIRTIO_BALLOON_F_HINTING feature with the
>>> host. If it is avai
On 7/24/19 3:47 PM, David Hildenbrand wrote:
> On 24.07.19 21:31, Michael S. Tsirkin wrote:
>> On Wed, Jul 24, 2019 at 08:41:33PM +0200, David Hildenbrand wrote:
>>> On 24.07.19 20:40, Nitesh Narayan Lal wrote:
>>>> On 7/24/19 12:54 PM, Alexander Duyck wro
On 7/24/19 4:18 PM, Alexander Duyck wrote:
> On Wed, 2019-07-24 at 15:02 -0400, Michael S. Tsirkin wrote:
>> On Wed, Jul 24, 2019 at 10:12:10AM -0700, Alexander Duyck wrote:
>>> From: Alexander Duyck
>>>
>>> Add support for what I am referring to as "bubble hinting". Basically the
>>> idea is
On 7/24/19 4:27 PM, Alexander Duyck wrote:
> On Wed, 2019-07-24 at 14:40 -0400, Nitesh Narayan Lal wrote:
>> On 7/24/19 12:54 PM, Alexander Duyck wrote:
>>> This series provides an asynchronous means of hinting to a hypervisor
>>> that a guest page is no longer i
On 7/24/19 6:03 PM, Alexander Duyck wrote:
> On Wed, 2019-07-24 at 17:38 -0400, Michael S. Tsirkin wrote:
>> On Wed, Jul 24, 2019 at 10:12:10AM -0700, Alexander Duyck wrote:
>>> From: Alexander Duyck
>>>
>>> Add support for what I am referring to as "bubble hinting". Basically the
>>> idea is
On 7/25/19 4:53 AM, David Hildenbrand wrote:
> On 24.07.19 19:03, Alexander Duyck wrote:
>> From: Alexander Duyck
>>
>> In order to pave the way for free page hinting in virtualized environments
>> we will need a way to get pages out of the free lists and identify those
>> pages after they have
On 7/24/19 4:18 PM, Alexander Duyck wrote:
> On Wed, 2019-07-24 at 15:02 -0400, Michael S. Tsirkin wrote:
>> On Wed, Jul 24, 2019 at 10:12:10AM -0700, Alexander Duyck wrote:
>>> From: Alexander Duyck
>>>
>>> Add support for what I am referring to as "bubble hinting". Basically the
>>> idea is
On 7/24/19 5:00 PM, Alexander Duyck wrote:
> On Wed, 2019-07-24 at 16:38 -0400, Nitesh Narayan Lal wrote:
>> On 7/24/19 4:27 PM, Alexander Duyck wrote:
>>> On Wed, 2019-07-24 at 14:40 -0400, Nitesh Narayan Lal wrote:
>>>> On 7/24/19 12:54 PM, Alexander Duyck wro
On 7/24/19 3:02 PM, Michael S. Tsirkin wrote:
> On Wed, Jul 24, 2019 at 10:05:14AM -0700, Alexander Duyck wrote:
>> From: Alexander Duyck
>>
>> Add support for the page hinting feature provided by virtio-balloon.
>> Hinting differs from the regular balloon functionality in that it is
>> much
On 7/24/19 1:05 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> Add support for the page hinting feature provided by virtio-balloon.
> Hinting differs from the regular balloon functionality in that it is
> much less durable than a standard memory balloon. Instead of creating a
> list of