[PATCH v6] hrtimer: avoid retrigger_next_event IPI

2021-04-19 Thread Marcelo Tosatti
it has active timers in the CLOCK_REALTIME and CLOCK_TAI bases. If that's not the case, update the realtime and TAI base offsets remotely and skip the IPI. This ensures that any subsequently armed timers on CLOCK_REALTIME and CLOCK_TAI are evaluated with the correct offsets. Signed-off-by: Marcelo
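The decision logic described in this patch series can be sketched as a small userspace model (all names here are illustrative, not the actual kernel symbols):

```c
#include <stdint.h>

#define BASE_MONOTONIC (1u << 0)
#define BASE_REALTIME  (1u << 1)
#define BASE_BOOTTIME  (1u << 2)
#define BASE_TAI       (1u << 3)

/* Bases whose offset-to-monotonic changes when the realtime clock is set. */
#define AFFECTED_MASK  (BASE_REALTIME | BASE_TAI)

struct cpu_timer_state {
    unsigned int active_bases; /* clock bases with armed timers on this CPU */
    int64_t realtime_offset;   /* offset used when (re)arming REALTIME timers */
    int64_t tai_offset;
};

/* Returns 1 if the CPU must receive a retrigger IPI, 0 if the new
 * offsets can simply be stored remotely. */
int clock_was_set_needs_ipi(struct cpu_timer_state *cpu,
                            int64_t new_realtime_off, int64_t new_tai_off)
{
    if (cpu->active_bases & AFFECTED_MASK)
        return 1; /* armed REALTIME/TAI timers must be re-evaluated locally */
    /* No affected timers armed: update the offsets remotely and skip the
     * IPI, so timers armed later still see the correct offsets. */
    cpu->realtime_offset = new_realtime_off;
    cpu->tai_offset = new_tai_off;
    return 0;
}
```

A CPU that only has MONOTONIC or BOOTTIME timers armed is left undisturbed; this is what reduces interruptions to latency-sensitive (e.g. nohz_full) CPUs when the realtime clock is set.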

Re: [PATCH v5] hrtimer: avoid retrigger_next_event IPI

2021-04-19 Thread Marcelo Tosatti
On Sat, Apr 17, 2021 at 06:51:08PM +0200, Thomas Gleixner wrote: > On Sat, Apr 17 2021 at 18:24, Thomas Gleixner wrote: > > On Fri, Apr 16 2021 at 13:13, Peter Xu wrote: > >> On Fri, Apr 16, 2021 at 01:00:23PM -0300, Marcelo Tosatti wrote: > >>> >

[PATCH v5] hrtimer: avoid retrigger_next_event IPI

2021-04-16 Thread Marcelo Tosatti
it has active timers in the CLOCK_REALTIME and CLOCK_TAI bases. If that's not the case, update the realtime and TAI base offsets remotely and skip the IPI. This ensures that any subsequently armed timers on CLOCK_REALTIME and CLOCK_TAI are evaluated with the correct offsets. Signed-off-by: Marcelo

[PATCH v4] hrtimer: avoid retrigger_next_event IPI

2021-04-15 Thread Marcelo Tosatti
active timers in the CLOCK_REALTIME and CLOCK_TAI bases. If that's not the case, update the realtime and TAI base offsets remotely and skip the IPI. This ensures that any subsequently armed timers on CLOCK_REALTIME and CLOCK_TAI are evaluated with the correct offsets. Signed-off-by: Marcelo Tosatti

[PATCH v3] hrtimer: avoid retrigger_next_event IPI

2021-04-15 Thread Marcelo Tosatti
it has active timers in the CLOCK_REALTIME and CLOCK_TAI bases. If that's not the case, update the realtime and TAI base offsets remotely and skip the IPI. This ensures that any subsequently armed timers on CLOCK_REALTIME and CLOCK_TAI are evaluated with the correct offsets. Signed-off-by: Marcelo

[PATCH v2] hrtimer: avoid retrigger_next_event IPI

2021-04-13 Thread Marcelo Tosatti
offsets remotely, skipping the IPI. This reduces interruptions to latency sensitive applications. Signed-off-by: Marcelo Tosatti --- v2: - Only REALTIME and TAI bases are affected by offset-to-monotonic changes (Thomas). - Don't special case nohz_full CPUs (Thomas). diff --git a/kernel

Re: [PATCH] hrtimer: avoid retrigger_next_event IPI

2021-04-09 Thread Marcelo Tosatti
+CC Anna-Maria. On Fri, Apr 09, 2021 at 04:15:13PM +0200, Thomas Gleixner wrote: > On Wed, Apr 07 2021 at 10:53, Marcelo Tosatti wrote: > > Setting the realtime clock triggers an IPI to all CPUs to reprogram > > hrtimers. > > > > However, only base, boottime and ta

Re: [PATCH] hrtimer: avoid retrigger_next_event IPI

2021-04-08 Thread Marcelo Tosatti
On Thu, Apr 08, 2021 at 12:14:57AM +0200, Frederic Weisbecker wrote: > On Wed, Apr 07, 2021 at 10:53:01AM -0300, Marcelo Tosatti wrote: > > > > Setting the realtime clock triggers an IPI to all CPUs to reprogram > > hrtimers. > > > > However, only base, boottime

Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections

2021-04-08 Thread Marcelo Tosatti
Hi Paolo, On Thu, Apr 08, 2021 at 10:15:16AM +0200, Paolo Bonzini wrote: > On 07/04/21 19:40, Marcelo Tosatti wrote: > > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > > > index fe806e894212..0a83eff40b43 100644 > > > --- a/arch/x86/kvm/x86.c

Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections

2021-04-07 Thread Marcelo Tosatti
> > Cc: David Woodhouse > Cc: Marcelo Tosatti > Signed-off-by: Paolo Bonzini > --- > arch/x86/kvm/x86.c | 10 -- > 1 file changed, 4 insertions(+), 6 deletions(-) > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index fe806e894212..0a83eff40b43 1

[PATCH] hrtimer: avoid retrigger_next_event IPI

2021-04-07 Thread Marcelo Tosatti
update the realtime base offsets, skipping the IPI. This reduces interruptions to nohz_full CPUs. Signed-off-by: Marcelo Tosatti diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c index 743c852e10f2..b42b1a434b22 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -853,6

Re: [patch 2/3] nohz: change signal tick dependency to wakeup CPUs of member tasks

2021-02-12 Thread Marcelo Tosatti
On Fri, Feb 12, 2021 at 01:25:21PM +0100, Frederic Weisbecker wrote: > On Thu, Jan 28, 2021 at 05:21:36PM -0300, Marcelo Tosatti wrote: > > Rather than waking up all nohz_full CPUs on the system, only wakeup > > the target CPUs of member threads of the signal. > > >

Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-02-04 Thread Marcelo Tosatti
On Thu, Feb 04, 2021 at 01:47:38PM -0500, Nitesh Narayan Lal wrote: > > On 2/4/21 1:15 PM, Marcelo Tosatti wrote: > > On Thu, Jan 28, 2021 at 09:01:37PM +0100, Thomas Gleixner wrote: > >> On Thu, Jan 28 2021 at 13:59, Marcelo Tosatti wrote: > >>>> The whole pi

Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-02-04 Thread Marcelo Tosatti
On Thu, Jan 28, 2021 at 09:01:37PM +0100, Thomas Gleixner wrote: > On Thu, Jan 28 2021 at 13:59, Marcelo Tosatti wrote: > >> The whole pile wants to be reverted. It's simply broken in several ways. > > > > I was asking for your comments on interaction with CPU hotplug :

Re: [EXT] Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-02-01 Thread Marcelo Tosatti
On Fri, Jan 29, 2021 at 07:41:27AM -0800, Alex Belits wrote: > On 1/28/21 07:56, Thomas Gleixner wrote: > > External Email > > > > -- > > On Wed, Jan 27 2021 at 10:09, Marcelo Tosatti wrote: > >

[patch 3/3] nohz: tick_nohz_kick_task: only IPI if remote task is running

2021-01-28 Thread Marcelo Tosatti
If the task is not running, run_posix_cpu_timers has nothing to elapse, so spare the IPI in that case. Suggested-by: Peter Zijlstra Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/sched/core.c === --- linux-2.6.orig/kernel
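The condition in this patch can be modeled in a few lines of userspace C (names are illustrative): posix CPU timers only advance while the task runs, so a task that is not currently on a CPU will pick up the new tick dependency when it is next scheduled in, and no IPI is needed.

```c
struct toy_task {
    int on_cpu;             /* 1 if currently executing on some CPU */
    unsigned long tick_dep; /* pending tick dependency bits */
};

/* Returns 1 if an IPI to the task's CPU is required, 0 otherwise. */
int tick_nohz_kick_task_needed(struct toy_task *t, unsigned long dep_bit)
{
    t->tick_dep |= dep_bit; /* the dependency is recorded either way */
    return t->on_cpu;       /* sleeping task: re-read at schedule-in */
}
```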

[patch 2/3] nohz: change signal tick dependency to wakeup CPUs of member tasks

2021-01-28 Thread Marcelo Tosatti
Rather than waking up all nohz_full CPUs on the system, only wakeup the target CPUs of member threads of the signal. Reduces interruptions to nohz_full CPUs. Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c

[patch 1/3] nohz: only wakeup a single target cpu when kicking a task

2021-01-28 Thread Marcelo Tosatti
cpu and task->tick_dep_mask. From: Frederic Weisbecker Suggested-by: Peter Zijlstra Signed-off-by: Frederic Weisbecker Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c === --- linux-2.6.orig/kernel/tim

[patch 0/3] nohz_full: only wakeup target CPUs when notifying new tick dependency (v5)

2021-01-28 Thread Marcelo Tosatti
When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be performed (to re-read the dependencies and possibly not re-enter nohz_full on a given CPU). A common case is for applications that run on nohz_full= CPUs to not use POSIX timers (eg DPDK). This patch changes the notification to
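The change this series makes can be sketched as a toy model (illustrative names, up to 64 CPUs): instead of kicking every nohz_full CPU when a process-wide (signal) tick dependency is set, only the CPUs where member threads currently run are collected and kicked.

```c
#include <stdint.h>

typedef uint64_t cpumask_t; /* one bit per CPU */

struct task { int cpu; };

cpumask_t kick_mask_for_signal(const struct task *threads, int nr_threads)
{
    cpumask_t mask = 0;
    for (int i = 0; i < nr_threads; i++)
        mask |= 1ull << threads[i].cpu; /* CPU of each member thread */
    return mask; /* IPI only these CPUs, not the whole nohz_full set */
}
```

An application like DPDK that never uses POSIX timers on its nohz_full CPUs then sees no IPI at all when some unrelated process arms a per-CPU posix timer.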

[patch 0/3] nohz_full: only wakeup target CPUs when notifying new tick dependency (v4)

2021-01-28 Thread Marcelo Tosatti
When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be performed (to re-read the dependencies and possibly not re-enter nohz_full on a given CPU). A common case is for applications that run on nohz_full= CPUs to not use POSIX timers (eg DPDK). This patch changes the notification to

[patch 1/3] nohz: only wakeup a single target cpu when kicking a task

2021-01-28 Thread Marcelo Tosatti
cpu and task->tick_dep_mask. From: Frederic Weisbecker Suggested-by: Peter Zijlstra Signed-off-by: Frederic Weisbecker Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c === --- linux-2.6.orig/kernel/tim

[patch 2/3] nohz: change signal tick dependency to wakeup CPUs of member tasks

2021-01-28 Thread Marcelo Tosatti
Rather than waking up all nohz_full CPUs on the system, only wakeup the target CPUs of member threads of the signal. Reduces interruptions to nohz_full CPUs. Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c

[patch 3/3] nohz: tick_nohz_kick_task: only IPI if remote task is running

2021-01-28 Thread Marcelo Tosatti
If the task is not running, run_posix_cpu_timers has nothing to elapse, so spare the IPI in that case. Suggested-by: Peter Zijlstra Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/sched/core.c === --- linux-2.6.orig/kernel

Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-01-28 Thread Marcelo Tosatti
On Thu, Jan 28, 2021 at 04:56:07PM +0100, Thomas Gleixner wrote: > On Wed, Jan 27 2021 at 10:09, Marcelo Tosatti wrote: > > On Wed, Jan 27, 2021 at 12:36:30PM +, Robin Murphy wrote: > >> > > >/** > >> > > > * cpumask_next - get the next c

Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-01-28 Thread Marcelo Tosatti
On Thu, Jan 28, 2021 at 05:02:41PM +0100, Thomas Gleixner wrote: > On Wed, Jan 27 2021 at 09:19, Marcelo Tosatti wrote: > > On Wed, Jan 27, 2021 at 11:57:16AM +, Robin Murphy wrote: > >> > +hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ; > >> > +

Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-01-27 Thread Marcelo Tosatti
On Wed, Jan 27, 2021 at 12:36:30PM +, Robin Murphy wrote: > On 2021-01-27 12:19, Marcelo Tosatti wrote: > > On Wed, Jan 27, 2021 at 11:57:16AM +, Robin Murphy wrote: > > > Hi, > > > > > > On 2020-06-25 23:34, Nitesh Narayan

Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs

2021-01-27 Thread Marcelo Tosatti
On Wed, Jan 27, 2021 at 11:57:16AM +, Robin Murphy wrote: > Hi, > > On 2020-06-25 23:34, Nitesh Narayan Lal wrote: > > From: Alex Belits > > > > The current implementation of cpumask_local_spread() does not respect the > > isolated CPUs, i.e., even if a CPU has been isolated for Real-Time

Re: [EXT] Re: [PATCH v5 9/9] task_isolation: kick_all_cpus_sync: don't kick isolated cpus

2021-01-22 Thread Marcelo Tosatti
On Tue, Nov 24, 2020 at 12:21:06AM +0100, Frederic Weisbecker wrote: > On Mon, Nov 23, 2020 at 10:39:34PM +, Alex Belits wrote: > > > > On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote: > > > External Email > > > > > >

Re: [PATCH v4 11/13] task_isolation: net: don't flush backlog on CPUs running isolated tasks

2021-01-22 Thread Marcelo Tosatti
On Thu, Oct 01, 2020 at 04:47:31PM +0200, Frederic Weisbecker wrote: > On Wed, Jul 22, 2020 at 02:58:24PM +, Alex Belits wrote: > > From: Yuri Norov > > > > so we don't need to flush it. > > What guarantees that we have no backlog on it? From Paolo's work to use lockless reading of

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-15 Thread Marcelo Tosatti
On Fri, Dec 11, 2020 at 10:59:59PM +0100, Paolo Bonzini wrote: > On 11/12/20 22:04, Thomas Gleixner wrote: > > > Its 100ms off with migration, and can be reduced further (customers > > > complained about 5 seconds but seem happy with 0.1ms). > > What is 100ms? Guaranteed maximum migration time? >

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-11 Thread Marcelo Tosatti
On Fri, Dec 11, 2020 at 02:30:34PM +0100, Thomas Gleixner wrote: > On Thu, Dec 10 2020 at 21:27, Marcelo Tosatti wrote: > > On Thu, Dec 10, 2020 at 10:48:10PM +0100, Thomas Gleixner wrote: > >> You really all live in a seperate universe creating your own rules how > >>

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-10 Thread Marcelo Tosatti
On Thu, Dec 10, 2020 at 10:48:10PM +0100, Thomas Gleixner wrote: > On Thu, Dec 10 2020 at 12:26, Marcelo Tosatti wrote: > > On Wed, Dec 09, 2020 at 09:58:23PM +0100, Thomas Gleixner wrote: > >> Marcelo, > >> > >> On Wed, Dec 09 2020 at 13:34, Marcelo Tosatt

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-10 Thread Marcelo Tosatti
On Wed, Dec 09, 2020 at 09:58:23PM +0100, Thomas Gleixner wrote: > Marcelo, > > On Wed, Dec 09 2020 at 13:34, Marcelo Tosatti wrote: > > On Tue, Dec 08, 2020 at 10:33:15PM +0100, Thomas Gleixner wrote: > >> On Tue, Dec 08 2020 at 15:11, Marcelo Tosatti wrote: > >

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-09 Thread Marcelo Tosatti
On Tue, Dec 08, 2020 at 10:33:15PM +0100, Thomas Gleixner wrote: > On Tue, Dec 08 2020 at 15:11, Marcelo Tosatti wrote: > > On Tue, Dec 08, 2020 at 05:02:07PM +0100, Thomas Gleixner wrote: > >> On Tue, Dec 08 2020 at 16:50, Maxim Levitsky wrote: > >> > On Mon, 2020

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-08 Thread Marcelo Tosatti
On Tue, Dec 08, 2020 at 06:25:13PM +0200, Maxim Levitsky wrote: > On Tue, 2020-12-08 at 17:02 +0100, Thomas Gleixner wrote: > > On Tue, Dec 08 2020 at 16:50, Maxim Levitsky wrote: > > > On Mon, 2020-12-07 at 20:29 -0300, Marcelo Tosatti wrote: > > > > >

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-08 Thread Marcelo Tosatti
On Tue, Dec 08, 2020 at 05:02:07PM +0100, Thomas Gleixner wrote: > On Tue, Dec 08 2020 at 16:50, Maxim Levitsky wrote: > > On Mon, 2020-12-07 at 20:29 -0300, Marcelo Tosatti wrote: > >> > +This ioctl allows to reconstruct the guest's IA32_TSC and TSC_ADJUST > >>

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-08 Thread Marcelo Tosatti
On Tue, Dec 08, 2020 at 04:50:53PM +0200, Maxim Levitsky wrote: > On Mon, 2020-12-07 at 20:29 -0300, Marcelo Tosatti wrote: > > On Thu, Dec 03, 2020 at 07:11:16PM +0200, Maxim Levitsky wrote: > > > These two new ioctls allow to more precisely capture and > > >

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-08 Thread Marcelo Tosatti
On Mon, Dec 07, 2020 at 10:04:45AM -0800, Andy Lutomirski wrote: > > > On Dec 7, 2020, at 9:00 AM, Maxim Levitsky wrote: > > > > On Mon, 2020-12-07 at 08:53 -0800, Andy Lutomirski wrote: > On Dec 7, 2020, at 8:38 AM, Thomas Gleixner wrote: > >>> > >>> On Mon, Dec 07 2020 at 14:16,

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-08 Thread Marcelo Tosatti
On Thu, Dec 03, 2020 at 07:11:16PM +0200, Maxim Levitsky wrote: > These two new ioctls allow to more precisely capture and > restore guest's TSC state. > > Both ioctls are meant to be used to accurately migrate guest TSC > even when there is a significant downtime during the migration. > >
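The arithmetic behind "accurately migrate guest TSC even with significant downtime" can be sketched as follows (an assumed, simplified form for illustration, not the actual KVM code): the source captures the guest TSC together with a host clock timestamp, and the destination advances the saved TSC by the elapsed wall-clock downtime so the guest clock does not appear frozen.

```c
#include <stdint.h>

/* Advance a saved guest TSC value by the migration downtime.
 * tsc_khz is the guest TSC frequency in kHz (ticks per millisecond). */
uint64_t restore_guest_tsc(uint64_t saved_tsc, uint64_t saved_ns,
                           uint64_t now_ns, uint64_t tsc_khz)
{
    uint64_t elapsed_ns = now_ns - saved_ns;
    /* ticks = ns * kHz / 1e6, since 1 kHz = 1000 ticks per 1e6 ns */
    return saved_tsc + elapsed_ns * tsc_khz / 1000000ull;
}
```

For a 1 GHz guest TSC (tsc_khz = 1000000), one second of downtime adds one billion ticks to the restored value.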

Re: [PATCH v2 0/3] RFC: Precise TSC migration

2020-12-08 Thread Marcelo Tosatti
On Thu, Dec 03, 2020 at 07:11:15PM +0200, Maxim Levitsky wrote: > Hi! > > This is the second version of the work to make TSC migration more accurate, > as was defined by Paolo at: > https://www.spinics.net/lists/kvm/msg225525.html Maxim, Can you please make a description of what is the

Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

2020-12-08 Thread Marcelo Tosatti
On Sun, Dec 06, 2020 at 05:19:16PM +0100, Thomas Gleixner wrote: > On Thu, Dec 03 2020 at 19:11, Maxim Levitsky wrote: > > + case KVM_SET_TSC_STATE: { > > + struct kvm_tsc_state __user *user_tsc_state = argp; > > + struct kvm_tsc_state tsc_state; > > + u64 host_tsc,

Re: [PATCH 0/2] RFC: Precise TSC migration

2020-12-04 Thread Marcelo Tosatti
On Thu, Dec 03, 2020 at 01:39:42PM +0200, Maxim Levitsky wrote: > On Tue, 2020-12-01 at 16:48 -0300, Marcelo Tosatti wrote: > > On Tue, Dec 01, 2020 at 02:30:39PM +0200, Maxim Levitsky wrote: > > > On Mon, 2020-11-30 at 16:16 -0300, Marcelo Tosatti wrote

Re: [PATCH 0/2] RFC: Precise TSC migration

2020-12-01 Thread Marcelo Tosatti
On Tue, Dec 01, 2020 at 02:30:39PM +0200, Maxim Levitsky wrote: > On Mon, 2020-11-30 at 16:16 -0300, Marcelo Tosatti wrote: > > Hi Maxim, > > > > On Mon, Nov 30, 2020 at 03:35:57PM +0200, Maxim Levitsky wrote: > > > Hi! > > > > > > This is the firs

Re: [PATCH 0/2] RFC: Precise TSC migration

2020-12-01 Thread Marcelo Tosatti
On Tue, Dec 01, 2020 at 02:48:11PM +0100, Thomas Gleixner wrote: > On Mon, Nov 30 2020 at 16:16, Marcelo Tosatti wrote: > >> Besides, Linux guests don't sync the TSC via IA32_TSC write, > >> but rather use IA32_TSC_ADJUST which currently doesn't participate > >&

Re: [PATCH 0/2] RFC: Precise TSC migration

2020-11-30 Thread Marcelo Tosatti
Hi Maxim, On Mon, Nov 30, 2020 at 03:35:57PM +0200, Maxim Levitsky wrote: > Hi! > > This is the first version of the work to make TSC migration more accurate, > as was defined by Paolo at: > https://www.spinics.net/lists/kvm/msg225525.html Description from Oliver's patch: "To date, VMMs have

Re: [PATCH] cpuidle: Allow configuration of the polling interval before cpuidle enters a c-state

2020-11-27 Thread Marcelo Tosatti
On Thu, Nov 26, 2020 at 07:24:41PM +0100, Rafael J. Wysocki wrote: > On Thu, Nov 26, 2020 at 6:25 PM Mel Gorman > wrote: > > > > It was noted that a few workloads that idle rapidly regressed when commit > > 36fcb4292473 ("cpuidle: use first valid target residency as poll time") > > was merged.

Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

2020-10-27 Thread Marcelo Tosatti
On Mon, Oct 26, 2020 at 06:22:29PM -0400, Nitesh Narayan Lal wrote: > > On 10/26/20 5:50 PM, Thomas Gleixner wrote: > > On Mon, Oct 26 2020 at 14:11, Jacob Keller wrote: > >> On 10/26/2020 1:11 PM, Thomas Gleixner wrote: > >>> On Mon, Oct 26 2020 at 12:21, Jacob Keller wrote: > Are there

Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

2020-10-27 Thread Marcelo Tosatti
On Mon, Oct 26, 2020 at 08:00:39PM +0100, Thomas Gleixner wrote: > On Mon, Oct 26 2020 at 14:30, Marcelo Tosatti wrote: > > On Fri, Oct 23, 2020 at 11:00:52PM +0200, Thomas Gleixner wrote: > >> So without information from the driver which tells what the best number

Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

2020-10-26 Thread Marcelo Tosatti
On Fri, Oct 23, 2020 at 11:00:52PM +0200, Thomas Gleixner wrote: > On Fri, Oct 23 2020 at 09:10, Nitesh Narayan Lal wrote: > > On 10/23/20 4:58 AM, Peter Zijlstra wrote: > >> On Thu, Oct 22, 2020 at 01:47:14PM -0400, Nitesh Narayan Lal wrote: > >> So shouldn't we then fix the drivers / interface

Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

2020-10-22 Thread Marcelo Tosatti
On Wed, Oct 21, 2020 at 10:25:48PM +0200, Thomas Gleixner wrote: > On Tue, Oct 20 2020 at 20:07, Thomas Gleixner wrote: > > On Tue, Oct 20 2020 at 12:18, Nitesh Narayan Lal wrote: > >> However, IMHO we would still need a logic to prevent the devices from > >> creating excess vectors. > > > >

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-20 Thread Marcelo Tosatti
On Thu, Oct 15, 2020 at 01:40:53AM +0200, Frederic Weisbecker wrote: > On Wed, Oct 14, 2020 at 10:33:21AM +0200, Peter Zijlstra wrote: > > On Tue, Oct 13, 2020 at 02:13:28PM -0300, Marcelo Tosatti wrote: > > > > > > Yes but if the task isn't running, run_posi

Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

2020-10-19 Thread Marcelo Tosatti
On Mon, Oct 19, 2020 at 01:11:37PM +0200, Peter Zijlstra wrote: > On Sun, Oct 18, 2020 at 02:14:46PM -0400, Nitesh Narayan Lal wrote: > > >> +hk_cpus = housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ); > > >> + > > >> +/* > > >> + * If we have isolated CPUs for use by

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-13 Thread Marcelo Tosatti
On Thu, Oct 08, 2020 at 09:54:44PM +0200, Frederic Weisbecker wrote: > On Thu, Oct 08, 2020 at 02:54:09PM -0300, Marcelo Tosatti wrote: > > On Thu, Oct 08, 2020 at 02:22:56PM +0200, Peter Zijlstra wrote: > > > On Wed, Oct 07, 2020 at 03:01:52PM -0300, Marcelo Tosatti wrote: &

[patch 2/2] nohz: change signal tick dependency to wakeup CPUs of member tasks

2020-10-08 Thread Marcelo Tosatti
Rather than waking up all nohz_full CPUs on the system, only wakeup the target CPUs of member threads of the signal. Reduces interruptions to nohz_full CPUs. Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c

[patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-08 Thread Marcelo Tosatti
cpu and task->tick_dep_mask. From: Frederic Weisbecker Suggested-by: Peter Zijlstra Signed-off-by: Frederic Weisbecker Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c === --- linux-2.6.orig/kernel/tim

[patch 0/2] nohz_full: only wakeup target CPUs when notifying new tick dependency (v3)

2020-10-08 Thread Marcelo Tosatti
When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be performed (to re-read the dependencies and possibly not re-enter nohz_full on a given CPU). A common case is for applications that run on nohz_full= CPUs to not use POSIX timers (eg DPDK). This patch changes the notification to

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-08 Thread Marcelo Tosatti
On Thu, Oct 08, 2020 at 02:22:56PM +0200, Peter Zijlstra wrote: > On Wed, Oct 07, 2020 at 03:01:52PM -0300, Marcelo Tosatti wrote: > > When adding a tick dependency to a task, its necessary to > > wakeup the CPU where the task resides to reevaluate tick > > dep

Re: [patch 2/2] nohz: change signal tick dependency to wakeup CPUs of member tasks

2020-10-08 Thread Marcelo Tosatti
On Thu, Oct 08, 2020 at 02:35:44PM +0200, Peter Zijlstra wrote: > On Wed, Oct 07, 2020 at 03:01:53PM -0300, Marcelo Tosatti wrote: > > Rather than waking up all nohz_full CPUs on the system, only wakeup > > the target CPUs of member threads of the signal. > > >

Re: [patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-08 Thread Marcelo Tosatti
On Thu, Oct 08, 2020 at 10:59:40AM -0400, Peter Xu wrote: > On Wed, Oct 07, 2020 at 03:01:52PM -0300, Marcelo Tosatti wrote: > > +static void tick_nohz_kick_task(struct task_struct *tsk) > > +{ > > + int cpu = task_cpu(tsk); > > + > > + /* > > +

[patch 2/2] nohz: change signal tick dependency to wakeup CPUs of member tasks

2020-10-07 Thread Marcelo Tosatti
Rather than waking up all nohz_full CPUs on the system, only wakeup the target CPUs of member threads of the signal. Reduces interruptions to nohz_full CPUs. Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c

[patch 1/2] nohz: only wakeup a single target cpu when kicking a task

2020-10-07 Thread Marcelo Tosatti
cpu and task->tick_dep_mask. From: Frederic Weisbecker Suggested-by: Peter Zijlstra Signed-off-by: Frederic Weisbecker Signed-off-by: Marcelo Tosatti Index: linux-2.6/kernel/time/tick-sched.c === --- linux-2.6.orig/kernel/tim

[patch 0/2] nohz_full: only wakeup target CPUs when notifying new tick dependency (v2)

2020-10-07 Thread Marcelo Tosatti
When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be performed (to re-read the dependencies and possibly not re-enter nohz_full on a given CPU). A common case is for applications that run on nohz_full= CPUs to not use POSIX timers (eg DPDK). This patch changes the notification to

Re: [RFC][Patch v1 2/3] i40e: limit msix vectors based on housekeeping CPUs

2020-09-11 Thread Marcelo Tosatti
rs to reach as close as we can to the > * number of online CPUs. > */ > - cpus = num_online_cpus(); > + cpus = num_housekeeping_cpus(); > pf->num_lan_msix = min_t(int, cpus, vectors_left / 2); > vectors_left -= pf->num_lan_msix; > > -- > 2.27.0 For patches 1 and 2: Reviewed-by: Marcelo Tosatti

Re: [RFC][Patch v1 3/3] PCI: Limit pci_alloc_irq_vectors as per housekeeping CPUs

2020-09-10 Thread Marcelo Tosatti
On Wed, Sep 09, 2020 at 11:08:18AM -0400, Nitesh Narayan Lal wrote: > This patch limits the pci_alloc_irq_vectors max vectors that is passed on > by the caller based on the available housekeeping CPUs by only using the > minimum of the two. > > A minimum of the max_vecs passed and available
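The "minimum of the two" policy described here can be sketched in a few lines (illustrative names; the floor at the driver's stated minimum is an assumption for the sketch, not quoted from the patch):

```c
/* Cap the requested maximum MSI-X/IRQ vectors at the number of
 * housekeeping CPUs, so drivers don't spread interrupt vectors onto
 * isolated CPUs. */
unsigned int limit_irq_vectors(unsigned int min_vecs, unsigned int max_vecs,
                               unsigned int housekeeping_cpus)
{
    unsigned int limited =
        max_vecs < housekeeping_cpus ? max_vecs : housekeeping_cpus;
    /* assumed: never report fewer vectors than the driver's minimum */
    return limited < min_vecs ? min_vecs : limited;
}
```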

Re: [patch 2/2] nohz: try to avoid IPI when setting tick dependency for task

2020-09-10 Thread Marcelo Tosatti
On Thu, Sep 03, 2020 at 05:01:53PM +0200, Frederic Weisbecker wrote: > On Tue, Aug 25, 2020 at 03:41:49PM -0300, Marcelo Tosatti wrote: > > When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be > > performed (to re-read the dependencies and possibly not re-ente

Re: Requirements to control kernel isolation/nohz_full at runtime

2020-09-03 Thread Marcelo Tosatti
On Thu, Sep 03, 2020 at 02:36:36PM -0400, Phil Auld wrote: > On Thu, Sep 03, 2020 at 03:30:15PM -0300 Marcelo Tosatti wrote: > > On Thu, Sep 03, 2020 at 03:23:59PM -0300, Marcelo Tosatti wrote: > > > On Tue, Sep 01, 2020 at 12:46:41PM +0200, Frederic Weisbecker wrote: > >

Re: Requirements to control kernel isolation/nohz_full at runtime

2020-09-03 Thread Marcelo Tosatti
On Thu, Sep 03, 2020 at 03:23:59PM -0300, Marcelo Tosatti wrote: > On Tue, Sep 01, 2020 at 12:46:41PM +0200, Frederic Weisbecker wrote: > > Hi, > > Hi Frederic, > > Thanks for the summary! Looking forward to your comments... > > > I'm currently working on maki

Re: Requirements to control kernel isolation/nohz_full at runtime

2020-09-03 Thread Marcelo Tosatti
On Tue, Sep 01, 2020 at 12:46:41PM +0200, Frederic Weisbecker wrote: > Hi, Hi Frederic, Thanks for the summary! Looking forward to your comments... > I'm currently working on making nohz_full/nohz_idle runtime toggable > and some other people seem to be interested as well. So I've dumped > a

Re: [patch 1/2] nohz: try to avoid IPI when configuring per-CPU posix timer

2020-09-03 Thread Marcelo Tosatti
On Wed, Sep 02, 2020 at 01:38:59AM +0200, Frederic Weisbecker wrote: > On Tue, Aug 25, 2020 at 03:41:48PM -0300, Marcelo Tosatti wrote: > > When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be > > performed (to re-read the dependencies and possibly not re-ente

[patch 1/2] nohz: try to avoid IPI when configuring per-CPU posix timer

2020-08-25 Thread Marcelo Tosatti
the task allowed mask does not intersect with nohz_full= CPU mask, when going through tick_nohz_dep_set_signal. This reduces interruptions to nohz_full= CPUs. Signed-off-by: Marcelo Tosatti --- include/linux/tick.h | 11 +++ kernel/time/posix-cpu-timers.c |4 ++-- kernel

[patch 2/2] nohz: try to avoid IPI when setting tick dependency for task

2020-08-25 Thread Marcelo Tosatti
When enabling per-CPU posix timers, an IPI to nohz_full CPUs might be performed (to re-read the dependencies and possibly not re-enter nohz_full on a given CPU). A common case is for applications that run on nohz_full= CPUs to not use POSIX timers (eg DPDK). This patch optimizes

[patch 0/2] posix-timers: avoid nohz_full= IPIs via task cpu masks

2020-08-25 Thread Marcelo Tosatti
This patchset avoids IPIs to nohz_full= CPUs when the intersection between the set of nohz_full CPUs and task allowed cpus is null. See individual patches for details.
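The null-intersection test this cover letter describes reduces to a single mask operation (toy model, illustrative names, up to 64 CPUs): if a task's allowed-CPU mask never overlaps the nohz_full set, setting a tick dependency for it cannot require disturbing any nohz_full CPU.

```c
#include <stdint.h>

typedef uint64_t cpumask_t; /* one bit per CPU */

/* Returns 1 if setting a tick dependency for this task may need to
 * kick a nohz_full CPU, 0 if the IPI can be skipped entirely. */
int tick_dep_needs_ipi(cpumask_t task_allowed, cpumask_t nohz_full)
{
    return (task_allowed & nohz_full) != 0;
}
```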

Re: [PATCH v1 0/3] Preventing job distribution to isolated CPUs

2020-06-16 Thread Marcelo Tosatti
Hi Nitesh, On Wed, Jun 10, 2020 at 12:12:23PM -0400, Nitesh Narayan Lal wrote: > This patch-set is originated from one of the patches that have been > posted earlier as a part of "Task_isolation" mode [1] patch series > by Alex Belits . There are only a couple of > changes that I am proposing in

[tip: sched/core] kthread: Switch to cpu_possible_mask

2020-06-16 Thread tip-bot2 for Marcelo Tosatti
The following commit has been merged into the sched/core branch of tip: Commit-ID: 043eb8e1051143a24811e6f35c276e35ae8247b6 Gitweb: https://git.kernel.org/tip/043eb8e1051143a24811e6f35c276e35ae8247b6 Author: Marcelo Tosatti AuthorDate: Wed, 27 May 2020 16:29:08 +02:00

[tip: sched/core] isolcpus: Affine unbound kernel threads to housekeeping cpus

2020-06-16 Thread tip-bot2 for Marcelo Tosatti
The following commit has been merged into the sched/core branch of tip: Commit-ID: 9cc5b8656892a72438ee7deb5e80f5be47643b8b Gitweb: https://git.kernel.org/tip/9cc5b8656892a72438ee7deb5e80f5be47643b8b Author: Marcelo Tosatti AuthorDate: Wed, 27 May 2020 16:29:09 +02:00

Re: [PATCH v5 2/7] fpga: dfl: pci: add irq info for feature devices enumeration

2020-05-25 Thread Marcelo Tosatti
it: > dfl_fpga_enum_info_free(info); > > @@ -211,12 +275,10 @@ int cci_pci_probe(struct pci_dev *pcidev, const struct > pci_device_id *pcidevid) > } > > ret = cci_enumerate_feature_devs(pcidev); > - if (ret) { > - dev_err(&pcidev->dev, "enumeration failure %d.\n", ret); > - goto disable_error_report_exit; > - } > + if (!ret) > + return ret; > > - return ret; > + dev_err(&pcidev->dev, "enumeration failure %d.\n", ret); > > disable_error_report_exit: > pci_disable_pcie_error_reporting(pcidev); > -- > 2.7.4 Reviewed-by: Marcelo Tosatti

Re: [PATCH v5 4/7] fpga: dfl: afu: add interrupt support for port error reporting

2020-05-25 Thread Marcelo Tosatti
DFL_PORT_BASE + 5, __u32) > + > +/** > + * DFL_FPGA_PORT_ERR_SET_IRQ - _IOW(DFL_FPGA_MAGIC, DFL_PORT_BASE + 6, > + * struct dfl_fpga_irq_set) > + * > + * Set fpga port error reporting interrupt trigger if evtfds[n] is valid.

Re: [PATCH v5 6/7] fpga: dfl: afu: add AFU interrupt support

2020-05-25 Thread Marcelo Tosatti
_IRQ - _IOW(DFL_FPGA_MAGIC, DFL_PORT_BASE + 8, > + * struct dfl_fpga_irq_set) > + * > + * Set fpga AFU interrupt trigger if evtfds[n] is valid. > + * Unset related interrupt trigger if evtfds[n] is a negative value. > + * Return: 0 on success, -errno on failure. > + */ > +#define DFL_FPGA_PORT_UINT_SET_IRQ _IOW(DFL_FPGA_MAGIC,\ > + DFL_PORT_BASE + 8, \ > + struct dfl_fpga_irq_set) > + > /* IOCTLs for FME file descriptor */ > > /** > -- > 2.7.4 Reviewed-by: Marcelo Tosatti

Re: [PATCH v5 7/7] Documentation: fpga: dfl: add descriptions for interrupt related interfaces.

2020-05-25 Thread Marcelo Tosatti
upport interrupts. > + > + > Add new FIUs support > > It's possible that developers made some new function blocks (FIUs) under this > -- > 2.7.4 Reviewed-by: Marcelo Tosatti

Re: [PATCH v5 1/7] fpga: dfl: parse interrupt info for feature devices on enumeration

2020-05-25 Thread Marcelo Tosatti
esource from the > * feature dev (platform device)'s reources. > * @ioaddr: mapped mmio resource address. > + * @irq_ctx: interrupt context list. > + * @nr_irqs: number of interrupt contexts. > * @ops: ops of this sub feature. > */ > struct dfl_feature { > u64 id; > int resource_index; > void __iomem *ioaddr; > + struct dfl_feature_irq_ctx *irq_ctx; > + unsigned int nr_irqs; > const struct dfl_feature_ops *ops; > }; > > @@ -388,10 +422,14 @@ static inline u8 dfl_feature_revision(void __iomem > *base) > * > * @dev: parent device. > * @dfls: list of device feature lists. > + * @nr_irqs: number of irqs for all feature devices. > + * @irq_table: Linux IRQ numbers for all irqs, indexed by hw irq numbers. > */ > struct dfl_fpga_enum_info { > struct device *dev; > struct list_head dfls; > + unsigned int nr_irqs; > + int *irq_table; > }; > > /** > @@ -415,6 +453,8 @@ struct dfl_fpga_enum_info > *dfl_fpga_enum_info_alloc(struct device *dev); > int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info, > resource_size_t start, resource_size_t len, > void __iomem *ioaddr); > +int dfl_fpga_enum_info_add_irq(struct dfl_fpga_enum_info *info, > +unsigned int nr_irqs, int *irq_table); > void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info); > > /** > -- > 2.7.4 Reviewed-by: Marcelo Tosatti

Re: [PATCH v5 3/7] fpga: dfl: introduce interrupt trigger setting API

2020-05-25 Thread Marcelo Tosatti
start, > + unsigned int count, int32_t *fds); > +long dfl_feature_ioctl_get_num_irqs(struct platform_device *pdev, > + struct dfl_feature *feature, > + unsigned long arg); > +long dfl_feature_ioctl_set_irq(struct platform_device *pdev, > +struct dfl_feature *feature, > +unsigned long arg); > + > #endif /* __FPGA_DFL_H */ > diff --git a/include/uapi/linux/fpga-dfl.h b/include/uapi/linux/fpga-dfl.h > index ec70a0746..7331350 100644 > --- a/include/uapi/linux/fpga-dfl.h > +++ b/include/uapi/linux/fpga-dfl.h > @@ -151,6 +151,19 @@ struct dfl_fpga_port_dma_unmap { > > #define DFL_FPGA_PORT_DMA_UNMAP _IO(DFL_FPGA_MAGIC, > DFL_PORT_BASE + 4) > > +/** > + * struct dfl_fpga_irq_set - the argument for DFL_FPGA_XXX_SET_IRQ ioctl. > + * > + * @start: Index of the first irq. > + * @count: The number of eventfd handler. > + * @evtfds: Eventfd handlers. > + */ > +struct dfl_fpga_irq_set { > + __u32 start; > + __u32 count; > + __s32 evtfds[]; > +}; > + > /* IOCTLs for FME file descriptor */ > > /** > -- > 2.7.4 Reviewed-by: Marcelo Tosatti

Re: [PATCH v5 5/7] fpga: dfl: fme: add interrupt support for global error reporting

2020-05-25 Thread Marcelo Tosatti
pga fme error reporting interrupt trigger if evtfds[n] is valid.
> + * Unset related interrupt trigger if evtfds[n] is a negative value.
> + * Return: 0 on success, -errno on failure.
> + */
> +#define DFL_FPGA_FME_ERR_SET_IRQ _IOW(DFL_FPGA_MAGIC, \
> + DFL_FME_BASE + 4, \
> + struct dfl_fpga_irq_set)
> +
> #endif /* _UAPI_LINUX_FPGA_DFL_H */
> --
> 2.7.4

Reviewed-by: Marcelo Tosatti

Re: [PATCH 03/12] task_isolation: userspace hard isolation from kernel

2020-04-28 Thread Marcelo Tosatti
I like the idea as well, especially the reporting infrastructure, and would like to see something like this integrated upstream. On Thu, Mar 05, 2020 at 07:33:13PM +0100, Frederic Weisbecker wrote: > On Wed, Mar 04, 2020 at 04:07:12PM +, Alex Belits wrote: > > The existing nohz_full mode

Re: [PATCH] KVM: Don't shrink/grow vCPU halt_poll_ns if host side polling is disabled

2019-09-27 Thread Marcelo Tosatti
On Fri, Sep 27, 2019 at 04:27:02PM +0800, Wanpeng Li wrote: > From: Wanpeng Li > > Don't waste cycles to shrink/grow vCPU halt_poll_ns if host > side polling is disabled. > > Cc: Marcelo Tosatti > Signed-off-by: Wanpeng Li > --- > virt/kvm/kvm_main.c | 28 ++

Re: [PATCH v3] cpuidle-haltpoll: vcpu hotplug support

2019-09-02 Thread Marcelo Tosatti
manually. This > > is because cpuhp_remove_state() invokes the teardown/offline callback. > > * fix subsystem name to 'cpuidle' instead of 'idle' in cpuhp_setup_state() > > Marcelo, is the R-by still applicable? > > Paolo, any comments? > > > > > v2: >

Re: Is: Default governor regardless of cpuidle driver Was: [PATCH v2] cpuidle-haltpoll: vcpu hotplug support

2019-08-29 Thread Marcelo Tosatti
On Thu, Aug 29, 2019 at 06:16:05PM +0100, Joao Martins wrote: > On 8/29/19 4:10 PM, Joao Martins wrote: > > When cpus != maxcpus cpuidle-haltpoll will fail to register all vcpus > > past the online ones and thus fail to register the idle driver. > > This is because cpuidle_add_sysfs() will return

Re: [PATCH v2] cpuidle-haltpoll: vcpu hotplug support

2019-08-29 Thread Marcelo Tosatti
le_unregister() when only > cpuidle_unregister_driver() suffices; (Marcelo Tosatti) > * cpuhp_setup_state() returns a state (> 0) on success with > CPUHP_AP_ONLINE_DYN > thus we set @ret to 0 > --- > arch/x86/include/asm/cpuidle_haltpoll.h | 4 +- > arch/x86/ker

Re: [PATCH v1] cpuidle-haltpoll: vcpu hotplug support

2019-08-29 Thread Marcelo Tosatti
On Thu, Aug 29, 2019 at 03:24:31PM +0100, Joao Martins wrote: > On 8/29/19 2:50 PM, Joao Martins wrote: > > On 8/29/19 12:56 PM, Marcelo Tosatti wrote: > >> Hi Joao, > >> > >> On Wed, Aug 28, 2019 at 07:56:50PM +0100, Joao Martins wrote: >

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-29 Thread Marcelo Tosatti
On Thu, Aug 29, 2019 at 09:53:04AM -0300, Marcelo Tosatti wrote: > On Thu, Aug 29, 2019 at 08:16:41PM +0800, Wanpeng Li wrote: > > > Current situation regarding haltpoll driver is: > > > > > > overcommit group: haltpoll driver is not loaded by default, they ar

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-29 Thread Marcelo Tosatti
On Thu, Aug 29, 2019 at 08:16:41PM +0800, Wanpeng Li wrote: > > Current situation regarding haltpoll driver is: > > > > overcommit group: haltpoll driver is not loaded by default, they are > > happy. > > > > non overcommit group: boots without "realtime hints" flag, loads haltpoll > > driver, > >

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-29 Thread Marcelo Tosatti
On Thu, Aug 29, 2019 at 01:37:35AM +0200, Rafael J. Wysocki wrote: > On Wed, Aug 28, 2019 at 4:39 PM Marcelo Tosatti wrote: > > > > On Wed, Aug 28, 2019 at 10:45:44AM +0200, Rafael J. Wysocki wrote: > > > On Wed, Aug 28, 2019 at 10:34 AM Wanpeng Li wrote: > > &

Re: [PATCH v1] cpuidle-haltpoll: vcpu hotplug support

2019-08-29 Thread Marcelo Tosatti
Hi Joao, On Wed, Aug 28, 2019 at 07:56:50PM +0100, Joao Martins wrote: > When cpus != maxcpus cpuidle-haltpoll will fail to register all vcpus > past the online ones and thus fail to register the idle driver. > This is because cpuidle_add_sysfs() will return with -ENODEV as a > consequence from

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-28 Thread Marcelo Tosatti
On Wed, Aug 28, 2019 at 11:48:58AM -0300, Marcelo Tosatti wrote: > On Tue, Aug 27, 2019 at 08:43:13AM +0800, Wanpeng Li wrote: > > > > kvm adaptive halt-polling will compete with > > > > vhost-kthreads, however, poll in guest unaware other runnable tasks in > > >

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-28 Thread Marcelo Tosatti
On Tue, Aug 27, 2019 at 08:43:13AM +0800, Wanpeng Li wrote: > > > kvm adaptive halt-polling will compete with > > > vhost-kthreads, however, poll in guest unaware other runnable tasks in > > > the host which will defeat vhost-kthreads. > > > > It depends on how much work vhost-kthreads needs to

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-28 Thread Marcelo Tosatti
On Wed, Aug 28, 2019 at 10:45:44AM +0200, Rafael J. Wysocki wrote: > On Wed, Aug 28, 2019 at 10:34 AM Wanpeng Li wrote: > > > > On Tue, 27 Aug 2019 at 08:43, Wanpeng Li wrote: > > > > > > Cc Michael S. Tsirkin, > > > On Tue, 27 Aug 2019 at 04:42, Marc

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-26 Thread Marcelo Tosatti
On Tue, Aug 13, 2019 at 08:55:29AM +0800, Wanpeng Li wrote: > On Sun, 4 Aug 2019 at 04:21, Marcelo Tosatti wrote: > > > > On Thu, Aug 01, 2019 at 06:54:49PM +0200, Paolo Bonzini wrote: > > > On 01/08/19 18:51, Rafael J. Wysocki wrote: > > > > On 8/1/2019 9:06

Re: [PATCH] cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available

2019-08-03 Thread Marcelo Tosatti
s are available. > >> > >> Cc: Rafael J. Wysocki > >> Cc: Paolo Bonzini > >> Cc: Radim Krčmář > >> Cc: Marcelo Tosatti > >> Signed-off-by: Wanpeng Li > > > > Paolo, Marcelo, any comments? > > Yes, it's a good idea. >

Re: [PATCH] Documentation: kvm: document CPUID bit for MSR_KVM_POLL_CONTROL

2019-07-02 Thread Marcelo Tosatti
On Tue, Jul 02, 2019 at 06:57:53PM +0200, Paolo Bonzini wrote: > Cc: Marcelo Tosatti > Signed-off-by: Paolo Bonzini > --- > Documentation/virtual/kvm/cpuid.txt | 4 > 1 file changed, 4 insertions(+) > > diff --git a/Documentation/virtual/kvm/cpuid.txt > b/D

Re: [PATCH v5 0/4] KVM: LAPIC: Implement Exitless Timer

2019-07-02 Thread Marcelo Tosatti
On Tue, Jul 02, 2019 at 06:38:56PM +0200, Paolo Bonzini wrote: > On 21/06/19 11:39, Wanpeng Li wrote: > > Dedicated instances are currently disturbed by unnecessary jitter due > > to the emulated lapic timers fire on the same pCPUs which vCPUs resident. > > There is no hardware virtual timer on

Re: [PATCH v4 2/5] KVM: LAPIC: inject lapic timer interrupt by posted interrupt

2019-06-26 Thread Marcelo Tosatti
On Wed, Jun 26, 2019 at 07:02:13PM +0800, Wanpeng Li wrote: > On Wed, 26 Jun 2019 at 03:03, Marcelo Tosatti wrote: > > > > On Mon, Jun 24, 2019 at 04:53:53PM +0800, Wanpeng Li wrote: > > > On Sat, 22 Jun 2019 at 06:11, Marcelo Tosatti wrote: > > > > > &
