On Wed, Jun 21, 2023 at 09:43:37AM +0200, Peter Zijlstra wrote:
> On Tue, Jun 20, 2023 at 05:46:16PM +0300, Yair Podemsky wrote:
> > Currently the tlb_remove_table_smp_sync IPI is sent to all CPUs
> > indiscriminately; this causes unnecessary work and delays, notably in
> > real-time use-cases and
On Wed, Apr 19, 2023 at 01:30:57PM +0200, David Hildenbrand wrote:
> On 06.04.23 20:27, Peter Zijlstra wrote:
> > On Thu, Apr 06, 2023 at 05:51:52PM +0200, David Hildenbrand wrote:
> > > On 06.04.23 17:02, Peter Zijlstra wrote:
> >
> > > > DavidH, what do you think about reviving Jann's patches
On Thu, Apr 06, 2023 at 03:32:06PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 06, 2023 at 09:49:22AM -0300, Marcelo Tosatti wrote:
>
> > > > 2) Depends on the application and the definition of "occasional".
> > > >
> > > > For cer
On Wed, Apr 05, 2023 at 09:54:57PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 05, 2023 at 04:43:14PM -0300, Marcelo Tosatti wrote:
>
> > Two points:
> >
> > 1) For a virtualized system, the overhead is not only of executing the
> > IPI but:
> >
> >
On Wed, Apr 05, 2023 at 09:52:26PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 05, 2023 at 04:45:32PM -0300, Marcelo Tosatti wrote:
> > On Wed, Apr 05, 2023 at 01:10:07PM +0200, Frederic Weisbecker wrote:
> > > On Wed, Apr 05, 2023 at 12:44:04PM +0200, Frederic Weisbecker wrote:
On Wed, Apr 05, 2023 at 01:10:07PM +0200, Frederic Weisbecker wrote:
> On Wed, Apr 05, 2023 at 12:44:04PM +0200, Frederic Weisbecker wrote:
> > On Tue, Apr 04, 2023 at 04:42:24PM +0300, Yair Podemsky wrote:
> > > + int state = atomic_read(&ct->state);
> > > + /* will return true only for cpus in
On Wed, Apr 05, 2023 at 12:43:58PM +0200, Frederic Weisbecker wrote:
> On Tue, Apr 04, 2023 at 04:42:24PM +0300, Yair Podemsky wrote:
> > @@ -191,6 +192,20 @@ static void tlb_remove_table_smp_sync(void *arg)
> > /* Simply deliver the interrupt */
> > }
> >
> > +
> > +#ifdef
Hi Valentin,
On Fri, Oct 07, 2022 at 04:41:40PM +0100, Valentin Schneider wrote:
> Background
> ==========
>
> Detecting IPI *reception* is relatively easy, e.g. using
> trace_irq_handler_{entry,exit} or even just function-trace
> flush_smp_call_function_queue() for SMP calls.
>
> Figuring
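The two detection approaches Valentin names can be set up from tracefs. A sketch only (requires root and tracefs mounted at /sys/kernel/tracing; paths may differ on older kernels that use debugfs):

```sh
cd /sys/kernel/tracing

# Option 1: trace IPI reception via the generic IRQ handler trace events.
echo 1 > events/irq/irq_handler_entry/enable
echo 1 > events/irq/irq_handler_exit/enable

# Option 2: function-trace the SMP call queue flush named in the mail.
echo function > current_tracer
echo flush_smp_call_function_queue > set_ftrace_filter

cat trace_pipe
```

As the mail goes on to say, this only shows *reception*; figuring out who sent the IPI, and why, is the harder part.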
Not involved in 8xx activities for years, update MAINTAINERS
to reflect it.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff --git a/MAINTAINERS b/MAINTAINERS
index c47d268..ed7b606 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5349,7 +5349,6 @@ F:arch/powerpc/*/*/*virtex
On Wed, Feb 29, 2012 at 01:09:47AM +0100, Alexander Graf wrote:
There's always a chance we're unable to read a guest instruction. The guest
could have its TLB mapped execute-, but not readable, or something odd happens
and our TLB gets flushed. So it's a good idea to be prepared for that case
and
On Thu, Nov 17, 2011 at 10:55:49AM +1100, Paul Mackerras wrote:
From dfd5bcfac841f8a36593edf60d9fb15e0d633287 Mon Sep 17 00:00:00 2001
From: Paul Mackerras pau...@samba.org
Date: Mon, 14 Nov 2011 13:30:38 +1100
Subject:
Currently, kvmppc_h_enter takes a spinlock that is global to the
On Sat, Nov 19, 2011 at 08:54:24AM +1100, Paul Mackerras wrote:
On Fri, Nov 18, 2011 at 02:57:11PM +0100, Alexander Graf wrote:
This touches areas that I'm sure non-PPC people would want to see as
well. Could you please CC kvm@vger too next time?
Avi, Marcelo, mind to review some of
On Wed, May 11, 2011 at 08:43:31PM +1000, Paul Mackerras wrote:
From 964ee93b2d728e4fb16ae66eaceb6e912bf114ad Mon Sep 17 00:00:00 2001
From: Paul Mackerras pau...@samba.org
Date: Tue, 10 May 2011 22:23:18 +1000
Subject: [PATCH 08/13] kvm/powerpc: Move guest enter/exit down into
On Tue, May 17, 2011 at 03:05:12PM -0300, Marcelo Tosatti wrote:
- ret = __kvmppc_vcpu_entry(kvm_run, vcpu);
+ kvm_guest_enter();
kvm_guest_enter should run with interrupts disabled.
Its fine, please ignore message.
On Wed, Dec 29, 2010 at 01:51:25PM -0600, Peter Tyser wrote:
Previously SPRGs 4-7 were improperly read and written in
kvm_arch_vcpu_ioctl_get_regs() and kvm_arch_vcpu_ioctl_set_regs();
Signed-off-by: Peter Tyser pty...@xes-inc.com
---
I noticed this while grepping for something unrelated and
On Sat, Oct 30, 2010 at 01:04:24PM +0400, Vasiliy Kulikov wrote:
Structure kvm_ppc_pvinfo is copied to userland with its flags and
pad fields uninitialized, leaking the contents of
kernel stack memory.
Signed-off-by: Vasiliy Kulikov sego...@gmail.com
---
I cannot compile this
On Wed, Jun 30, 2010 at 03:18:44PM +0200, Alexander Graf wrote:
Book3s suffered from my really bad shadow MMU implementation so far. So
I finally got around to implement a combined hash and list mechanism that
allows for much faster lookup of mapped pages.
To show that it really is faster, I
On Mon, May 31, 2010 at 09:59:13PM +0200, Andreas Schwab wrote:
Instead of instantiating a whole thread_struct on the stack use only the
required parts of it.
Signed-off-by: Andreas Schwab sch...@linux-m68k.org
Tested-by: Alexander Graf ag...@suse.de
---
arch/powerpc/include/asm/kvm_fpu.h
On Tue, May 11, 2010 at 02:53:54PM +0900, Takuya Yoshikawa wrote:
(2010/05/11 12:43), Marcelo Tosatti wrote:
On Tue, May 04, 2010 at 10:08:21PM +0900, Takuya Yoshikawa wrote:
+How to Get
+
+Before calling this, you have to set the slot member of kvm_user_dirty_log
+to indicate the target
On Tue, May 04, 2010 at 10:07:02PM +0900, Takuya Yoshikawa wrote:
We move dirty bitmaps to user space.
- Allocation and destruction: we use do_mmap() and do_munmap().
The new bitmap space is twice as long as the original one and we
use the additional space for double buffering: this
On Tue, May 04, 2010 at 10:08:21PM +0900, Takuya Yoshikawa wrote:
Now that dirty bitmaps are accessible from user space, we export the
addresses of these to achieve zero-copy dirty log check:
KVM_GET_USER_DIRTY_LOG_ADDR
We also need an API for triggering dirty bitmap switch to take the