On Fri, Jul 02, 2010 at 01:47:56PM -0300, Marcelo Tosatti wrote:
On Thu, Jul 01, 2010 at 09:53:04PM +0800, Xiao Guangrong wrote:
Introduce gfn_to_pfn_atomic(); it is the fast path and can be used in atomic
context. A later patch will use it.
Signed-off-by: Xiao Guangrong
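A rough sketch of the idea (not the patch body itself; the error-return value
and exact internals here are assumptions): only the non-sleeping fast GUP path
is tried, so the function never blocks and is safe under a spinlock.

/*
 * Sketch only, not the actual patch: in atomic context we may not sleep,
 * so only __get_user_pages_fast() is attempted.  If the page is not
 * already present, a "bad" pfn is returned (the exact error value used by
 * the patch is an assumption here) and the caller falls back to the
 * sleeping path outside atomic context.
 */
pfn_t gfn_to_pfn_atomic(struct kvm *kvm, gfn_t gfn)
{
	unsigned long addr = gfn_to_hva(kvm, gfn);
	struct page *page;

	if (kvm_is_error_hva(addr))
		return bad_pfn;

	if (__get_user_pages_fast(addr, 1, 1, &page) == 1)
		return page_to_pfn(page);

	return bad_pfn;
}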
On Sun, Jun 27, 2010 at 12:17:42PM +0300, Avi Kivity wrote:
On 06/24/2010 06:14 PM, Nick Piggin wrote:
On Thu, Jun 24, 2010 at 12:19:32PM +0300, Avi Kivity wrote:
I see really slow vmalloc performance on 2.6.35-rc3:
Can you try this patch?
http://userweb.kernel.org/~akpm/mmotm/broken-out/mm-vmap-area-cache.patch
On Thu, Jun 24, 2010 at 12:19:32PM +0300, Avi Kivity wrote:
I see really slow vmalloc performance on 2.6.35-rc3:
Can you try this patch?
http://userweb.kernel.org/~akpm/mmotm/broken-out/mm-vmap-area-cache.patch
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
On Wed, Jun 16, 2010 at 10:39:41AM +0200, Ingo Molnar wrote:
(Cc:-ed various performance/optimization folks)
* Avi Kivity a...@redhat.com wrote:
On 06/16/2010 10:32 AM, H. Peter Anvin wrote:
On 06/16/2010 12:24 AM, Avi Kivity wrote:
Ingo, Peter, any feedback on this?
Conceptually,
On Thu, Jun 03, 2010 at 10:52:51AM +0200, Andi Kleen wrote:
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
There are two separate problems: the more general problem is that
the hypervisor can put a vcpu
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
There are two separate problems: the more general problem is that
the hypervisor can put a vcpu to sleep while holding a lock, causing
other vcpus to spin until
On Thu, Jun 03, 2010 at 05:34:50PM +0530, Srivatsa Vaddagiri wrote:
On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
Guest side:
static inline void spin_lock(spinlock_t *lock)
{
	raw_spin_lock(&lock->rlock);
+	__get_cpu_var(gh_vcpu_ptr)->defer_preempt
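Filling out the truncated hunk with a sketch of the guest-side idea (the field
name past "defer_preempt" and the shared-structure layout are assumptions, not
the actual patch): the guest bumps a per-vcpu counter visible to the
hypervisor while it holds a lock, so the host can defer preempting a lock
holder.

/*
 * Illustration only; the real patch's names and ordering may differ.
 * gh_vcpu_ptr points at a per-vcpu structure shared with the hypervisor.
 */
struct guest_hint {
	unsigned int defer_preempt_count;	/* nonzero: please don't preempt */
};

static DEFINE_PER_CPU(struct guest_hint *, gh_vcpu_ptr);

static inline void spin_lock(spinlock_t *lock)
{
	raw_spin_lock(&lock->rlock);
	/* Hint, not a guarantee: set after acquire as in the quoted hunk. */
	__get_cpu_var(gh_vcpu_ptr)->defer_preempt_count++;
}

static inline void spin_unlock(spinlock_t *lock)
{
	__get_cpu_var(gh_vcpu_ptr)->defer_preempt_count--;
	raw_spin_unlock(&lock->rlock);
}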
On Thu, Jun 03, 2010 at 06:28:21PM +0530, Srivatsa Vaddagiri wrote:
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
Holding a ticket in the queue is effectively the same as holding the
lock, from the pov of processes waiting behind.
The difference of course is that CPU
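The point about waiters behind a ticket holder is easiest to see from a
minimal ticket-lock sketch (illustration only, not the kernel's x86
implementation): every CPU queued behind you can only make progress once all
earlier tickets, including yours, have been served.

#include <stdatomic.h>

typedef struct {
	_Atomic unsigned int next;	/* next ticket to hand out */
	_Atomic unsigned int owner;	/* ticket currently being served */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *lock)
{
	/* Take the next ticket; the return value is our place in line. */
	unsigned int ticket = atomic_fetch_add(&lock->next, 1);

	/*
	 * Spin until it is our turn.  If the vcpu holding an earlier
	 * ticket is descheduled, everyone behind it spins uselessly.
	 */
	while (atomic_load(&lock->owner) != ticket)
		;
}

static void ticket_unlock(ticket_lock_t *lock)
{
	atomic_fetch_add(&lock->owner, 1);
}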
On Thu, Jun 03, 2010 at 05:17:30PM +0200, Andi Kleen wrote:
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
And they aren't even using ticket spinlocks!!
I suppose they simply don't have unfair memory. Makes things easier.
That would certainly be a part of it, I'm sure
explicit that reads were, hence smp_rmb() using a
locked atomic.
Here is a post by Nick Piggin from 2007 with links to Intel _and_ AMD
documents asserting that reads to cacheable memory are in program order:
http://lkml.org/lkml/2007/9/28/212
Subject: [patch] x86: improved memory
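For context, the implication of "reads to cacheable memory are in program
order" is that smp_rmb() only has to defeat compiler reordering; a simplified
sketch of the direction the referenced patch takes (not its literal text):

/* Compiler barrier: no CPU fence needed when loads are already ordered. */
#define barrier()	__asm__ __volatile__("" ::: "memory")

/*
 * With program-ordered cacheable reads, smp_rmb() can be a pure compiler
 * barrier instead of a locked atomic or lfence.
 */
#define smp_rmb()	barrier()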
On Friday 17 April 2009 17:08:07 Jared Hulbert wrote:
As everyone knows, my favourite thing is to say nasty things about any
new feature that adds complexity to common code. I feel like crying to
hear about how many more instances of MS Office we can all run, if only
we apply this patch.
On Wednesday 15 April 2009 08:09:03 Andrew Morton wrote:
On Thu, 9 Apr 2009 06:58:37 +0300
Izik Eidus iei...@redhat.com wrote:
KSM is a Linux driver that allows dynamically sharing identical memory
pages between one or more processes.
Generally looks OK to me. But that doesn't mean
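For anyone unfamiliar with KSM, this is how an application opts pages in using
the madvise()-based interface KSM eventually settled on in mainline (the patch
set quoted here may still have used a different, ioctl-style interface):

#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Ask ksmd to scan this range and merge identical pages. */
	if (madvise(buf, len, MADV_MERGEABLE))
		return 1;

	/* ... fill buf with data; duplicate pages may now be shared ... */
	return 0;
}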
On Wednesday 31 December 2008 02:32:50 Avi Kivity wrote:
Marcelo Tosatti wrote:
On Tue, Dec 30, 2008 at 02:53:36PM +1100, Nick Piggin wrote:
RSP 88011f4c7be8
---[ end trace 31811279a2e983e8 ]---
note: qemu-system-x86[4440] exited with preempt_count 2
(gdb) l
On Tuesday 30 December 2008 01:58:21 Marcelo Tosatti wrote:
On Wed, Dec 24, 2008 at 04:28:44PM +0100, Andrea Arcangeli wrote:
On Wed, Dec 24, 2008 at 02:50:57PM +0200, Avi Kivity wrote:
Marcelo Tosatti wrote:
The destructor for huge pages uses the backing inode for adjusting
hugetlbfs
On Thursday 13 November 2008 13:31, Andrea Arcangeli wrote:
On Thu, Nov 13, 2008 at 03:00:59AM +0100, Andrea Arcangeli wrote:
CPU0 migrate.c                         CPU1 filemap.c
--------------                         --------------
                                       find_get_page
On Fri, Nov 07, 2008 at 08:35:50PM -0200, Glauber Costa wrote:
Nick,
This is the whole set of patches I was talking about.
Patch 3 is the one that in fact fixes the problem
Patches 1 and 2 are debugging aids I made use of, and could possibly be
useful to others.
Patch 4 removes guard pages
On Saturday 08 November 2008 13:13, Glauber Costa wrote:
On Sat, Nov 08, 2008 at 01:58:32AM +0100, Nick Piggin wrote:
On Fri, Nov 07, 2008 at 08:35:50PM -0200, Glauber Costa wrote:
Nick,
This is the whole set of patches I was talking about.
Patch 3 is the one that in fact fixes
: Nick Piggin [EMAIL PROTECTED]
---
mm/vmalloc.c |    1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7db493d..6fe2003 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -378,6 +378,7 @@ found:
if (!purged
On Thu, Oct 30, 2008 at 09:28:54AM -0200, Glauber Costa wrote:
On Thu, Oct 30, 2008 at 05:49:41AM +0100, Nick Piggin wrote:
On Wed, Oct 29, 2008 at 08:07:37PM -0200, Glauber Costa wrote:
On Wed, Oct 29, 2008 at 11:43:33AM +0100, Nick Piggin wrote:
On Wed, Oct 29, 2008 at 12:29:40PM +0200
On Wed, Oct 29, 2008 at 07:48:56AM -0200, Glauber Costa wrote:
0xf7bfe000-0xf7c00000    8192 hpet_enable+0x2d/0x279 phys=fed00000 ioremap
0xf7c02000-0xf7c04000    8192 acpi_os_map_memory+0x11/0x1a phys=7fed1000 ioremap
0xf7c06000-0xf7c08000    8192 acpi_os_map_memory+0x11/0x1a phys=7fef2000
On Wed, Oct 29, 2008 at 12:29:40PM +0200, Avi Kivity wrote:
Nick Piggin wrote:
Hmm, spanning 30MB of memory... how much vmalloc space do you have?
From the original report:
VmallocTotal: 122880 kB
VmallocUsed: 15184 kB
VmallocChunk: 83764 kB
So it seems there's
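(Working out those numbers: VmallocTotal 122880 kB is 120 MB of vmalloc
address space, VmallocUsed 15184 kB is roughly 15 MB actually in use, and
VmallocChunk 83764 kB means the largest free contiguous region is still about
82 MB, so the failures discussed in this thread are not simply the address
space being full.)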
On Wed, Oct 29, 2008 at 08:07:37PM -0200, Glauber Costa wrote:
On Wed, Oct 29, 2008 at 11:43:33AM +0100, Nick Piggin wrote:
On Wed, Oct 29, 2008 at 12:29:40PM +0200, Avi Kivity wrote:
Nick Piggin wrote:
Hmm, spanning 30MB of memory... how much vmalloc space do you have
On Tue, Oct 28, 2008 at 08:55:13PM -0200, Glauber Costa wrote:
Commit db64fe02258f1507e13fe5212a989922323685ce broke
KVM (the symptom) for me. The cause is that vmalloc
allocations fail, despite the fact that /proc/meminfo
shows plenty of vmalloc space available.
After some