Lai Jiangshan wrote:
Should use linux/uaccess.h instead of asm/uaccess.h
checkpatch.pl also suggests it:
./scripts/checkpatch.pl --file arch/x86/kvm/x86.c
WARNING: Use #include <linux/uaccess.h> instead of <asm/uaccess.h>
#50: FILE: x86/kvm/x86.c:50:
+#include <asm/uaccess.h>
From: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
No one else checks for new build warnings?
arch/x86/kvm/mmu.c |1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c16c4ca..9b9d773
Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
No one else checks for new build warnings?
arch/x86/kvm/mmu.c |1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c
Xiao Guangrong wrote:
Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
No one else checks for new build warnings?
arch/x86/kvm/mmu.c |1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git
According to the SDM, we need to check whether the single-context INVVPID type is supported
before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
---
arch/x86/include/asm/vmx.h |2 ++
arch/x86/kvm/vmx.c |8 +++-
2 files changed, 9 insertions(+), 1
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
There are two separate problems: the more general problem is that
the hypervisor can put a vcpu to sleep while holding a lock, causing
other vcpus to spin until
On Thursday 03 June 2010 16:44:34 Gui Jianfeng wrote:
According to the SDM, we need to check whether the single-context INVVPID type is
supported before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
---
arch/x86/include/asm/vmx.h |2 ++
arch/x86/kvm/vmx.c
Sheng Yang wrote:
On Thursday 03 June 2010 16:44:34 Gui Jianfeng wrote:
According to the SDM, we need to check whether the single-context INVVPID type is
supported before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
---
arch/x86/include/asm/vmx.h |2 ++
On Thu, Jun 03, 2010 at 10:52:51AM +0200, Andi Kleen wrote:
Fyi - I have an early patch ready to address this issue. Basically I am using
host-kernel memory (mmap'ed into guest as io-memory via ivshmem driver) to hint
the host whenever the guest is in a spin-locked section, which is read by the host
Dave Young wrote:
On Wed, Jun 2, 2010 at 8:45 PM, Andre Przywara andre.przyw...@amd.com wrote:
Dave Young wrote:
Hi,
With today's git version (qemu-kvm), I got following message in kernel
dmesg
[168344.215605] kvm: 27289: cpu0 unhandled wrmsr: 0x198 data 0
Are you sure about that?
Sure
According to the SDM, we need to check whether the single-context INVVPID type is supported
before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
---
arch/x86/include/asm/vmx.h |2 ++
arch/x86/kvm/vmx.c | 14 +-
2 files changed, 15
On Thu, Jun 3, 2010 at 5:30 PM, Andre Przywara andre.przyw...@amd.com wrote:
Dave Young wrote:
On Wed, Jun 2, 2010 at 8:45 PM, Andre Przywara andre.przyw...@amd.com
wrote:
Dave Young wrote:
Hi,
With today's git version (qemu-kvm), I got following message in kernel
dmesg
On Thu, Jun 3, 2010 at 5:52 PM, Dave Young hidave.darks...@gmail.com wrote:
On Thu, Jun 3, 2010 at 5:30 PM, Andre Przywara andre.przyw...@amd.com wrote:
Dave Young wrote:
On Wed, Jun 2, 2010 at 8:45 PM, Andre Przywara andre.przyw...@amd.com
wrote:
Dave Young wrote:
Hi,
With today's git
On Thu, Jun 03, 2010 at 10:52:51AM +0200, Andi Kleen wrote:
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
There are two separate problems: the more general problem is that
the hypervisor can put a vcpu
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
There are two separate problems: the more general problem is that
the hypervisor can put a vcpu to sleep while holding a lock, causing
other vcpus to spin until
Signed-off-by: Michael Goldish mgold...@redhat.com
---
client/tests/kvm/kvm_vm.py | 15 +++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 94bacdf..f3c05f3 100755
--- a/client/tests/kvm/kvm_vm.py
+++
Should be set to yes to enable testdev.
Signed-off-by: Michael Goldish mgold...@redhat.com
---
client/tests/kvm/kvm_vm.py | 28
1 files changed, 20 insertions(+), 8 deletions(-)
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index
Based on Naphtali Sprei's patches.
Signed-off-by: Michael Goldish mgold...@redhat.com
---
client/tests/kvm/tests/unittest.py | 66
1 files changed, 66 insertions(+), 0 deletions(-)
create mode 100644 client/tests/kvm/tests/unittest.py
diff --git
Based on Naphtali Sprei's patches.
Signed-off-by: Michael Goldish mgold...@redhat.com
---
client/tests/kvm/unittests.cfg.sample | 98 +
1 files changed, 98 insertions(+), 0 deletions(-)
create mode 100644 client/tests/kvm/unittests.cfg.sample
diff --git
On Tue, 2010-06-01 at 21:36 +0200, Andi Kleen wrote:
Collecting the contention/usage statistics on a per spinlock
basis seems complex. I believe a practical approximation
to this is adaptive mutexes where, upon hitting a spin
time threshold, punt and let the scheduler reconcile fairness.
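The adaptive-mutex idea above can be sketched in a few lines of standalone C. This is a hedged illustration, not the proposed kernel change: the threshold, the use of sched_yield() as the "punt", and all names are assumptions for demonstration.

```c
#include <stdatomic.h>
#include <sched.h>

/* Spin on the lock up to spin_threshold attempts; past that, yield to
 * the scheduler instead of burning more cycles. Returns the number of
 * spins since the last yield, for inspection. Illustrative only. */
static int adaptive_lock(atomic_flag *lock, int spin_threshold)
{
    int spins = 0;
    while (atomic_flag_test_and_set(lock)) {
        if (++spins >= spin_threshold) {
            sched_yield();   /* punt: let the scheduler reconcile fairness */
            spins = 0;
        }
    }
    return spins;
}
```

With an uncontended lock the fast path never yields; only past the threshold does the waiter hand the CPU back, which is the behavior being argued for.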
On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
Guest side:
static inline void spin_lock(spinlock_t *lock)
{
raw_spin_lock(&lock->rlock);
+ __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
}
static inline void spin_unlock(spinlock_t *lock)
{
+
On Thu, Jun 03, 2010 at 05:34:50PM +0530, Srivatsa Vaddagiri wrote:
On Thu, Jun 03, 2010 at 08:38:55PM +1000, Nick Piggin wrote:
Guest side:
static inline void spin_lock(spinlock_t *lock)
{
raw_spin_lock(&lock->rlock);
+ __get_cpu_var(gh_vcpu_ptr)->defer_preempt++;
}
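The defer_preempt scheme quoted in the guest-side snippet can be simulated in user space. A minimal sketch, assuming a per-vcpu counter in memory shared with the host; the struct and function names are illustrative, not the real guest/KVM interface:

```c
/* Guest bumps a counter around spinlock critical sections; the host
 * consults it before preempting the vcpu. Illustrative names only. */
struct vcpu_hint {
    int defer_preempt;   /* >0 while inside a spinlock-held section */
};

static void guest_lock_hint(struct vcpu_hint *v)   { v->defer_preempt++; }
static void guest_unlock_hint(struct vcpu_hint *v) { v->defer_preempt--; }

/* Host side: only preempt a vcpu that is not in a critical section. */
static int host_may_preempt(const struct vcpu_hint *v)
{
    return v->defer_preempt == 0;
}
```

This captures the contract being debated in the thread: the hint only defers preemption while a lock is held, so a lock holder is less likely to be put to sleep mid-critical-section.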
On Thursday 03 June 2010 17:45:22 Gui Jianfeng wrote:
According to the SDM, we need to check whether the single-context INVVPID type is
supported before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
---
arch/x86/include/asm/vmx.h |2 ++
arch/x86/kvm/vmx.c
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
Holding a ticket in the queue is effectively the same as holding the
lock, from the pov of processes waiting behind.
The difference of course is that CPU cycles do not directly reduce
latency of ticket holders (only the owner).
This makes it easy to change the way of allocating/freeing dirty bitmaps.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Fernando Luis Vazquez Cao ferna...@oss.ntt.co.jp
---
virt/kvm/kvm_main.c | 30 +++---
1 files changed, 23
On Thu, Jun 03, 2010 at 06:28:21PM +0530, Srivatsa Vaddagiri wrote:
Ok got it - although that approach is not advisable in some cases for ex: when
the lock holder vcpu and lock acquired vcpu are scheduled on the same pcpu by
the hypervisor (which was experimented with in [1] where they found a
Currently x86's kvm_vm_ioctl_get_dirty_log() needs to allocate a bitmap by
vmalloc() which will be used in the next logging and this has been causing
bad effects to VGA and live-migration: vmalloc() consumes extra systime,
triggers tlb flush, etc.
This patch resolves this issue by pre-allocating
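The pre-allocation approach described above can be sketched with a double-buffered bitmap: keep two buffers per memory slot and flip between them on each get-dirty-log call, so the logging path never allocates. This is a hedged user-space illustration; field and function names are assumptions, not KVM's actual code:

```c
#include <stdlib.h>
#include <string.h>

/* Two pre-allocated dirty bitmaps per slot; 'active' collects bits. */
struct slot_bitmaps {
    unsigned long *bitmaps[2];
    int active;
    size_t words;
};

static int slot_bitmaps_init(struct slot_bitmaps *s, size_t words)
{
    s->bitmaps[0] = calloc(words, sizeof(unsigned long));
    s->bitmaps[1] = calloc(words, sizeof(unsigned long));
    s->active = 0;
    s->words = words;
    return (s->bitmaps[0] && s->bitmaps[1]) ? 0 : -1;
}

/* Hand back the bitmap that was collecting dirty bits and switch
 * logging to the other, freshly zeroed buffer -- no vmalloc-style
 * allocation on this path, which is the point of the patch. */
static unsigned long *slot_bitmaps_swap(struct slot_bitmaps *s)
{
    unsigned long *full = s->bitmaps[s->active];
    s->active ^= 1;
    memset(s->bitmaps[s->active], 0, s->words * sizeof(unsigned long));
    return full;
}
```

The memset replaces the per-call allocation; clearing a resident buffer avoids the extra system time and TLB flushes the commit message attributes to vmalloc().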
On Thu, Jun 03, 2010 at 06:28:21PM +0530, Srivatsa Vaddagiri wrote:
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
Holding a ticket in the queue is effectively the same as holding the
lock, from the pov of processes waiting behind.
The difference of course is that CPU
On Thu, Jun 03, 2010 at 11:45:00PM +1000, Nick Piggin wrote:
Ok got it - although that approach is not advisable in some cases, for ex: when
the lock holder vcpu and lock acquirer vcpu are scheduled on the same pcpu by
the hypervisor (which was experimented with in [1] where they found a
On Thu, Jun 03, 2010 at 12:06:39PM +0100, David Woodhouse wrote:
On Tue, 2010-06-01 at 21:36 +0200, Andi Kleen wrote:
Collecting the contention/usage statistics on a per spinlock
basis seems complex. I believe a practical approximation
to this are adaptive mutexes where upon hitting a
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
And they aren't even using ticket spinlocks!!
I suppose they simply don't have unfair memory. Makes things easier.
-Andi
--
a...@linux.intel.com -- Speaking for myself only.
At Wed, 02 Jun 2010 12:49:02 +0200,
Kevin Wolf wrote:
On 28.05.2010 04:44, MORITA Kazutaka wrote:
Hi all,
This patch adds a block driver for Sheepdog distributed storage
system. Please consider for inclusion.
Hint for next time: You should remove the RFC from the subject line if
Bugs item #1666308, was opened at 2007-02-22 10:09
Message generated for change (Settings changed) made by iggy_cav
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=1666308&group_id=180599
Please note that this message will contain a full copy of the comment
Has anyone used Virtio? I just want to ask some questions about Virtio.
thanks.
--
Tao Ding
Institute for Computer Architecture
School of Computer Science & Technology
Beijing University of Aeronautics and Astronautics
MSN: dingtao1...@live.cn
Email: dingtao1...@gmail.com
Address: G1045, New
On Thu, Jun 03, 2010 at 05:17:30PM +0200, Andi Kleen wrote:
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
And they aren't even using ticket spinlocks!!
I suppose they simply don't have unfair memory. Makes things easier.
That would certainly be a part of it, I'm sure they
At Wed, 02 Jun 2010 15:55:42 +0200,
Kevin Wolf wrote:
On 28.05.2010 04:44, MORITA Kazutaka wrote:
Sheepdog is a distributed storage system for QEMU. It provides highly
available block level storage volumes to VMs like Amazon EBS. This
patch adds a qemu block driver for Sheepdog.
That would certainly be a part of it, I'm sure they have stronger
fairness and guarantees at the expense of some performance. We saw the
spinlock starvation first on 8-16 core Opterons I think, whereas Altix
had been over 1024 cores and POWER7 1024 threads now apparently without
reported
This patch is a subset of an already upstream patch, but this portion is useful
in earlier releases.
Please consider for stable.
If the add_buf operation fails, indicate failure to the caller.
Signed-off-by: Bruce Rogers brog...@novell.com
--- a/drivers/net/virtio_net.c
+++
virtio_net: Add schedule check to napi_enable call
Under harsh testing conditions, including low memory, the guest would
stop receiving packets. With this patch applied we no longer see any
problems in the driver while performing these tests for extended periods
of time.
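The race that the "schedule check" closes is generic enough to sketch without the driver: work that arrives while polling is disabled would otherwise sit unserviced forever, so the re-enable path must re-check for pending work and queue a poll itself. A hedged standalone illustration, not the virtio_net code; all names are made up:

```c
/* Minimal model of a NAPI-like poller. 'pending' represents packets
 * that arrived while polling was disabled. */
struct poll_state {
    int enabled;    /* polling allowed */
    int pending;    /* work arrived during the disabled window */
    int scheduled;  /* a poll pass has been queued */
};

static void poll_enable_with_check(struct poll_state *p)
{
    p->enabled = 1;
    if (p->pending)          /* the added schedule check */
        p->scheduled = 1;    /* stands in for napi_schedule() */
}
```

Without the check, a device that raised its event during the disabled window never gets polled again, which matches the "guest stops receiving packets" symptom described above.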
These are patches which we have found useful for our 2.6.32 based SLES 11 SP1
release.
The first patch is already upstream, but should be included in stable.
The second patch is a subset of another upstream patch. Again, stable material.
The third patch solves the last remaining issue we saw
Please consider this for stable:
commit 39d321577405e8e269fd238b278aaf2425fa788a
Author: Herbert Xu herb...@gondor.apana.org.au
Date: Mon Jan 25 15:51:01 2010 -0800
virtio_net: Make delayed refill more reliable
I have seen RX stalls on a machine that experienced a suspected
OOM.
On Thu, Jun 03, 2010 at 01:38:31PM -0600, Bruce Rogers wrote:
virtio_net: Add schedule check to napi_enable call
Under harsh testing conditions, including low memory, the guest would
stop receiving packets. With this patch applied we no longer see any
problems in the driver
OK, in the interest of making progress, I am about to embark on the following:
1. Create a user-iommu-domain driver - opening it will give a new empty domain.
Ultimately this can also populate sysfs with the state of its world, which would
also be a good addition to the base iommu stuff.
On Tue, 2010-06-01 at 15:39 +0300, Avi Kivity wrote:
On 06/01/2010 02:59 PM, Steven Rostedt wrote:
I meant that viewing would be slowed down. It's an important part of
using ftrace!
How long does the Python formatter take to process 100k or 1M events?
I finally got around to testing
On 6/3/2010 at 03:02 PM, Greg KH g...@kroah.com wrote:
What is the git commit id of the upstream patch?
9ab86bbcf8be755256f0a5e994e0b38af6b4d399
I grabbed this from:
git://git.kernel.org/pub/scm/virt/kvm/kvm.git
I need that for all stable patches to be accepted, thanks.
Also, all
On 6/3/2010 at 03:03 PM, Greg KH g...@kroah.com wrote:
On Thu, Jun 03, 2010 at 01:38:31PM -0600, Bruce Rogers wrote:
virtio_net: Add schedule check to napi_enable call
Under harsh testing conditions, including low memory, the guest would
stop receiving packets. With this patch
On Thu, Jun 03, 2010 at 04:17:34PM -0600, Bruce Rogers wrote:
On 6/3/2010 at 03:03 PM, Greg KH g...@kroah.com wrote:
On Thu, Jun 03, 2010 at 01:38:31PM -0600, Bruce Rogers wrote:
virtio_net: Add schedule check to napi_enable call
Under harsh testing conditions, including low
On 6/3/2010 at 04:51 PM, Greg KH g...@kroah.com wrote:
On Thu, Jun 03, 2010 at 04:17:34PM -0600, Bruce Rogers wrote:
On 6/3/2010 at 03:03 PM, Greg KH g...@kroah.com wrote:
On Thu, Jun 03, 2010 at 01:38:31PM -0600, Bruce Rogers wrote:
virtio_net: Add schedule check to napi_enable
According to the SDM, we need to check whether the single-context INVVPID type is supported
before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
---
arch/x86/include/asm/vmx.h |2 ++
arch/x86/kvm/vmx.c |8 +++-
2 files changed, 9 insertions(+), 1
Hi
We bumped into this issue with VMWare ESX 4 where it doesn't support hardware
virtualization if the processor is an AMD Athlon/Opteron
(http://communities.vmware.com/docs/DOC-9150). Does linux-kvm have a similar
issue? More specifically, will the module kvm_amd.ko support AMD-V on an
On Wed, 2 Jun 2010 12:17:12 am Michael S. Tsirkin wrote:
This adds an (unused) option to put available ring before control (avail
index, flags), and adds padding between index and flags. This avoids
cache line sharing between control and ring, and also makes it possible
to extend avail control
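The layout option described above can be shown as a struct: put the available ring before the control fields and pad between the index and the flags so the two do not share a cache line. An illustrative sketch only; the ring size, the 64-byte line, and the struct name are assumptions, and this is not the actual virtio ring header:

```c
#include <stdint.h>
#include <stddef.h>

#define RING_ENTRIES 128   /* assumed queue size */
#define CACHE_LINE   64    /* assumed cache-line size */

struct avail_padded {
    uint16_t ring[RING_ENTRIES];            /* available ring first */
    uint16_t flags;                         /* control: flags */
    uint8_t  pad[CACHE_LINE - sizeof(uint16_t)]; /* split the lines */
    uint16_t idx;                           /* control: avail index */
};
```

With the padding, the frequently updated idx and the flags land on different cache lines, so the guest bumping the index does not bounce the line the host reads for flags.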
On Thursday 03 June 2010 21:33:24 Govender, Sashan wrote:
Hi
We bumped into this issue with VMWare ESX 4 where it doesn't support
hardware virtualization if the processor is an AMD Athlon/Opteron
(http://communities.vmware.com/docs/DOC-9150). Does linux-kvm have a
similar issue? More
On Friday 04 June 2010 08:51:39 Gui Jianfeng wrote:
According to the SDM, we need to check whether the single-context INVVPID type is
supported before issuing the invvpid instruction.
Signed-off-by: Gui Jianfeng guijianf...@cn.fujitsu.com
Reviewed-by: Sheng Yang sh...@linux.intel.com
--
regards
Yang,