One possible order is:
KVM_CREATE_IRQCHIP ioctl (takes kvm->lock) -> kvm_io_bus_register_dev() ->
down_write(kvm->slots_lock).
The other is in kvm_vm_ioctl_assign_device(), which takes kvm->slots_lock
first, then kvm->lock.
Observed via kernel lock debugging (lockdep) warnings.
Signed-off-by: Sheng Y
Resending with proper cc list :(
On Mon, Dec 7, 2009 at 2:43 PM, sudhir kumar wrote:
> Thanks for initiating the server side implementation of migration. Few
> comments below
>
> On Fri, Dec 4, 2009 at 1:48 PM, Yolkfull Chow wrote:
>> This patch will add a server-side test namely kvm_migration.
They have no place in common code.
Signed-off-by: Avi Kivity
---
arch/x86/include/asm/kvm_host.h | 13 -
arch/x86/kvm/vmx.c | 13 +
2 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm
When ept is enabled, we aren't particularly interested in cr4.pge, so allow
the guest to own it. This improves performance in vmap()-intensive loads.
Avi Kivity (4):
KVM: VMX: Move some cr[04] related constants to vmx.c
KVM: Add accessor for reading cr4 (or some bits of cr4)
KVM: VMX: Make
We make no use of cr4.pge if ept is enabled, but the guest does (to flush
global mappings, as with vmap()), so give the guest ownership of this bit.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/vmx.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arc
Instead of specifying the bits which we want to trap on, specify the bits
which we allow the guest to change transparently. This is safer wrt future
changes to cr4.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/vmx.c | 10 ++
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a
Some bits of cr4 can be owned by the guest on vmx, so when we read them,
we copy them to the vcpu structure. In preparation for making the set of
guest-owned bits dynamic, use helpers to access these bits so we don't need
to know where the bit resides.
No changes to svm since all bits are host-ow
FYI, this was already incorporated to the tree, thanks Sudhir!
On Fri, 2009-12-04 at 11:19 +0530, sudhir kumar wrote:
> This patch adds the hackbench test for the KVM linux guests.
>
> Signed-off-by: Sudhir Kumar
>
> Index: kvm/autotest_control/hackbench.control
> ==
Hello,
I have the following questions regarding the KVM architecture. I looked
at the slides available at linux-kvm.org, but didn't find definitive
answers. I'm also interested to learn if given feature is or is not
planned for the near future.
The questions follow:
1) Do you have any support fo
On 12/07/2009 03:05 PM, Joanna Rutkowska wrote:
Hello,
I have the following questions regarding the KVM architecture. I looked
at the slides available at linux-kvm.org, but didn't find definitive
answers. I'm also interested to learn if given feature is or is not
planned for the near future.
Th
Avi Kivity wrote:
>> 1) Do you have any support for para-virtualized VMs?
>
> Yes, for example, we support paravirtualized timers and mmu for Linux.
> These are fairly minimal compared to Xen's pv domains.
>
Can I run a regular Linux as PV-guest? Specifically, can I get rid of
qemu totally, as
On 12/07/2009 03:30 PM, Joanna Rutkowska wrote:
Avi Kivity wrote:
1) Do you have any support for para-virtualized VMs?
Yes, for example, we support paravirtualized timers and mmu for Linux.
These are fairly minimal compared to Xen's pv domains.
Can I run a regular Linux as
Avi Kivity wrote:
> On 12/07/2009 03:05 PM, Joanna Rutkowska wrote:
>> In particular, is
>> it possible to move the qemu from the host to one of the VMs? Perhaps to
>> have a separate copy of qemu for each VM? (ala Xen's stub-domains)
>>
>
> It should be fairly easy to place qemu in a guest.
On 12/07/2009 03:55 PM, Joanna Rutkowska wrote:
It should be fairly easy to place qemu in a guest. You would leave a
simple program on the host to communicate with kvm and pass any data
written by the guest to qemu running in another guest, and feed any
replies back to the guest.
But t
Avi Kivity wrote:
> On 12/07/2009 03:30 PM, Joanna Rutkowska wrote:
>> Avi Kivity wrote:
>>
>>
1) Do you have any support for para-virtualized VMs?
>>> Yes, for example, we support paravirtualized timers and mmu for Linux.
>>> These are fairly minimal compared to Xen's pv domai
On 12/07/2009 04:06 PM, Joanna Rutkowska wrote:
Can you point to a document/source file that would list all the possible
interfaces between VM and the host? I.e. all the VMX handlers, and all
the hypercalls (PV interfaces).
arch/x86/kvm/vmx.c is the entry point for all interaction, but it
Jan Kiszka wrote:
> Kevin Wolf wrote:
>> Hi Jan,
>>
>> Am 19.11.2009 13:19, schrieb Jan Kiszka:
>>> (gdb) print ((BDRVQcowState *)bs->opaque)->cluster_allocs.lh_first
>>> $5 = (struct QCowL2Meta *) 0xcb3568
>>> (gdb) print *((BDRVQcowState *)bs->opaque)->cluster_allocs.lh_first
>>> $6 = {offset =
Jan Kiszka wrote:
> And now it happened again (qemu-kvm head, during kernel installation
> from network onto local qcow2-disk). Any clever idea how to proceed with
> this?
>
> I could try to run the step in a loop, hopefully retriggering it once in
> a (likely longer) while. But then we need some
Am 07.12.2009 15:16, schrieb Jan Kiszka:
>> Likely not. What I did was nothing special, and I did not notice such a
>> crash in the last months.
>
> And now it happened again (qemu-kvm head, during kernel installation
> from network onto local qcow2-disk). Any clever idea how to proceed with
> th
On 12/07/2009 04:50 PM, Jan Kiszka wrote:
Maybe I'm seeing ghosts, and I don't even have a minimal clue about what
goes on in the code, but this looks fishy:
Plenty of ghosts in qcow2, of all those explorers who tried to brave the
code. Only Kevin has ever come back.
preallocate() in
Am 07.12.2009 15:50, schrieb Jan Kiszka:
> Jan Kiszka wrote:
>> And now it happened again (qemu-kvm head, during kernel installation
>> from network onto local qcow2-disk). Any clever idea how to proceed with
>> this?
>>
>> I could try to run the step in a loop, hopefully retriggering it once in
>>
Hi List,
My question is about VM-Exit & VM-Entry controls for MSRs on Intel's processors.
For VM-Exit, a VMM can specify lists of MSRs to be stored and loaded
on VM exits. But for VM-Entry, a VMM can only specify a list of MSRs
to be loaded on VM entries. Why does not the processor have the
featu
On 12/07/2009 05:07 PM, Jiaqing Du wrote:
Hi List,
My question is about VM-Exit & VM-Entry controls for MSRs on Intel's processors.
For VM-Exit, a VMM can specify lists of MSRs to be stored and loaded
on VM exits. But for VM-Entry, a VMM can only specify a list of MSRs
to be loaded on VM entrie
Kevin Wolf wrote:
> Am 07.12.2009 15:50, schrieb Jan Kiszka:
>> Jan Kiszka wrote:
>>> And now it happened again (qemu-kvm head, during kernel installation
>>> from network onto local qcow2-disk). Any clever idea how to proceed with
>>> this?
>>>
>>> I could try to run the step in a loop, hopefully
Hi Avi,
I did not get your point.
But if we want to multiplex some of the MSRs across the VMM and the
guest(s), it would be handy if the hardware provides this feature:
save host's version and load guest's version. Of course, we can do
this manually. I'm just wondering why this feature is missing
On 12/07/2009 05:32 PM, Jiaqing Du wrote:
Hi Avi,
I did not get your point.
But if we want to multiplex some of the MSRs across the VMM and the
guest(s), it would be handy if the hardware provides this feature:
save host's version and load guest's version. Of course, we can do
this manually. I'
Kevin Wolf wrote:
> Am 07.12.2009 15:16, schrieb Jan Kiszka:
>>> Likely not. What I did was nothing special, and I did not notice such a
>>> crash in the last months.
>> And now it happened again (qemu-kvm head, during kernel installation
>> from network onto local qcow2-disk). Any clever idea how
Am 07.12.2009 17:09, schrieb Jan Kiszka:
> Kevin Wolf wrote:
>> In qcow_aio_write_cb there isn't much happening between these calls. The
>> only thing that could somehow become dangerous is the
>> qcow_aio_write_cb(req, 0); for queued requests in run_dependent_requests.
>
> If m->nb_clusters is no
Avi Kivity wrote:
No. Paravirtualization just augments the standard hardware interface,
it doesn't replace it as in Xen.
NB, unlike Xen, we can (and do) run qemu as non-root. Things like
RHEV-H and oVirt constrain the qemu process with SELinux.
Also, you can use qemu to provide the backend
Joanna Rutkowska wrote:
Avi Kivity wrote:
On 12/07/2009 03:05 PM, Joanna Rutkowska wrote:
In particular, is
it possible to move the qemu from the host to one of the VMs? Perhaps to
have a separate copy of qemu for each VM? (ala Xen's stub-domains)
It should be fairly easy to
Anthony Liguori wrote:
> Avi Kivity wrote:
>> No. Paravirtualization just augments the standard hardware interface,
>> it doesn't replace it as in Xen.
>
> NB, unlike Xen, we can (and do) run qemu as non-root. Things like
> RHEV-H and oVirt constrain the qemu process with SELinux.
>
On Xen you
On 12/07/2009 07:09 PM, Joanna Rutkowska wrote:
Also, you can use qemu to provide the backends to a Xen PV guest (see -M
xenpv). The effect is that you are moving that privileged code from the
kernel (netback/blkback) to userspace (qemu -M xenpv).
In general, KVM tends to keep code in userspa
Avi Kivity wrote:
> On 12/07/2009 07:09 PM, Joanna Rutkowska wrote:
>>
>>> Also, you can use qemu to provide the backends to a Xen PV guest (see -M
>>> xenpv). The effect is that you are moving that privileged code from the
>>> kernel (netback/blkback) to userspace (qemu -M xenpv).
>>>
>>> In gene
On 12/07/2009 07:15 PM, Joanna Rutkowska wrote:
But the difference is that in case of Xen one can *easily* move the
backends to small unprivileged VMs. In that case it doesn't matter the
code is in kernel mode, it's still only in an unprivileged domain.
They're not really unprivileged
Joanna Rutkowska wrote:
Anthony Liguori wrote:
Avi Kivity wrote:
No. Paravirtualization just augments the standard hardware interface,
it doesn't replace it as in Xen.
NB, unlike Xen, we can (and do) run qemu as non-root. Things like
RHEV-H and oVirt constrain the qemu process
Avi Kivity wrote:
> On 12/07/2009 07:15 PM, Joanna Rutkowska wrote:
But the difference is that in case of Xen one can *easily* move the
backends to small unprivileged VMs. In that case it doesn't matter the
code is in kernel mode, it's still only in an unprivileged domain.
Joanna Rutkowska wrote:
Avi Kivity wrote:
On 12/07/2009 07:09 PM, Joanna Rutkowska wrote:
Also, you can use qemu to provide the backends to a Xen PV guest (see -M
xenpv). The effect is that you are moving that privileged code from the
kernel (netback/blkback) to userspace (qemu -M xenp
Anthony Liguori wrote:
> Joanna Rutkowska wrote:
>> Avi Kivity wrote:
>>
>>> On 12/07/2009 07:09 PM, Joanna Rutkowska wrote:
>>>
> Also, you can use qemu to provide the backends to a Xen PV guest
> (see -M
> xenpv). The effect is that you are moving that privileged code
> fro
On Mon, Dec 07, 2009 at 06:09:55PM +0100, Joanna Rutkowska wrote:
>
> Also, SELinux seems to me like a step into the wrong direction. It not
> only adds complexity to the already-too-complex kernel, but requires
> complex configuration. See e.g. this paper[1] for a nice example of how
> to escape
; -monitor
unix:/tmp/monitor-20091207-120625-tyjI,server,nowait -drive
file=/usr/local/autotest/tests/kvm/images/fc11-32.qcow2,if=ide -net nic,vlan=0
-net user,vlan=0 -m 512 -smp 1 -cdrom
/usr/local/autotest/tests/kvm/isos/linux/Fedora-11-i386-DVD.iso -fda
/usr/local/autotest/tests/kvm/images/
Anthony Liguori wrote:
> Joanna Rutkowska wrote:
>> Anthony Liguori wrote:
>>
>>> Avi Kivity wrote:
>>>
No. Paravirtualization just augments the standard hardware interface,
it doesn't replace it as in Xen.
>>> NB, unlike Xen, we can (and do) run qemu as non-root. Thin
nic_mode=tap is required for physical_resources to work
Signed-off-by: Lucas Meneghel Rodrigues
---
client/tests/kvm/kvm_tests.cfg.sample |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/client/tests/kvm/kvm_tests.cfg.sample
b/client/tests/kvm/kvm_tests.cfg.sample
On 12/07/2009 07:33 PM, Joanna Rutkowska wrote:
AFAIK VT-d is only supported in Xen for fully virtualized guests. Maybe
it changed while I wasn't watching, though.
Negative. VT-d can be used to contain PV DomUs as well. We actually
verified it.
Ah, good for them.
It can use re
With the latest upstream qemu-kvm git tree, all the offloads are disabled
on virtio-net.
peer_has_vnet_hdr(n) in virtio_net_get_features() is failing because
n->vc->peer is NULL. I could not yet figure out why the peer field is not initialized.
Do i need any new options to be specified with qemu comman
Muli Ben-Yehuda wrote:
On Mon, Dec 07, 2009 at 11:38:52AM -0600, Anthony Liguori wrote:
I'm skeptical that VT-d in its current form provides protection
against a malicious guest. The first problem is interrupt delivery.
I don't think any hypervisor has really put much thought into
mitigatin
On Sat, Dec 05, 2009 at 10:15:44PM +0200, Avi Kivity wrote:
> On 12/05/2009 09:42 PM, Marcelo Tosatti wrote:
>>
>>> I don't think the OS has "other mechanisms", though - the processor can
>>> speculate the tlb so that would be an OS bug.
>>
>> Can it? I figured it relied on the fact that no access
Hi Xiantao,
On 12.08.2009, at 06:03, Zhang, Xiantao wrote:
> From 2d3d6cf55f7fecd9a9fd7c764e43b1ee56c7eebb Mon Sep 17 00:00:00 2001
> From: Xiantao Zhang
> Date: Wed, 12 Aug 2009 11:39:33 +0800
> Subject: [PATCH] qemu-kvm: fix ia64 build breakage
>
> fix some configure issues.
Do you have any
Instead of hard coding the path to qemu-img on the
unattended_install script, let's pick it up from the
test parameters.
Signed-off-by: Lucas Meneghel Rodrigues
---
client/tests/kvm/scripts/unattended.py |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/client/tests/kvm/s
If paths to CD images and qemu binaries are not correctly
configured, the tests will fail, sometimes giving
inexperienced users little clue about what is actually
going on. So make sure we verify:
* ISO paths
* qemu binary paths
Inside kvm_preprocessing code, and give clear indications
if so
As pointed out before, the KVM reference control files
could use a little cleanup. This patch implements a thorough
cleanup of the main control file by:
* Refactoring the code present there, moving it to the
kvm_utils.py library
* Treat the build test exactly the same way as other
tests, moving the c
RHEL-4.8 still uses 'hd[a-z]' as the hard disk device name. This patch
adds 'h' to the regular expression in the command `pci_test_cmd'.
Signed-off-by: Yolkfull Chow
---
client/tests/kvm/kvm_tests.cfg.sample |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/client/tests/kvm/kvm_te
On Thu, 3 Dec 2009 08:28:38 pm Avi Kivity wrote:
> On 12/03/2009 10:42 AM, Avishay Traeger1 wrote:
> > I previously submitted a patch to have the guest virtio-blk driver get the
> > value for the maximum I/O size from the host bdrv, rather than assume that
> > there is no limit. Avi requested that
On Mon, Dec 07, 2009 at 03:35:54PM +0530, sudhir kumar wrote:
> Resending with proper cc list :(
>
> On Mon, Dec 7, 2009 at 2:43 PM, sudhir kumar wrote:
> > Thanks for initiating the server side implementation of migration. Few
> > comments below
> >
> > On Fri, Dec 4, 2009 at 1:48 PM, Yolkfull C
On Monday 07 December 2009 18:47:10 Avi Kivity wrote:
> Some bits of cr4 can be owned by the guest on vmx, so when we read them,
> we copy them to the vcpu structure. In preparation for making the set of
> guest-owned bits dynamic, use helpers to access these bits so we don't need
> to know where