On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are not bound to any CPU like the APIC which you may have in mind.
And none of the above interact with KVM.
They're implemented by kvm. What deeper interaction do you have in mind?
On 01/10/2011 10:11 PM, Anthony Liguori wrote:
On 01/08/2011 02:47 AM, Jan Kiszka wrote:
OK, but I don't want to argue about the ioeventfd API. So let's put this
case aside. :)
I often reply too quickly without explaining myself. Let me use
ioeventfd as an example to highlight why KVMState
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are not bound to any CPU like the APIC which you may have in mind.
And none of the above interact with KVM.
They're implemented by kvm.
On 11.01.2011, at 15:00, Anthony Liguori wrote:
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are not bound to any CPU like the APIC which you may have in mind.
And none of the above
On 01/11/2011 08:06 AM, Alexander Graf wrote:
On 11.01.2011, at 15:00, Anthony Liguori wrote:
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are not bound
On 01/11/2011 04:00 PM, Anthony Liguori wrote:
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are not bound to any CPU like the APIC which you may have in
mind.
And none of the above
On 01/11/2011 04:09 PM, Anthony Liguori wrote:
Disadvantages:
1) you lose migration / savevm between KVM and non-KVM VMs
This doesn't work today and it's never worked. KVM exposes things
that TCG cannot emulate (like pvclock).
If you run kvm without pvclock, or implement pvclock in qemu,
On 11.01.2011, at 15:09, Anthony Liguori wrote:
On 01/11/2011 08:06 AM, Alexander Graf wrote:
On 11.01.2011, at 15:00, Anthony Liguori wrote:
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have
On 01/11/2011 08:18 AM, Avi Kivity wrote:
On 01/11/2011 04:00 PM, Anthony Liguori wrote:
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are not bound to any CPU like the APIC which you may
On 01/11/2011 08:22 AM, Avi Kivity wrote:
On 01/11/2011 04:09 PM, Anthony Liguori wrote:
Disadvantages:
1) you lose migration / savevm between KVM and non-KVM VMs
This doesn't work today and it's never worked. KVM exposes things
that TCG cannot emulate (like pvclock).
If you run kvm
On 01/11/2011 04:28 PM, Anthony Liguori wrote:
On 01/11/2011 08:18 AM, Avi Kivity wrote:
On 01/11/2011 04:00 PM, Anthony Liguori wrote:
On 01/11/2011 03:01 AM, Avi Kivity wrote:
On 01/10/2011 10:23 PM, Anthony Liguori wrote:
I don't see how ioapic, pit, or pic have a system scope.
They are
On 01/11/2011 04:36 PM, Anthony Liguori wrote:
They need to use the same device id then. And if they share code,
that indicates that they need to be the same device even more.
No, it really doesn't :-) Cirrus VGA and std VGA share a lot of
code. But that doesn't mean that we treat them as
On 01/11/2011 08:56 AM, Avi Kivity wrote:
On 01/11/2011 04:36 PM, Anthony Liguori wrote:
They need to use the same device id then. And if they share code,
that indicates that they need to be the same device even more.
No, it really doesn't :-) Cirrus VGA and std VGA share a lot of
code.
On 11.01.2011, at 16:12, Anthony Liguori wrote:
On 01/11/2011 08:56 AM, Avi Kivity wrote:
On 01/11/2011 04:36 PM, Anthony Liguori wrote:
They need to use the same device id then. And if they share code, that
indicates that they need to be the same device even more.
No, it really
On 01/11/2011 05:12 PM, Anthony Liguori wrote:
No, it really doesn't :-) Cirrus VGA and std VGA share a lot of
code. But that doesn't mean that we treat them as one device.
Cirrus and VGA really are separate devices. They share code because
one evolved from the other, and is backwards
On 01/11/2011 09:37 AM, Avi Kivity wrote:
Why not? Whatever state the kernel keeps, we expose to userspace
and allow sending it over the wire.
What exactly is the scenario you're concerned about?
Migration between userspace HPET and in-kernel HPET?
Yes. To a lesser extent, a client doing
On 01/11/2011 05:55 PM, Anthony Liguori wrote:
One thing I've been considering is essentially migration filters.
It would be a set of rules that essentially were hpet-kvm.* =
hpet.* which would allow migration from hpet to hpet-kvm given a
translation of state. I think this sort of
Visible, yes, but not in live migration, or in 'info i8254', or
similar. We can live migrate between qcow2 and qed (using block
migration), we should be able to do the same for the two i8254
implementations.
I'm not happy about separate implementations, but that's a minor
detail. We can
On 01/11/2011 06:26 PM, Anthony Liguori wrote:
Visible, yes, but not in live migration, or in 'info i8254', or
similar. We can live migrate between qcow2 and qed (using block
migration), we should be able to do the same for the two i8254
implementations.
I'm not happy about separate
On 01/08/2011 02:47 AM, Jan Kiszka wrote:
OK, but I don't want to argue about the ioeventfd API. So let's put this
case aside. :)
I often reply too quickly without explaining myself. Let me use
ioeventfd as an example to highlight why KVMState is a good thing.
In real life, PIO and
Am 10.01.2011 20:59, Anthony Liguori wrote:
On 01/08/2011 02:47 AM, Jan Kiszka wrote:
Am 08.01.2011 00:27, Anthony Liguori wrote:
On 01/07/2011 03:03 AM, Jan Kiszka wrote:
Am 06.01.2011 20:24, Anthony Liguori wrote:
On 01/06/2011 11:56 AM, Marcelo Tosatti wrote:
Am 10.01.2011 21:11, Anthony Liguori wrote:
On 01/08/2011 02:47 AM, Jan Kiszka wrote:
OK, but I don't want to argue about the ioeventfd API. So let's put this
case aside. :)
I often reply too quickly without explaining myself. Let me use
ioeventfd as an example to highlight why
On 01/10/2011 02:12 PM, Jan Kiszka wrote:
Am 10.01.2011 20:59, Anthony Liguori wrote:
On 01/08/2011 02:47 AM, Jan Kiszka wrote:
Am 08.01.2011 00:27, Anthony Liguori wrote:
On 01/07/2011 03:03 AM, Jan Kiszka wrote:
Am 06.01.2011 20:24, Anthony Liguori wrote:
Am 10.01.2011 21:23, Anthony Liguori wrote:
On 01/10/2011 02:12 PM, Jan Kiszka wrote:
Am 10.01.2011 20:59, Anthony Liguori wrote:
On 01/08/2011 02:47 AM, Jan Kiszka wrote:
Am 08.01.2011 00:27, Anthony Liguori wrote:
On 01/07/2011 03:03 AM, Jan Kiszka wrote:
Am
Am 08.01.2011 00:27, Anthony Liguori wrote:
On 01/07/2011 03:03 AM, Jan Kiszka wrote:
Am 06.01.2011 20:24, Anthony Liguori wrote:
On 01/06/2011 11:56 AM, Marcelo Tosatti wrote:
From: Jan Kiszka jan.kis...@siemens.com
QEMU supports only one VM, so there is only one kvm_state per
Am 06.01.2011 20:24, Anthony Liguori wrote:
On 01/06/2011 11:56 AM, Marcelo Tosatti wrote:
From: Jan Kiszka jan.kis...@siemens.com
QEMU supports only one VM, so there is only one kvm_state per process,
and we gain nothing passing a reference to it around. Eliminate any need
to refer to it
On 01/07/2011 03:03 AM, Jan Kiszka wrote:
Am 06.01.2011 20:24, Anthony Liguori wrote:
On 01/06/2011 11:56 AM, Marcelo Tosatti wrote:
From: Jan Kiszka jan.kis...@siemens.com
QEMU supports only one VM, so there is only one kvm_state per process,
and we gain nothing passing a reference
From: Jan Kiszka jan.kis...@siemens.com
QEMU supports only one VM, so there is only one kvm_state per process,
and we gain nothing passing a reference to it around. Eliminate any need
to refer to it outside of kvm-all.c.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
CC: Alexander Graf
On 01/06/2011 11:56 AM, Marcelo Tosatti wrote:
From: Jan Kiszka jan.kis...@siemens.com
QEMU supports only one VM, so there is only one kvm_state per process,
and we gain nothing passing a reference to it around. Eliminate any need
to refer to it outside of kvm-all.c.
Signed-off-by: Jan