We have an X driver that performs only minimal performance-costly
operations, as we should and will have for our other drivers.
Ok, so you use your own DDX and prevent the X vga crapware from kicking
in? Makes sense.
Ben.
___
Virtualization mailing list
On Tue, 2007-08-21 at 22:23 -0700, Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap
On Wed, 2007-08-22 at 16:25 +1000, Rusty Russell wrote:
On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style drivers. For certain
VMI workloads,
Benjamin Herrenschmidt wrote:
On Wed, 2007-08-22 at 16:25 +1000, Rusty Russell wrote:
On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style
Hi!
In general, I/O in a virtual guest is subject to
performance problems. The I/O can not be completed
physically, but must be virtualized. This
means trapping and decoding port I/O instructions from
the guest OS. Not only is the trap for a #GP
heavyweight, both in the processor and
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap for a #GP heavyweight, both in the
H. Peter Anvin wrote:
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap for a #GP
On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style drivers. For certain
VMI workloads, this provides a performance boost of up to 30%. We
expect
Zachary Amsden wrote:
Avi Kivity wrote:
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance
problems. The I/O can not be completed physically, but must be
virtualized. This means trapping and decoding port I/O instructions
from the guest OS. Not only is the
On Tue, Aug 21, 2007 at 10:23:14PM -0700, Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is
This patch also means I can kill off the emulation code in
drivers/lguest/core.c, which is a real relief.
But would it be faster? If not, or only by an insignificant amount, I
think I would prefer you keep it. Hooking I/O is quite intrusive because
it's done by so many drivers.
-Andi
Andi Kleen wrote:
This patch also means I can kill off the emulation code in
drivers/lguest/core.c, which is a real relief.
But would it be faster? If not, or only by an insignificant amount, I
think I would prefer you keep it. Hooking I/O is quite intrusive because
it's done by so many drivers.
I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
-Andi
Andi Kleen wrote:
I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
Only if CONFIG_PARAVIRT is defined. And even then, all the
performance-sensitive stuff uses MMIO, no?
--
On Wed, Aug 22, 2007 at 01:23:43PM +0300, Avi Kivity wrote:
Andi Kleen wrote:
I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
Only if CONFIG_PARAVIRT is defined.
Which
Andi Kleen wrote:
On Wed, Aug 22, 2007 at 01:23:43PM +0300, Avi Kivity wrote:
Andi Kleen wrote:
I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
Only if
Ah. But that's mostly modules, so real in-core changes should be very
Yes, that's the big difference. Nearly all paravirt ops are concentrated
in the core kernel, but this one affects lots of people.
And why "but"? -- modules are as important as the core kernel. They're
not second-class citizens.
-Andi
Andi Kleen wrote:
Ah. But that's mostly modules, so real in-core changes should be very
Yes, that's the big difference. Nearly all paravirt ops are concentrated
in the core kernel, but this one affects lots of people.
And why "but"? -- modules are as important as the core kernel. They're
not
Avi Kivity wrote:
And even then, all the
performance-sensitive stuff uses MMIO, no?
Depends on the hardware.
Jeff
Avi Kivity wrote:
Since this is only for newer kernels, won't updating the driver to use
a hypercall be more efficient? Or is this for existing out-of-tree
drivers?
Actually, it is for in-tree drivers that we emulate but don't want to
pollute, and one out of tree driver (that will
Andi Kleen wrote:
On Tue, Aug 21, 2007 at 10:23:14PM -0700, Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest
out is usually a single byte. Shouldn't be very expensive
to decode. In fact it should be roughly equivalent to your
hypercall multiplex.
Why is a performance-critical path on a paravirt kernel even using I/O
instructions and not paravirtual device drivers?
It clearly makes sense to
Andi Kleen wrote:
How is that measured? In a loop? In the same pipeline state?
It seems a little dubious to me.
I did the experiments in a controlled environment, with interrupts
disabled and care taken to get the pipeline into the same state. It was a
perfectly repeatable experiment. I don't
No, you can't ignore it. The page protections won't change between the
#GP and the decoder execution, but the instruction can, causing you to
decode into the next page where the processor would not have. !P
becomes obvious, but failure to respect NX or U/S is an exploitable
bug. Put a 1
I still think it's preferable to change some drivers rather than everybody.
AFAIK BusLogic as real hardware is pretty much dead anyway,
so you're probably the only primary user of it.
Go wild on it!
I don't believe anyone is materially maintaining the BusLogic driver, and
in time it's going
Alan Cox wrote:
I still think it's preferable to change some drivers rather than everybody.
AFAIK BusLogic as real hardware is pretty much dead anyway,
so you're probably the only primary user of it.
Go wild on it!
I don't believe anyone is materially maintaining the buslogic driver
Andi Kleen wrote:
We might benefit from it, but would the
BusLogic driver? It sets a nasty precedent for maintenance as different
hypervisors and emulators hack up different drivers for their own
performance.
I still think it's preferable to change some drivers rather than everybody.
AFAIK
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style drivers. For certain
VMI workloads, this provides a performance boost of up to 30%. We
expect KVM and lguest to be able to achieve similar gains on I/O
Jeremy Fitzhardinge wrote:
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style drivers. For certain
VMI workloads, this provides a performance boost of up to 30%. We
expect KVM and lguest to be able to
* James Courtier-Dutton ([EMAIL PROTECTED]) wrote:
If one could directly expose a device to the guest, this feature could
be extremely useful for me.
Is it possible? How would it manage to handle the DMA bus mastering?
Yes it's possible (Xen supports pci pass through). Without an IOMMU
(like
Chris Wright wrote:
* James Courtier-Dutton ([EMAIL PROTECTED]) wrote:
If one could directly expose a device to the guest, this feature could
be extremely useful for me.
Is it possible? How would it manage to handle the DMA bus mastering?
Yes it's possible (Xen supports pci pass through).
* James Courtier-Dutton ([EMAIL PROTECTED]) wrote:
Ok, so I need to get a new CPU like the Intel Core Duo that has VT
features? I have an old Pentium 4 at the moment, without any VT features.
Depends on your goals. You can certainly give a paravirt Xen guest[1]
physical hardware without any VT
James Courtier-Dutton wrote:
Ok, so I need to get a new CPU like the Intel Core Duo that has VT
features? I have an old Pentium 4 at the moment, without any VT features.
No, VT-d (as opposed to VT) is a chipset feature which allows the
hypervisor to control who's allowed to DMA where. So
On Wed, Aug 22, 2007 at 04:14:41PM -0700, Jeremy Fitzhardinge wrote:
(which would also have VT, since
all new processors do).
Not true unfortunately. The Intel low end parts like Celerons (which
are actually shipped in very large numbers) don't. Also Intel
is still shipping some CPUs that
Andi Kleen wrote:
On Wed, Aug 22, 2007 at 04:14:41PM -0700, Jeremy Fitzhardinge wrote:
(which would also have VT, since
all new processors do).
Not true unfortunately. The Intel low end parts like Celerons (which
are actually shipped in very large numbers) don't. Also Intel
is
On Wed, Aug 22, 2007 at 05:38:31PM -0700, Jeremy Fitzhardinge wrote:
Andi Kleen wrote:
On Wed, Aug 22, 2007 at 04:14:41PM -0700, Jeremy Fitzhardinge wrote:
(which would also have VT, since
all new processors do).
Not true unfortunately. The Intel low end parts like Celerons
On Wed, 2007-08-22 at 22:25 +0100, Alan Cox wrote:
I still think it's preferable to change some drivers rather than everybody.
AFAIK BusLogic as real hardware is pretty much dead anyway,
so you're probably the only primary user of it.
Go wild on it!
I don't believe anyone is
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap for a #GP heavyweight, both in the processor and
the
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance
problems. The I/O can not be completed physically, but must be
virtualized. This means trapping and decoding port I/O instructions
from the guest OS. Not only is the trap for a #GP heavyweight, both
in the
Avi Kivity wrote:
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance
problems. The I/O can not be completed physically, but must be
virtualized. This means trapping and decoding port I/O instructions
from the guest OS. Not only is the trap for a #GP