> We have an X driver that does minimal performance costing operations.
> As we should and will have for our other drivers.
Ok, so you use your own DDX and prevent X vgacrapware to kick in ? Makes
sense.
Ben.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Benjamin Herrenschmidt wrote:
On Wed, 2007-08-22 at 16:25 +1000, Rusty Russell wrote:
On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style
On Wed, 2007-08-22 at 16:25 +1000, Rusty Russell wrote:
> On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
> > Zachary Amsden wrote:
> > > This patch provides hypercalls for the i386 port I/O instructions,
> > > which vastly helps guests which use native-style drivers. For certain
> > > VMI
On Tue, 2007-08-21 at 22:23 -0700, Zachary Amsden wrote:
> In general, I/O in a virtual guest is subject to performance problems.
> The I/O can not be completed physically, but must be virtualized. This
> means trapping and decoding port I/O instructions from the guest OS.
> Not only is the
Hi!
> >>In general, I/O in a virtual guest is subject to
> >>performance problems. The I/O can not be completed
> >>physically, but must be virtualized. This
> >>means trapping and decoding port I/O instructions from
> >>the guest OS. Not only is the trap for a #GP
> >>heavyweight, both in
On Wed, 2007-08-22 at 22:25 +0100, Alan Cox wrote:
> > I still think it's preferable to change some drivers than everybody.
> >
> > AFAIK BusLogic as real hardware is pretty much dead anyways,
> > so you're probably the only primary user of it anyways.
> > Go wild on it!
>
> I don't believe
On Wed, Aug 22, 2007 at 05:38:31PM -0700, Jeremy Fitzhardinge wrote:
> Andi Kleen wrote:
> > On Wed, Aug 22, 2007 at 04:14:41PM -0700, Jeremy Fitzhardinge wrote:
> >
> >> (which would also have VT, since
> >> all new processors do).
> >>
> >
> > Not true unfortunately. The Intel low end
Andi Kleen wrote:
> On Wed, Aug 22, 2007 at 04:14:41PM -0700, Jeremy Fitzhardinge wrote:
>
>> (which would also have VT, since
>> all new processors do).
>>
>
> Not true unfortunately. The Intel low end parts like Celerons (which
> are actually shipped in very large numbers) don't. Also
On Wed, Aug 22, 2007 at 04:14:41PM -0700, Jeremy Fitzhardinge wrote:
> (which would also have VT, since
> all new processors do).
Not true unfortunately. The Intel low end parts like Celerons (which
are actually shipped in very large numbers) don't. Also Intel
is still shipping some CPUs that
James Courtier-Dutton wrote:
> Ok, so I need to get a new CPU like the Intel Core Duo that has VT
> features? I have an old Pentium 4 at the moment, without any VT features.
>
No, VT-d (as opposed to VT) is a chipset feature which allows the
hypervisor to control who's allowed to DMA where.
Chris Wright wrote:
> * James Courtier-Dutton ([EMAIL PROTECTED]) wrote:
>> If one could directly expose a device to the guest, this feature could
>> be extremely useful for me.
>> Is it possible? How would it manage to handle the DMA bus mastering?
>
> Yes it's possible (Xen supports pci pass
* James Courtier-Dutton ([EMAIL PROTECTED]) wrote:
> Ok, so I need to get a new CPU like the Intel Core Duo that has VT
> features? I have an old Pentium 4 at the moment, without any VT features.
Depends on your goals. You can certainly give a paravirt Xen guest[1]
physical hardware without any
Jeremy Fitzhardinge wrote:
> Zachary Amsden wrote:
>> This patch provides hypercalls for the i386 port I/O instructions,
>> which vastly helps guests which use native-style drivers. For certain
>> VMI workloads, this provides a performance boost of up to 30%. We
>> expect KVM and lguest to be
* James Courtier-Dutton ([EMAIL PROTECTED]) wrote:
> If one could directly expose a device to the guest, this feature could
> be extremely useful for me.
> Is it possible? How would it manage to handle the DMA bus mastering?
Yes it's possible (Xen supports pci pass through). Without an IOMMU
Zachary Amsden wrote:
> This patch provides hypercalls for the i386 port I/O instructions,
> which vastly helps guests which use native-style drivers. For certain
> VMI workloads, this provides a performance boost of up to 30%. We
> expect KVM and lguest to be able to achieve similar gains on
Andi Kleen wrote:
We might benefit from it, but would the
BusLogic driver? It sets a nasty precedent for maintenance as different
hypervisors and emulators hack up different drivers for their own
performance.
I still think it's preferable to change some drivers than everybody.
AFAIK
Alan Cox wrote:
> I still think it's preferable to change some drivers than everybody.
>
> AFAIK BusLogic as real hardware is pretty much dead anyways,
> so you're probably the only primary user of it anyways.
> Go wild on it!
I don't believe anyone is materially maintaining the buslogic driver and
in time its
> No, you can't ignore it. The page protections won't change between the
> GP and the decoder execution, but the instruction can, causing you to
> decode into the next page where the processor would not have. !P
> becomes obvious, but failure to respect NX or U/S is an exploitable
> bug.
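The boundary check being described can be sketched as follows. This is an illustrative decoder, not the actual VMware or lguest code; the names (`decode_port_io`, `decoded_io`) are hypothetical. The point it demonstrates is the one above: after the #GP, fetch the instruction bytes once and refuse to decode past the page the fault occurred in, since the next page's NX/U/S protections were never checked by the processor.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Decoded port-I/O instruction; len == 0 means "refuse, punt to slow path". */
struct decoded_io {
    size_t len;      /* instruction length in bytes */
    int    is_out;   /* 1 = out, 0 = in */
    int    imm_port; /* port from an imm8 form, -1 if the port is in %dx */
};

static struct decoded_io decode_port_io(const uint8_t *insn,
                                        size_t bytes_left_in_page)
{
    struct decoded_io d = { 0, 0, -1 };

    if (bytes_left_in_page < 1)
        return d;                 /* nothing safely fetchable */

    switch (insn[0]) {
    case 0xEE:                    /* out %al,(%dx): single byte */
        d.len = 1; d.is_out = 1; break;
    case 0xEC:                    /* in (%dx),%al: single byte */
        d.len = 1; d.is_out = 0; break;
    case 0xE6:                    /* out %al,imm8: needs a second byte */
    case 0xE4:                    /* in imm8,%al */
        if (bytes_left_in_page < 2)
            return d;             /* would decode into the next page: refuse */
        d.len = 2;
        d.is_out = (insn[0] == 0xE6);
        d.imm_port = insn[1];
        break;
    default:
        break;                    /* not a port-I/O opcode we emulate */
    }
    return d;
}
```

A real decoder must also honor prefixes and operand sizes; the sketch only shows the "never read past the faulting page" rule.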
Andi Kleen wrote:
How is that measured? In a loop? In the same pipeline state?
It seems a little dubious to me.
I did the experiments in a controlled environment, with interrupts
disabled and care to get the pipeline in the same state. It was a
perfectly repeatable experiment. I don't
On Wed, Aug 22, 2007 at 10:07:47AM -0700, Zachary Amsden wrote:
> >Also I fail to see the fundamental speed difference between
> >
> >mov index,register
> >int 0x...
> >...
> >switch (register)
> >case : do emulation
> >
>
> Int (on p4 == ~680 cycles).
>
> >versus
> >
> >out ...
> >#gp
> >- switch (*eip) {
> >case
>
> out is usually a single byte. Shouldn't be very expensive
> to decode. In fact it should be roughly equivalent to your
> hypercall multiplex.
Why is a performance critical path on a paravirt kernel even using I/O
instructions and not paravirtual device drivers ?
It clearly makes sense to
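For concreteness, the two dispatch shapes being compared look roughly like this in C. The names (`HC_OUTB`, `emulate_outb`, `handle_gp`) are illustrative, not the actual hypercall ABI; the measured cost difference is in the trap delivery (int vs. #GP on a given microarchitecture), not in this C-level switch, which is essentially the same in both paths.

```c
#include <stdint.h>
#include <assert.h>

/* Stub emulation sink shared by both paths (records the last access). */
static int last_port, last_val;
static void emulate_outb(int port, int val) { last_port = port; last_val = val; }

enum { HC_OUTB = 1 };   /* assumed hypercall number, illustration only */

/* Path 1: hypercall multiplex -- guest loads an index, traps via int N,
 * and the hypervisor switches on the register. */
static int handle_hypercall(int nr, int port, int val)
{
    switch (nr) {
    case HC_OUTB: emulate_outb(port, val); return 0;
    default:      return -1;
    }
}

/* Path 2: the guest executes a native out, which faults with #GP;
 * the hypervisor decodes the opcode byte at *eip. */
static int handle_gp(const uint8_t *eip, int dx, int al)
{
    switch (*eip) {
    case 0xEE: emulate_outb(dx, al); return 1;  /* out %al,(%dx): 1 byte */
    default:   return -1;                       /* punt to the full decoder */
    }
}
```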
Andi Kleen wrote:
On Wed, Aug 22, 2007 at 09:48:25AM -0700, Zachary Amsden wrote:
Andi Kleen wrote:
On Tue, Aug 21, 2007 at 10:23:14PM -0700, Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically,
On Wed, Aug 22, 2007 at 09:48:25AM -0700, Zachary Amsden wrote:
> Andi Kleen wrote:
> >On Tue, Aug 21, 2007 at 10:23:14PM -0700, Zachary Amsden wrote:
> >
> >>In general, I/O in a virtual guest is subject to performance problems.
> >>The I/O can not be completed physically, but must be
Andi Kleen wrote:
On Tue, Aug 21, 2007 at 10:23:14PM -0700, Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest
Avi Kivity wrote:
Since this is only for newer kernels, won't updating the driver to use
a hypercall be more efficient? Or is this for existing out-of-tree
drivers?
Actually, it is for in-tree drivers that we emulate but don't want to
pollute, and one out of tree driver (that will
Avi Kivity wrote:
And even then, all the performance
sensitive stuff uses mmio, no?
Depends on the hardware.
Jeff
Andi Kleen wrote:
> Ah. But that's mostly modules, so real in-core changes should be very
Yes that's the big difference. Near all paravirt ops are concentrated
on the core kernel, but this one affects lots of people.
And why "but"? -- modules are as important as the core kernel. They're
not second citizens.
Andi Kleen wrote:
On Wed, Aug 22, 2007 at 01:23:43PM +0300, Avi Kivity wrote:
Andi Kleen wrote:
I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
Only if
On Wed, Aug 22, 2007 at 01:23:43PM +0300, Avi Kivity wrote:
> Andi Kleen wrote:
> >>I don't see why it's intrusive -- they all use the APIs, right?
> >>
> >
> >Yes, but it still changes them. It might have a larger impact
> >on code size for example.
> >
>
> Only if CONFIG_PARAVIRT is
Andi Kleen wrote:
I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
Only if CONFIG_PARAVIRT is defined. And even then, all the performance
sensitive stuff uses mmio, no?
--
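A sketch of why CONFIG_PARAVIRT confines the impact: with the option set, outb() becomes a call through a patchable operations pointer instead of the raw instruction, so drivers keep the same API and only the generated code changes. The names below (`pv_outb`, `paravirt_init`) are illustrative, not the kernel's actual pv_ops; this is a user-space model of the mechanism, assuming a boot-time hypervisor check patches the pointer.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define CONFIG_PARAVIRT 1   /* model of the Kconfig option */

static const char *last_path;   /* records which implementation ran */
static uint16_t last_port;
static uint8_t  last_val;

/* Native path: on real hardware this would be asm volatile("outb %0,%1"). */
static void native_outb(uint8_t val, uint16_t port)
{
    last_path = "native"; last_port = port; last_val = val;
}

/* Paravirt path: one hypercall instead of a trapped-and-decoded out. */
static void hypercall_outb(uint8_t val, uint16_t port)
{
    last_path = "hypercall"; last_port = port; last_val = val;
}

#if CONFIG_PARAVIRT
/* Every driver's outb() indirects through this patchable pointer. */
static void (*pv_outb)(uint8_t, uint16_t) = native_outb;
static void outb(uint8_t val, uint16_t port) { pv_outb(val, port); }
/* At boot, a detected hypervisor "patches" the operation. */
static void paravirt_init(void) { pv_outb = hypercall_outb; }
#else
/* Without CONFIG_PARAVIRT, outb() is the direct native operation. */
static void outb(uint8_t val, uint16_t port) { native_outb(val, port); }
#endif
```

Port I/O in hot paths goes through this hook in every driver, which is the code-size and intrusiveness concern; mmio accessors are a separate set of operations.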
> I don't see why it's intrusive -- they all use the APIs, right?
Yes, but it still changes them. It might have a larger impact
on code size for example.
-Andi
Andi Kleen wrote:
This patch also means I can kill off the emulation code in
drivers/lguest/core.c, which is a real relief.
But would it be faster? If not or only insignificant amount I think I would
prefer you keep it. Hooking IO is quite intrusive because it's done
by so many drivers.
> This patch also means I can kill off the emulation code in
> drivers/lguest/core.c, which is a real relief.
But would it be faster? If not or only insignificant amount I think I would
prefer you keep it. Hooking IO is quite intrusive because it's done
by so many drivers.
-Andi
On Tue, Aug 21, 2007 at 10:23:14PM -0700, Zachary Amsden wrote:
> In general, I/O in a virtual guest is subject to performance problems.
> The I/O can not be completed physically, but must be virtualized. This
> means trapping and decoding port I/O instructions from the guest OS.
> Not only
Zachary Amsden wrote:
Avi Kivity wrote:
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance
problems. The I/O can not be completed physically, but must be
virtualized. This means trapping and decoding port I/O instructions
from the guest OS. Not only is the
On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
> Zachary Amsden wrote:
> > This patch provides hypercalls for the i386 port I/O instructions,
> > which vastly helps guests which use native-style drivers. For certain
> > VMI workloads, this provides a performance boost of up to 30%. We
> >
H. Peter Anvin wrote:
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap for a #GP
Zachary Amsden wrote:
> In general, I/O in a virtual guest is subject to performance problems.
> The I/O can not be completed physically, but must be virtualized. This
> means trapping and decoding port I/O instructions from the guest OS.
> Not only is the trap for a #GP heavyweight, both in
Avi Kivity wrote:
Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance
problems. The I/O can not be completed physically, but must be
virtualized. This means trapping and decoding port I/O instructions
from the guest OS. Not only is the trap for a #GP
In general, I/O in a virtual guest is subject to performance problems.
The I/O can not be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap for a #GP heavyweight, both in the processor and
the