On 12/23/2009 11:21 PM, Gregory Haskins wrote:
That said, you are still incorrect. With what I proposed, the model
will run as an in-kernel vbus device, and no longer run in userspace.
It would therefore improve virtio-net as I stated, much in the same
way vhost-net or venet-tap do today.
On 12/24/2009 11:31 AM, Gregory Haskins wrote:
On 12/23/09 3:36 PM, Avi Kivity wrote:
On 12/23/2009 06:44 PM, Gregory Haskins wrote:
- Are a pure software concept
By design. In fact, I would describe it as software to software
optimized as opposed to trying to
On 12/24/2009 11:36 AM, Gregory Haskins wrote:
As a twist on this, the VMware paravirt driver interface is so
hardware-like that they're getting hardware vendors to supply cards that
implement it. Try that with a pure software approach.
Any hardware engineer (myself included) will tell
On 12/27/09 4:15 AM, Avi Kivity wrote:
On 12/23/2009 11:21 PM, Gregory Haskins wrote:
That said, you are still incorrect. With what I proposed, the model
will run as an in-kernel vbus device, and no longer run in userspace.
It would therefore improve virtio-net as I stated, much in the same
On 12/27/2009 03:18 PM, Gregory Haskins wrote:
On 12/27/09 4:15 AM, Avi Kivity wrote:
On 12/23/2009 11:21 PM, Gregory Haskins wrote:
That said, you are still incorrect. With what I proposed, the model
will run as an in-kernel vbus device, and no longer run in userspace.
It would
On 12/27/09 4:33 AM, Avi Kivity wrote:
On 12/24/2009 11:36 AM, Gregory Haskins wrote:
As a twist on this, the VMware paravirt driver interface is so
hardware-like that they're getting hardware vendors to supply cards that
implement it. Try that with a pure software approach.
Any
On 12/27/09 8:27 AM, Avi Kivity wrote:
On 12/27/2009 03:18 PM, Gregory Haskins wrote:
On 12/27/09 4:15 AM, Avi Kivity wrote:
On 12/23/2009 11:21 PM, Gregory Haskins wrote:
That said, you are still incorrect. With what I proposed, the model
will run as an in-kernel vbus device, and
On 12/27/2009 03:34 PM, Gregory Haskins wrote:
On 12/27/09 4:33 AM, Avi Kivity wrote:
On 12/24/2009 11:36 AM, Gregory Haskins wrote:
As a twist on this, the VMware paravirt driver interface is so
hardware-like that they're getting hardware vendors to supply cards that
implement it.
On 12/27/2009 03:39 PM, Gregory Haskins wrote:
No, where we are is at the point where we demonstrate that your original
statement that I did nothing to improve virtio was wrong.
I stand by it. virtio + your patch does nothing without a ton more work
(more or less equivalent to
On 12/27/09 8:49 AM, Avi Kivity wrote:
On 12/27/2009 03:34 PM, Gregory Haskins wrote:
On 12/27/09 4:33 AM, Avi Kivity wrote:
On 12/24/2009 11:36 AM, Gregory Haskins wrote:
As a twist on this, the VMware paravirt driver interface is so
hardware-like that they're getting hardware
On 12/27/09 8:49 AM, Avi Kivity wrote:
On 12/27/2009 03:39 PM, Gregory Haskins wrote:
No, where we are is at the point where we demonstrate that your original
statement that I did nothing to improve virtio was wrong.
I stand by it. virtio + your patch does nothing without a ton more
On 12/23/09 3:36 PM, Avi Kivity wrote:
On 12/23/2009 06:44 PM, Gregory Haskins wrote:
- Are a pure software concept
By design. In fact, I would describe it as software to software
optimized as opposed to trying to shoehorn into something that was
designed as a software-to-hardware
On 12/23/09 4:01 PM, Avi Kivity wrote:
On 12/23/2009 10:36 PM, Avi Kivity wrote:
On 12/23/2009 06:44 PM, Gregory Haskins wrote:
- Are a pure software concept
By design. In fact, I would describe it as software to software
optimized as opposed to trying to shoehorn into something that was
On Wed, Dec 23, 2009 at 11:28:08AM -0800, Ira W. Snyder wrote:
On Wed, Dec 23, 2009 at 12:34:44PM -0500, Gregory Haskins wrote:
On 12/23/09 1:15 AM, Kyle Moffett wrote:
On Tue, Dec 22, 2009 at 12:36, Gregory Haskins
gregory.hask...@gmail.com wrote:
On 12/22/09 2:57 AM, Ingo Molnar
On 12/23/2009 10:52 PM, Kyle Moffett wrote:
On Wed, Dec 23, 2009 at 17:58, Anthony Liguori anth...@codemonkey.ws wrote:
Of course, the key feature of virtio is that it makes it possible for you to
create your own enumeration mechanism if you're so inclined.
See... the thing is... a lot of us
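The enumeration point Anthony raises above can be sketched in a toy model (illustration only, not the real virtio code; all class and function names here are hypothetical): virtio separates the device/driver model from *how* devices are discovered, so each transport can plug in its own enumeration step while sharing the same driver matching.

```python
# Toy sketch of transport-pluggable enumeration, loosely in the spirit of
# virtio. Not real kernel code; names are illustrative.

VIRTIO_ID_NET = 1
VIRTIO_ID_BLOCK = 2

class Transport:
    """A transport only has to answer one question: which device IDs exist?"""
    def enumerate(self):
        raise NotImplementedError

class PciTransport(Transport):
    def enumerate(self):
        # Real virtio-pci walks the PCI bus; here we just pretend.
        return [VIRTIO_ID_NET, VIRTIO_ID_BLOCK]

class BackplaneTransport(Transport):
    """Stand-in for an out-of-band mechanism, e.g. a table in shared memory."""
    def __init__(self, device_table):
        self.device_table = device_table
    def enumerate(self):
        return list(self.device_table)

DRIVERS = {VIRTIO_ID_NET: "virtio-net", VIRTIO_ID_BLOCK: "virtio-blk"}

def probe(transport):
    """Match enumerated device IDs against registered drivers."""
    return [DRIVERS[d] for d in transport.enumerate() if d in DRIVERS]

print(probe(PciTransport()))                       # ['virtio-net', 'virtio-blk']
print(probe(BackplaneTransport([VIRTIO_ID_NET])))  # ['virtio-net']
```

The point of the sketch is that the second transport needs no PCI-like discovery at all, which is what makes virtio usable on lguest, s390, or a custom backplane.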
On 12/23/2009 05:42 PM, Ira W. Snyder wrote:
I've got a single PCI Host (master) with ~20 PCI slots. Physically, it
is a backplane in a cPCI chassis, but the form factor is irrelevant. It
is regular PCI from a software perspective.
Into this backplane, I plug up to 20 PCI Agents (slaves). They
This is Linux virtualization, where _both_ the host and the guest source
code
is fully known, and bugs (if any) can be found with a high degree of
It may sound strange but Windows is very popular guest and last I
checked my HW there was no Windows sources there, but the answer to
On Thu, Dec 24, 2009 at 11:09:39AM -0600, Anthony Liguori wrote:
On 12/23/2009 05:42 PM, Ira W. Snyder wrote:
I've got a single PCI Host (master) with ~20 PCI slots. Physically, it
is a backplane in a cPCI chassis, but the form factor is irrelevant. It
is regular PCI from a software
i.e. it has all the makings of a stupid, avoidable, permanent fork. The thing
Nearly. There was no equivalent of a kernel based virtual driver host
before.
- Are a pure software concept and any compatibility mismatch is
self-inflicted. The patches
On 12/23/2009 12:13 PM, Andi Kleen wrote:
i.e. it has all the makings of a stupid, avoidable, permanent fork. The thing
Nearly. There was no equivalent of a kernel based virtual driver host
before.
These are guest drivers. We have virtio drivers, and Xen drivers (which
are
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
It seems like there's definitely still potential for improvement
with messages <4K. But for the large messages they indeed
look rather good.
On 12/23/2009 02:14 PM, Andi Kleen wrote:
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
No. Copying Chris. This was with the tx mitigation timer disabled, so
you won't see
On Wednesday 23 December 2009 07:51:29 am Ingo Molnar wrote:
* Anthony Liguori anth...@codemonkey.ws wrote:
On 12/22/2009 10:01 AM, Bartlomiej Zolnierkiewicz wrote:
new e1000 driver is more superior in architecture and do the required
work to make the new e1000 driver a full replacement
On 12/23/2009 03:07 PM, Bartlomiej Zolnierkiewicz wrote:
That is a very different situation from the AlacrityVM patches, which:
- Are a pure software concept and any compatibility mismatch is
self-inflicted. The patches are in fact breaking the ABI to KVM
intentionally (for better
On Wednesday 23 December 2009 02:31:11 pm Avi Kivity wrote:
On 12/23/2009 03:07 PM, Bartlomiej Zolnierkiewicz wrote:
That is a very different situation from the AlacrityVM patches, which:
- Are a pure software concept and any compatibility mismatch is
self-inflicted. The patches
On 12/23/2009 04:08 PM, Bartlomiej Zolnierkiewicz wrote:
The device model is exposed to the guest. If you change it, the guest
breaks.
Huh? Shouldn't non-vbus aware guests continue to work just fine?
Sure. But we aren't merging this code in order not to use it. If we
switch
On 12/23/2009 12:15 AM, Kyle Moffett wrote:
This is actually something that is of particular interest to me. I
have a few prototype boards right now with programmable PCI-E
host/device links on them; one of my long-term plans is to finagle
vbus into providing multiple virtual devices across
On 12/22/2009 06:02 PM, Chris Wright wrote:
* Anthony Liguori (anth...@codemonkey.ws) wrote:
The
virtio-net setup probably made extensive use of pinning and other tricks
to make things faster than a normal user would see them. It ends up
creating a perfect combination of batching which is
On 12/23/09 1:51 AM, Ingo Molnar wrote:
* Anthony Liguori anth...@codemonkey.ws wrote:
On 12/22/2009 10:01 AM, Bartlomiej Zolnierkiewicz wrote:
new e1000 driver is more superior in architecture and do the required
work to make the new e1000 driver a full replacement for the old one.
* Avi Kivity (a...@redhat.com) wrote:
On 12/23/2009 02:14 PM, Andi Kleen wrote:
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
No. Copying Chris. This was with the tx mitigation timer
And its moot, anyway, as I have already retracted my one outstanding
pull request based on Linus' observation. So at this time, I am not
advocating _anything_ for upstream inclusion. And I am contemplating
_never_ doing so again. It's not worth _this_.
That certainly sounds like the wrong
On 12/23/09 12:10 PM, Andi Kleen wrote:
And its moot, anyway, as I have already retracted my one outstanding
pull request based on Linus' observation. So at this time, I am not
advocating _anything_ for upstream inclusion. And I am contemplating
_never_ doing so again. It's not worth
It seems like there's definitely still potential for improvement
with messages <4K. But for the large messages they indeed
look rather good.
You are misreading the graph. At 4K it is tracking bare metal (the
green and yellow lines are bare metal, the red and blue bars are virtio).
At 4k
On Wed, 23 Dec 2009, Gregory Haskins wrote:
And upstream submission is not always like this!
I would think the process would come to a grinding halt if it were ;)
Well, in all honesty, if it had been non-virtualized drivers I would just
have pulled. The pull request all looked sane,
On 12/23/09 1:15 AM, Kyle Moffett wrote:
On Tue, Dec 22, 2009 at 12:36, Gregory Haskins
gregory.hask...@gmail.com wrote:
On 12/22/09 2:57 AM, Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Actually, these patches have nothing to do with the KVM folks. [...]
That claim
On Wed, 2009-12-23 at 13:14 +0100, Andi Kleen wrote:
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
It seems like there's definitely still potential for improvement
with messages
On 12/23/09 5:22 AM, Avi Kivity wrote:
There was no attempt by Gregory to improve virtio-net.
If you truly do not understand why your statement is utterly wrong at
this point in the discussion, I feel sorry for you. If you are trying
to be purposely disingenuous, you should be ashamed of
On 12/23/09 12:52 PM, Peter W. Morreale wrote:
On Wed, 2009-12-23 at 13:14 +0100, Andi Kleen wrote:
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
It seems like there's definitely still
* Peter W. Morreale (pmorre...@novell.com) wrote:
On Wed, 2009-12-23 at 13:14 +0100, Andi Kleen wrote:
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
It seems like there's
* Anthony Liguori (anth...@codemonkey.ws) wrote:
The poor packet latency of virtio-net is a result of the fact that we
do software timer based TX mitigation. We do this such that we can
decrease the number of exits per-packet and increase throughput. We set
a timer for 250ms and
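The trade-off Anthony describes can be shown with a toy model (illustrative only, not qemu's actual code; the timer value and packet timings below are made up): instead of one notification exit per transmitted packet, the first packet arms a short timer and everything that arrives before it fires is flushed in one batch.

```python
# Toy model of timer-based TX mitigation: fewer exits, at the cost of
# added latency for the packets that wait on the timer.

def exits_without_mitigation(packet_times):
    # One kick (vmexit) per packet.
    return len(packet_times)

def exits_with_mitigation(packet_times, timer_us):
    # First packet in a quiet period arms the timer; every packet that
    # arrives before it fires rides along in the same batch.
    exits = 0
    deadline = None
    for t in packet_times:
        if deadline is None or t > deadline:
            exits += 1          # timer fires: one exit flushes the batch
            deadline = t + timer_us
    return exits

# A bursty workload: 8 packets in two tight bursts (times in microseconds).
pkts = [0, 5, 9, 12, 1000, 1004, 1007, 1011]
print(exits_without_mitigation(pkts))    # 8
print(exits_with_mitigation(pkts, 250))  # 2
```

This is why mitigation helps throughput benchmarks while hurting per-packet latency: the last packet in each burst waits out the remainder of the timer before the host sees it.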
* Andi Kleen a...@firstfloor.org wrote:
- Are a pure software concept and any compatibility mismatch is
self-inflicted. The patches are in fact breaking the ABI to KVM
In practice, especially
Ingo Molnar mi...@elte.hu writes:
Yes, there's (obviously) compatibility requirements and artifacts and past
mistakes (as with any software interface), but you need to admit it to
Yes that's exactly what I meant.
yourself that your virtualization is sloppy just like hardware claim is
On Wed, Dec 23, 2009 at 12:34:44PM -0500, Gregory Haskins wrote:
On 12/23/09 1:15 AM, Kyle Moffett wrote:
On Tue, Dec 22, 2009 at 12:36, Gregory Haskins
gregory.hask...@gmail.com wrote:
On 12/22/09 2:57 AM, Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Actually,
Ira W. Snyder i...@ovro.caltech.edu writes:
(You'll quickly find that you must use DMA to transfer data across PCI.
AFAIK, CPU's cannot do burst accesses to the PCI bus. I get a 10+ times
AFAIK that's what write-combining on x86 does. DMA has other
advantages of course.
-Andi
On Wed, Dec 23, 2009 at 09:09:21AM -0600, Anthony Liguori wrote:
On 12/23/2009 12:15 AM, Kyle Moffett wrote:
This is actually something that is of particular interest to me. I
have a few prototype boards right now with programmable PCI-E
host/device links on them; one of my long-term plans
On 12/23/2009 08:15 PM, Gregory Haskins wrote:
On 12/23/09 5:22 AM, Avi Kivity wrote:
There was no attempt by Gregory to improve virtio-net.
If you truly do not understand why your statement is utterly wrong at
this point in the discussion, I feel sorry for you. If you are trying
On 12/23/2009 09:27 PM, Andi Kleen wrote:
Ingo Molnar mi...@elte.hu writes:
Yes, there's (obviously) compatibility requirements and artifacts and past
mistakes (as with any software interface), but you need to admit it to
Yes that's exactly what I meant.
And we do make plenty
On 12/23/2009 06:44 PM, Gregory Haskins wrote:
- Are a pure software concept
By design. In fact, I would describe it as software to software
optimized as opposed to trying to shoehorn into something that was
designed as a software-to-hardware interface (and therefore has
assumptions
On 12/23/2009 10:36 PM, Avi Kivity wrote:
On 12/23/2009 06:44 PM, Gregory Haskins wrote:
- Are a pure software concept
By design. In fact, I would describe it as software to software
optimized as opposed to trying to shoehorn into something that was
designed as a software-to-hardware
(Sorry for top post...on a mobile)
When someone repeatedly makes a claim you believe to be wrong and you
correct them, you start to wonder if that person has a less than
honorable agenda. In any case, I overreacted. For that, I apologize.
That said, you are still incorrect. With what I
On 12/23/2009 01:54 PM, Ira W. Snyder wrote:
On Wed, Dec 23, 2009 at 09:09:21AM -0600, Anthony Liguori wrote:
I didn't know you were interested in this as well. See my later reply to
Kyle for a lot of code that I've written with this in mind.
BTW, in the future, please CC me or CC
On 12/23/2009 11:29 AM, Linus Torvalds wrote:
On Wed, 23 Dec 2009, Gregory Haskins wrote:
And upstream submission is not always like this!
I would think the process would come to a grinding halt if it were ;)
Well, in all honesty, if it had been non-virtualized drivers I would just
have
On Wed, Dec 23, 2009 at 04:58:37PM -0600, Anthony Liguori wrote:
On 12/23/2009 01:54 PM, Ira W. Snyder wrote:
On Wed, Dec 23, 2009 at 09:09:21AM -0600, Anthony Liguori wrote:
I didn't know you were interested in this as well. See my later reply to
Kyle for a lot of code that I've written
On Wed, Dec 23, 2009 at 17:58, Anthony Liguori anth...@codemonkey.ws wrote:
On 12/23/2009 01:54 PM, Ira W. Snyder wrote:
On Wed, Dec 23, 2009 at 09:09:21AM -0600, Anthony Liguori wrote:
But both virtio-lguest and virtio-s390 use in-band enumeration and
discovery since they do not have support
On Wed, Dec 23, 2009 at 07:51:50PM +0100, Ingo Molnar wrote:
* Andi Kleen a...@firstfloor.org wrote:
- Are a pure software concept and any compatibility mismatch is
self-inflicted. The patches are in fact breaking the ABI to KVM
On Tuesday 22 December 2009 04:31:32 pm Anthony Liguori wrote:
I think the comparison would be if someone submitted a second e1000
driver that happened to do better on one netperf test than the current
e1000 driver.
You can argue, hey, choice is good, let's let a user choose if they want
On 12/22/2009 06:21 PM, Andi Kleen wrote:
So far, the only actual technical advantage I've seen is that vbus avoids
EOI exits.
The technical advantage is that it's significantly faster today.
Maybe your proposed alternative is as fast, or maybe it's not. Who knows?
We're working on
On 12/22/2009 07:36 PM, Gregory Haskins wrote:
Gregory, it would be nice if you worked _much_ harder with the KVM folks
before giving up.
I think the 5+ months that I politely tried to convince the KVM folks
that this was a good idea was pretty generous of my employer. The KVM
On 12/22/09 1:53 PM, Avi Kivity wrote:
On 12/22/2009 07:36 PM, Gregory Haskins wrote:
Gregory, it would be nice if you worked _much_ harder with the KVM folks
before giving up.
I think the 5+ months that I politely tried to convince the KVM folks
that this was a good idea was pretty
On 12/22/09 1:53 PM, Avi Kivity wrote:
I asked why the irqfd/ioeventfd mechanisms are insufficient, and you did not
reply.
BTW: the ioeventfd issue just fell through the cracks, so sorry about
that. Note that I have no specific issue with irqfd ever since the
lockless IRQ injection code was
On 12/22/2009 09:15 PM, Gregory Haskins wrote:
On 12/22/09 1:53 PM, Avi Kivity wrote:
I asked why the irqfd/ioeventfd mechanisms are insufficient, and you did not
reply.
BTW: the ioeventfd issue just fell through the cracks, so sorry about
that. Note that I have no specific issue
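The irqfd/ioeventfd mechanisms discussed here are built on eventfd counter semantics, which can be modeled in a few lines (pure-Python illustration only, not the kernel interface itself): each signal adds to a counter, and a single read drains it, so many back-to-back kicks coalesce into one wakeup on the other side.

```python
# Minimal model of eventfd counter semantics, the primitive underneath
# irqfd (host -> guest interrupt injection) and ioeventfd (guest -> host
# notification). Illustration only.

class EventFd:
    def __init__(self):
        self.counter = 0

    def signal(self, n=1):
        # eventfd write: add to the counter (the kernel would also wake waiters)
        self.counter += n

    def drain(self):
        # eventfd read: return the accumulated count and reset it to zero
        value, self.counter = self.counter, 0
        return value

efd = EventFd()
for _ in range(5):    # guest kicks the device five times in a burst
    efd.signal()
print(efd.drain())    # 5 -- the host handles all five in one pass
print(efd.drain())    # 0 -- (a real eventfd read would block or return EAGAIN)
```

The coalescing property is what lets a fast producer avoid one exit-per-event behavior: the consumer sees a single nonzero count rather than five separate notifications.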
On 12/22/09 2:32 PM, Gregory Haskins wrote:
On 12/22/09 2:25 PM, Avi Kivity wrote:
If you're not doing something pretty minor, you're better of waking up a
thread (perhaps _sync if you want to keep on the same cpu). With the
new user return notifier thingie, that's pretty cheap.
We have
On 12/22/09 2:38 PM, Avi Kivity wrote:
On 12/22/2009 09:32 PM, Gregory Haskins wrote:
xinterface, as it turns out, is a great KVM interface for me and easy to
extend, all without conflicting with the changes in upstream. The old
way was via the kvm ioctl interface, but that sucked as the ABI
On 12/22/2009 09:32 PM, Gregory Haskins wrote:
Besides, Davide has
already expressed dissatisfaction with the KVM-isms creeping into
eventfd, so its not likely to ever be accepted regardless of your own
disposition.
Why don't you duplicate eventfd, then, should be easier than duplicating
On 12/22/2009 09:41 PM, Gregory Haskins wrote:
It means that kvm locking suddenly affects more of the kernel.
Thats ok. This would only be w.r.t. devices that are bound to the KVM
instance anyway, so they better know what they are doing (and they do).
It's okay to the author of
On 12/22/09 2:43 PM, Avi Kivity wrote:
On 12/22/2009 09:41 PM, Gregory Haskins wrote:
It means that kvm locking suddenly affects more of the kernel.
Thats ok. This would only be w.r.t. devices that are bound to the KVM
instance anyway, so they better know what they are doing (and
On 12/22/09 2:39 PM, Davide Libenzi wrote:
On Tue, 22 Dec 2009, Gregory Haskins wrote:
On 12/22/09 1:53 PM, Avi Kivity wrote:
I asked why the irqfd/ioeventfd mechanisms are insufficient, and you did
not reply.
BTW: the ioeventfd issue just fell through the cracks, so sorry about
that.
On 12/21/09 7:12 PM, Anthony Liguori wrote:
On 12/21/2009 11:44 AM, Gregory Haskins wrote:
Well, surely something like SR-IOV is moving in that direction, no?
Not really, but that's a different discussion.
Ok, but my general point still stands. At some level, some crafty
hardware
On 12/22/2009 11:33 AM, Andi Kleen wrote:
We're not talking about vaporware. vhost-net exists.
Is it as fast as the alacrityvm setup then e.g. for network traffic?
Last I heard the first could do wirespeed 10Gbit/s on standard hardware.
I'm very wary of any such claims. As far as
On Tue, 22 Dec 2009, Gregory Haskins wrote:
On 12/22/09 2:39 PM, Davide Libenzi wrote:
On Tue, 22 Dec 2009, Gregory Haskins wrote:
On 12/22/09 1:53 PM, Avi Kivity wrote:
I asked why the irqfd/ioeventfd mechanisms are insufficient, and you did
not reply.
BTW: the ioeventfd issue
On Tue, Dec 22, 2009 at 12:36, Gregory Haskins
gregory.hask...@gmail.com wrote:
On 12/22/09 2:57 AM, Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Actually, these patches have nothing to do with the KVM folks. [...]
That claim is curious to me - the AlacrityVM host
* Anthony Liguori anth...@codemonkey.ws wrote:
On 12/22/2009 10:01 AM, Bartlomiej Zolnierkiewicz wrote:
new e1000 driver is more superior in architecture and do the required
work to make the new e1000 driver a full replacement for the old one.
Right, like everyone actually does things this
On 12/18/09 4:51 PM, Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Hi Linus,
Please pull AlacrityVM guest support for 2.6.33 from:
git://git.kernel.org/pub/scm/linux/kernel/git/ghaskins/alacrityvm/linux-2.6.git
for-linus
All of these patches have stewed in
On 12/21/2009 05:34 PM, Gregory Haskins wrote:
I think it would be fair to point out that these patches have been objected to
by the KVM folks quite extensively,
Actually, these patches have nothing to do with the KVM folks. You are
perhaps confusing this with the hypervisor-side
On 12/21/09 10:43 AM, Avi Kivity wrote:
On 12/21/2009 05:34 PM, Gregory Haskins wrote:
I think it would be fair to point out that these patches have been
objected to
by the KVM folks quite extensively,
Actually, these patches have nothing to do with the KVM folks. You are
perhaps
On 12/21/2009 10:04 AM, Gregory Haskins wrote:
No, B and C definitely are, but A is lacking. And the performance
suffers as a result in my testing (vhost-net still throws a ton of exits
as its limited by virtio-pci and only adds about 1Gb/s to virtio-u, far
behind venet even with things like
On 12/21/2009 06:37 PM, Anthony Liguori wrote:
Since virtio-pci supports MSI-X, there should be no IO exits on
host-guest notification other than EOI in the virtual APIC. This is
a light weight exit today and will likely disappear entirely with
newer hardware.
I'm working on disappearing
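Anthony's MSI-X point can be put as back-of-envelope exit accounting (illustrative only; the function and its breakdown are a sketch, not measured data): with legacy INTx, virtio-pci guests trap on the ISR status read to acknowledge the interrupt, while MSI-X removes that read, leaving the EOI write to the virtual APIC as the only notification-side exit.

```python
# Back-of-envelope exit count per device interrupt. Illustrative sketch;
# real numbers depend on workload and hardware assists.

def exits_per_interrupt(msix=False, eoi_accelerated=False):
    exits = 0
    if not msix:
        exits += 1   # legacy INTx: the ISR status read/ack traps to the host
    if not eoi_accelerated:
        exits += 1   # EOI write to the local APIC is a (light-weight) exit
    return exits

print(exits_per_interrupt())                                  # 2
print(exits_per_interrupt(msix=True))                         # 1
print(exits_per_interrupt(msix=True, eoi_accelerated=True))   # 0
```

The third case is the "disappear entirely with newer hardware" scenario: hardware-accelerated EOI (or EOI-avoidance tricks) removes the last exit on the notification path.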
On 12/21/09 11:37 AM, Anthony Liguori wrote:
On 12/21/2009 10:04 AM, Gregory Haskins wrote:
No, B and C definitely are, but A is lacking. And the performance
suffers as a result in my testing (vhost-net still throws a ton of exits
as its limited by virtio-pci and only adds about 1Gb/s to
On 12/21/09 11:40 AM, Avi Kivity wrote:
On 12/21/2009 06:37 PM, Anthony Liguori wrote:
Since virtio-pci supports MSI-X, there should be no IO exits on
host-guest notification other than EOI in the virtual APIC. This is
a light weight exit today and will likely disappear entirely with
newer
On 12/21/2009 06:56 PM, Gregory Haskins wrote:
I'm working on disappearing EOI exits on older hardware as well. Same
idea as the old TPR patching, without most of the magic.
While I applaud any engineering effort that results in more optimal
execution, if you are talking about what we
On 12/21/2009 10:46 AM, Gregory Haskins wrote:
The very best you can hope to achieve is 1:1 EOI per signal (though
today virtio-pci is even worse than that). As I indicated above, I can
eliminate more than 50% of even the EOIs in trivial examples, and even
more as we scale up the number of
On 12/21/09 12:05 PM, Avi Kivity wrote:
On 12/21/2009 06:56 PM, Gregory Haskins wrote:
I'm working on disappearing EOI exits on older hardware as well. Same
idea as the old TPR patching, without most of the magic.
While I applaud any engineering effort that results in more optimal
On 12/21/09 12:20 PM, Anthony Liguori wrote:
On 12/21/2009 10:46 AM, Gregory Haskins wrote:
The very best you can hope to achieve is 1:1 EOI per signal (though
today virtio-pci is even worse than that). As I indicated above, I can
eliminate more than 50% of even the EOIs in trivial examples,
On 12/21/2009 11:44 AM, Gregory Haskins wrote:
Well, surely something like SR-IOV is moving in that direction, no?
Not really, but that's a different discussion.
But let's focus on concrete data. For a given workload,
how many exits do you see due to EOI?
Its of course highly
* Gregory Haskins gregory.hask...@gmail.com wrote:
On 12/18/09 4:51 PM, Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Hi Linus,
Please pull AlacrityVM guest support for 2.6.33 from:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Hi Linus,
Please pull AlacrityVM guest support for 2.6.33 from:
git://git.kernel.org/pub/scm/linux/kernel/git/ghaskins/alacrityvm/linux-2.6.git
for-linus
All of these patches have stewed in linux-next for quite a while now:
Gregory