On Wed, Aug 19, 2009 at 01:36:14AM -0400, Gregory Haskins wrote:
So where is the problem here?
If virtio net in guest could be improved instead, everyone would
benefit.
So if I whip up a virtio-net backend for vbus with a PCI compliant
connector, you are happy?
I'm currently worried
On Wed, Aug 19, 2009 at 11:37:16PM +0300, Avi Kivity wrote:
On 08/19/2009 09:26 PM, Gregory Haskins wrote:
This is for things like the setup of queue-pairs, and the
transport of door-bells, and ib-verbs. I am not on the team
doing that work, so I am not an expert in this area. What I do
Avi Kivity wrote:
On 08/18/2009 05:46 PM, Gregory Haskins wrote:
Can you explain how vbus achieves RDMA?
I also don't see the connection to real time guests.
Both of these are still in development. Trying to stay true to the
release early and often mantra, the core vbus technology
On 8/19/2009 at 1:48 AM, in message 4a8b9241.20...@redhat.com, Avi Kivity
a...@redhat.com wrote:
On 08/19/2009 08:36 AM, Gregory Haskins wrote:
If virtio net in guest could be improved instead, everyone would
benefit.
So if I whip up a virtio-net backend for vbus with a PCI compliant
On 08/19/2009 09:28 AM, Gregory Haskins wrote:
Avi Kivity wrote:
On 08/18/2009 05:46 PM, Gregory Haskins wrote:
Can you explain how vbus achieves RDMA?
I also don't see the connection to real time guests.
Both of these are still in development. Trying to stay
On 08/19/2009 09:40 AM, Gregory Haskins wrote:
So if I whip up a virtio-net backend for vbus with a PCI compliant
connector, you are happy?
This doesn't improve virtio-net in any way.
And why not? (Did you notice I said PCI compliant, i.e. over virtio-pci)
Because
On 8/19/2009 at 3:13 AM, in message 4a8ba635.9010...@redhat.com, Avi
Kivity
a...@redhat.com wrote:
On 08/19/2009 09:40 AM, Gregory Haskins wrote:
So if I whip up a virtio-net backend for vbus with a PCI compliant
connector, you are happy?
This doesn't improve virtio-net in any way.
On 08/19/2009 02:40 PM, Gregory Haskins wrote:
So if I whip up a virtio-net backend for vbus with a PCI compliant
connector, you are happy?
This doesn't improve virtio-net in any way.
And why not? (Did you notice I said PCI compliant, i.e. over virtio-pci)
Avi Kivity wrote:
On 08/19/2009 02:40 PM, Gregory Haskins wrote:
So if I whip up a virtio-net backend for vbus with a PCI compliant
connector, you are happy?
This doesn't improve virtio-net in any way.
And why not? (Did you notice I said PCI compliant, i.e. over virtio-pci)
Avi Kivity wrote:
On 08/19/2009 07:27 AM, Gregory Haskins wrote:
This thread started because I asked you about your technical
arguments why we'd want vbus instead of virtio.
(You mean vbus vs pci, right? virtio works fine, is untouched, and is
out-of-scope here)
I guess he
On Wed, Aug 19, 2009 at 08:40:33AM +0300, Avi Kivity wrote:
On 08/19/2009 03:38 AM, Ira W. Snyder wrote:
On Wed, Aug 19, 2009 at 12:26:23AM +0300, Avi Kivity wrote:
On 08/18/2009 11:59 PM, Ira W. Snyder wrote:
On a non shared-memory system (where the guest's RAM is not just a chunk
On 08/19/2009 06:28 PM, Ira W. Snyder wrote:
Well, if you can't do that, you can't use virtio-pci on the host.
You'll need another virtio transport (equivalent to fake pci you
mentioned above).
Ok.
Is there something similar that I can study as an example? Should I look
at virtio-pci?
On Wed, Aug 19, 2009 at 06:37:06PM +0300, Avi Kivity wrote:
On 08/19/2009 06:28 PM, Ira W. Snyder wrote:
Well, if you can't do that, you can't use virtio-pci on the host.
You'll need another virtio transport (equivalent to fake pci you
mentioned above).
Ok.
Is there something similar that I can study as an example?
On 08/19/2009 07:29 PM, Ira W. Snyder wrote:
virtio-$yourhardware or maybe virtio-dma
How about virtio-phys?
Could work.
Arnd and BenH are both looking at PPC systems (similar to mine). Grant
Likely is looking at talking to a processor core running on an FPGA,
IIRC. Most
Hi Nicholas
Nicholas A. Bellinger wrote:
On Wed, 2009-08-19 at 10:11 +0300, Avi Kivity wrote:
On 08/19/2009 09:28 AM, Gregory Haskins wrote:
Avi Kivity wrote:
SNIP
Basically, what it comes down to is both vbus and vhost need
configuration/management. Vbus does it with sysfs/configfs,
On Wed, 2009-08-19 at 14:39 -0400, Gregory Haskins wrote:
Hi Nicholas
Nicholas A. Bellinger wrote:
On Wed, 2009-08-19 at 10:11 +0300, Avi Kivity wrote:
On 08/19/2009 09:28 AM, Gregory Haskins wrote:
Avi Kivity wrote:
SNIP
Basically, what it comes down to is both vbus and vhost
* Avi Kivity a...@redhat.com wrote:
IIRC we reuse the PCI IDs for non-PCI.
You already know how I feel about this gem.
The earth keeps rotating despite the widespread use of
PCI IDs.
Btw., PCI IDs are a great way to arbitrate interfaces
planet-wide, in an OS-neutral,
On 08/18/2009 04:08 AM, Anthony Liguori wrote:
I believe strongly that we should avoid putting things in the kernel
unless they absolutely have to be. I'm definitely interested in
playing with vhost to see if there are ways to put even less in the
kernel. In particular, I think it would be a
On 08/17/2009 10:33 PM, Gregory Haskins wrote:
There is a secondary question of venet (a vbus native device) versus
virtio-net (a virtio native device that works with PCI or VBUS). If
this contention is really around venet vs virtio-net, I may possibly
concede and retract its submission to
On Mon, Aug 17, 2009 at 03:33:30PM -0400, Gregory Haskins wrote:
There is a secondary question of venet (a vbus native device) versus
virtio-net (a virtio native device that works with PCI or VBUS). If
this contention is really around venet vs virtio-net, I may possibly
concede and retract
On 08/18/2009 12:53 PM, Michael S. Tsirkin wrote:
I'm not hung up on PCI, myself. An idea that might help you get Avi
on-board: do setup in userspace, over PCI. Negotiate hypercall support
(e.g. with a PCI capability) and then switch to that for fastpath. Hmm?
Hypercalls don't nest
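
A minimal sketch of the setup-over-PCI, hypercall-for-fastpath idea above, assuming a vendor PCI capability advertises hypercall support; the capability ID, the hypercall number and all structure names here are invented for illustration and are not from any patch in this thread:

    /* Minimal sketch -- every name, the capability ID and the hypercall
     * number are hypothetical, not taken from any patch in this thread. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_CAP_HYPERCALL  0x40   /* hypothetical vendor PCI capability */
    #define EXAMPLE_HC_KICK        42     /* hypothetical hypercall number */

    struct example_dev {
        bool     use_hypercall;   /* negotiated once, at setup time, over PCI */
        uint16_t pio_kick_port;   /* slow-path doorbell */
    };

    /* Setup path: probe the capability list; enable the hypercall fast
     * path only if the device advertises it. */
    static void example_negotiate(struct example_dev *dev, bool cap_present)
    {
        dev->use_hypercall = cap_present;
    }

    /* Fast path: a single branch picks hypercall vs. pio kick. */
    static void example_kick(struct example_dev *dev)
    {
        if (dev->use_hypercall)
            printf("hypercall(%d)\n", EXAMPLE_HC_KICK);               /* stand-in for vmcall */
        else
            printf("outl(port 0x%x)\n", (unsigned)dev->pio_kick_port); /* stand-in for pio */
    }

    int main(void)
    {
        struct example_dev dev = { .pio_kick_port = 0xc000 };
        example_negotiate(&dev, true);   /* pretend EXAMPLE_CAP_HYPERCALL was found */
        example_kick(&dev);
        return 0;
    }

The only fast-path cost of keeping the PIO fallback is the one branch in example_kick().
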
On 08/18/2009 01:09 PM, Michael S. Tsirkin wrote:
mmio and pio don't have this problem since the host can use the address
to locate the destination.
So userspace could map hypercall to address during setup and tell the
host kernel?
Suppose a nested guest has two devices. One a
On Tue, Aug 18, 2009 at 01:13:57PM +0300, Avi Kivity wrote:
On 08/18/2009 01:09 PM, Michael S. Tsirkin wrote:
mmio and pio don't have this problem since the host can use the address
to locate the destination.
So userspace could map hypercall to address during setup and tell the
host kernel?
On 08/18/2009 01:28 PM, Michael S. Tsirkin wrote:
Suppose a nested guest has two devices. One a virtual device backed by
its host (our guest), and one a virtual device backed by us (the real
host), and assigned by the guest to the nested guest. If both devices
use hypercalls, there is no way
On 08/18/2009 02:07 PM, Michael S. Tsirkin wrote:
On Tue, Aug 18, 2009 at 01:45:05PM +0300, Avi Kivity wrote:
On 08/18/2009 01:28 PM, Michael S. Tsirkin wrote:
Suppose a nested guest has two devices. One a virtual device backed by
its host (our guest), and one a virtual
On Tue, Aug 18, 2009 at 02:15:57PM +0300, Avi Kivity wrote:
On 08/18/2009 02:07 PM, Michael S. Tsirkin wrote:
On Tue, Aug 18, 2009 at 01:45:05PM +0300, Avi Kivity wrote:
On 08/18/2009 01:28 PM, Michael S. Tsirkin wrote:
Suppose a nested guest has two devices. One a
On 08/18/2009 02:49 PM, Michael S. Tsirkin wrote:
The host kernel sees a hypercall vmexit. How does it know if it's a
nested-guest-to-guest hypercall or a nested-guest-to-host hypercall?
The two are equally valid at the same time.
Here is how this can work - it is similar to MSI if you
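
A rough sketch of the MSI-like routing being described here: during setup, userspace registers a per-device token with the host kernel, and on a hypercall vmexit the host resolves the destination by table lookup rather than from the hypercall itself. The table layout, token values and function names are all hypothetical:

    /* Sketch of the "similar to MSI" routing idea -- names and layout
     * are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    enum hc_dest { HC_DEST_NONE, HC_DEST_HOST, HC_DEST_GUEST };

    struct hc_route {
        uint64_t     token;   /* passed by the (nested) guest with the hypercall */
        enum hc_dest dest;    /* handle here, or reflect back into our guest */
    };

    #define MAX_ROUTES 16
    static struct hc_route routes[MAX_ROUTES];

    /* Setup path: userspace tells the host kernel which token belongs
     * to which backend. */
    static int hc_register(uint64_t token, enum hc_dest dest)
    {
        for (int i = 0; i < MAX_ROUTES; i++) {
            if (routes[i].dest == HC_DEST_NONE) {
                routes[i] = (struct hc_route){ token, dest };
                return 0;
            }
        }
        return -1;
    }

    /* Exit path: the host no longer has to guess who the call was for. */
    static enum hc_dest hc_dispatch(uint64_t token)
    {
        for (int i = 0; i < MAX_ROUTES; i++)
            if (routes[i].dest != HC_DEST_NONE && routes[i].token == token)
                return routes[i].dest;
        return HC_DEST_NONE;
    }

    int main(void)
    {
        hc_register(0x1000, HC_DEST_HOST);   /* device backed by the real host */
        hc_register(0x2000, HC_DEST_GUEST);  /* device backed by our guest */
        printf("token 0x2000 routes to %d\n", hc_dispatch(0x2000));
        return 0;
    }
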
Anthony Liguori wrote:
Gregory Haskins wrote:
Note: No one has ever proposed to change the virtio-ABI.
virtio-pci is part of the virtio ABI. You are proposing changing that.
I'm sorry, but I respectfully disagree with you here.
virtio has an ABI...I am not modifying that.
virtio-pci has
Avi Kivity wrote:
On 08/17/2009 10:33 PM, Gregory Haskins wrote:
There is a secondary question of venet (a vbus native device) versus
virtio-net (a virtio native device that works with PCI or VBUS). If
this contention is really around venet vs virtio-net, I may possibly
concede and retract
Avi Kivity wrote:
On 08/18/2009 04:16 PM, Gregory Haskins wrote:
The issue here is that vbus is designed to be a generic solution to
in-kernel virtual-IO. It will support (via abstraction of key
subsystems) a variety of environments that may or may not be similar in
facilities to KVM, and
On Tue, Aug 18, 2009 at 11:46:06AM +0300, Michael S. Tsirkin wrote:
On Mon, Aug 17, 2009 at 04:17:09PM -0400, Gregory Haskins wrote:
Michael S. Tsirkin wrote:
On Mon, Aug 17, 2009 at 10:14:56AM -0400, Gregory Haskins wrote:
Case in point: Take an upstream kernel and you can modprobe the
On Tue, Aug 18, 2009 at 11:39:25AM -0400, Gregory Haskins wrote:
Michael S. Tsirkin wrote:
On Mon, Aug 17, 2009 at 03:33:30PM -0400, Gregory Haskins wrote:
There is a secondary question of venet (a vbus native device) versus
virtio-net (a virtio native device that works with PCI or VBUS).
On 08/18/2009 06:51 PM, Gregory Haskins wrote:
It's not laughably trivial when you try to support the full feature set
of kvm (for example, live migration will require dirty memory tracking,
and exporting all state stored in the kernel to userspace).
Doesn't vhost suffer from the same
On 08/18/2009 06:53 PM, Ira W. Snyder wrote:
So, in my system, copy_(to|from)_user() is completely wrong. There is no
userspace, only a physical system. In fact, because normal x86 computers
do not have DMA controllers, the host system doesn't actually handle any
data transfer!
In fact,
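
A sketch of the indirection Ira's case seems to call for, assuming the ring-servicing code is handed a copy callback so the same logic can sit on top of copy_to_user() in the KVM case or on a DMA engine on a PCI-target board; the ops structure and both implementations are invented for illustration:

    /* Illustrative only -- the ops struct and both backends are made up. */
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct xfer_ops {
        /* copy 'len' bytes from a local buffer to the remote side */
        int (*copy_out)(void *remote, const void *local, size_t len);
    };

    /* KVM-style backend: "remote" is guest memory mapped into our own
     * address space, so an ordinary copy works (stand-in for copy_to_user). */
    static int copy_out_user(void *remote, const void *local, size_t len)
    {
        memcpy(remote, local, len);
        return 0;
    }

    /* PCI-target-style backend: queue a descriptor on a DMA engine
     * instead; here we only print what we would do. */
    static int copy_out_dma(void *remote, const void *local, size_t len)
    {
        (void)local;
        printf("DMA %zu bytes to bus address %p\n", len, remote);
        return 0;
    }

    static const struct xfer_ops user_ops = { .copy_out = copy_out_user };
    static const struct xfer_ops dma_ops  = { .copy_out = copy_out_dma };

    int main(void)
    {
        char guest_buf[64];
        const struct xfer_ops *ops = &user_ops;   /* chosen once, at setup time */

        ops->copy_out(guest_buf, "hello", 6);
        ops = &dma_ops;
        ops->copy_out((void *)0x80000000UL, "hello", 6);
        return 0;
    }
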
On Tue, Aug 18, 2009 at 11:51:59AM -0400, Gregory Haskins wrote:
It's not laughably trivial when you try to support the full feature set
of kvm (for example, live migration will require dirty memory tracking,
and exporting all state stored in the kernel to userspace).
Doesn't vhost suffer
On 08/18/2009 05:46 PM, Gregory Haskins wrote:
Can you explain how vbus achieves RDMA?
I also don't see the connection to real time guests.
Both of these are still in development. Trying to stay true to the
release early and often mantra, the core vbus technology is being
pushed now
On Tue, Aug 18, 2009 at 07:51:21PM +0300, Avi Kivity wrote:
On 08/18/2009 06:53 PM, Ira W. Snyder wrote:
So, in my system, copy_(to|from)_user() is completely wrong. There is no
userspace, only a physical system. In fact, because normal x86 computers
do not have DMA controllers, the host
On 08/18/2009 08:27 PM, Ira W. Snyder wrote:
In fact, modern x86s do have dma engines these days (google for Intel
I/OAT), and one of our plans for vhost-net is to allow their use for
packets above a certain size. So a patch allowing vhost-net to
optionally use a dma engine is a good thing.
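
A toy illustration of the "packets above a certain size" point, i.e. amortizing the DMA engine's per-transfer setup and completion cost; the threshold and the dma_copy() stand-in are placeholders, not values from vhost-net or I/OAT:

    /* Toy illustration -- threshold and dma_copy() are hypothetical. */
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define DMA_COPY_THRESHOLD 4096   /* made-up break-even point, in bytes */

    static void dma_copy(void *dst, const void *src, size_t len)
    {
        (void)dst; (void)src;
        printf("offload %zu-byte copy to the DMA engine\n", len);
    }

    static void copy_packet(void *dst, const void *src, size_t len)
    {
        if (len >= DMA_COPY_THRESHOLD)
            dma_copy(dst, src, len);      /* big packet: worth the setup cost */
        else
            memcpy(dst, src, len);        /* small packet: CPU copy wins */
    }

    int main(void)
    {
        static char src[8192], dst[8192];
        copy_packet(dst, src, 64);        /* e.g. a bare TCP ACK */
        copy_packet(dst, src, 8192);      /* e.g. a large GSO frame */
        return 0;
    }

The break-even size would have to be measured on real hardware; 4096 bytes above is only a placeholder.
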
On Tuesday 18 August 2009, Gregory Haskins wrote:
Avi Kivity wrote:
On 08/17/2009 10:33 PM, Gregory Haskins wrote:
One point of contention is that this is all managementy stuff and should
be kept out of the host kernel. Exposing shared memory, interrupts, and
guest hypercalls can all
On Tue, Aug 18, 2009 at 08:47:04PM +0300, Avi Kivity wrote:
On 08/18/2009 08:27 PM, Ira W. Snyder wrote:
In fact, modern x86s do have dma engines these days (google for Intel
I/OAT), and one of our plans for vhost-net is to allow their use for
packets above a certain size. So a patch allowing
On 08/18/2009 09:27 PM, Ira W. Snyder wrote:
I think in this case you want one side to be virtio-net (I'm guessing
the x86) and the other side vhost-net (the ppc boards with the dma
engine). virtio-net on x86 would communicate with userspace on the ppc
board to negotiate features and get a mac
On 08/18/2009 09:20 PM, Arnd Bergmann wrote:
Well, the interrupt model to name one.
The performance aspects of your interrupt model are independent
of the vbus proxy, or at least they should be. Let's assume for
now that your event notification mechanism gives significant
performance
On Tue, Aug 18, 2009 at 11:27:35AM -0700, Ira W. Snyder wrote:
I haven't studied vhost-net very carefully yet. As soon as I saw the
copy_(to|from)_user() I stopped reading, because it seemed useless for
my case. I'll look again and try to find where vhost-net supports
setting MAC addresses and
On Tue, Aug 18, 2009 at 10:27:52AM -0700, Ira W. Snyder wrote:
On Tue, Aug 18, 2009 at 07:51:21PM +0300, Avi Kivity wrote:
On 08/18/2009 06:53 PM, Ira W. Snyder wrote:
So, in my system, copy_(to|from)_user() is completely wrong. There is no
userspace, only a physical system. In fact,
On Tue, Aug 18, 2009 at 09:52:48PM +0300, Avi Kivity wrote:
On 08/18/2009 09:27 PM, Ira W. Snyder wrote:
I think in this case you want one side to be virtio-net (I'm guessing
the x86) and the other side vhost-net (the ppc boards with the dma
engine). virtio-net on x86 would communicate with
On Tue, Aug 18, 2009 at 08:53:29AM -0700, Ira W. Snyder wrote:
I think Greg is referring to something like my virtio-over-PCI patch.
I'm pretty sure that vhost is completely useless for my situation. I'd
like to see vhost work for my use, so I'll try to explain what I'm
doing.
I've got a
On Tuesday 18 August 2009 20:35:22 Michael S. Tsirkin wrote:
On Tue, Aug 18, 2009 at 10:27:52AM -0700, Ira W. Snyder wrote:
Also, in my case I'd like to boot Linux with my rootfs over NFS. Is
vhost-net capable of this?
I've had Arnd, BenH, and Grant Likely (and others, privately) contact
On 08/18/2009 11:59 PM, Ira W. Snyder wrote:
On a non shared-memory system (where the guest's RAM is not just a chunk
of userspace RAM in the host system), virtio's management model seems to
fall apart. Feature negotiation doesn't work as one would expect.
In your case, virtio-net on the
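
For concreteness, this is what virtio-style feature negotiation amounts to, and hence what has to be carried over some channel even on a non-shared-memory system: the host side offers a bitmap, the guest acks only the bits it understands, and both sides operate on the intersection. The feature bits below are invented for the example rather than real virtio-net bits:

    /* Feature bits here are invented, not real virtio-net feature bits. */
    #include <stdint.h>
    #include <stdio.h>

    #define EX_F_MAC      (1u << 0)   /* device supplies a MAC address */
    #define EX_F_MRG_BUF  (1u << 1)   /* mergeable receive buffers */
    #define EX_F_FANCY    (1u << 2)   /* something only a newer guest knows */

    static uint32_t negotiate(uint32_t host_offers, uint32_t guest_understands)
    {
        return host_offers & guest_understands;
    }

    int main(void)
    {
        uint32_t host  = EX_F_MAC | EX_F_MRG_BUF | EX_F_FANCY;
        uint32_t guest = EX_F_MAC | EX_F_MRG_BUF;          /* older driver */
        printf("agreed features: 0x%x\n", negotiate(host, guest));
        return 0;
    }
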
On 08/19/2009 12:26 AM, Avi Kivity wrote:
Off the top of my head, I would think that transporting userspace
addresses in the ring (for copy_(to|from)_user()) vs. physical addresses
(for DMAEngine) might be a problem. Pinning userspace pages into memory
for DMA is a bit of a pain, though it is
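
A sketch of the addressing ambiguity Avi points out, assuming each ring slot is tagged with the kind of address it carries so the consumer knows whether to do a user copy or program the DMA engine; the descriptor layout and all names are hypothetical:

    /* Hypothetical descriptor layout, invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    enum buf_addr_kind {
        BUF_ADDR_USER_VIRT,   /* valid only in the owning process's mm */
        BUF_ADDR_BUS_PHYS,    /* already pinned and translated for DMA */
    };

    struct ring_desc {
        uint64_t           addr;
        uint32_t           len;
        enum buf_addr_kind kind;
    };

    static void service(const struct ring_desc *d)
    {
        if (d->kind == BUF_ADDR_USER_VIRT)
            printf("copy %u bytes via a user copy at 0x%llx\n",
                   d->len, (unsigned long long)d->addr);
        else
            printf("program the DMA engine for %u bytes at bus addr 0x%llx\n",
                   d->len, (unsigned long long)d->addr);
    }

    int main(void)
    {
        struct ring_desc a = { 0x7f0012345000ULL, 1500, BUF_ADDR_USER_VIRT };
        struct ring_desc b = { 0x80000000ULL,     1500, BUF_ADDR_BUS_PHYS };
        service(&a);
        service(&b);
        return 0;
    }
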
On Tue, Aug 18, 2009 at 11:57:48PM +0300, Michael S. Tsirkin wrote:
On Tue, Aug 18, 2009 at 08:53:29AM -0700, Ira W. Snyder wrote:
I think Greg is referring to something like my virtio-over-PCI patch.
I'm pretty sure that vhost is completely useless for my situation. I'd
like to see vhost
On Wed, Aug 19, 2009 at 12:26:23AM +0300, Avi Kivity wrote:
On 08/18/2009 11:59 PM, Ira W. Snyder wrote:
On a non shared-memory system (where the guest's RAM is not just a chunk
of userspace RAM in the host system), virtio's management model seems to
fall apart. Feature negotiation doesn't work as one would expect.
On Wed, Aug 19, 2009 at 01:06:45AM +0300, Avi Kivity wrote:
On 08/19/2009 12:26 AM, Avi Kivity wrote:
Off the top of my head, I would think that transporting userspace
addresses in the ring (for copy_(to|from)_user()) vs. physical addresses
(for DMAEngine) might be a problem. Pinning
Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
You haven't convinced me that your ideas are worth the effort
of abandoning virtio/pci or maintaining both venet/vbus and
virtio/pci.
With all due respect, I didn't ask you to do anything, especially
not abandon
On 08/19/2009 07:27 AM, Gregory Haskins wrote:
This thread started because I asked you about your technical
arguments why we'd want vbus instead of virtio.
(You mean vbus vs pci, right? virtio works fine, is untouched, and is
out-of-scope here)
I guess he meant venet vs
On 08/19/2009 03:44 AM, Ira W. Snyder wrote:
In fact, you don't need a third mode. You can mmap the x86 address space
into your ppc userspace and use the second mode. All you need then is
the dma engine glue and byte swapping.
Hmm, I'll have to think about that.
The ppc is a 32-bit
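
One way the "mmap the x86 address space into your ppc userspace" suggestion can be tried is by mapping the relevant PCI BAR through its sysfs resource file; the device path, offset and window size below are examples and depend on how the board exposes the x86 memory window:

    /* Example only -- device path, offset and size are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *bar = "/sys/bus/pci/devices/0000:01:00.0/resource0";
        size_t len = 1 << 20;                     /* map 1 MiB of the window */

        int fd = open(bar, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* From here the copy-based mode can work against 'p'; the DMA
         * engine glue and byte swapping are handled separately. */
        printf("x86 window mapped at %p\n", p);

        munmap(p, len);
        close(fd);
        return 0;
    }
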
Michael S. Tsirkin wrote:
On Tue, Aug 18, 2009 at 11:51:59AM -0400, Gregory Haskins wrote:
It's not laughably trivial when you try to support the full feature set
of kvm (for example, live migration will require dirty memory tracking,
and exporting all state stored in the kernel to userspace).
Ingo Molnar wrote:
* Gregory Haskins ghask...@novell.com wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the vbus-proxy as if they were
platform devices, similar
Ingo Molnar wrote:
I think the reason vbus gets better performance for networking
today is that vbus' backends are in the kernel while virtio's
backends are currently in userspace. Since Michael has a
functioning in-kernel backend for virtio-net now, I suspect we're
weeks (maybe days) away
Anthony Liguori wrote:
Ingo Molnar wrote:
* Gregory Haskins ghask...@novell.com wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the vbus-proxy as if they were
Avi Kivity wrote:
On 08/15/2009 01:32 PM, Ingo Molnar wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the vbus-proxy as if they were
platform devices, similar to
* Anthony Liguori anth...@codemonkey.ws wrote:
Ingo Molnar wrote:
I think the reason vbus gets better performance for networking today
is that vbus' backends are in the kernel while virtio's backends are
currently in userspace. Since Michael has a functioning in-kernel
backend for
On 08/17/2009 05:16 PM, Gregory Haskins wrote:
My opinion is that this is a duplication of effort and we'd be better
off if everyone contributed to enhancing virtio, which already has
widely deployed guest drivers and non-Linux guest support.
It may have merit if it is proven that it is
Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Ingo Molnar wrote:
* Gregory Haskins ghask...@novell.com wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any
* Avi Kivity a...@redhat.com wrote:
I don't have any technical objections to vbus/venet (I had some in the
past regarding interrupts, but I believe you've addressed them), and it
appears to perform very well. However I still think we should
address virtio's shortcomings (as Michael is doing) rather
On 08/17/2009 06:05 PM, Gregory Haskins wrote:
Hi Ingo,
1) First off, let me state that I have made every effort to propose this
as a solution to integrate with KVM, the most recent of which was in April:
http://lkml.org/lkml/2009/4/21/408
If you read through the various vbus related threads on
On 08/17/2009 06:09 PM, Gregory Haskins wrote:
We've been through this before I believe. If you can point out specific
differences that make venet outperform virtio-net I'll be glad to hear
(and steal) them though.
You sure know how to convince someone to collaborate with you, eh?
On Mon, Aug 17, 2009 at 10:14:56AM -0400, Gregory Haskins wrote:
Case in point: Take an upstream kernel and you can modprobe the
vbus-pcibridge in and virtio devices will work over that transport
unmodified.
See http://lkml.org/lkml/2009/8/6/244 for details.
The modprobe you are talking
Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Hi Ingo,
1) First off, let me state that I have made every effort to
propose this as a solution to integrate with KVM, the most recent
of which was in April:
http://lkml.org/lkml/2009/4/21/408
If you read through
Ingo Molnar wrote:
* Gregory Haskins gregory.hask...@gmail.com wrote:
Avi Kivity wrote:
On 08/17/2009 05:16 PM, Gregory Haskins wrote:
My opinion is that this is a duplication of effort and we'd be better
off if everyone contributed to enhancing virtio, which already has
widely deployed
Michael S. Tsirkin wrote:
On Mon, Aug 17, 2009 at 10:14:56AM -0400, Gregory Haskins wrote:
Case in point: Take an upstream kernel and you can modprobe the
vbus-pcibridge in and virtio devices will work over that transport
unmodified.
See http://lkml.org/lkml/2009/8/6/244 for details.
The
Gregory Haskins wrote:
Note: No one has ever proposed to change the virtio-ABI.
virtio-pci is part of the virtio ABI. You are proposing changing that.
You cannot add new kernel modules to guests and expect them to remain
supported. So there is value in reusing existing ABIs
I think the
* Anthony Liguori anth...@codemonkey.ws wrote:
Ingo Molnar wrote:
* Gregory Haskins ghask...@novell.com wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the
On 08/15/2009 01:32 PM, Ingo Molnar wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the vbus-proxy as if they were
platform devices, similar to existing buses like
* Gregory Haskins ghask...@novell.com wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the vbus-proxy as if they were
platform devices, similar to existing buses
Ingo Molnar wrote:
* Gregory Haskins ghask...@novell.com wrote:
This will generally be used for hypervisors to publish any host-side
virtual devices up to a guest. The guest will have the opportunity
to consume any devices present on the vbus-proxy as if they were
platform devices, similar