Re: [RFC v1] virtio: add virtio-over-PCI driver
On Thursday 19 February 2009 03:08:35 Ira Snyder wrote:
> On Wed, Feb 18, 2009 at 05:13:03PM +1030, Rusty Russell wrote:
> > don't restrict yourself to 32 feature bits (only PCI does this, and
> > they're going to have to hack when we reach feature 32). There isn't
> > any problem adding more feature bits.
>
> Do you think 128 bits is enough?

Probably. We have unlimited bits in lguest and s390, but 128 is
reasonable for the foreseeable future (if not, you end up using bit 128
to mean "look somewhere else for the rest of the bits").

> > How about prepending a 4 byte length on the host buffers? Allows host
> > to specify length (for host -> guest), and guest writes it to allow
> > truncated buffers on guest -> host.
> >
> > That won't allow you to transfer *more* than one buffersize to the
> > host, but you could use a different method (perhaps the 4 bytes
> > indicates the *total* length?).
>
> I don't understand how this will help. I looked at virtio_net's
> implementation with VIRTIO_NET_F_MRG_RXBUF, which seems like it could
> really help performance. The problems with that are:
>
> 1) virtio_net doesn't write the merged header's num_buffers field
> 2) virtio_net doesn't actually split packets in xmit

...

> I'm using two instances of virtio_net to talk to each other, rather
> than a special userspace implementation like lguest and kvm use. Is
> this a good approach?

Well, virtio in general is guest-host asymmetric. I originally explored
symmetry, but it didn't seem to offer any concrete advantages, so we
didn't require it.

You aren't actually directly connecting two guests, are you? So this is
just a simplification for your implementation?

You could always add a VIRTIO_NET_F_MRG_TXBUF which did what you want,
but note that symmetry breaks down for other virtio uses, too: block
definitely isn't symmetric of course, but I haven't audited the others.

So I'd recommend asymmetry; hack your host to understand chained
buffers.

Cheers,
Rusty.

___
Linuxppc-dev mailing list
Linuxppc-dev@ozlabs.org
https://ozlabs.org/mailman/listinfo/linuxppc-dev
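A possible shape for Rusty's "more than 32 feature bits" suggestion,
sketched in C. The names and the byte-array layout are illustrative
assumptions, not an existing virtio interface; bit 128 as a "look
somewhere else" marker is Rusty's idea above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: feature flags kept in a byte array instead of a
 * single 32-bit word, so the transport is not limited to 32 bits.
 * 16 bytes gives 128 feature bits. */
#define VOP_FEATURE_BYTES 16

static inline int vop_test_feature(const uint8_t *features, unsigned int bit)
{
	return (features[bit / 8] >> (bit % 8)) & 1;
}

static inline void vop_set_feature(uint8_t *features, unsigned int bit)
{
	features[bit / 8] |= (uint8_t)(1 << (bit % 8));
}
```

A transport could then negotiate features by exchanging the whole array
rather than a single 32-bit word, reserving the last bit to mean the
rest of the bits live elsewhere.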
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Thu, Feb 19, 2009 at 02:10:08PM +0800, Zang Roy-R61911 wrote:
> > -----Original Message-----
> > From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> > [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> > Behalf Of Ira Snyder
> > Sent: Wednesday, February 18, 2009 6:24 AM
> > To: linux-ker...@vger.kernel.org
> > Cc: linuxppc-dev@ozlabs.org; net...@vger.kernel.org; Rusty Russell;
> > Arnd Bergmann; Jan-Bernd Themann
> > Subject: [RFC v1] virtio: add virtio-over-PCI driver
>
> <snip>
>
> > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > index 3dd6294..efcf56b 100644
> > --- a/drivers/virtio/Kconfig
> > +++ b/drivers/virtio/Kconfig
> > @@ -33,3 +33,25 @@ config VIRTIO_BALLOON
> >  	  If unsure, say M.
> >
> > +config VIRTIO_OVER_PCI_HOST
> > +	tristate "Virtio-over-PCI Host support (EXPERIMENTAL)"
> > +	depends on PCI && EXPERIMENTAL
> > +	select VIRTIO
> > +	---help---
> > +	  This driver provides the host support necessary for using virtio
> > +	  over the PCI bus with a Freescale MPC8349EMDS evaluation board.
> > +
> > +	  If unsure, say N.
> > +
> > +config VIRTIO_OVER_PCI_FSL
> > +	tristate "Virtio-over-PCI Guest support (EXPERIMENTAL)"
> > +	depends on MPC834x_MDS && EXPERIMENTAL
> > +	select VIRTIO
> > +	select DMA_ENGINE
> > +	select FSL_DMA
> > +	---help---
> > +	  This driver provides the guest support necessary for using virtio
> > +	  over the PCI bus.
> > +
> > +	  If unsure, say N.
> > +
> > diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> > index 6738c44..f31afaa 100644
> > --- a/drivers/virtio/Makefile
> > +++ b/drivers/virtio/Makefile
> > @@ -2,3 +2,5 @@ obj-$(CONFIG_VIRTIO) += virtio.o
> >  obj-$(CONFIG_VIRTIO_RING) += virtio_ring.o
> >  obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
> >  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> > +obj-$(CONFIG_VIRTIO_OVER_PCI_HOST) += vop_host.o
> > +obj-$(CONFIG_VIRTIO_OVER_PCI_FSL) += vop_fsl.o
>
> I suppose we need to build the kernel twice: one kernel for vop_host (on
> the host, with PCI enabled) and the other for vop_fsl (on the agent,
> with PCI disabled). Is it possible to build one image for both host and
> agent? We do not scan the PCI bus if the controller is configured as an
> agent.
You should be able to build a kernel with support for both host and
guest operation, and then use the device tree to switch which driver you
get. The host driver won't be used without a PCI bus, and the guest
driver won't be used without the message unit.

> Also, is it possible to include the mpc85xx architecture? They should
> be the same. There is some code for 85xx in the Freescale BSP.
> http://www.bitshrine.org/gpp/linux-fsl-2.6.23-MPC8568MDS_PCI_Agent_PCIe_EP_Drvier.patch

I looked at the cardnet driver before I implemented my PCINet driver. I
suspect it would be rejected for the same reasons, but maybe not. Also,
it makes no use of DMA, which is critical for good transfer speed. Using
memcpy() in PCINet gives performance around 10 Mbit/sec, which is
terrible.

I'm sure the driver isn't very hard to port to 85xx, I just don't have
any 85xx boards to test with. The driver only directly interacts with
the messaging unit, which is a pretty simple piece of hardware.

Thanks,
Ira
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Thu, Feb 19, 2009 at 09:48:04PM +1030, Rusty Russell wrote:
> On Thursday 19 February 2009 03:08:35 Ira Snyder wrote:
> > On Wed, Feb 18, 2009 at 05:13:03PM +1030, Rusty Russell wrote:
> > > don't restrict yourself to 32 feature bits (only PCI does this, and
> > > they're going to have to hack when we reach feature 32). There
> > > isn't any problem adding more feature bits.
> >
> > Do you think 128 bits is enough?
>
> Probably. We have unlimited bits in lguest and s390, but 128 is
> reasonable for the foreseeable future (if not, you end up using bit 128
> to mean "look somewhere else for the rest of the bits").

Ok. There currently isn't an interface to access more than 32 bits
anyway.

> > > How about prepending a 4 byte length on the host buffers? Allows
> > > host to specify length (for host -> guest), and guest writes it to
> > > allow truncated buffers on guest -> host.
> > >
> > > That won't allow you to transfer *more* than one buffersize to the
> > > host, but you could use a different method (perhaps the 4 bytes
> > > indicates the *total* length?).
> >
> > I don't understand how this will help. I looked at virtio_net's
> > implementation with VIRTIO_NET_F_MRG_RXBUF, which seems like it could
> > really help performance. The problems with that are:
> >
> > 1) virtio_net doesn't write the merged header's num_buffers field
> > 2) virtio_net doesn't actually split packets in xmit
>
> ...
>
> > I'm using two instances of virtio_net to talk to each other, rather
> > than a special userspace implementation like lguest and kvm use. Is
> > this a good approach?
>
> Well, virtio in general is guest-host asymmetric. I originally explored
> symmetry, but it didn't seem to offer any concrete advantages, so we
> didn't require it.
>
> You aren't actually directly connecting two guests, are you? So this is
> just a simplification for your implementation?

I'm not connecting two guests directly. My eventual setup will have a
single x86 computer (the host) and many guest systems. I don't care if
the guests cannot communicate between each other, just that they can
communicate with the host.
I wanted to avoid the extra trip to userspace, so I just connected two
instances of virtio_net together. This way you just recv packets in the
kernel, rather than jumping to userspace and then using TAP/TUN to drive
packets back into the kernel. Plus, I have no idea how I would do a
userspace interface; I'd definitely need help.

> You could always add a VIRTIO_NET_F_MRG_TXBUF which did what you want,
> but note that symmetry breaks down for other virtio uses, too: block
> definitely isn't symmetric of course, but I haven't audited the others.

I have no need to use virtio_blk, so I pretty much ignored it. In fact,
I didn't make any attempt to support RO and WO buffers in the same
queue. Virtio_net only uses queues this way, and it was much easier for
me to wrap my head around.

I don't think that virtio_console is symmetric either, but I haven't
really studied it. I was thinking about implementing a virtio_uart which
would be symmetric. That would be plenty for my needs.

> So I'd recommend asymmetry; hack your host to understand chained
> buffers.

It's not that virtio_net doesn't understand chained buffers, it just
doesn't write them. Grep for uses of the num_buffers field in
virtio_net. It uses them in recv, it just doesn't write them in xmit. It
assumes that add_buf() can accept something like:

idx  address  len   flags  next
0    XXX      12    N      1
1    XXX      8000  -      2

That would mean it can shove an 8000 byte packet into the virtqueue. It
doesn't have any way of knowing to split packets up into chunks, nor how
many chunks are available. It assumes that the receiver can read from
any address on the sender. I think that this is a perfectly reasonable
assumption in a shared memory system, but it breaks down in my case. I
cannot just tell the host "the packet data is at this address" because
it cannot do DMA. I have to use the guest system to do DMA. The host has
to have pre-allocated the recv memory so the DMA engine has somewhere to
copy the data to.
Maybe I'm explaining this poorly, but try to think about it this way:

1) Unlike a virtual machine, both systems are NOT sharing memory
2) Both systems have some limited access to each other's memory
3) Both systems can write descriptors equally fast
4) Copying payload data is extremely slow for the host
5) Copying payload data is extremely fast for the guest

It would be possible to just alter virtio_net's headers in-flight to set
the number of buffers actually used. This would split the 8000 byte
packet up into two chunks, one of 4096 bytes and one of 3904 bytes, then
set num_buffers to 2. This would add some complexity, but I think it is
probably reasonable.

Ira
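The in-flight split Ira describes can be sketched in C. The chunk size
and helper names here are assumptions for illustration, not code from
the posted driver; the point is just the arithmetic that would produce
the num_buffers value written into the merged header.

```c
#include <assert.h>
#include <stddef.h>

/* Assumed size of each pre-allocated receive buffer on the host. */
#define VOP_CHUNK_SIZE 4096

/* How many receive descriptors a 'len'-byte xmit buffer would consume
 * if split into VOP_CHUNK_SIZE pieces; this is the value that would be
 * written into the merged header's num_buffers field in flight. */
static size_t vop_num_buffers(size_t len)
{
	return (len + VOP_CHUNK_SIZE - 1) / VOP_CHUNK_SIZE;
}

/* Length of the i'th chunk of a 'len'-byte buffer (the final chunk may
 * be shorter than VOP_CHUNK_SIZE). */
static size_t vop_chunk_len(size_t len, size_t i)
{
	size_t off = i * VOP_CHUNK_SIZE;
	return (len - off < VOP_CHUNK_SIZE) ? len - off : VOP_CHUNK_SIZE;
}
```

For the 8000 byte packet above, this yields num_buffers = 2 with chunks
of 4096 and 3904 bytes, matching the split described in the message.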
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Feb 19, 2009, at 12:13 AM, Zang Roy-R61911 wrote:
> > -----Original Message-----
> > From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> > [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> > Behalf Of Kumar Gala
> > Sent: Thursday, February 19, 2009 0:47 AM
> > To: Ira Snyder
> > Cc: Arnd Bergmann; Jan-Bernd Themann; net...@vger.kernel.org; Rusty
> > Russell; linux-ker...@vger.kernel.org; linuxppc-dev@ozlabs.org
> > Subject: Re: [RFC v1] virtio: add virtio-over-PCI driver
> >
> > On Feb 17, 2009, at 4:24 PM, Ira Snyder wrote:
> > > Documentation/virtio-over-PCI.txt     |   61 ++
> > > arch/powerpc/boot/dts/mpc834x_mds.dts |    7 +
> >
> > we'll have to review the .dts and expect a documentation update for
> > the node. But that's pretty minor at this point.
> >
> > > drivers/virtio/Kconfig                |   22 +
> > > drivers/virtio/Makefile               |    2 +
> > > drivers/virtio/vop.h                  |  119 ++
> > > drivers/virtio/vop_fsl.c              | 1911 +
> >
> > make this vop_fsl_mpc83xx.c or something along those lines.
>
> why?

so we can deal with 85xx as well. We just need to isolate the 83xx
specific bits (message usage)

- k
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Thu, Feb 19, 2009 at 10:51:43AM -0600, Kumar Gala wrote:
> On Feb 19, 2009, at 12:13 AM, Zang Roy-R61911 wrote:
> > > -----Original Message-----
> > > From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> > > [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> > > Behalf Of Kumar Gala
> > > Sent: Thursday, February 19, 2009 0:47 AM
> > > To: Ira Snyder
> > > Cc: Arnd Bergmann; Jan-Bernd Themann; net...@vger.kernel.org; Rusty
> > > Russell; linux-ker...@vger.kernel.org; linuxppc-dev@ozlabs.org
> > > Subject: Re: [RFC v1] virtio: add virtio-over-PCI driver
> > >
> > > On Feb 17, 2009, at 4:24 PM, Ira Snyder wrote:
> > > > Documentation/virtio-over-PCI.txt     |   61 ++
> > > > arch/powerpc/boot/dts/mpc834x_mds.dts |    7 +
> > >
> > > we'll have to review the .dts and expect a documentation update for
> > > the node. But that's pretty minor at this point.
> > >
> > > > drivers/virtio/Kconfig                |   22 +
> > > > drivers/virtio/Makefile               |    2 +
> > > > drivers/virtio/vop.h                  |  119 ++
> > > > drivers/virtio/vop_fsl.c              | 1911 +
> > >
> > > make this vop_fsl_mpc83xx.c or something along those lines.
> >
> > why?
>
> so we can deal with 85xx as well. We just need to isolate the 83xx
> specific bits (message usage)

In fact, most of the driver has nothing to do with hardware, and
everything to do with managing memory. Most of the driver could be
shared between implementations. The only things that would be hardware
specific are setting up the descriptor memory, and raising/handling
interrupts.

I just wanted to get something working and out here to discuss. I
figured that more hardware support, features, etc. could come later.
Just like everything else in the kernel, I'm sure this will have to
evolve over time as well.

Thanks,
Ira
RE: [RFC v1] virtio: add virtio-over-PCI driver
> -----Original Message-----
> From: Ira Snyder [mailto:i...@ovro.caltech.edu]
> Sent: Friday, February 20, 2009 0:15 AM
> To: Zang Roy-R61911
> Cc: linux-ker...@vger.kernel.org; linuxppc-dev@ozlabs.org;
> net...@vger.kernel.org; Rusty Russell; Arnd Bergmann; Jan-Bernd Themann
> Subject: Re: [RFC v1] virtio: add virtio-over-PCI driver
>
> On Thu, Feb 19, 2009 at 02:10:08PM +0800, Zang Roy-R61911 wrote:
> > > -----Original Message-----
> > > From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> > > [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> > > Behalf Of Ira Snyder
> > > Sent: Wednesday, February 18, 2009 6:24 AM
> > > To: linux-ker...@vger.kernel.org
> > > Cc: linuxppc-dev@ozlabs.org; net...@vger.kernel.org; Rusty Russell;
> > > Arnd Bergmann; Jan-Bernd Themann
> > > Subject: [RFC v1] virtio: add virtio-over-PCI driver
> >
> > <snip>
> >
> > > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > > index 3dd6294..efcf56b 100644
> > > --- a/drivers/virtio/Kconfig
> > > +++ b/drivers/virtio/Kconfig
> > > @@ -33,3 +33,25 @@ config VIRTIO_BALLOON
> > >  	  If unsure, say M.
> > >
> > > +config VIRTIO_OVER_PCI_HOST
> > > +	tristate "Virtio-over-PCI Host support (EXPERIMENTAL)"
> > > +	depends on PCI && EXPERIMENTAL
> > > +	select VIRTIO
> > > +	---help---
> > > +	  This driver provides the host support necessary for using virtio
> > > +	  over the PCI bus with a Freescale MPC8349EMDS evaluation board.
> > > +
> > > +	  If unsure, say N.
> > > +
> > > +config VIRTIO_OVER_PCI_FSL
> > > +	tristate "Virtio-over-PCI Guest support (EXPERIMENTAL)"
> > > +	depends on MPC834x_MDS && EXPERIMENTAL
> > > +	select VIRTIO
> > > +	select DMA_ENGINE
> > > +	select FSL_DMA
> > > +	---help---
> > > +	  This driver provides the guest support necessary for using virtio
> > > +	  over the PCI bus.
> > > +
> > > +	  If unsure, say N.
> > > +
> > > diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> > > index 6738c44..f31afaa 100644
> > > --- a/drivers/virtio/Makefile
> > > +++ b/drivers/virtio/Makefile
> > > @@ -2,3 +2,5 @@ obj-$(CONFIG_VIRTIO) += virtio.o
> > >  obj-$(CONFIG_VIRTIO_RING) += virtio_ring.o
> > >  obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
> > >  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> > > +obj-$(CONFIG_VIRTIO_OVER_PCI_HOST) += vop_host.o
> > > +obj-$(CONFIG_VIRTIO_OVER_PCI_FSL) += vop_fsl.o
> >
> > I suppose we need to build the kernel twice: one kernel for vop_host
> > (on the host, with PCI enabled) and the other for vop_fsl (on the
> > agent, with PCI disabled). Is it possible to build one image for both
> > host and agent? We do not scan the PCI bus if the controller is
> > configured as an agent.
>
> You should be able to build a kernel with support for both host and
> guest operation, and then use the device tree to switch which driver
> you get. The host driver won't be used without a PCI bus, and the guest
> driver won't be used without the message unit.

Good. Is it necessary to commit an extra dts for the agent mode, or just
document it?

> > Also, is it possible to include the mpc85xx architecture? They should
> > be the same. There is some code for 85xx in the Freescale BSP.
> > http://www.bitshrine.org/gpp/linux-fsl-2.6.23-MPC8568MDS_PCI_Agent_PCIe_EP_Drvier.patch
>
> I looked at the cardnet driver before I implemented my PCINet driver. I
> suspect it would be rejected for the same reasons, but maybe not.

That is also our concern :-(

> Also, it makes no use of DMA, which is critical for good transfer
> speed. Using memcpy() in PCINet gives performance around 10 Mbit/sec,
> which is terrible.

I can see your improvement for performance.

> I'm sure the driver isn't very hard to port to 85xx, I just don't have
> any 85xx boards to test with. The driver only directly interacts with
> the messaging unit, which is a pretty simple piece of hardware.

No matter. It is OK to just support 83xx boards currently; 85xx boards
can be dealt with later.

Finally, I hope this driver can support both PCI and PCI Express mode on
83xx/85xx boards.

Roy
RE: [RFC v1] virtio: add virtio-over-PCI driver
> -----Original Message-----
> From: Kumar Gala [mailto:ga...@kernel.crashing.org]
> Sent: Friday, February 20, 2009 0:52 AM
> To: Zang Roy-R61911
> Cc: Ira Snyder; Arnd Bergmann; Jan-Bernd Themann;
> net...@vger.kernel.org; Rusty Russell; linux-ker...@vger.kernel.org;
> linuxppc-dev@ozlabs.org
> Subject: Re: [RFC v1] virtio: add virtio-over-PCI driver
>
> On Feb 19, 2009, at 12:13 AM, Zang Roy-R61911 wrote:
> > > -----Original Message-----
> > > From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> > > [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> > > Behalf Of Kumar Gala
> > > Sent: Thursday, February 19, 2009 0:47 AM
> > > To: Ira Snyder
> > > Cc: Arnd Bergmann; Jan-Bernd Themann; net...@vger.kernel.org; Rusty
> > > Russell; linux-ker...@vger.kernel.org; linuxppc-dev@ozlabs.org
> > > Subject: Re: [RFC v1] virtio: add virtio-over-PCI driver
> > >
> > > On Feb 17, 2009, at 4:24 PM, Ira Snyder wrote:
> > > > Documentation/virtio-over-PCI.txt     |   61 ++
> > > > arch/powerpc/boot/dts/mpc834x_mds.dts |    7 +
> > >
> > > we'll have to review the .dts and expect a documentation update for
> > > the node. But that's pretty minor at this point.
> > >
> > > > drivers/virtio/Kconfig                |   22 +
> > > > drivers/virtio/Makefile               |    2 +
> > > > drivers/virtio/vop.h                  |  119 ++
> > > > drivers/virtio/vop_fsl.c              | 1911 +
> > >
> > > make this vop_fsl_mpc83xx.c or something along those lines.
> >
> > why?
>
> so we can deal with 85xx as well.

After some modification, the driver should work on 85xx too. For 85xx,
most of the cases are PCI Express.

> We just need to isolate the 83xx specific bits (message usage)

Yes.

Roy
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Wed, Feb 18, 2009 at 05:13:03PM +1030, Rusty Russell wrote:
> On Wednesday 18 February 2009 08:54:25 Ira Snyder wrote:
> > This adds support to Linux for using virtio between two computers
> > linked by a PCI interface. This allows the use of virtio_net to
> > create a familiar, fast interface for communication. It should be
> > possible to use other virtio devices in the future, but this has not
> > been tested.
>
> Hi Ira,
>
> It's only first glance, but this looks sane. Two things on first note:
> don't restrict yourself to 32 feature bits (only PCI does this, and
> they're going to have to hack when we reach feature 32).

There isn't any problem adding more feature bits. Do you think 128 bits
is enough?

> Secondly:
> > +You will notice that the algorithm has no way of handling chains that are
> > +not exactly the same on the host and guest system. Without setting any of
> > +the fancier virtio_net features, this is the case.
>
> Hmm, I think we can do slightly better than this.

I think so too :) I just wasn't able to come up with an algorithm to
make it work. And I wanted input from more experienced people.

> How about prepending a 4 byte length on the host buffers? Allows host
> to specify length (for host -> guest), and guest writes it to allow
> truncated buffers on guest -> host.
>
> That won't allow you to transfer *more* than one buffersize to the
> host, but you could use a different method (perhaps the 4 bytes
> indicates the *total* length?).

I don't understand how this will help. I looked at virtio_net's
implementation with VIRTIO_NET_F_MRG_RXBUF, which seems like it could
really help performance. The problems with that are:

1) virtio_net doesn't write the merged header's num_buffers field
2) virtio_net doesn't actually split packets in xmit

The problem with 1 is that one instance of virtio_net cannot talk to
another if they're using that feature. The sender never sets the field,
so the receiver doesn't know how many buffers to expect.
I'm using two instances of virtio_net to talk to each other, rather than
a special userspace implementation like lguest and kvm use. Is this a
good approach?

The problem with 2 is that xmit may add the following to the descriptors
(the network stack doesn't have to split the packet):

idx  address  len   flags  next
0    XXX      12    N      1
1    XXX      8000  -      2

With VIRTIO_NET_F_MRG_RXBUF, the other side's recv ring will look like
the following:

idx  address  len   flags  next
0    YYY      4096  -      1
1    YYY      4096  -      2
2    YYY      4096  -      3

So how do we pair up buffers to do DMA? Do I munge the header from
virtio_net to set the num_buffers field, and split the 8000 bytes of
data into two parts? (Giving 12 bytes in desc 0, 4096 bytes in desc 1,
and 3904 bytes in desc 2.)

The current implementation only handles something like the following,
which would be an ARP:

xmit descriptors:
idx  address  len   flags  next
0    XXX      10    N      1
1    XXX      42    -      2

recv descriptors:
idx  address  len   flags  next
0    YYY      10    N      1
1    YYY      1518  -      2

Then the algorithm is simple, no munging necessary. All chains are the
same length (2 entries) and the length of each buffer is sufficient to
handle the data. The network stack splits the packets into <= 1518 byte
chunks for us (as long as the MTU isn't changed).

> Do 4-byte DMA's suck for some reason?

I don't think it would hurt much. Some of the fancier features might
offset any overhead that is added.

Thanks, I appreciate the feedback.
Ira
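The "simple algorithm" for same-length chains that Ira describes could
be sketched as below. The struct and function names are made up for
illustration; the real descriptors carry addresses and flags too, but
only the lengths matter for pairing.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for a descriptor: only the buffer length. */
struct vop_desc {
	size_t len;
};

/* Pair up an xmit chain and a recv chain of equal length n, entry by
 * entry.  Each DMA transfer copies the sender's length; returns 0 on
 * success, or -1 if any recv buffer is too small for its counterpart
 * (the case the simple algorithm cannot handle). */
static int vop_pair_chains(const struct vop_desc *xmit,
			   const struct vop_desc *recv,
			   size_t n, size_t *dma_len)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (xmit[i].len > recv[i].len)
			return -1;
		dma_len[i] = xmit[i].len;
	}
	return 0;
}
```

For the ARP example above, the 10-byte header pairs with the 10-byte
recv buffer and the 42-byte payload fits in the 1518-byte buffer, so
pairing succeeds; an 8000-byte payload against a 4096-byte recv buffer
is exactly where it fails and splitting would be needed.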
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Feb 17, 2009, at 4:24 PM, Ira Snyder wrote:
> Documentation/virtio-over-PCI.txt     |   61 ++
> arch/powerpc/boot/dts/mpc834x_mds.dts |    7 +

we'll have to review the .dts and expect a documentation update for the
node. But that's pretty minor at this point.

> drivers/virtio/Kconfig                |   22 +
> drivers/virtio/Makefile               |    2 +
> drivers/virtio/vop.h                  |  119 ++
> drivers/virtio/vop_fsl.c              | 1911 +

make this vop_fsl_mpc83xx.c or something along those lines.

> drivers/virtio/vop_host.c             | 1028 ++
> drivers/virtio/vop_hw.h               |   80 ++
> 8 files changed, 3230 insertions(+), 0 deletions(-)
> create mode 100644 Documentation/virtio-over-PCI.txt
> create mode 100644 drivers/virtio/vop.h
> create mode 100644 drivers/virtio/vop_fsl.c
> create mode 100644 drivers/virtio/vop_host.c
> create mode 100644 drivers/virtio/vop_hw.h

- k
RE: [RFC v1] virtio: add virtio-over-PCI driver
> -----Original Message-----
> From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> Behalf Of Ira Snyder
> Sent: Wednesday, February 18, 2009 6:24 AM
> To: linux-ker...@vger.kernel.org
> Cc: linuxppc-dev@ozlabs.org; net...@vger.kernel.org; Rusty Russell;
> Arnd Bergmann; Jan-Bernd Themann
> Subject: [RFC v1] virtio: add virtio-over-PCI driver

<snip>

> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> index 3dd6294..efcf56b 100644
> --- a/drivers/virtio/Kconfig
> +++ b/drivers/virtio/Kconfig
> @@ -33,3 +33,25 @@ config VIRTIO_BALLOON
>  	  If unsure, say M.
>
> +config VIRTIO_OVER_PCI_HOST
> +	tristate "Virtio-over-PCI Host support (EXPERIMENTAL)"
> +	depends on PCI && EXPERIMENTAL
> +	select VIRTIO
> +	---help---
> +	  This driver provides the host support necessary for using virtio
> +	  over the PCI bus with a Freescale MPC8349EMDS evaluation board.
> +
> +	  If unsure, say N.
> +
> +config VIRTIO_OVER_PCI_FSL
> +	tristate "Virtio-over-PCI Guest support (EXPERIMENTAL)"
> +	depends on MPC834x_MDS && EXPERIMENTAL
> +	select VIRTIO
> +	select DMA_ENGINE
> +	select FSL_DMA
> +	---help---
> +	  This driver provides the guest support necessary for using virtio
> +	  over the PCI bus.
> +
> +	  If unsure, say N.
> +
> diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> index 6738c44..f31afaa 100644
> --- a/drivers/virtio/Makefile
> +++ b/drivers/virtio/Makefile
> @@ -2,3 +2,5 @@ obj-$(CONFIG_VIRTIO) += virtio.o
>  obj-$(CONFIG_VIRTIO_RING) += virtio_ring.o
>  obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
>  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> +obj-$(CONFIG_VIRTIO_OVER_PCI_HOST) += vop_host.o
> +obj-$(CONFIG_VIRTIO_OVER_PCI_FSL) += vop_fsl.o

I suppose we need to build the kernel twice: one kernel for vop_host (on
the host, with PCI enabled) and the other for vop_fsl (on the agent,
with PCI disabled). Is it possible to build one image for both host and
agent? We do not scan the PCI bus if the controller is configured as an
agent.

Also, is it possible to include the mpc85xx architecture? They should be
the same.

There is some code for 85xx in the Freescale BSP.
http://www.bitshrine.org/gpp/linux-fsl-2.6.23-MPC8568MDS_PCI_Agent_PCIe_EP_Drvier.patch

Roy
RE: [RFC v1] virtio: add virtio-over-PCI driver
> -----Original Message-----
> From: linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org
> [mailto:linuxppc-dev-bounces+tie-fei.zang=freescale@ozlabs.org] On
> Behalf Of Kumar Gala
> Sent: Thursday, February 19, 2009 0:47 AM
> To: Ira Snyder
> Cc: Arnd Bergmann; Jan-Bernd Themann; net...@vger.kernel.org; Rusty
> Russell; linux-ker...@vger.kernel.org; linuxppc-dev@ozlabs.org
> Subject: Re: [RFC v1] virtio: add virtio-over-PCI driver
>
> On Feb 17, 2009, at 4:24 PM, Ira Snyder wrote:
> > Documentation/virtio-over-PCI.txt     |   61 ++
> > arch/powerpc/boot/dts/mpc834x_mds.dts |    7 +
>
> we'll have to review the .dts and expect a documentation update for the
> node. But that's pretty minor at this point.
>
> > drivers/virtio/Kconfig                |   22 +
> > drivers/virtio/Makefile               |    2 +
> > drivers/virtio/vop.h                  |  119 ++
> > drivers/virtio/vop_fsl.c              | 1911 +
>
> make this vop_fsl_mpc83xx.c or something along those lines.

why?

Roy
Re: [RFC v1] virtio: add virtio-over-PCI driver
On Wednesday 18 February 2009 08:54:25 Ira Snyder wrote:
> This adds support to Linux for using virtio between two computers
> linked by a PCI interface. This allows the use of virtio_net to create
> a familiar, fast interface for communication. It should be possible to
> use other virtio devices in the future, but this has not been tested.

Hi Ira,

It's only first glance, but this looks sane. Two things on first note:
don't restrict yourself to 32 feature bits (only PCI does this, and
they're going to have to hack when we reach feature 32).

Secondly:
> +You will notice that the algorithm has no way of handling chains that are
> +not exactly the same on the host and guest system. Without setting any of
> +the fancier virtio_net features, this is the case.

Hmm, I think we can do slightly better than this.

How about prepending a 4 byte length on the host buffers? Allows host to
specify length (for host -> guest), and guest writes it to allow
truncated buffers on guest -> host.

That won't allow you to transfer *more* than one buffersize to the host,
but you could use a different method (perhaps the 4 bytes indicates the
*total* length?).

Do 4-byte DMA's suck for some reason?

Cheers,
Rusty.
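Rusty's 4-byte length prefix might look roughly like this in C; the
helper names, the little-endian-free memcpy layout, and the idea of the
prefix living in the first four bytes of the buffer are assumptions for
illustration only.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the suggestion above: each host buffer carries a 4-byte
 * prefix holding the number of payload bytes actually used.  For
 * host -> guest the host fills it in; for guest -> host the guest
 * writes it, which allows truncated (partially filled) buffers. */

static void vop_write_len_prefix(uint8_t *buf, uint32_t used)
{
	/* prefix occupies buf[0..3]; payload starts at buf[4] */
	memcpy(buf, &used, sizeof(used));
}

static uint32_t vop_read_len_prefix(const uint8_t *buf)
{
	uint32_t used;

	memcpy(&used, buf, sizeof(used));
	return used;
}
```

A real implementation would also have to pick a byte order for the
prefix so both ends agree; that detail is omitted here.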