Re: Would there be interest in virtualization of the ixgbe driver?
On 1/5/2011 12:50 AM, Ryan Stone wrote:
> The way that I envision this working is that you'd run something like
> "ifconfig vix0 create parent ix1" to create a new virtual interface
> sharing the same physical interface as ix1. From that point on, vix0
> would be a completely different interface from ix1, with its own MAC,
> vlan table, IPs, etc.
>
> Any comments as to whether this would be useful (or useless) would be
> welcome.

Speaking for myself, I would say yes, it sounds very interesting. Currently the same result can be achieved by assigning a pseudo-ethernet interface to a vnet and bridging it to a physical ethernet interface. It would be nice to offload some of that to the hardware.

Yet I don't know whether the number of changes to the infrastructure is worth the labor for just one specific piece of hardware. Is ixgbe the only hardware that supports such things, or is this a trend for the future?

As a virtualization user, I find it most useful.

Nikos
___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to freebsd-virtualization-unsubscr...@freebsd.org
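For reference, the software-only setup Nikos describes looks roughly like the following sketch (the jail name and interface numbers are illustrative; this assumes a VIMAGE-enabled kernel):

```shell
# Create a pseudo-ethernet pair and a bridge tying it to the physical NIC.
ifconfig epair0 create
ifconfig bridge0 create
ifconfig bridge0 addm ix1 addm epair0a up
ifconfig epair0a up

# Start a vnet jail and move one end of the epair into its network stack.
jail -c name=vjail1 vnet persist
ifconfig epair0b vnet vjail1

# Inside the jail, epair0b is configured like any normal interface.
jexec vjail1 ifconfig epair0b inet 192.0.2.10/24 up
```

All bridging and demultiplexing here happens in software on the host, which is exactly the work the proposed vix interfaces would push down into the NIC.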
Re: Would there be interest in virtualization of the ixgbe driver?
While it seems interesting in theory, from what Ryan has told me it would require massive changes to the code base, which I do not think is worthwhile without significant demand. This ability could be provided with SR-IOV host support, which I would rather see. I'm still willing to look at the changes and decide then, if Ryan wishes.

Jack

On Thu, Jan 13, 2011 at 7:46 AM, Nikos Vassiliadis nvass9...@gmx.com wrote:
> On 1/5/2011 12:50 AM, Ryan Stone wrote:
>> The way that I envision this working is that you'd run something like
>> "ifconfig vix0 create parent ix1" to create a new virtual interface
>> sharing the same physical interface as ix1. [...]
>
> Speaking for myself, I would say yes, it sounds very interesting.
> Currently the same result can be achieved by assigning a pseudo-ethernet
> interface to a vnet and bridging it to a physical ethernet interface. It
> would be nice to offload some of that to the hardware. [...]
Re: Would there be interest in virtualization of the ixgbe driver?
On Tue, Jan 4, 2011 at 4:50 PM, Ryan Stone ryst...@gmail.com wrote:
> At $WORK I've implemented an extension of the ixgbe driver that provides
> multiple virtualized ixgbe interfaces. The implementation uses the
> 8259[89]'s virtualization features, so the rx and tx paths of the
> virtual interfaces are completely independent. From the perspective of
> everything above the ixgbe driver, it's as if there are multiple
> physical interfaces present. [...]
>
> The way that I envision this working is that you'd run something like
> "ifconfig vix0 create parent ix1" to create a new virtual interface
> sharing the same physical interface as ix1. From that point on, vix0
> would be a completely different interface from ix1, with its own MAC,
> vlan table, IPs, etc.

It would be nice to split up the hardware for use with vnet jails.

The virtualization technique you are describing sounds similar to how network device virtualization is done in the Solaris Project Crossbow implementation. Can you comment on this? In other words, would we have the ability to tie a vnet jail to specific hardware resources (Rx/Tx rings with their own DMA channels and interrupts, etc.)?
I'm sorry, I don't have a link to the Project Crossbow features to which I'm referring.

-Brandon
Re: Would there be interest in virtualization of the ixgbe driver?
On Thu, Jan 13, 2011 at 3:04 PM, Brandon Gooch jamesbrandongo...@gmail.com wrote:
> It would be nice to split up the hardware for use with vnet jails.
>
> The virtualization technique you are describing sounds similar to how
> network device virtualization is done in the Solaris Project Crossbow
> implementation. Can you comment on this?

It looks like what I've done is implement what Project Crossbow calls L2 virtualization.

> In other words, would we have the ability to tie a vnet jail to specific
> hardware resources (Rx/Tx rings with their own DMA channels and
> interrupts, etc.)?

Exactly right. And each rx ring has a unique MAC, so that's how incoming packets are multiplexed across multiple rings (and ultimately vnets).

Also, you can use RSS on top of VMDq. To use the terminology of the 82599's datasheet, each MAC (and vnet) would be associated with a pool of one or more rx and tx rings. Packets are multiplexed across the pools by MAC, and then across the rx rings within a pool by a hash over the IP addresses and the TCP/UDP ports. All of this, of course, is subject to the limits of the hardware. The 82598 is quite restrictive: something like 16 pools and up to 4 rings per pool. The 82599 has many more pools and queues to work with.

On Thu, Jan 13, 2011 at 10:46 AM, Nikos Vassiliadis nvass9...@gmx.com wrote:
> Yet I don't know whether the number of changes to the infrastructure is
> worth the labor for just one specific piece of hardware. Is ixgbe the
> only hardware that supports such things, or is this a trend for the
> future?

Basically all of the changes are within the ixgbe driver; no infrastructure should have to change to support the feature. Also, Project Crossbow was implemented for a number of different drivers, including ixgbe and igb, so it should be possible to implement similar features for other drivers.
However, this will always end up being quite hardware-specific, so while it'd probably be possible to use the same concepts across different drivers, it would have to be re-implemented for each driver. The if_cloner used to create the virtual ifnets could be shared, but that's probably 1% of the work.
Would there be interest in virtualization of the ixgbe driver?
At $WORK I've implemented an extension of the ixgbe driver that provides multiple virtualized ixgbe interfaces. The implementation uses the 8259[89]'s virtualization features, so the rx and tx paths of the virtual interfaces are completely independent. From the perspective of everything above the ixgbe driver, it's as if there are multiple physical interfaces present.

The use-case for the feature at $WORK is very specific to our architecture, but I can imagine that having hardware-based virtual interfaces could be useful with jails, vnet, or when using FreeBSD as the host OS for something like VirtualBox. I'm really not very familiar with what people do or want to do with virtualization on FreeBSD, so I don't have any kind of idea as to whether this feature could be useful to the community.

Currently the code is not in a state that could be submitted to jfv@ for consideration: I disabled certain features like RSS because I didn't need them in my implementation, and interfaces can only be created at boot (via a tunable). Before I start working on cleaning it up, I want to know if people think that such a feature would be worthwhile or useful to them.

The way that I envision this working is that you'd run something like "ifconfig vix0 create parent ix1" to create a new virtual interface sharing the same physical interface as ix1. From that point on, vix0 would be a completely different interface from ix1, with its own MAC, vlan table, IPs, etc.

Any comments as to whether this would be useful (or useless) would be welcome.
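To make the envisioned workflow concrete, a session might look like the sketch below. Note that the "vix" name and the "parent" keyword are part of this proposal, not an existing ifconfig feature, and the MAC and addresses are illustrative:

```shell
# Hypothetical: clone a virtual interface backed by one of ix1's VMDq pools.
ifconfig vix0 create parent ix1

# vix0 gets its own MAC, which the NIC uses to steer incoming frames
# to this interface's rx rings.
ifconfig vix0 ether 02:00:00:00:00:01
ifconfig vix0 inet 192.0.2.20/24 up

# Because vix0 is an ordinary ifnet, it could be handed whole to a
# vnet jail, giving the jail dedicated hardware rings and interrupts.
jail -c name=vjail1 vnet persist
ifconfig vix0 vnet vjail1
```

The point of the design is that everything after the "create" step is just normal ifnet administration; the hardware demultiplexing stays invisible to the stack above the driver.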