Wed, Feb 28, 2018 at 04:45:39PM CET, m...@redhat.com wrote:
>On Wed, Feb 28, 2018 at 04:11:31PM +0100, Jiri Pirko wrote:
>> Wed, Feb 28, 2018 at 03:32:44PM CET, m...@redhat.com wrote:
>> >On Wed, Feb 28, 2018 at 08:08:39AM +0100, Jiri Pirko wrote:
>> >> Tue, Feb 27, 2018 at 10:41:49PM CET, kubak...@wp.pl wrote:
>> >> >On Tue, 27 Feb 2018 13:16:21 -0800, Alexander Duyck wrote:
>> >> >> Basically we need some sort of PCI or PCIe topology mapping for the
>> >> >> devices that can be translated into something we can communicate over
>> >> >> the communication channel. 
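[Editor's note: purely for illustration of what such a "topology mapping" entry
might contain — the struct name and every field below are assumptions, not an
existing kernel or QEMU format.]

#include <stdint.h>

/* Hypothetical example only: one entry pairing a passthrough VF with the
 * virtio-net device meant to act as its standby. */
struct failover_pair_entry {
	uint16_t vf_domain;      /* PCI domain of the passthrough VF         */
	uint8_t  vf_bus;         /* bus and devfn of the VF                  */
	uint8_t  vf_devfn;
	uint16_t virtio_domain;  /* location of the paired virtio-net device */
	uint8_t  virtio_bus;
	uint8_t  virtio_devfn;
	uint8_t  mac[6];         /* MAC shared by the pair, used to match    */
};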
>> >> >
>> >> >Hm.  This is probably a completely stupid idea, but if we need to
>> >> >start marshalling configuration requests/hints maybe the entire problem
>> >> >could be solved by opening a netlink socket from the hypervisor?  Even make
>> >> >teamd run on the hypervisor side...
>> >> 
>> >> Interesting. That would be trickier than just forwarding one genetlink
>> >> socket to the hypervisor.
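[Editor's note: for context, teamd talks to the kernel team driver over the
"team" generic netlink family, so "one genetlink socket" refers to a socket
like the one below. This is a rough libnl-3 sketch of ordinary local usage,
not the proposed forwarding; the function name is made up for illustration.]

#include <netlink/netlink.h>
#include <netlink/socket.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>

/* Sketch: open a generic netlink socket and resolve the "team" family.
 * Forwarding this channel to the hypervisor would mean tunnelling exactly
 * this kind of socket, which is the tricky part mentioned above. */
int open_team_genl(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	int family;

	if (!sk)
		return -1;
	if (genl_connect(sk) < 0) {
		nl_socket_free(sk);
		return -1;
	}
	family = genl_ctrl_resolve(sk, "team"); /* TEAM_GENL_NAME */
	/* ... send/receive team options on 'sk' using 'family' ... */
	nl_socket_free(sk);
	return family;
}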
>> >> 
>> >> Also, I think the solution should handle multiple guest OSes. What
>> >> I'm thinking about is a generic bonding description passed over some
>> >> communication channel into the VM. The VM either uses it for
>> >> configuration, or ignores it if it is not smart/updated enough.
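[Editor's note: as a purely hypothetical illustration of such a "generic
bonding description" — every name and field here is an assumption, not a
defined format — it could be as small as a versioned record the guest is
free to ignore.]

#include <stdint.h>

/* Hypothetical, versioned description a guest could parse or simply skip
 * if it is not new enough to understand it. */
struct guest_bond_desc {
	uint16_t version;         /* bumped whenever fields are added       */
	uint16_t mode;            /* e.g. 0 = active-backup style failover  */
	uint8_t  primary_mac[6];  /* identifies the primary (VF) device     */
	uint8_t  standby_mac[6];  /* identifies the standby (virtio) device */
};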
>> >
>> >For sure, we could build virtio-bond to pass that info to guests.
>> 
>> What do you mean by "virtio-bond"? A virtio_net extension?
>
>I mean a new device supplying topology information to guests,
>with updates whenever VMs are started, stopped or migrated.

Good. Any idea what that device would look like? Also, any idea how to
handle it in the kernel and how to pass this info along to userspace?
Is there anything similar out there?
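[Editor's note: pure speculation about one possible shape for such a
"virtio-bond" device — all names and fields below are assumptions, no such
device type is actually defined: a read-only config space listing device
pairings, with the standard virtio config-changed interrupt used to signal
updates when VMs are started, stopped or migrated.]

#include <stdint.h>

struct virtio_bond_pair {
	uint8_t  primary_mac[6];   /* passthrough device to prefer      */
	uint8_t  standby_mac[6];   /* virtio-net device to fall back to */
	uint16_t flags;
};

struct virtio_bond_config {
	uint16_t num_pairs;              /* valid entries below             */
	struct virtio_bond_pair pairs[]; /* guest re-reads after config IRQ */
};

A guest driver for such a device could then hand the pairings to userspace
over netlink, uevents or sysfs; whether that is the right split is exactly
the open question above.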

Thanks!
