----- Original Message -----
> From: "Dan Kenigsberg" <[email protected]>
> To: "arch" <[email protected]>
> Cc: "Livnat Peer" <[email protected]>, "Moti Asayag" <[email protected]>,
>     "Michael Pasternak" <[email protected]>
> Sent: Thursday, January 3, 2013 12:07:22 PM
> Subject: feature suggestion: in-host network with no external nics
>
> Description
> ===========
> In oVirt, after a VM network is defined at the Data Center level and
> added to a cluster, it needs to be implemented on each host. All VM
> networks are (currently) based on a Linux software bridge. The specific
> implementation controls how traffic from that bridge reaches the outer
> world. For example, the bridge may be connected externally via eth3, or
> bond3 over eth2 and p1p2. This feature is about implementing a network
> with no network interfaces (NICs) at all.
>
> Having a disconnected network may at first seem to add complexity to VM
> placement. Until now, we assumed that if a network (say, blue) is
> defined on two hosts, the two hosts lie in the same broadcast domain. If
> a couple of VMs are connected to "blue" it does not matter where they
> run - they would always hear each other. This is of course no longer
> true if one of the hosts implements "blue" as nicless.
> However, this is nothing new. oVirt never validates the single broadcast
> domain assumption, which can be easily broken by an admin: on one host,
> an admin can implement blue using a nic that has completely unrelated
> physical connectivity.
>
> Benefits
> ========
> * All-in-One http://www.ovirt.org/Feature/AllInOne use case: we'd like
>   to have a complete oVirt deployment that does not rely on external
>   resources, such as layer-2 connectivity or DNS.
> * Collaborative computing: an oVirt user may wish to have a group of VMs
>   with heavy in-group secret communication, where only one of the VMs
>   exposes an external web service. The in-group secret communication
>   could be limited to a nic-less network; no need to let it spill
>   outside.
> * [SciFi] NIC-less networks can be tunneled to remote network segments
>   over IP; a layer 2 NIC may not be part of its definition.
>
> Vdsm
> ====
> Vdsm already supports defining a network with no nics attached.
>
> Engine
> ======
> I am told that implementing this in Engine is quite a pain, as network
> is not a first-class citizen in the DB; it is more of an attribute of
> its primary external interface.
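Right - on the host side such a network is nothing more than a bridge with
no ports; the VMs' vNICs are the only devices that ever get attached to it.
A minimal sketch of what that boils down to (illustrative only, not the
actual vdsm code; the network name "blue" is just an example):

    # Illustrative sketch: a NIC-less VM network is a Linux bridge with
    # nothing enslaved to it. VM tap devices get plugged in when VMs start.
    import subprocess

    def add_nicless_network(bridge="blue"):
        # Create the bridge device itself, with no ports attached.
        subprocess.check_call(
            ["ip", "link", "add", "name", bridge, "type", "bridge"])
        # Bring it up so vNICs can be attached to it later.
        subprocess.check_call(["ip", "link", "set", bridge, "up"])

    if __name__ == "__main__":
        add_nicless_network()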
On the Engine side there is more to it than that. You may take the approach
of:

1. Configure this network statically on a host.
2. Pin the VMs to that host - since otherwise, what use is there in
   defining such a network for VMs if the scheduler is free to schedule
   them on different hosts?

Or:

1. Create this network ad-hoc according to the first VM that needs it.
2. Use the VM affinity feature to state that these VMs must run together
   on the same host.
3. Assigning such a network to these VMs automatically configures the
   affinity.

The first is simplistic and requires minimal changes to the engine (you do
need to allow a logical network (LN) as a device-less entity*); the second
approach is more robust and user-friendly but requires more work in the
engine.

On top of the above you may want to:

1. Allow this network to be NATed - libvirt already supports that - should
   be simple (a rough libvirt sketch is at the end of this mail).
2. Combine this with the upcoming IP setting for the guests - a bit more
   complex.
3. Define it as an Inter-VM-group-channel property, same as an affinity
   group, instead of explicitly defining such a network. Meaning: define a
   group of VMs, define affinity, define the Inter-VM-group-channel, define
   the group's SLA, etc. Let's admit that VMs which require this type of
   internal networking are part of a VM group that together composes a
   workload/application.

* A relatively easy change under the current modelling (a model that I
  don't like in the first place) is to define another 'NIC' of type bridge
  (the same way you have a VLAN nic, a bond nic, and a NIC nic), so that a
  'floating bridge' is a LN on the bridge NIC. Ugly, but this is the
  current modelling.

>
> This message is an html-to-text rendering of
> http://www.ovirt.org/Features/Nicless_Network
> (I like the name, it sounds like jewellery)

The name commonly used for this is 'host-only network'. Though we are
really into inventing new terminologies for things, in this case I would
rather not, since the term is already used by similar solutions (VMware,
Parallels, VirtualBox, etc.), hence it is not vendor-specific. In any case,
'Nicless' is a bad name, since the external interface may also be a bond.

> and I am sure it is missing a lot (Pasternak is intentionally CCed).
> Comments are most welcome.
>
> Dan.
> _______________________________________________
> Arch mailing list
> [email protected]
> http://lists.ovirt.org/mailman/listinfo/arch
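Re the NAT item above - roughly the kind of libvirt network definition I
have in mind. A sketch only: the network name, bridge name and addresses
are made up, and this is not something the engine or vdsm would literally
generate today.

    # Sketch of a NATed in-host network using libvirt's python bindings.
    # Everything here (network name, bridge name, subnet) is illustrative.
    import libvirt

    NET_XML = """
    <network>
      <name>blue-nat</name>
      <forward mode='nat'/>
      <bridge name='virbr-blue' stp='on' delay='0'/>
      <ip address='192.168.123.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.123.2' end='192.168.123.254'/>
        </dhcp>
      </ip>
    </network>
    """

    conn = libvirt.open("qemu:///system")
    net = conn.networkDefineXML(NET_XML)  # persist the definition
    net.setAutostart(True)                # start together with libvirtd
    net.create()                          # start it now

VMs attached to such a network can reach the outside world through the
host's IP stack while staying invisible from the outside, which is pretty
much the 'host-only network + NAT' behaviour of the other hypervisors
mentioned above.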
