Re: [Users] oVirt and Quantum
On 05/15/2012 05:34 PM, Gary Kotton wrote:
...
>> 2. host management -- interface
>> 2.1 you are suggesting to replace the vlan field with a fabric field? are we sure this is something which shouldn't be presented in the main view and only via extended properties?
> Yes, this is what I am suggesting. VLAN tagging is one method of network isolation. There are more, for example GRE tunneling.

that doesn't mean this is not an important enough piece of information to have in the subtab, rather than in an additional dialog (the fabric type could be represented as an icon, for example). miki/andrew/einav - thoughts on the UI aspect of this?

>> 2.2 is the fabric really a host-interface-level property, or a cluster-wide property (would the same logical network be implemented on different hosts in the cluster via both vdsm and quantum)? would live migration work in the same cluster if one host has a quantum fabric and another a vdsm fabric for the same logical network?
> These are all up for discussion. I just wanted to provide something that we can start to work with. It would be nice if there was some input from product management on this issue (not exactly sure how this works in an open source community :))

This goes to the assumptions on quantum integration and capabilities. I'm missing this part in the beginning of the feature page. if quantum can support multiple types, and live migration between them, then it can be a host-level config. if live migration wouldn't work, it would have to be a cluster-level config. if it is limited to a single technology, it is more of a system-level config. (and actually, after reading your reply to 3.7.1, i understand the plugin is an interface-level config, since you can have more than a single one on the same host). the same goes for the implications for the logical network entity (see my comment on 2.4 below). I'd also like to understand in that regard how we match up a host reporting which specific technology it can support (i.e., has installed and configured).
i.e., there needs to be some correlation between which logical networks can be deployed on which plugin. so we also need to understand how this correlation is done automatically between a logical network and the fabric/interface it belongs to. (I admit your reply to 3.7.1 confused me as to how quantum will decide on which plugin to create a specific attachment)

>> 2.3 you are describing the subtab of an interface, and mention a quantum pull-down menu. is this in the subtab? in a dialog? UI mockups are needed here.
> I think that prior to getting mockups, we should ensure that we have the idea crystallized.

I can live with that; goes back to what i wrote in 2.2.

>> 2.4 also, is this pulldown correct at host level or cluster level (would live migration work between hosts implementing different plugins for the same logical network in the same cluster?)

as i stated above - this has to be clarified in the assumptions. it doesn't have to be the current state, rather a clear understanding of where it is going, or of what makes sense. (I am not sure requiring live migration between multiple technologies makes sense, since implementations of the logical network could be based on different concepts (vlan vs. gre). however, this means we need to define whether the logical network (which is not limited to a cluster) should be limited to clusters supporting these technologies. i.e., a vlan-based logical network can cross (in different clusters) UCS, openvswitch and bridges, but a gre-based logical network cannot cross all of them, etc.)

>> on to backend:
>> 3.1 "The Quantum Service is a process that runs the Quantum API web server (port 9696)" - how is authentication between the engine and the quantum service done (we have a client certificate for the engine which quantum can verify, i guess)?
> In my opinion this should be running locally on the same host as the engine.

even on the same server, we have authentication between components running in different processes.
on top of that, one of our main goals is to allow multiple engines running side by side for scale. maybe quantum will have to be in active/passive mode for a while, but multiple nodes should be able to access it. there is no need for fancy authentication/authorization - simply trusting the ovirt engine CA certificate and validating the engine client certificate should suffice (and i assume this can be done via config in the http layer of quantum). also, this can be a phase 2, but let's have a list of what is a must for merging the code (this isn't), for deployment (phase 2), and beyond that (phase 3). under deployment i'd expect not requiring a different db technology, integrating with the installer, an upgrade path, cleaner and secure communication channels, etc.

>> 3.2 is there a config of the quantum service uri? should it be a single instance or a per-DC property? (it sounds like a per-DC property; can be a future thing)
> I am sorry but I do not understand the question.

even if quantum is running on same
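The "trust the ovirt engine CA, validate the engine client certificate" scheme described above could be sketched as follows, assuming the Quantum service's HTTP layer can be handed a TLS context. The helper name and the CA file handling are hypothetical, not anything from the feature page:

```python
import ssl

def engine_trust_context(engine_ca_file=None):
    """Server-side TLS context for the Quantum service: accept only
    clients that present a certificate signed by the oVirt engine CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a certificate
    if engine_ca_file:
        # trust only the engine CA, not the system-wide bundle
        ctx.load_verify_locations(cafile=engine_ca_file)
    return ctx
```

No application-level credentials are involved; the whole check lives in the TLS layer, which matches the "no fancy authentication/authorization" point above.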
Re: [Users] oVirt and Quantum
On 04/29/2012 01:41 PM, Gary Kotton wrote:
> Hi, As part of a POC we have integrated Quantum (http://wiki.openstack.org/Quantum) into oVirt (http://www.ovirt.org/). This has been tested with the OVS and Linux Bridge plugins. The details of the integration can be found at - https://fedoraproject.org/wiki/Quantum_and_oVirt. Any comments and suggestions would be greatly appreciated. Thanks Gary

Thanks Gary - some questions:

1. you are using the term 'user' for both an end-user and an admin. I think admin is more appropriate in most places.

2. host management -- interface
2.1 you are suggesting to replace the vlan field with a fabric field? are we sure this is something which shouldn't be presented in the main view and only via extended properties?
2.2 is the fabric really a host-interface-level property, or a cluster-wide property (would the same logical network be implemented on different hosts in the cluster via both vdsm and quantum)? would live migration work in the same cluster if one host has a quantum fabric and another a vdsm fabric for the same logical network?
2.3 you are describing the subtab of an interface, and mention a quantum pull-down menu. is this in the subtab? in a dialog? UI mockups are needed here.
2.4 also, is this pulldown correct at host level or cluster level (would live migration work between hosts implementing different plugins for the same logical network in the same cluster?)

on to backend:
3.1 "The Quantum Service is a process that runs the Quantum API web server (port 9696)" - how is authentication between the engine and the quantum service done (we have a client certificate for the engine which quantum can verify, i guess)?
3.2 is there a config of the quantum service uri? should it be a single instance or a per-DC property? (it sounds like a per-DC property; can be a future thing)
3.3 network creation semantics: we create a network at the DC level. we attach it at the cluster level. the UI allows you to create at DC and attach to cluster at cluster level.
3.3.1 what if i create a network with the same VLAN in two DCs? what if we attach the same logical network to multiple clusters - will each create the logical network in quantum again?
3.3.2 what if the quantum service is down when the engine performs this action, or quantum returns an error?
3.3.3 each creation of a DC creates a rhevm network - I assume these are not created, since there is no host in a cluster in the DC yet?
3.3.4 what happens on moving a host to another cluster, or moving a cluster to another DC (possible if its DC was removed, iirc)?
3.3.5 shouldn't this be limited to vm networks, or are all networks relevant?
3.3.6 what if the host with the quantum fabric is in maintenance mode?
3.3.7 could a host with a quantum fabric have a VDSM fabric on another interface (for vm networks? for non-vm networks)?

3.5 network deletion (detach network from a cluster)
3.5.1 what happens if quantum is down or returned an error?

3.6 vm creation
3.6.1 what about moving a VM from/to a cluster with a quantum fabric?
3.6.2 what about import of a VM into a cluster with a quantum fabric?
3.6.3 you have vm creation/removal - missing vnic addition/removal
3.6.4 worth mentioning quantum doesn't care about the vnic model? about the mac address?

3.7 vm start
3.7.1 why is the plugin parameter required at vm start? can multiple plugins be supported in parallel at the vdsm/host level or by the quantum service?
3.7.2 aren't network_uuid and port uuid redundant to the attachment uuid (I assume the quantum service knows the port and network from the attachment)? i have no objection to passing them to vdsm, just trying to understand the reasoning for this. I am missing what happens at the vdsm level at this point (even after reading the matching vdsm part)

3.8 vm suspend/resume (it's fine to say it behaves like X, but it still needs to be covered to not cause bugs/regressions)

3.9 vm stop
3.9.1 need to make sure vm stop when the engine is down is handled correctly.
3.9.2 what happens if the quantum service is down?
unlike start vm or network creation, the operation in this case cannot be stopped/rolled back, only rolled forward.

3.10 hot plug nic?

3.11 vm migration
3.11.1 ok, so this is the first time i understand hosts can be mixed in the same cluster. worth specifying this in the beginning (still not clear if mixed plugins can exist)
3.11.2 afair, we don't deal with the engine talking to both hosts during live migration - only to host A, who is then communicating with host B. so why not always have the VM configuration at vm start (and hot plug nic) include the quantum details, so live migration can occur at will without additional information?
3.11.3 "In order to implement the above a REST client needs to be implemented in the oVirt engine." - did not understand this statement - please elaborate.

4. host management
4.1 deployment
we do not deploy packages from the engine to hosts; we can install them from a repo configured on the host. but this is done today only as part of initial
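For context on 3.11.3: a REST client in the engine would essentially build HTTP calls against the Quantum API web server on port 9696 (see 3.1). A minimal request-building sketch follows; the /v1.1/tenants/... path layout, the host name, and the body shape are assumptions for illustration, not taken from the feature page:

```python
import json

QUANTUM_PORT = 9696  # the Quantum API web server port mentioned in 3.1

def create_network_request(tenant_id, net_name, host="localhost"):
    """Build the (method, url, body) triple for a 'create network' call.
    Actually sending it (and handling a down quantum service, per 3.3.2)
    is left to the engine's HTTP machinery."""
    url = "https://%s:%d/v1.1/tenants/%s/networks" % (host, QUANTUM_PORT, tenant_id)
    body = json.dumps({"network": {"name": net_name}})
    return "POST", url, body
```

Keeping request construction separate from transmission makes the rollback/roll-forward questions above (3.3.2, 3.9.2) easier to reason about, since a failed send leaves a retryable request behind.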
Re: [Users] oVirt and Quantum
Hi, Please see my inline comments. There are quite a few. Thanks Gary

On 05/15/2012 01:38 PM, Itamar Heim wrote:
> On 04/29/2012 01:41 PM, Gary Kotton wrote:
>> Hi, As part of a POC we have integrated Quantum (http://wiki.openstack.org/Quantum) into oVirt (http://www.ovirt.org/). This has been tested with the OVS and Linux Bridge plugins. The details of the integration can be found at - https://fedoraproject.org/wiki/Quantum_and_oVirt. Any comments and suggestions would be greatly appreciated. Thanks Gary
> Thanks Gary - some questions:
> 1. you are using the term 'user' for both an end-user and an admin. I think admin is more appropriate in most places.
> 2. host management -- interface
> 2.1 you are suggesting to replace the vlan field with a fabric field?

Yes, this is what I am suggesting. VLAN tagging is one method of network isolation. There are more, for example GRE tunneling.

> are we sure this is something which shouldn't be presented in the main view and only via extended properties?
> 2.2 is the fabric really a host-interface-level property, or a cluster-wide property (would the same logical network be implemented on different hosts in the cluster via both vdsm and quantum)? would live migration work in the same cluster if one host has a quantum fabric and another a vdsm fabric for the same logical network?

These are all up for discussion. I just wanted to provide something that we can start to work with. It would be nice if there was some input from product management on this issue (not exactly sure how this works in an open source community :))

> 2.3 you are describing the subtab of an interface, and mention a quantum pull-down menu. is this in the subtab? in a dialog? UI mockups are needed here.

I think that prior to getting mockups, we should ensure that we have the idea crystallized.

> 2.4 also, is this pulldown correct at host level or cluster level (would live migration work between hosts implementing different plugins for the same logical network in the same cluster?)
> on to backend:
> 3.1 "The Quantum Service is a process that runs the Quantum API web server (port 9696)" - how is authentication between the engine and the quantum service done (we have a client certificate for the engine which quantum can verify, i guess)?

In my opinion this should be running locally on the same host as the engine.

> 3.2 is there a config of the quantum service uri? should it be a single instance or a per-DC property? (it sounds like a per-DC property; can be a future thing)

I am sorry but I do not understand the question.

> 3.3 network creation semantics: we create a network at the DC level. we attach it at the cluster level. the UI allows you to create at DC and attach to cluster at cluster level.
> 3.3.1 what if i create a network with the same VLAN in two DCs? what if we attach the same logical network to multiple clusters - will each create the logical network in quantum again?

If I understand you correctly then I think that one Quantum network can be created. This should work. In my opinion this should be managed by the oVirt engine, so the actual creation is not an issue. Ensuring that each cluster has the correct network management configurations is what is important.

> 3.3.2 what if the quantum service is down when the engine performs this action, or quantum returns an error?

The engine should be able to deal with this - similar to the way in which it deals with a network creation when VDSM is down.

> 3.3.3 each creation of a DC creates a rhevm network - I assume these are not created, since there is no host in a cluster in the DC yet?

This functionality can remain the same.

> 3.3.4 what happens on moving a host to another cluster, or moving a cluster to another DC (possible if its DC was removed, iirc)?

The Quantum agents take care of this. On each host there will be a Quantum agent that handles the network management.

> 3.3.5 shouldn't this be limited to vm networks, or are all networks relevant?
I think all Quantum networks are relevant.

> 3.3.6 what if the host with the quantum fabric is in maintenance mode?

I do not understand. When a host is in maintenance mode, do VMs receive traffic? The Quantum ports for the VMs can be set as DOWN.

> 3.3.7 could a host with a quantum fabric have a VDSM fabric on another interface (for vm networks? for non-vm networks)?

Yes. This is something that I would like. I would also like the Quantum API to be updated so that we can indicate the physical network interface that the Quantum network will be running on.

> 3.5 network deletion (detach network from a cluster)
> 3.5.1 what happens if quantum is down or returned an error?

The engine should be able to deal with this - similarly to what it does today.

> 3.6 vm creation
> 3.6.1 what about moving a VM from/to a cluster with a quantum fabric?

I do not see a problem here. The agents running on VDSM will detect and treat this accordingly.

> 3.6.2 what about import of a VM into a cluster with a quantum fabric?

Same as above.

> 3.6.3 you have vm creation/removal - missing vnic
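Going back to 3.3.6 above: marking a VM's Quantum port DOWN during host maintenance would be one more call against the same API. A request-building sketch follows; the path layout and the "state" attribute are assumptions modelled on the early Quantum v1.x API, and the helper name is hypothetical:

```python
import json

def set_port_state_request(tenant_id, net_id, port_id, state, host="localhost"):
    """Build the (method, url, body) triple that marks a Quantum port
    ACTIVE or DOWN, e.g. when its host enters maintenance mode."""
    url = "https://%s:9696/v1.1/tenants/%s/networks/%s/ports/%s" % (
        host, tenant_id, net_id, port_id)
    return "PUT", url, json.dumps({"port": {"state": state}})
```

The engine (or the per-host agent) would issue one such PUT per vnic of each VM on the host entering maintenance, and the reverse when it activates again.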