On 11/28/2012 04:33 PM, Mike Kolesnik wrote:
----- Original Message -----
On 11/28/2012 03:46 PM, Livnat Peer wrote:
On 28/11/12 14:00, Gary Kotton wrote:
On 11/28/2012 01:34 PM, Livnat Peer wrote:
On 27/11/12 16:34, Gary Kotton wrote:
On 11/27/2012 04:06 PM, Mike Kolesnik wrote:
Thanks for the reply,
Please see comments inline

Hi Gary,
Thanks for your input, see my comments inline.

Livnat

----- Original Message -----
On 11/27/2012 03:01 PM, Livnat Peer wrote:
Hi All,
Mike Kolesnik and I have been working on a proposal for integrating
Quantum into oVirt over the past few weeks.
We decided to focus our efforts on integrating with Quantum
services, starting with the IP address management (IPAM) service.

Here is a link to our proposal:
http://wiki.ovirt.org/wiki/Quantum_IPAM_Integration

As usual comments are welcome,
Please see my comments below:

i. The Quantum diagram is incorrect. It is the same message queue
that passes the notifications. This is done by a message broker. In
RH we support Qpid, and upstream in the community RabbitMQ is used.
I will fix the diagram accordingly
Thanks
ii. The DHCP agent is pluggable. That is, there may be more than one
implementation. At the moment only dnsmasq is supported. There was a
company working on ISC support upstream but they stopped due to
problems they encountered.
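To give a feel for the pluggability, the driver contract is roughly
the following (a minimal sketch; the class and method names are
illustrative, not the exact Quantum code):

    import abc

    class DhcpDriverBase(object):
        """Illustrative base for a pluggable DHCP backend
        (dnsmasq, ISC, ...)."""
        __metaclass__ = abc.ABCMeta

        def __init__(self, conf, network):
            self.conf = conf        # agent configuration
            self.network = network  # the network this process serves

        @abc.abstractmethod
        def enable(self):
            """Spawn the DHCP process for self.network."""

        @abc.abstractmethod
        def disable(self):
            """Kill the DHCP process for self.network."""

        @abc.abstractmethod
        def reload_allocations(self):
            """Re-read host/IP allocations after a port change."""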
iii. Layer 3 driver. This is incorrect. The layer 2 agent does the
network connectivity. The layer 3 agent provides floating IP support.
This is something that you may want to consider too. It is related to
IPAM.
From what we gathered from the code, the DHCP Agent is communicating
with (an implementation of) the LinuxInterfaceDriver, which is not
the same as the layer 2 agent used in the plugin.
Correct. The DHCP agent needs to create the relevant interfaces. The
layer 2 agent is responsible for attaching these interfaces to the
network.

For example, looking at Linux bridge, the plugin has the Linux bridge
Quantum agent that is part of the Linux bridge plugin, and it has
(what we called the 'layer 3 driver') a BridgeInterfaceDriver that is
used within the DHCP Agent.
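To be concrete, the driver we mean does roughly the following (a
simplified sketch; the real code goes through Quantum's ip_lib
helpers rather than shelling out like this):

    import subprocess

    class BridgeInterfaceDriverSketch(object):
        """Rough sketch of what the bridge 'layer 2 driver' does
        for the DHCP Agent."""

        def get_device_name(self, port_id):
            # e.g. 'tap' + a prefix of the port UUID, truncated
            return ('tap' + port_id)[:14]

        def plug(self, bridge_name, device_name, mac_address):
            # create the tap device and enslave it to the bridge
            subprocess.check_call(['ip', 'tuntap', 'add', 'dev',
                                   device_name, 'mode', 'tap'])
            subprocess.check_call(['ip', 'link', 'set', device_name,
                                   'address', mac_address])
            subprocess.check_call(['brctl', 'addif', bridge_name,
                                   device_name])
            subprocess.check_call(['ip', 'link', 'set', device_name,
                                   'up'])

        def init_l3(self, device_name, ip_cidr):
            # give the device the IP the DHCP port was allocated
            subprocess.check_call(['ip', 'addr', 'add', ip_cidr,
                                   'dev', device_name])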

Maybe we used misleading terminology, but 'layer 2 agent' is also
misleading, IMO, as it is already used in the plugin context and this
is not the same component.

We'll update the doc to call it 'layer 2 driver'.

iv. I am not really sure I understand your picture with server B and
get/create network. This is not really what happens. If you want I
can explain.
We saw that the DHCP Agent is trying to create the network
interface
if it doesn't exist (in DeviceManager.setup which is called as
part of
"enable_dhcp_helper").

If you want to elaborate on this, please do.
The DHCP agent will create a device that is used by the dnsmasq
process. The creation is done according to a driver that is used for
the underlying L2 implementation. It does not have anything to do
with the layer 3 agent.
Again the same terminology misunderstanding.

It creates a network device and assigns it an IP address.
The layer 2 agent (if there is one) will attach this device to
the
underlying virtual network.
This is our understanding and what we have described in the wiki
page; do you see something wrong there?

Prior to doing anything, the DHCP agent will create a Quantum port on
the subnet. This is how it receives its own IP address.
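Through the API that reservation looks roughly like this (a sketch
using python-quantumclient; the credentials and the network UUID are
placeholders):

    from quantumclient.v2_0 import client

    qc = client.Client(username='admin', password='secret',
                       tenant_name='admin',
                       auth_url='http://127.0.0.1:5000/v2.0/')

    net_id = 'REPLACE-WITH-NETWORK-UUID'  # placeholder

    # The agent reserves a port for itself on the network; the fixed
    # IP that comes back is the address its dnsmasq will listen on.
    port = qc.create_port({'port': {
        'network_id': net_id,
        'device_owner': 'network:dhcp',  # marks it as the DHCP port
        'admin_state_up': True,
    }})['port']
    dhcp_ip = port['fixed_ips'][0]['ip_address']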

v. What do you mean by "the port is then part of the Quantum DB"?
Not all plugins maintain a database.
True but if it's not saved somewhere then how does the Agent
know
which IP to assign to which MAC?
The DHCP agent is notified by the Quantum service of a new port
allocation. It is passed the port details - the MAC address and the
IP address. The plugin may not use a database that one can access.
All of the interfacing with the data is done via the Quantum API.
For example, NVP.
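Schematically the agent-side handling looks like this (a sketch; the
method name, cache and call_driver helpers are assumed for
illustration):

    class DhcpAgentSketch(object):
        """Fragment of the agent's notification handling."""

        def port_create_end(self, payload):
            # payload follows the v2 API resource format
            port = payload['port']
            network = self.cache.get_network_by_id(port['network_id'])
            if network:
                self.cache.put_port(port)
                # rewrite dnsmasq's host file ("<mac>,<name>,<ip>"
                # lines) and have the driver reload it
                self.call_driver('reload_allocations', network)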

vi. I think that you are missing useful information about the
subnets and gateways. This is also a critical part of IPAM. When a VM
sends a DHCP request it not only gets an IP but it can also receive
host route information. This is very important.
Can you please elaborate on this?
When you reboot your computer at work you get access to the
internet. This is done via DHCP. You get an IP address and all of the
relevant routes configured. The subnet data has 'host_routes', which
is also used by dnsmasq. There can be more than one route configured.
The subnet also contains the gateway IP.
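All of this rides on the subnet definition; for example (a
python-quantumclient sketch, addresses and UUID are made up):

    from quantumclient.v2_0 import client

    qc = client.Client(username='admin', password='secret',
                       tenant_name='admin',
                       auth_url='http://127.0.0.1:5000/v2.0/')

    net_id = 'REPLACE-WITH-NETWORK-UUID'  # placeholder

    subnet = qc.create_subnet({'subnet': {
        'network_id': net_id,
        'ip_version': 4,
        'cidr': '10.0.0.0/24',
        'gateway_ip': '10.0.0.1',
        'dns_nameservers': ['10.0.0.2'],
        'host_routes': [{'destination': '192.168.1.0/24',
                         'nexthop': '10.0.0.254'}],
        'enable_dhcp': True,
    }})['subnet']
    # dnsmasq hands out the gateway, the DNS server and the extra
    # route (DHCP option 121) together with the lease.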

We assumed that when creating the subnet in Quantum it would update
the DHCP Agent with all the information oVirt provides as part of the
subnet details (dns_nameservers, host_routes, gateway_ip, etc.).
Isn't this the case?
Yes it is. I was misled, as the wiki only referred to Quantum ports
and not subnets. If I understand correctly then you will be using the
entire Quantum service? Will this include floating IPs, security
groups, etc.?
I did not review the security groups and floating IPs provided by
Quantum. Can you please point us to documentation on these?
Security group support is in development -
https://blueprints.launchpad.net/quantum/+spec/quantum-security-groups
Floating IP's -
http://docs.openstack.org/trunk/openstack-network/admin/content/index.html

vii. The DHCP agent dynamics are incorrect (L3 agent, write port
definitions, etc.). One of the pain points is that there is a process
for each Quantum network. This is a scale issue and is being
discussed upstream.
This is what we saw happening in the code; if we are wrong please
explain the right behaviour of the DHCP Agent.
For each network that has one or more subnets with DHCP support
a
dnsmasq process is created. Please see http://fpaste.org/IHbA/.
Here I
have two networks.
That's exactly what we have described in the wiki: dnsmasq per
network. In the integration with oVirt we planned that the oVirt
layer 2 driver will not return an interface_name where there is no
need for dnsmasq locally on the host.
I do not think that this will work - you will need to attach the
dnsmasq to the network. At the moment Quantum does not run dnsmasq on
the compute nodes. There is a notion of a network node where various
services can run. One of the issues that we are dealing with at the
moment is HA and scale for the DHCP agents. At the moment only one
DHCP agent can run. The open issue is that if the DHCP agent sees
thousands of networks then it will create a dnsmasq process for each
network, exhausting the node's local resources.

That's exactly the problem we addressed in our proposal.
In the integration proposal we'll deploy the DHCP Agent on the hosts
(where and when is defined via the setupDHCP API we added to oVirt),
and we'll have more than one DHCP Agent - each DHCP Agent will manage
the networks available on the host it is deployed on.
At the moment Quantum only works with one DHCP agent. There is work
upstream on improving this.
So if I run the Quantum service and 2 instances of the DHCP Agent on
different machines, this is not supported? (And if it's not, can you
please elaborate on what the problem is?)

There is a bug in the underlying Oslo RPC implementation that sets
the topic and queue names to the same value. Hence a notification
will only be sent to one agent and not all.
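In AMQP terms the problem is this (a kombu sketch of the semantics,
not the Oslo code itself; the names are illustrative):

    from kombu import Exchange, Queue

    exchange = Exchange('quantum', type='topic')

    # Bug: every agent consumes from the SAME named queue, so the
    # broker round-robins each message to exactly one of them.
    shared = Queue('dhcp_agent', exchange, routing_key='dhcp_agent')

    # For every agent to see the message, each needs its own queue
    # bound to the same routing key:
    per_host = Queue('dhcp_agent.host1', exchange,
                     routing_key='dhcp_agent')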

The wiki page describes how we intend to do it.
We actually leverage the fact that the DHCP Agent and the Quantum
service don't have to be co-located on the same machine, and we added
logic in oVirt to decide where and when to deploy the DHCP Agent.

The DHCP Agent will get a notification for each network created in
Quantum, but when delegating the call to the layer 2 (oVirt) driver
we'll create devices only for the networks we'd like to control from
that DHCP instance.
I am not sure what the layer 2 oVirt driver is. Does this mean that
you will not be using the Quantum L2 agents? If so then this may not
work. First I need a clarification, then I can explain.

In case we don't create a device, we would like the DHCP Agent to
avoid spawning a dnsmasq (which is the code we'll contribute to
Quantum).
The DHCP agent creates the device. I do not understand how you will
decide whether or not to create the device. One thing that you should
take into account is HA for the solution. Let's say your DHCP agent
on the node freezes - how will launched VMs get their IP addresses?
This requires a patch to Quantum so that if the driver returns an
empty device name, dnsmasq won't be started.
I am not sure that I understand. The DHCP agent has to create a
device
to interface with the outside world. If the device fails to be
created
then the dnsmasq process will not be spawned.

If the device fails to be created and an exception is raised then
dnsmasq is not started (there is a retry in a loop via the error
handling in the notification layer), but we suggest that if the
driver returns an empty device name the DHCP Agent won't spawn a
dnsmasq process, as it would have no meaning.
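The patch we have in mind is essentially this (a simplified sketch of
the agent-side logic; device_manager and spawn_process stand in for
the real helpers):

    class DnsmasqDriverSketch(object):
        """Fragment showing the proposed behaviour."""

        def enable(self):
            interface_name = self.device_manager.setup(self.network)
            if not interface_name:
                # the (oVirt) driver returned an empty device name -
                # nothing for dnsmasq to bind to here, so skip it
                return
            self.spawn_process(interface_name)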
OK. I am not sure how this will be accepted upstream, as the DHCP
agent is the one requesting to create the interface; the logic seems
a tad odd. As mentioned above, there is work upstream on this at the
moment - there are a number of options in debate. One is to have a
scheduler that decides where to run the agents. Another is to
indicate to the DHCP agent which networks to handle - that is, if it
receives a notification for a network that it does not "own" then it
will ignore it.
Yes, this is essentially our proposal.

OK, this was certainly not clear from the wiki :)


We'll use the above behavior in the oVirt driver to control which
dnsmasq is spawned on the server the DHCP Agent is deployed on.

We'll send a patch for that soon.
I added that to the wiki as well.

viii. The claim that Quantum requires homogeneous hardware is
incorrect. There is something called a provider network that
addresses this.
Can you please elaborate?
When you create a network you can indicate which NIC connects to
the
outside world. If you look at
http://wiki.openstack.org/ConfigureOpenvswitch then you will see
the
bridge mappings. This information is passed via the API.
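For example, with a mapping such as physnet1:br-eth1 on the hosts,
the API side looks roughly like this (a python-quantumclient sketch
using the provider extension attributes; names and credentials are
placeholders):

    from quantumclient.v2_0 import client

    qc = client.Client(username='admin', password='secret',
                       tenant_name='admin',
                       auth_url='http://127.0.0.1:5000/v2.0/')

    # Pin the network to a named physical network; only hosts whose
    # agent maps 'physnet1' to a local NIC/bridge need to carry it.
    net = qc.create_network({'network': {
        'name': 'dc-vlan-100',
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',
        'provider:segmentation_id': 100,
    }})['network']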
Our understanding is that the Quantum IPAM design assumes the DHCP
Agent has local access to *ALL* the networks created in Quantum.
IPAM is part of the Quantum API. That is, Quantum provides an
interface for logical ports to be assigned an IP address. The DHCP
agent is one way of implementing this. The DHCP agent interfaces with
the Quantum plugin to receive the information that it requires.
Currently the DHCP agent is able to get information for all networks.

Per network it spawns a local dnsmasq and connects it to the network
(which should be accessible from within the host the DHCP Agent is
running on).
The dnsmasq is able to be accessed from all compute nodes on the
network. From what you are mentioning here, it seems that you guys
will be taking a hybrid approach to using Quantum. Correct?

I did not understand the question, not sure what you mean by hybrid
approach?
From what you have written I understand, and I may be completely
wrong here, that you only want to use certain parts of Quantum in
certain ways which are not supported today. So the hybrid is taking
parts of Quantum and using them in VDSM, but not via the standard
APIs.
We are planning to write our own pluggable implementations (which is
the purpose of Quantum), but we're not planning to take just parts of
Quantum - we'll use the whole deal.

Can you please elaborate more on the plugin that you plan to
implement? Will this be contributed upstream? If so then I suggest
that you guys draft blueprints.

This assumption is problematic in the oVirt context and this is
the
issue we were trying to overcome in the proposed integration.
I am sorry but I am not sure that I understand the issue that you
are
trying to overcome.
It's the same issues you raised above: scalability, and the
assumption that there is one host that has to have connectivity to
all the networks configured in the system.

In theory more than one DHCP server can run. This is how people
provide HA. One of the servers will answer. Do you plan to have a
DHCP agent running on each VDSM node? Nova networking has support for
a feature like this, called multi-host. It is something that is under
discussion in Quantum.
The issue is not related to the DHCP Agent HA.
I am not sure how your solution will address the HA. Say you have 2
hosts. VM X is running on HOST A. It has a dnsmasq running on HOST A.
HOST B will not have one as there are no VMs running on B. Say the
dnsmasq freezes on A. A new VM deployed on A will not receive an IP
address. If there was a dnsmasq running on B then it would.
We are planning on having several DHCP servers for each network, so this will 
not be a problem.

Great.

ix. I do not understand the race when the VM starts. There is none.
When a VM starts it will send a DHCP request. If it does not receive
a reply it will send another after a timeout. Can you please explain
the race?
This is exactly it: the VM might start requesting a DHCP lease
before it was updated in the DHCP server; for us it's a race.
This works. This is how DHCP is engineered. Can you please
explain the
problem? If you send a DHCP request and do not get a reply then
you send
one again. The timeout between requests is incremental.
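Schematically the client side behaves like this (a toy sketch;
send_discover_and_wait is a hypothetical stand-in for the guest's
DHCP client, and the probability just simulates a late answer):

    import random

    def send_discover_and_wait(timeout):
        """Hypothetical stand-in: broadcast DHCPDISCOVER and wait
        up to `timeout` seconds for an OFFER."""
        return random.random() < 0.3  # pretend the server answers
                                      # eventually

    timeout = 4
    while not send_discover_and_wait(timeout):
        timeout = min(timeout * 2, 64)  # back off, roughly per
                                        # RFC 2131

So a dnsmasq that learns about the port a moment after the VM boots
just means a slightly slower lease, not a lost one.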

I am not sure that we are on the same page when it comes to a
race
condition. I'd like you to clarify.
You do not need to consume Quantum to provide IPAM. You can just run
dnsmasq and make sure that its interface is connected to the virtual
network. This will provide you with the functionality that you are
looking for. If you want I can go over the dirty details. It will
take far less time than consuming Quantum and you can achieve the
same goal. You just need to be aware of when the dnsmasq is running
to send the updates.
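The dirty details boil down to something like this (a sketch; the
interface name, address range and file path are placeholders):

    import subprocess

    # One dnsmasq per network, bound to a device attached to that
    # network; oVirt rewrites the hosts file ("mac,name,ip" lines)
    # when a vNIC is added and sends SIGHUP so dnsmasq re-reads it.
    subprocess.Popen([
        'dnsmasq',
        '--no-resolv',
        '--bind-interfaces',
        '--interface=ovirtnet1',                       # placeholder
        '--dhcp-range=10.0.0.10,10.0.0.100',
        '--dhcp-hostsfile=/var/run/ovirt/net1.hosts',  # placeholder
    ])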

IPAM is one of the many features that Quantum has to offer. It will
certainly help oVirt.

Thanks
Gary
