On 11/17/2014 12:43 PM, Devananda van der Veen wrote:
Thanks for the reply!
On Wed Nov 12 2014 at 2:41:27 PM Chuck Carlino
<[email protected] <mailto:[email protected]>> wrote:
Hi,
I'm working on the neutron side of a couple of ironic issues, and
I need some help. Here are the issues.
1. If a NIC on an Ironic server fails and is replaced by a NIC
with a different MAC address, Neutron's DHCP service will not
serve it the same IP address. This can be worked around by
deleting the Neutron port and creating a new one, but that
leaves a window in which the IP address could be lost to an
unrelated port creation happening at the same time.
2. During large deployments, a random NIC failure can cause the
entire deploy to fail. The ability to retry a failed boot
with a different NIC has been requested.
It has been proposed that both issues could be at least partially
addressed by adding DHCP client ID support to Neutron. In this
solution, the DHCP client is configured to send a DHCP client ID,
and the server associates this client ID (instead of the MAC
address) with the IP address. Note that this idea just came up
today, so no code exists yet to try things out.
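As a concrete sketch of what that might look like (assuming Neutron's
dnsmasq-backed DHCP agent on the server side and ISC dhclient on the
client side; the identifier string "ironic-node-01" is made up for
illustration), the client sends DHCP option 61 and the server keys the
lease on it rather than on the MAC:

```
# Client side (dhclient.conf): send option 61, the client identifier
send dhcp-client-identifier "ironic-node-01";

# Server side (dnsmasq): bind the fixed address to the client id
# rather than to any particular MAC address
dhcp-host=id:ironic-node-01,192.0.2.10
```

With this in place, swapping the NIC would not change which address the
node receives, since the lookup key is the identifier, not the MAC.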
My questions:
For 1, the MAC address of the Neutron port will be left different
from the actual NIC's MAC address. Is that a problem for Ironic?
It makes me uneasy and might confuse users, but that's all I've
got.
I think that's a show-stopper, actually. Not just because it would be
very confusing for operators to see a fake MAC in Nova and the real
MAC in Ironic: Neutron's lack of knowledge of the physical MAC(s)
would also seem to prevent it from performing physical switch
configuration (via ML2 plugins) for those who choose to use Ironic in
a multi-tenant environment (e.g., OnMetal).
Good to know.
In general, does using a DHCP client ID present any issues for
booting an Ironic server? I've done a bit of web searching, and
from a protocol perspective it looks feasible, but I don't get a
sense of whether it's a good general solution.
A few things come to mind:
- How does the instance know what DHCP client ID to include in its
request before it has an IP address by which to contact the metadata
service? It sounds like this feature would only work if Ironic has a
pre-boot way to pass in data (e.g., a configdrive). Not all our
drivers support that today.
So using a DHCP client ID may not be a general solution.
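To make the configdrive idea concrete (a hedged sketch only; the file
path and identifier below are assumptions, not an existing Ironic
interface): the deploy step could stage the identifier on the
configdrive, and an early-boot script could copy it into the DHCP
client configuration before networking comes up.

```
#!/bin/sh
# Early-boot sketch: read a client id staged on the configdrive
# (the openstack/dhcp_client_id path is hypothetical) and configure
# dhclient to send it as DHCP option 61 before any interface is up.
mount -o ro /dev/disk/by-label/config-2 /mnt/config
CLIENT_ID=$(cat /mnt/config/openstack/dhcp_client_id)
printf 'send dhcp-client-identifier "%s";\n' "$CLIENT_ID" \
    >> /etc/dhclient.conf
umount /mnt/config
```

Note this only covers in-instance DHCP; it does nothing for the
firmware's PXE DHCP request, which is exactly the pre-boot gap raised
above.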
- Is it possible / desirable to group multiple NICs under a single
DHCP client ID? If so, then again it would seem that Neutron would
need to know the physical MACs. (I recall us chatting about port
bonding at some point, but I'm not sure if those conversations were
related.)
I'd rather not confuse the issue with details of how bonding or link
aggregation works, so let's just say that in case #2 above, the guest
may or may not be bonding the interfaces. Since bonding occurs after
boot, the bonding itself is not pertinent. But yes, all NICs through
which network boot can be attempted must present the same
dhcp_client_id for this solution to work. I don't see the connection
to Neutron needing correct MAC addresses, though, since the client ID
effectively replaces the MAC address for IP address lookup.
- What prevents some other server from spoofing the DHCP client ID in
a multi-tenant environment? Again, folks using an ML2 plugin today are
able to do MAC filtering on traffic at the switch. Removing knowledge
of the node's physical MACs looks like it breaks this.
Googling around, it looks like spoofing can be addressed as in RFC
3046 (https://www.ietf.org/rfc/rfc3046.txt), which requires a trusted
relay component. I agree that Neutron needs the correct MAC address
here.
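For reference, the RFC 3046 approach (relay agent information, DHCP
option 82) can be expressed in dnsmasq terms, assuming a trusted relay
stamps a per-switch-port circuit id on each request; the circuit id
string and address range below are illustrative only:

```
# Tag requests whose trusted relay attached circuit-id "port-17"
dhcp-circuitid=set:port17,port-17

# Only hand out addresses from this range to requests carrying
# that tag, so an untagged (untrusted) request cannot claim them
dhcp-range=tag:port17,192.0.2.50,192.0.2.60,12h
```

The trust anchor here is the relay/switch, not the client, which is
why the scheme holds up even if a tenant spoofs the client ID itself.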
Thanks,
Chuck
If you have any off-the-top "there's no chance that'll work" feedback,
or better things to try, it would be great to hear it now, since I'm
about to start a PoC to try it out.
Thanks,
Chuck
_______________________________________________
OpenStack-dev mailing list
[email protected]
<mailto:[email protected]>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev