Hi James,
> in.dhcpd has never been able to deal with logical interfaces. I think
> the justification for this has been at least CR 5010613 -- as long as
> we don't support multiple address pools on an interface, there's no
> reason to support logical interfaces.
I must admit it has been some time since I set this up, but (with my fix)
I had no problems whatsoever with that setup.
>> One case where you must bind in.dhcpd to more than one logical interface
>> on the same physical i/f is when, for instance during transition phases,
>> you are forced to provide addresses for more than one IP net on the same
>> link-layer.
>
> The daemon can't currently do that, so I think that if you want to use
> that as the justification for another fix, someone ought to offer that
> RFE as well.
I can tell you for sure that the daemon can do that; here is output from a
production system:
# pntadm -L
10.255.252.0
[... removed ...]
10.255.0.0
# pargs 9189
9189: /...path_removed.../tools/slink_dhcpd/in.dhcpd -o 60 -n -i e1000g3,e1000g2,e1000g1,e1000g0,e1000g2:3,e1000g3:1
argv[0]: /...path_removed.../tools/slink_dhcpd/in.dhcpd
argv[1]: -o
argv[2]: 60
argv[3]: -n
argv[4]: -i
argv[5]: e1000g3,e1000g2,e1000g1,e1000g0,e1000g2:3,e1000g3:1
# ifconfig -a | /usr/sfw/bin/gegrep -A 1 '^(e1000g[23]|e1000g2:3|e1000g3:1): '
e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.255.253.35 netmask fffffe00 broadcast 10.255.253.255
--
e1000g2:3: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 10.255.7.36 netmask fffff800 broadcast 10.255.7.255
--
e1000g3: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
        inet 10.255.253.37 netmask fffffe00 broadcast 10.255.253.255
--
e1000g3:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
        inet 10.255.7.37 netmask fffff800 broadcast 10.255.7.255
It's serving both 10.255.252/23 and 10.255.7/21 on the same pair of physical
interfaces (e1000g2 and e1000g3).
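For anyone who wants to reproduce this kind of setup, a minimal sketch
(the addresses and the logical unit number here are illustrative, not the
ones from the box above, and in.dhcpd still needs the fix I mentioned to
accept logical interfaces):

```shell
# Plumb a logical interface for the second IP net on the same physical NIC
# (ifconfig picks the next free logical unit number, e.g. e1000g2:3)
ifconfig e1000g2 addif 10.255.7.36 netmask 255.255.248.0 up

# Create a DHCP network table for the additional net
pntadm -C 10.255.0.0

# Start the daemon bound to both the physical and the logical interface
/usr/lib/inet/in.dhcpd -i e1000g2,e1000g2:3
```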
Cheers, Nils
_______________________________________________
networking-discuss mailing list
[email protected]