Hi,

Btw, we should also take into account the possibility of sharing networks
in Fuel 8.0. If a cluster is configured with shared public and management
networks, then moving controllers into different network node groups
(racks) is fine and will work out of the box [0], so we should not forbid
such a configuration.
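To illustrate the idea (a rough sketch only - the dict layout and names like `vip_allocation_is_safe` are invented for this mail, not Nailgun's actual data model), the "is this placement safe" decision could boil down to:

```python
def vip_allocation_is_safe(nodes, network_name, shared_networks):
    """A VIP can be auto-allocated safely when every node sits in the
    same nodegroup, or when the network itself is shared across racks."""
    groups = {node["group"] for node in nodes}
    if len(groups) <= 1:
        return True  # single rack: nothing to worry about
    # shared public/management networks span all nodegroups,
    # so the VIP is reachable from any rack
    return network_name in shared_networks

nodes = [{"name": "ctrl-1", "group": "rack-1"},
         {"name": "ctrl-2", "group": "rack-2"}]
print(vip_allocation_is_safe(nodes, "public", {"public", "management"}))   # True
print(vip_allocation_is_safe(nodes, "storage", {"public", "management"}))  # False
```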
Regards,
Alex

[0] https://bugs.launchpad.net/fuel/+bug/1524320/comments/12

On Tue, Jan 19, 2016 at 9:42 AM, Aleksandr Didenko <[email protected]> wrote:
> Hi,
>
> I would also prefer the second solution. The only real downside of it
> is the possibility to configure an invalid cluster (for instance,
> default "controller" roles in different racks). But such an invalid
> configuration is possible only under some conditions:
> - The user has to configure a multi-rack environment (network node
>   groups). I'd say it's rather advanced Fuel usage and the user will
>   most likely follow our documentation, so we can describe possible
>   problems there.
> - The user has to ignore notifications about possible problems from
>   Fuel. I must say this is quite possible when using the CLI, because
>   notifications have to be checked manually in that case.
>
> Solution #1 is much safer, of course. But to me it looks like "let's
> forbid as much as we can just to avoid any risks". I prefer to give
> Fuel users a choice here, which is possible only with the second
> solution.
>
> > What if neither of the nodes is in the default group? Still use the
> > default group? And pray that some third-party plugin will handle
> > this case properly?
>
> No, let's show a warning to the user. I don't think that forbidding
> is the proper way of handling such situations, especially when we're
> not going to forbid such a setup in 9.0.
>
> > Default is just a pre-created nodegroup and that's it, so there's
> > nothing special in it.
>
> Not quite. The default group is the group the Fuel node is connected
> to.
>
> > We don't support load-balancing for nodes in different racks
> > out of the box.
>
> True. But we're going to block deployment of roles that share a VIP
> (created by a plugin, for instance) even when no load-balancing is
> involved at all - just to be safe.
>
> Regards,
> Alex
>
> On Fri, Jan 15, 2016 at 10:50 AM, Bogdan Dobrelya
> <[email protected]> wrote:
>
>> On 15.01.2016 10:19, Aleksandr Didenko wrote:
>> > Hi,
>> >
>> > We need to come up with some solution for a problem with VIP
>> > generation (auto-allocation); see the original bug [0].
>> >
>> > The main problem here is: how do we know exactly which IPs to
>> > auto-allocate for VIPs when the roles that need them are in
>> > different nodegroups (i.e. in different IP networks)?
>> > For example, 'public_vip' for 'controller' roles.
>> >
>> > Currently we have two possible solutions.
>> >
>> > 1) Fail early in the pre-deployment check (when the user hits
>> > "Deploy changes") with an error about the inability to
>> > auto-allocate a VIP for nodes in different nodegroups (racks). So
>> > in order to run a deployment, the user has to put all roles that
>> > share a VIP into the same nodegroup (for example: all controllers
>> > in the same nodegroup).
>> >
>> > Pros:
>> >
>> > * VIPs are always correct: they come from the same network as the
>> >   nodes that are going to use them, so the user simply can't
>> >   configure invalid VIPs for the cluster and break the deployment.
>> >
>> > Cons:
>> >
>> > * A hardcoded limitation that is impossible to bypass; it does not
>> >   allow spreading roles with VIPs across multiple racks even when
>> >   that is properly handled by a Fuel plugin, i.e. made so by
>> >   design.
>>
>> That'd be no good at all.
>>
>> >
>> > 2) Allow moving roles that use VIPs into different nodegroups,
>> > auto-allocate VIPs from the "default" nodegroup, and send an
>> > alert/notification to the user that such a configuration may not
>> > work and that it's up to the user how to proceed (either fix the
>> > config or deploy at his/her own risk).
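As a rough sketch of what solution #1's fail-early check might look like (class and field names here are hypothetical, not Nailgun's actual API):

```python
class PreDeploymentCheckError(Exception):
    """Raised when the cluster configuration cannot be deployed."""


def check_vip_roles_in_one_nodegroup(cluster):
    """Fail early if roles sharing a VIP live in different nodegroups."""
    errors = []
    for vip_name, roles in cluster["vips"].items():
        groups = {node["group"] for node in cluster["nodes"]
                  if set(node["roles"]) & set(roles)}
        if len(groups) > 1:
            errors.append(
                "cannot auto-allocate %s: roles %s span nodegroups %s"
                % (vip_name, sorted(roles), sorted(groups)))
    if errors:
        raise PreDeploymentCheckError("; ".join(errors))


cluster = {
    "vips": {"public_vip": ["controller"]},
    "nodes": [
        {"name": "node-1", "roles": ["controller"], "group": "rack-1"},
        {"name": "node-2", "roles": ["controller"], "group": "rack-2"},
    ],
}
try:
    check_vip_roles_in_one_nodegroup(cluster)
except PreDeploymentCheckError as e:
    print("deploy blocked:", e)
```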
>>
>> It seems we have not much choice then but to use option 2.
>>
>> >
>> > Pros:
>> >
>> > * Relatively simple solution.
>> >
>> > * Impossible to break VIP serialization, because in the worst case
>> >   we allocate VIPs from the default nodegroup.
>> >
>> > Cons:
>> >
>> > * The user can deploy an invalid environment that will fail during
>> >   deployment or will not operate properly (for example, when
>> >   public_vip is not able to migrate to a controller in a different
>> >   rack).
>> >
>> > * Which nodegroup do we pick to allocate VIPs from? The default
>> >   nodegroup? A random one? With a random pick, troubleshooting may
>> >   become problematic.
>>
>> Random choices aren't good IMHO; let's use defaults.
>>
>> >
>> > * Waste of IPs - an IP address from the network range will be
>> >   implicitly allocated and marked as used, even if it's not used
>> >   by the deployment (the plugin uses its own ones).
>> >
>> > *Please also note that this solution is needed for 8.0 only.* In
>> > 9.0 we have a new feature for manual VIP allocation [1]. So in
>> > 9.0, if we can't auto-allocate VIPs for some cluster
>> > configuration, we can simply ask the user to manually set those
>> > problem VIPs or move the roles to the same network node group
>> > (rack).
>> >
>> > So, guys, please feel free to share your thoughts on this matter.
>> > Any input is greatly appreciated.
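To make solution #2's fallback concrete (again only a sketch - the data layout is invented, not how Nailgun actually stores nodegroups), allocating from the "default" group with a warning instead of failing could look like:

```python
import ipaddress
import warnings


def allocate_vip(nodegroups, vip_name, groups_used):
    """Pick a free IP; fall back to the 'default' nodegroup when the
    roles using the VIP span several racks, and warn instead of failing."""
    if len(groups_used) > 1:
        warnings.warn(
            "%s: roles span nodegroups %s; allocating from 'default', "
            "the deployment may not work as expected"
            % (vip_name, sorted(groups_used)))
        group = nodegroups["default"]
    else:
        group = nodegroups[next(iter(groups_used))]
    net = ipaddress.ip_network(group["cidr"])
    used = set(group["used_ips"])
    for ip in net.hosts():
        if str(ip) not in used:
            # mark the address as used - this is the "waste of IPs" con
            # when a plugin ends up using its own address instead
            group["used_ips"].append(str(ip))
            return str(ip)
    raise RuntimeError("no free IPs left in %s" % group["cidr"])


nodegroups = {
    "default": {"cidr": "10.0.0.0/24", "used_ips": ["10.0.0.1", "10.0.0.2"]},
    "rack-2": {"cidr": "10.1.0.0/24", "used_ips": []},
}
print(allocate_vip(nodegroups, "public_vip", {"default", "rack-2"}))  # 10.0.0.3
```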
>> >
>> > Regards,
>> > Alex
>> >
>> > [0] https://bugs.launchpad.net/fuel/+bug/1524320
>> > [1] https://blueprints.launchpad.net/fuel/+spec/allow-any-vip
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
