On 3/23/16, 2:36 PM, "Steven Hardy" <[email protected]> wrote:
> On Wed, Mar 23, 2016 at 01:01:03AM +0000, Fox, Kevin M wrote:
>> +1 for TripleO taking a look at Kolla.
>>
>> Some random thoughts:
>>
>> I'm in the middle of deploying a new cloud and I couldn't use either
>> TripleO or Kolla for various reasons. A few reasons for each:
>> * TripleO - worries me for ever having to do a major upgrade of the
>>   software, or needing to do oddball configs like vxlans over ipoib.
>> * Kolla - At the time it was still immature. No stable artefacts
>>   posted, the database container had recently broken, there was little
>>   documentation for disaster recovery, and no upgrade strategy at the
>>   time.
>>
>> Kolla rearchitected recently to support oddball configs like we've had
>> to do at times. They also recently gained upgrade support. I think they
>> are on the right path. If I had to start fresh, I'd very seriously
>> consider using it.
>>
>> I think Kolla can provide the missing pieces that TripleO needs.
>> TripleO has bare metal deployment down solid. I really like the idea of
>> using OpenStack to deploy OpenStack. Kolla is now OpenStack, so it
>> should be considered.
>
> As mentioned in another reply, one of the aims of the current
> refactoring work in TripleO is to enable folks to leverage the baremetal
> (and networking) aspects of TripleO, then hand off to another tool
> should they so wish.
>
> This could work really well if you wanted to layer ansible-deployed
> kolla containers on top of some TripleO-deployed nodes (in fact it's one
> of the use-cases we had in mind when deciding to do it).
>
> I do however have several open questions regarding kolla (and the
> various other ansible-based solutions like openstack-ansible):
>
> - What does the HA model look like, is active/active HA fully supported
>   across multiple controllers?

Kolla is active/active HA with a recommended minimum of 3 nodes. Kolla
does not do network isolation, nor does it detect failure of components.
Docker does detect failure of containers and restarts them, so we are
covered in the general case of a process stopping or crashing (see the
restart-policy sketch further down in this reply). In the case of a node
loss, failure detection is not done, and that is a weakness in the current
HA implementation.

> - Is SSL fully supported for the deployed services?

External SSL is implemented in Mitaka; internal SSL is not. What this
means is that, as a developer, you can do something like:

  kolla-ansible certificates
  kolla-ansible deploy

Then copy the haproxy-ca.crt file to your workstation, specify the CA in
your openrc (sketch below), and use the API endpoints and Horizon with
self-signed, encrypted, and authenticated communication.

For real deployments we don't recommend using kolla-ansible certificates,
but instead obtaining a certificate signed by a legitimate signing
authority. Then the workflow is:

  Place the certificates in /etc/kolla
  kolla-ansible deploy
  Use the openstack clients as you please

> - Is IPv6 fully supported?

Since IPv4 address handling is used throughout the Ansible code, I don't
think it would be possible at this time to deploy OpenStack with Ansible
on an IPv6-only network. If the IPv6 network's nodes also have an IPv4
address, which is commonly how IPv6 is deployed, everything works
perfectly. Note that neutron obviously works with IPv6 out of the box.

> - What integration exists for isolation of network traffic between
>   services?

Could you go into more detail on what you're looking for here? If you
mean "are our internal management networks and external API networks
segregated", the answer is yes. The storage network, tunnel network, and
neutron networks can also be segregated (see the interface sketch below).
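To illustrate the Docker restart behaviour mentioned in the HA answer
above, here is a minimal sketch; this is not Kolla's deployment code, and
the container name and image are placeholders:

  # Start a container with a restart policy; if the process inside dies,
  # Docker brings it back automatically ("demo-keystone" and
  # "my-keystone-image" are placeholders for illustration only).
  docker run -d --name demo-keystone --restart=always my-keystone-image

  # Check the configured policy and the restart count after a crash.
  docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' demo-keystone
  docker inspect -f '{{ .RestartCount }}' demo-keystone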
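And for the "specify the CA in your openrc" step in the SSL answer, a
minimal sketch of the openrc fragment, assuming you copied haproxy-ca.crt
to your home directory (the path is an assumption for illustration):

  # openrc fragment: point the OpenStack clients at the copied CA so TLS
  # verification of the self-signed certificate succeeds.
  export OS_CACERT=$HOME/haproxy-ca.crt   # assumed copy location

With OS_CACERT set, the usual clients (openstack, nova, neutron, and so
on) talk to the external HTTPS endpoints without certificate verification
errors.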
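On the traffic segregation point, the knobs live in Kolla's globals.yml.
A rough sketch follows; the interface names are placeholders and the
variable names are from memory of the Mitaka-era defaults, so treat them
as assumptions and check the globals.yml shipped with your release:

  # /etc/kolla/globals.yml (illustrative fragment)
  network_interface: "eth0"            # internal management/API traffic
  tunnel_interface: "eth1"             # tenant overlay (tunnel) traffic
  neutron_external_interface: "eth2"   # external/provider network traffic

The storage network can be split out along the same lines where the
release supports it.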
> - What's the update/upgrade model, what downtime is associated with
>   minor version updates and upgrades requiring RPC/DB migration? What's
>   tested in CI in this regard?

We are just starting to test upgrades in the CI/CD system. This work is
in progress and should finish by Newton 1.

Upgrades involve minimal downtime. To upgrade you would do something
like:

  kolla-ansible upgrade

and all of your cloud would migrate to the new version of OpenStack
without a VM restart and with minimal (on the order of milliseconds)
network interruption for the virtual machines. During the upgrade
process, which takes approximately 1-2 minutes on my gear, it is possible
some services such as Nova may return errors - or it is possible our
serialized rolling upgrade has no visible impact at all. We are uncertain
on this point, as it requires more evaluation on our end. We just
finished the work on upgrades in Mitaka, so we are short on downtime
metrics.

>
> Very interested to learn more about these, as they are challenges we've
> been facing within the TripleO community lately in the context of our
> current implementation.

A demo would help you understand the various aspects of how Kolla
operates, including deployment, reconfiguration, and upgrade. It takes
about 15 minutes to do all of these things on a single node. Just ping me
on IRC and we can set up a time. I can't demo multinode at the moment
because my lab is in shambles as a result of a remodel, but it works
nearly exactly the same from an interaction standpoint.

>
> Regardless of the answers I think moving towards a model where we
> enable more choice and easier integration between the various efforts
> (such as the split-stack model referred to above) is a good thing and I
> definitely welcome building on the existing collaboration we have
> between the TripleO, Kolla and other deployment focussed communities.

Good to hear.

Regards
-steve

>
> Steve
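P.S. A rough sketch of the operator side of the upgrade workflow
described above, in case it helps. The openstack_release value and the
pull step are assumptions for illustration, so check the documentation
for your release:

  # /etc/kolla/globals.yml - point the deployment at the new image tag
  # ("3.0.0" is a made-up example tag)
  openstack_release: "3.0.0"

  # Optionally pre-fetch images if your version supports the pull action,
  # then run the serialized rolling upgrade.
  kolla-ansible pull
  kolla-ansible upgrade

As noted above, running VMs stay up throughout; only the service
containers are replaced.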
