[Openstack-operators] Ops Meetups Team - Inaugural meeting - Tuesday 1400 UTC
Thank you all for the excellent response to our doodle poll. The inaugural meeting of the Ops Meetups Team will be on Tuesday, 17 May at 1400 UTC [1]. Let's meet in IRC [2], in the #openstack-operators channel. Details about the group, and the link to the agenda etherpad, can be found at: https://wiki.openstack.org/wiki/Ops_Meetups_Team#Meeting_Information Regards, Tom [1] To see this in your local time - check: http://www.timeanddate.com/worldclock/fixedtime.html?msg=Ops+Meetups+Team=20160517T22=241 [2] If you're new to IRC, there's a great guide here: http://docs.openstack.org/upstream-training/irc.html ___ OpenStack-operators mailing list OpenStack-operators@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)
On 13 May 2016 at 19:59, Joshua Harlow wrote: > > So I guess it's like the following (correct me if I am wrong): > > openstack-ansible > - > > 1. Sets up LXC containers from common base on deployment hosts (ansible > here to do this) > 2. Installs things into those containers (virtualenvs, packages, git > repos, other ... more ansible) > 3. Connects all the things together (more more ansible). > 4. Decommissions existing container (if it exists) and replaces with new > container (more more more ansible). > 5. <> > Almost. As OpenStack-Ansible treats the LXC containers like hosts (this is why OSA supports deploying to LXC machine containers and to VMs or normal hosts) we don't replace containers - we simply deploy the new venv into a new folder, reconfigure, then restart the service to use the new venv. To speed things up in large environments we pre-build the venvs on a repo server, then all hosts or containers grab them from the repo server. The mechanisms we use allow deployers to customise the packages built into the venvs (you might need an extra driver in the neutron/cinder venvs, for instance) and allow the OpenStack services to build directly from any git source (this means you can maintain your own fork with all the fixes you need, if you want to). With OpenStack-Ansible you're also not forced to commit to the integrated build. Each service role is broken out into its own repository, so you're able to write your own Ansible playbooks to consume the roles, which set up the services in any way that pleases you. The advantage in our case of using the LXC containers is that if something ends up broken somehow in the binary packages (this hasn't happened yet in my experience) you're able to simply blow away the container and rebuild it. I hope this helps. Feel free to ping me with any more questions. --- Jesse IRC: odyssey4me
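The venv swap Jesse describes (deploy the new venv into a new folder, reconfigure, restart the service) can be sketched roughly as follows. This is a toy illustration, not OSA's actual tooling: the directory layout, the `nova-current` symlink, and the version strings are all hypothetical, and the venv build itself is stubbed out with a `mkdir`.

```python
import os
import tempfile

def activate_venv(root, service, version):
    """Install a new venv side by side with the old one, then flip a
    'current' symlink to it. The old venv stays on disk, so rolling back
    is just re-pointing the link."""
    new_venv = os.path.join(root, f"{service}-{version}")
    os.makedirs(new_venv, exist_ok=True)          # stand-in for the venv build
    link = os.path.join(root, f"{service}-current")
    tmp = link + ".tmp"
    # Atomic swap: create a temp symlink at the new venv, then rename it
    # over the old link so readers never observe a missing path.
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_venv, tmp)
    os.replace(tmp, link)
    return os.readlink(link)

root = tempfile.mkdtemp()
activate_venv(root, "nova", "13.1.0")
target = activate_venv(root, "nova", "13.2.0")   # upgrade: old venv untouched
print(target.endswith("nova-13.2.0"))
```

In the real deployment the "restart the service" step would follow the link flip; here the sketch only shows why the old container never needs replacing.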
Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)
Curious how you are using Puppet to handle multi-node orchestration, as this is something Puppet by itself does not do. Are you using ansible/salt to orchestrate a puppet run on all the servers? ___ Kris Lindgren Senior Linux Systems Engineer GoDaddy On 5/12/16, 4:19 PM, "Nick Jones" wrote: >Hi. > >> I am investigating how to help move godaddy from rpms to a container-like >> solution (virtualenvs, lxc, or docker...) and a set of questions that comes >> up is the following (and I would think that some folks on this mailing list >> may have some useful insight into the answers): > >I’ve been mulling this over for a while as well, and although we’re not yet >there I figured I might as well chip in with my .2p all the same. > >> * Have you done the transition? > >Not yet! > >> * Was/is kolla used or looked into? or something custom? > >We’re looking at deploying Docker containers from images that have been >created using Puppet. We’d also use Puppet to manage the orchestration, i.e. >to make sure a given container is running in the right place and using the >correct image ID. Containers would comprise discrete OpenStack service >‘composables’, i.e. a container on a control node running the core nova >services (nova-api, nova-scheduler, nova-compute, and so on), one running >neutron-server, one for keystone, etc. Nothing unusual there. > >The workflow would be something like: > >1. Developer generates / updates configuration via Puppet and builds a new >image; >2. Image is uploaded into a private Docker image registry. Puppet handles >deploying a container from this new image ID; >3. New container is deployed into a staging environment for testing; >4. Assuming everything checks out, Puppet again handles deploying an updated >container into the production environment on the relevant hosts. > >I’m simplifying things a little but essentially that’s how I see this hanging >together. > >> * What was the roll-out strategy to achieve the final container solution?
> >We’d do this piecemeal, and so containerise some of the ‘safer’ components >first of all (such as Horizon) to make sure this all hangs together. >Eventually we’d have all of our core OpenStack services on the control nodes >isolated and running in containers, and then work on this approach for the >rest of the platform. > >Would love to hear from other operators as well as to their experience and >conclusions. > >— > >-Nick >-- >DataCentred Limited registered in England and Wales no. 05611763
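Nick's four-step build/promote workflow can be modeled as a toy pipeline. Everything here is illustrative: the registry and environments are plain dicts standing in for a private Docker registry and Puppet-managed hosts, and the service/config names are made up.

```python
import hashlib

# Toy model of the workflow above: build -> push to registry -> deploy to
# staging -> promote the SAME image id to production.
registry = {}
environments = {"staging": {}, "production": {}}

def build_image(service, config):
    """Step 1-2: build an image from config and push it to the registry.
    The id is content-addressed, like a real Docker image digest."""
    image_id = hashlib.sha256(f"{service}:{config}".encode()).hexdigest()[:12]
    registry[image_id] = {"service": service, "config": config}
    return image_id

def deploy(env, service, image_id):
    """Steps 3-4: run a container from a pushed image in an environment."""
    assert image_id in registry, "image must be pushed before deploy"
    environments[env][service] = image_id

image = build_image("keystone", "puppet-generated-config-v2")
deploy("staging", "keystone", image)       # step 3: test in staging
deploy("production", "keystone", image)    # step 4: promote the same image id
print(environments["production"]["keystone"] == environments["staging"]["keystone"])
```

The point of the sketch is the design property: staging and production run the identical, immutable image id, so what was tested is exactly what ships.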
Re: [Openstack-operators] [openstack-dev] [glance] glance-registry deprecation: Request for feedback
On 5/13/16 4:29 PM, Flavio Percoco wrote: > On 13/05/16 15:52 -0400, Nikhil Komawar wrote: >> >> On 5/13/16 3:36 PM, Flavio Percoco wrote: >>> On 12/05/16 21:41 -0400, Nikhil Komawar wrote: I have been of the same opinion as far as upgrades go. I think we are stepping ahead of ourselves here a bit. We need to figure out the rolling upgrade story first and see if registry is actually useful or not there as well. >>> >>> I kinda disagree, tbh. We can have a glance-api service that can be >>> upgraded >>> with no downtimes without the need of a registry service. >> >> With an oslo.vo (oslo.versionedobjects) implementation to work with different models of Glance >> tables (image/members/etc.) schema you _need_ a service that can talk to >> both types of the models without having to upgrade the DB. From my >> initial perspective, upgrading the API nodes will not help >> when we use oslo.vo. >> >> Because the API will need to be capable of using the new schema whereas >> the DB still has an old one. What is the current thought process for >> supporting a rolling upgrade for the DB? > > Why? I'm failing to see the need of a separate service to do this. The > above I do not know all the answers, hence the request for research. For instance: * What happens if I have 3 g-api nodes (no registry) and oslo.vo upgrade support for the DB? * If I upgrade one g-api first, do a PATCH on an image (that updates the DB schema), and then GET via the other 2 g-api nodes (older version of g-api) on the same image, what should the non-upgraded g-api return? > suggests there's a service that exposes a single API and that is also > capable of > talking to both database schemas. Why can't that service be glance-api > itself? > > Whatever transformation happens in this separate service could as well > happen in > the main service. What am I missing? I think we need to define some usage patterns and the upgrade support for them so that we can be definite in our approach.
> > Flavio > >>> The feedback from operator sessions also indicated that some ops do use it that way ( http://lists.openstack.org/pipermail/openstack-dev/2016-May/094034.html ). Overall, I do think registry is a bit of overhead and it would be nice to actually deprecate it but we do need facts/technical research first. On 5/12/16 9:20 PM, Sam Morrison wrote: We find glance registry quite useful. Having a central glance-registry API is useful when you have multiple datacenters all with glance-apis talking back to a central registry service. I guess they could all talk back to the central DB server but currently that would be over the public Internet for us. Not really an issue, we can work around it. The major thing that the registry has given us has been rolling upgrades. We have been able to upgrade our registry first then one by one upgrade our API servers (we have about 15 glance-apis) >>> >>> I'm curious to know how you did this upgrade, though. Did you shut down >>> your >>> registry nodes, upgrade the database and then re-start them? Did >>> you upgrade >>> one registry node at a time? >>> >>> I'm asking because, as far as I can tell, the strategy you used for >>> upgrading >>> the registry nodes is the one you would use to upgrade the glance-api >>> nodes >>> today. Shutting down all registry nodes would leave you with unusable >>> glance-api >>> nodes anyway, so I'd assume you did a partial upgrade or something >>> similar to >>> that. >>> >>> Thanks a bunch for your feedback, >>> Flavio >>> I don’t think we would’ve been able to do that if all the glance-apis were talking to the DB (at least not in glance’s current state). Sam. On 12 May 2016, at 1:51 PM, Flavio Percoco wrote: Greetings, The Glance team is evaluating the needs and usefulness of the Glance Registry service and this email is a request for feedback from the overall community before the team moves forward with anything. Historically, there have been reasons to create this service.
Some deployments use it to hide database credentials from Glance public endpoints, others use it for scaling purposes and others because v1 depends on it. This is a good time for the team to re-evaluate the need of these services since v2 doesn't depend on it. So, here's the big question: Why do you think this service should be kept around? Summit etherpad: https://etherpad.openstack.org/p/newton-glance-registry-deprecation Flavio -- @flaper87 Flavio Percoco
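The mixed-version scenario raised in this thread (an upgraded g-api writes a record using the new schema while a not-yet-upgraded g-api reads it) is roughly what oslo.versionedobjects' compatibility hooks are for. A minimal sketch, with hypothetical field names and a hand-rolled downgrade function rather than the real library API:

```python
# Sketch of schema version negotiation: a newer writer downgrades its
# payload for an older reader. Field names are illustrative, not Glance's
# actual image schema; real code would use oslo.versionedobjects'
# obj_make_compatible machinery instead of this dict filter.

class ImageV1:
    VERSION = "1.0"
    fields = ("id", "name")

class ImageV2(ImageV1):
    VERSION = "1.1"
    fields = ("id", "name", "os_hidden")   # field added by the upgrade

def make_compatible(payload, target_version):
    """Drop fields a 1.0 reader does not understand, so an old g-api can
    still serve a record written by an upgraded node."""
    if target_version == "1.0":
        payload = {k: v for k, v in payload.items() if k in ImageV1.fields}
    return payload

new_record = {"id": "img-1", "name": "cirros", "os_hidden": False}
old_view = make_compatible(new_record, target_version="1.0")
print(old_view == {"id": "img-1", "name": "cirros"})
```

Whether this translation lives in a separate registry service or inside glance-api itself is exactly the question the thread is debating; the hook is the same either way.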
Re: [Openstack-operators] [openstack-dev] [glance] glance-registry deprecation: Request for feedback
On 13/05/16 15:52 -0400, Nikhil Komawar wrote: On 5/13/16 3:36 PM, Flavio Percoco wrote: On 12/05/16 21:41 -0400, Nikhil Komawar wrote: I have been of the same opinion as far as upgrades go. I think we are stepping ahead of ourselves here a bit. We need to figure out the rolling upgrade story first and see if registry is actually useful or not there as well. I kinda disagree, tbh. We can have a glance-api service that can be upgraded with no downtimes without the need of a registry service. With an oslo.vo (oslo.versionedobjects) implementation to work with different models of Glance tables (image/members/etc.) schema you _need_ a service that can talk to both types of the models without having to upgrade the DB. From my initial perspective, upgrading the API nodes will not help when we use oslo.vo. Because the API will need to be capable of using the new schema whereas the DB still has an old one. What is the current thought process for supporting a rolling upgrade for the DB? Why? I'm failing to see the need of a separate service to do this. The above suggests there's a service that exposes a single API and that is also capable of talking to both database schemas. Why can't that service be glance-api itself? Whatever transformation happens in this separate service could as well happen in the main service. What am I missing? Flavio The feedback from operator sessions also indicated that some ops do use it that way ( http://lists.openstack.org/pipermail/openstack-dev/2016-May/094034.html ). Overall, I do think registry is a bit of overhead and it would be nice to actually deprecate it but we do need facts/technical research first. On 5/12/16 9:20 PM, Sam Morrison wrote: We find glance registry quite useful. Having a central glance-registry API is useful when you have multiple datacenters all with glance-apis talking back to a central registry service. I guess they could all talk back to the central DB server but currently that would be over the public Internet for us.
Not really an issue, we can work around it. The major thing that the registry has given us has been rolling upgrades. We have been able to upgrade our registry first then one by one upgrade our API servers (we have about 15 glance-apis). I'm curious to know how you did this upgrade, though. Did you shut down your registry nodes, upgrade the database and then re-start them? Did you upgrade one registry node at a time? I'm asking because, as far as I can tell, the strategy you used for upgrading the registry nodes is the one you would use to upgrade the glance-api nodes today. Shutting down all registry nodes would leave you with unusable glance-api nodes anyway, so I'd assume you did a partial upgrade or something similar to that. Thanks a bunch for your feedback, Flavio. I don’t think we would’ve been able to do that if all the glance-apis were talking to the DB (at least not in glance’s current state). Sam. On 12 May 2016, at 1:51 PM, Flavio Percoco wrote: Greetings, The Glance team is evaluating the needs and usefulness of the Glance Registry service and this email is a request for feedback from the overall community before the team moves forward with anything. Historically, there have been reasons to create this service. Some deployments use it to hide database credentials from Glance public endpoints, others use it for scaling purposes and others because v1 depends on it. This is a good time for the team to re-evaluate the need of these services since v2 doesn't depend on it. So, here's the big question: Why do you think this service should be kept around?
Summit etherpad: https://etherpad.openstack.org/p/newton-glance-registry-deprecation Flavio -- @flaper87 Flavio Percoco -- Thanks, Nikhil
Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)
That's effectively my understanding. On Fri, May 13, 2016 at 9:51 AM, Matthew Thode wrote: > On 05/13/2016 01:59 PM, Joshua Harlow wrote: > > Matthew Thode wrote: > >> On 05/13/2016 12:48 PM, Joshua Harlow wrote: > > * Was/is kolla used or looked into? or something custom? > > > Openstack-ansible, which is OpenStack big-tent. It used to be > os-ansible-deployment in stackforge, but we've removed the > rackspacisms. > I will say that openstack-ansible is one of the few that have been > doing upgrades reliably for a while, since at least Icehouse, maybe > further. > >>> What's the connection between 'openstack-ansible' and 'kolla', is there > >>> any (or any in progress?) > >>> > >> > >> The main difference is that openstack-ansible uses heavier-weight > >> containers from a common base (ubuntu 14.04 currently, 16.04/cent > >> 'soon'), it then builds on top of that, and uses python virtualenvs as well. > >> Kolla on the other hand creates the container images centrally and > >> ships them around. > > > > So I guess it's like the following (correct me if I am wrong): > > > > openstack-ansible > > - > > > > 1. Sets up LXC containers from common base on deployment hosts (ansible > > here to do this) > > 2. Installs things into those containers (virtualenvs, packages, git > > repos, other ... more ansible) > > 3. Connects all the things together (more more ansible). > > 4. Decommissions existing container (if it exists) and replaces with new > > container (more more more ansible). > > 5. <> > > > > More or less, we do in-place upgrades, so long-lived containers, but > could just as easily destroy and replace. > > kolla > > - > > > > 1. Builds up (installing things and such) *docker* containers outside of > > deployment hosts (say inside jenkins) [not ansible] > > 2. Ships built-up containers to *a* docker hub > > 3. Ansible then runs commands on deployment hosts to download image from > > docker hub > > 4. Connects all the things together (more ansible). > > 5.
Decommissions existing container (if it exists) and replaces with new > > container (more more ansible). > > 6. <> > > > > Yes the above is highly simplistic, but just trying to get a feel for > > the different base steps here ;) > > > > I think so? Not sure, as I don't work with kolla. > >> > >> The other thing to note is that Kolla has not done a non-greenfield > >> upgrade as far as I know, I know it's on their roadmap though. > >> > > > -- > -- Matthew Thode (prometheanfire)
Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)
On 05/13/2016 01:59 PM, Joshua Harlow wrote: > Matthew Thode wrote: >> On 05/13/2016 12:48 PM, Joshua Harlow wrote: > * Was/is kolla used or looked into? or something custom? > Openstack-ansible, which is OpenStack big-tent. It used to be os-ansible-deployment in stackforge, but we've removed the rackspacisms. I will say that openstack-ansible is one of the few that have been doing upgrades reliably for a while, since at least Icehouse, maybe further. >>> What's the connection between 'openstack-ansible' and 'kolla', is there >>> any (or any in progress?) >>> >> >> The main difference is that openstack-ansible uses heavier-weight >> containers from a common base (ubuntu 14.04 currently, 16.04/cent >> 'soon'), it then builds on top of that, and uses python virtualenvs as well. >> Kolla on the other hand creates the container images centrally and >> ships them around. > > So I guess it's like the following (correct me if I am wrong): > > openstack-ansible > - > > 1. Sets up LXC containers from common base on deployment hosts (ansible > here to do this) > 2. Installs things into those containers (virtualenvs, packages, git > repos, other ... more ansible) > 3. Connects all the things together (more more ansible). > 4. Decommissions existing container (if it exists) and replaces with new > container (more more more ansible). > 5. <> > More or less, we do in-place upgrades, so long-lived containers, but could just as easily destroy and replace. > kolla > - > > 1. Builds up (installing things and such) *docker* containers outside of > deployment hosts (say inside jenkins) [not ansible] > 2. Ships built-up containers to *a* docker hub > 3. Ansible then runs commands on deployment hosts to download image from > docker hub > 4. Connects all the things together (more ansible). > 5. Decommissions existing container (if it exists) and replaces with new > container (more more ansible). > 6.
<> > > Yes the above is highly simplistic, but just trying to get a feel for > the different base steps here ;) > I think so? Not sure, as I don't work with kolla. >> >> The other thing to note is that Kolla has not done a non-greenfield >> upgrade as far as I know, I know it's on their roadmap though. >> -- -- Matthew Thode (prometheanfire)
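The kolla-style steps 3-5 above (pull the image from a hub, decommission the existing container, replace it with one from the new image) can be modeled as a small toy. All names are illustrative — real kolla drives Docker via Ansible; the hub and container table here are plain dicts.

```python
# Toy model of the kolla-style flow: central hub holds prebuilt images;
# a deployment host pulls an image and recreates its container from it.

hub = {"nova-api": "sha256:aaa111"}                  # central image registry
local_images = {}                                    # images pulled to this host
containers = {"nova-api": {"image": "sha256:old000"}}  # currently running

def pull(name):
    """Step 3: download the image from the hub onto the deployment host."""
    local_images[name] = hub[name]

def recreate(name):
    """Step 5: decommission the existing container (if any) and replace it
    with one created from the freshly pulled image."""
    old = containers.pop(name, None)
    containers[name] = {"image": local_images[name]}
    return old

pull("nova-api")
replaced = recreate("nova-api")
print(containers["nova-api"]["image"] == "sha256:aaa111")
```

The contrast with the openstack-ansible steps is visible in what changes: here the whole container is thrown away and rebuilt from the image, rather than a venv being swapped inside a long-lived container.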
Re: [Openstack-operators] [openstack-dev] [glance] glance-registry deprecation: Request for feedback
On 5/13/16 3:36 PM, Flavio Percoco wrote: > On 12/05/16 21:41 -0400, Nikhil Komawar wrote: >> I have been of the same opinion as far as upgrades go. >> >> I think we are stepping ahead of ourselves here a bit. We need to >> figure out >> the rolling upgrade story first and see if registry is actually >> useful or not >> there as well. > > I kinda disagree, tbh. We can have a glance-api service that can be > upgraded > with no downtimes without the need of a registry service. With an oslo.vo (oslo.versionedobjects) implementation to work with different models of Glance tables (image/members/etc.) schema you _need_ a service that can talk to both types of the models without having to upgrade the DB. From my initial perspective, upgrading the API nodes will not help when we use oslo.vo. Because the API will need to be capable of using the new schema whereas the DB still has an old one. What is the current thought process for supporting a rolling upgrade for the DB? > >> The feedback from operator sessions also indicated that some ops do >> use it that >> way ( >> http://lists.openstack.org/pipermail/openstack-dev/2016-May/094034.html >> ). >> >> Overall, I do think registry is a bit of overhead and it would be >> nice to >> actually deprecate it but we do need facts/technical research first. >> >> On 5/12/16 9:20 PM, Sam Morrison wrote: >> >> We find glance registry quite useful. Having a central >> glance-registry API is useful when you have multiple datacenters all >> with glance-apis talking back to a central registry service. I >> guess they could all talk back to the central DB server but currently >> that would be over the public Internet for us. Not really an issue, >> we can work around it. >> >> The major thing that the registry has given us has been rolling >> upgrades. We have been able to upgrade our registry first then one by >> one upgrade our API servers (we have about 15 glance-apis) > > I'm curious to know how you did this upgrade, though.
Did you shut down your registry nodes, upgrade the database and then re-start them? Did you upgrade one registry node at a time? > > I'm asking because, as far as I can tell, the strategy you used for > upgrading > the registry nodes is the one you would use to upgrade the glance-api > nodes > today. Shutting down all registry nodes would leave you with unusable > glance-api > nodes anyway, so I'd assume you did a partial upgrade or something > similar to > that. > > Thanks a bunch for your feedback, > Flavio > >> I don’t think we would’ve been able to do that if all the >> glance-apis were talking to the DB (at least not in glance’s current >> state) >> >> Sam >> >> On 12 May 2016, at 1:51 PM, Flavio Percoco wrote: >> >> Greetings, >> >> The Glance team is evaluating the needs and usefulness of the >> Glance Registry >> service and this email is a request for feedback from the >> overall community >> before the team moves forward with anything. >> >> Historically, there have been reasons to create this service. >> Some deployments >> use it to hide database credentials from Glance public >> endpoints, others use it >> for scaling purposes and others because v1 depends on it. This >> is a good time >> for the team to re-evaluate the need of these services since >> v2 doesn't depend >> on it. >> >> So, here's the big question: >> >> Why do you think this service should be kept around?
>> >>Summit etherpad: >> https://etherpad.openstack.org/p/newton-glance-registry-deprecation >> >>Flavio >>-- >>@flaper87 >>Flavio Percoco >> >> -- >> >> Thanks, >> Nikhil >> > -- Thanks, Nikhil
Re: [Openstack-operators] [openstack-dev] [glance] glance-registry deprecation: Request for feedback
On 12/05/16 21:41 -0400, Nikhil Komawar wrote: I have been of the same opinion as far as upgrades go. I think we are stepping ahead of ourselves here a bit. We need to figure out the rolling upgrade story first and see if registry is actually useful or not there as well. I kinda disagree, tbh. We can have a glance-api service that can be upgraded with no downtimes without the need of a registry service. The feedback from operator sessions also indicated that some ops do use it that way ( http://lists.openstack.org/pipermail/openstack-dev/2016-May/094034.html ). Overall, I do think registry is a bit of overhead and it would be nice to actually deprecate it but we do need facts/technical research first. On 5/12/16 9:20 PM, Sam Morrison wrote: We find glance registry quite useful. Having a central glance-registry API is useful when you have multiple datacenters all with glance-apis talking back to a central registry service. I guess they could all talk back to the central DB server but currently that would be over the public Internet for us. Not really an issue, we can work around it. The major thing that the registry has given us has been rolling upgrades. We have been able to upgrade our registry first then one by one upgrade our API servers (we have about 15 glance-apis). I'm curious to know how you did this upgrade, though. Did you shut down your registry nodes, upgrade the database and then re-start them? Did you upgrade one registry node at a time? I'm asking because, as far as I can tell, the strategy you used for upgrading the registry nodes is the one you would use to upgrade the glance-api nodes today. Shutting down all registry nodes would leave you with unusable glance-api nodes anyway, so I'd assume you did a partial upgrade or something similar to that.
Thanks a bunch for your feedback, Flavio. I don’t think we would’ve been able to do that if all the glance-apis were talking to the DB (at least not in glance’s current state). Sam. On 12 May 2016, at 1:51 PM, Flavio Percoco wrote: Greetings, The Glance team is evaluating the needs and usefulness of the Glance Registry service and this email is a request for feedback from the overall community before the team moves forward with anything. Historically, there have been reasons to create this service. Some deployments use it to hide database credentials from Glance public endpoints, others use it for scaling purposes and others because v1 depends on it. This is a good time for the team to re-evaluate the need of these services since v2 doesn't depend on it. So, here's the big question: Why do you think this service should be kept around? Summit etherpad: https://etherpad.openstack.org/p/newton-glance-registry-deprecation Flavio -- @flaper87 Flavio Percoco -- Thanks, Nikhil
Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)
Matthew Thode wrote: On 05/13/2016 12:48 PM, Joshua Harlow wrote: * Was/is kolla used or looked into? or something custom? Openstack-ansible, which is OpenStack big-tent. It used to be os-ansible-deployment in stackforge, but we've removed the rackspacisms. I will say that openstack-ansible is one of the few that have been doing upgrades reliably for a while, since at least Icehouse, maybe further. What's the connection between 'openstack-ansible' and 'kolla', is there any (or any in progress?) The main difference is that openstack-ansible uses heavier-weight containers from a common base (ubuntu 14.04 currently, 16.04/cent 'soon'), it then builds on top of that, and uses python virtualenvs as well. Kolla on the other hand creates the container images centrally and ships them around. So I guess it's like the following (correct me if I am wrong): openstack-ansible - 1. Sets up LXC containers from common base on deployment hosts (ansible here to do this) 2. Installs things into those containers (virtualenvs, packages, git repos, other ... more ansible) 3. Connects all the things together (more more ansible). 4. Decommissions existing container (if it exists) and replaces with new container (more more more ansible). 5. <> kolla - 1. Builds up (installing things and such) *docker* containers outside of deployment hosts (say inside jenkins) [not ansible] 2. Ships built-up containers to *a* docker hub 3. Ansible then runs commands on deployment hosts to download image from docker hub 4. Connects all the things together (more ansible). 5. Decommissions existing container (if it exists) and replaces with new container (more more ansible). 6. <> Yes the above is highly simplistic, but just trying to get a feel for the different base steps here ;) The other thing to note is that Kolla has not done a non-greenfield upgrade as far as I know, I know it's on their roadmap though.
Re: [Openstack-operators] [kolla] Moving from distro packages to containers (or virtualenvs...)
On 13/05/16 19:48, "Joshua Harlow" wrote: >Matthew Thode wrote: >> On 05/12/2016 04:04 PM, Joshua Harlow wrote: >>> Hi there all-ye-operators, >>> >>> I am investigating how to help move godaddy from rpms to a >>> container-like solution (virtualenvs, lxc, or docker...) and a set of >>> questions that comes up is the following (and I would think that some >>> folks on this mailing list may have some useful insight into the answers): >>> >>> * Have you done the transition? >>> >> >> We've been using openstack-ansible since it existed, it's working well >> for us. >> >>> * How did the transition go? >>> >> >> It can be painful, but it's worked out in the long run. >> >>> * Was/is kolla used or looked into? or something custom? >>> >> >> Openstack-ansible, which is OpenStack big-tent. It used to be >> os-ansible-deployment in stackforge, but we've removed the rackspacisms. >> I will say that openstack-ansible is one of the few that have been >> doing upgrades reliably for a while, since at least Icehouse, maybe further. > >What's the connection between 'openstack-ansible' and 'kolla', is there >any (or any in progress?) > >> >>> * How long did it take to do the transition from a package based >>> solution (with say puppet/chef being used to deploy these packages)? >>> >>> * Follow-up being how big was the team to do this? >> >> Our team was somewhat bigger than most as we have many deployments and >> we had to do it from scratch. You CAN do it solo, but I'd recommend >> you have coverage / on call for whatever your requirements are. >> >>> * What was the roll-out strategy to achieve the final container solution? >>> >> >> For Openstack-ansible I'd recommend deploying a service at a time, >> migrating piecemeal. You can migrate to the same release as you are on >> (I hope), though I'd recommend kilo or greater as upgrades can get >> annoying after a while. >> >>> Any other feedback (and/or questions that I missed)?
>>> Is there some documentation from Kolla on how we could move from a classic RPM/APT based deployment to containers? Would be an interesting read (asking for a friend, of course). Tim >>> Thanks, >>> >>> Josh
Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)
Steven Dake (stdake) wrote:
> On 5/12/16, 2:04 PM, "Joshua Harlow" wrote:
>> Hi there all-ye-operators,
>>
>> I am investigating how to help move godaddy from rpms to a
>> container-like solution (virtualenvs, lxc, or docker...) and a set of
>> questions that comes up is the following (and I would think that some
>> folks on this mailing list may have some useful insight into the answers):
>>
>> * Have you done the transition?
>> * How did the transition go?
>> * Was/is kolla used or looked into? or something custom?
>> * How long did it take to do the transition from a package based
>>   solution (with say puppet/chef being used to deploy these packages)?
>>   * Follow-up being how big was the team to do this?
>
> I know I am not an operator, but to respond on this particular point
> related to the Kolla question above, I think the team size could be very
> small and still effective. You would want 24-hour coverage of your data
> center, and a backup individual, which puts the IC list at 4 people
> (3 x 8-hour shifts + 1 backup in case of illness/etc.). Expect these
> folks to take on other work as well, as once Kolla is deployed there
> isn't a whole lot to do. A 64-node cluster is deployable by one
> individual in 1-2 hours once the gear has been racked. Realistically,
> if you plan to deploy Kolla I'd expect that individual to want to train
> for 3-6 weeks, deploying over and over to get a feel for the Kolla
> workflow. Try it, I suspect you will like it :)

Thanks for the info and/or estimates, but before I dive too far in I have a question. I see that the following has links to how the different services run under Kolla:

http://docs.openstack.org/developer/kolla/#kolla-services

But one that seems missing from this list is what I would expect to be the more complicated one, that being nova-compute (and libvirt and kvm). Are there any secret docs on that (since I would assume it'd be the most problematic to get right)?
> If you had less rigorous constraints around availability than I'd expect
> godaddy to have, a Kolla deployment could likely be managed with as
> little as half a person or less. Everything including upgrades is
> automated.

Along this line, do people typically plug the following into a local Jenkins system?

http://docs.openstack.org/developer/kolla/quickstart.html#building-container-images

Any docs on how people typically incorporate Jenkins into the Kolla workflow (I assume they do?) anywhere?

> Regards
> -steve
>
>> * What was the roll-out strategy to achieve the final container solution?
>>
>> Any other feedback (and/or questions that I missed)?
>>
>> Thanks,
>> Josh

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
[Openstack-operators] [logging] Announcing openstack-infra/logstash-filters
The openstack-infra team has put a lot of effort into creating Logstash filters to parse OpenStack logs. These filters are primarily used to parse service logs from devstack runs, but should work for production deployments as well.

Yesterday I worked with Clark Boylan to move these filters out of Puppet and into their own project, called openstack-infra/logstash-filters, to make them easy to reconsume. This project has three files in the filters/ directory: an example input section, an example output section, and the filters section used to index devstack service log data into logstash.openstack.org. Using conf.d-style Logstash configs, you can easily drop these filters into your own config while using custom input and output config sections. You can see how this is done for logstash.openstack.org using Puppet at [1].

These filters work by switching on tags for the different log formats. In order for them to parse the logs correctly, the correct tags need to be applied to the logs before they reach the filters. The tags applied to the devstack service logs can be viewed at [2]. Most service logs use the "oslofmt" tag, but some require the "apachecombined" tag instead. The filters also understand the "libvirt" and "syslog" tags for their respective standard log formats.

[1] https://review.openstack.org/#/c/310052/6/modules/openstack_project/manifests/logstash_worker.pp
[2] http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/logstash/jenkins-log-client.yaml#n31

--
Jonathan Harker

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
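[Editor's sketch] A conf.d layout for consuming these filters might look like the fragment below. Only the middle file comes from openstack-infra/logstash-filters; the log paths, tag assignments, and Elasticsearch output here are illustrative assumptions for a hypothetical deployment, not part of the project:

```
# /etc/logstash/conf.d/00-input.conf -- your own input section.
# Apply the tag matching each log's format BEFORE it reaches the filters.
input {
  file {
    path => "/var/log/nova/nova-api.log"
    tags => ["oslofmt"]            # oslo.log-formatted service log
  }
  file {
    path => "/var/log/apache2/horizon_access.log"
    tags => ["apachecombined"]     # Apache combined access-log format
  }
}

# /etc/logstash/conf.d/50-filters.conf -- drop in, verbatim, the filters
# section file from the project's filters/ directory.

# /etc/logstash/conf.d/99-output.conf -- your own output section.
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```

Logstash concatenates the files in conf.d in lexical order, which is why numeric prefixes keep input, filters, and output in the expected sequence.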
Re: [Openstack-operators] [glance] glance-registry deprecation: Request for feedback
-----Original Message-----
From: Fox, Kevin M
Reply: Fox, Kevin M
Date: May 12, 2016 at 19:00:12
To: Matt Fischer, Flavio Percoco
Cc: openstack-...@lists.openstack.org, openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

> Is there a copy-from-url method that's not deprecated yet?
>
> The app catalog is still pointing users at the command line in v1 mode.

I'm sorry for missing something that must be obvious. What does this have to do with deprecating the registry?

--
Ian Cordasco

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?
There is also a golang library that can validate tokens: http://gophercloud.io/docs/identity/v3/

On Thu, May 12, 2016 at 11:25 PM, David Medberry wrote:

> There's a Jython implementation of keystone, and I thought there was other
> work to validate tokens from within Java. Added Jim Baker to the thread.
>
> -d
>
> On Thu, May 12, 2016 at 5:06 PM, Michael Still wrote:
>
>> I'm just going to reply to myself here with another status update.
>>
>> The design seems largely settled at this point, with one exception -- how
>> does nova authenticate with the external microservice?
>>
>> The current proposal is to have nova use the client's keystone token to
>> authenticate with the external microservice. This is a neat solution
>> because it's what nova does when talking to other services in your OpenStack
>> deployment, so it's consistent and well understood.
>>
>> The catch here is that it means your external microservice needs to know
>> how to do keystone authentication. That's well understood for Python
>> microservices, and I can provide sample code for that case using the
>> keystone WSGI middleware. On the other hand, it's harder for things like
>> Java, where I'm not aware of any keystone auth implementation. Is
>> effectively requiring the microservices to be written in Python a
>> particular problem? I'm hoping not, given that all the current plugins are
>> written in Python by definition.
>>
>> Cheers,
>> Michael
>>
>> On Wed, May 4, 2016 at 7:37 AM, Michael Still wrote:
>>
>>> Hey,
>>>
>>> I just wanted to let people know that the review is progressing, but we
>>> have a question.
>>>
>>> Do operators really need to call more than one external REST service to
>>> collect vendordata? We can implement that in nova, but it would be nice to
>>> reduce the complexity to only having one external REST service. If you
>>> needed to call more than one service, you could of course write a REST
>>> service that aggregates the other REST services.
>>> Does anyone in the operator community have strong feelings either way?
>>> Should nova be able to call more than one external vendordata REST service?
>>>
>>> Thanks,
>>> Michael
>>>
>>> On Sat, Apr 30, 2016 at 4:11 AM, Michael Still wrote:

So, after a series of hallway-track chats this week, I wrote this:

https://review.openstack.org/#/c/310904/

which is a proposal for how to implement vendordata in a way which would (probably) be acceptable to nova, whilst also meeting the needs of operators. I should reinforce that, because this week is so hectic, nova core hasn't really talked about this yet, but I am pretty sure I understand and have addressed Sean's concerns.

I'd be curious as to whether the proposed solution actually meets your needs.

Michael

On Mon, Apr 18, 2016 at 10:55 AM, Fox, Kevin M wrote:
> We've used it too, to work around the lack of instance users in nova.
> Please keep it until a viable solution can be reached.
>
> Thanks,
> Kevin
> --
> *From:* David Medberry [openst...@medberry.net]
> *Sent:* Monday, April 18, 2016 7:16 AM
> *To:* Ned Rhudy
> *Cc:* openstack-operators@lists.openstack.org
> *Subject:* Re: [Openstack-operators] Anyone else use
> vendordata_driver in nova.conf?
>
> Hi Ned, Jay,
>
> We use it also, and I have to agree: it's onerous to require users to
> add that functionality back in. Where was this discussed?
>
> On Mon, Apr 18, 2016 at 8:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
> erh...@bloomberg.net> wrote:
>
>> Requiring users to remember to pass specific userdata through to
>> their instance at every launch, in order to replace functionality that
>> currently works invisibly to them, would be a step backwards. It's an
>> alternative, yes, but it's an alternative that adds burden to our users
>> and is not one we would pursue.
>>
>> What is the rationale for desiring to remove this functionality?
>> From: jaypi...@gmail.com
>> Subject: Re: [Openstack-operators] Anyone else use vendordata_driver
>> in nova.conf?
>>
>> On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
>> > I noticed while reading through the Mitaka release notes that
>> > vendordata_driver has been deprecated in Mitaka
>> > (https://review.openstack.org/#/c/288107/) and is slated for removal at
>> > some point. This came as somewhat of a surprise to me - I searched
>> > openstack-dev for vendordata-related subject lines going back to January
>> > and found no discussion on the matter (IRC logs, while available on
>> > eavesdrop, are not trivially searchable without a little scripting to
>> > fetch them
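[Editor's sketch] Michael mentions above that a Python vendordata microservice can use the keystone WSGI middleware for authentication. The stdlib-only sketch below shows the general shape of such a service; the stub token check is an illustrative stand-in for keystonemiddleware.auth_token (which would validate tokens against Keystone for real), and the token value and JSON payload are invented for the example.

```python
import json

# Stand-in for keystonemiddleware.auth_token: in a real deployment that
# middleware wraps the app, validates X-Auth-Token against Keystone, and
# rejects unauthenticated requests before they reach the application.
VALID_TOKENS = {"demo-token"}  # illustrative only


def token_auth(app):
    """WSGI middleware that rejects requests without a known token."""
    def middleware(environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token not in VALID_TOKENS:
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"invalid or missing X-Auth-Token"]
        return app(environ, start_response)
    return middleware


def vendordata_app(environ, start_response):
    """Return the vendor-specific metadata nova would relay to instances."""
    body = json.dumps({"vendor": {"example_key": "example_value"}}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]


application = token_auth(vendordata_app)

# To serve for real:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8080, application).serve_forever()
```

Because both the stub and keystonemiddleware are plain WSGI middlewares, swapping the stub for the real thing is a one-line change in the pipeline, which is presumably why the Python case is considered the easy one in this thread.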
Re: [Openstack-operators] Swift ACL's together with Keystone (v3) integration
hi Saverio, all

we've just found the solution. The issue is in the way Swift manages files bigger than 5 GB, as explained here:

http://docs.openstack.org/developer/swift/overview_large_objects.html

Swift segments big files and puts them in a new container called <container>_segments. So it is necessary to give read access to the <container>_segments container too, in order to let external tenants access it:

swift post mycontainer -r 'ext_tenant:*'
swift post mycontainer_segments -r 'ext_tenant:*'

Now the ext_tenant user can read and download big files.

cheers,
Alberto

On 12/05/2016 17:29, Saverio Proto wrote:

Hello,

I got this working; I was actually missing the storage-url in my client config.

Related to this setup I am experiencing a weird problem with large objects when reading from another tenant. Only when I read objects from a container where I have read-only access, if the object is larger than 10 GB I am not able to read the object.

I used the rclone client, which has a very convenient option --dump-bodies. When I list the objects in the container I receive a JSON structure with all the objects in the container. All objects that I can read have data that makes sense. Some files that are bigger than 10 GB have a size of 0 bytes. Example:

{"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified": "2016-05-10T15:12:44.233710", "bytes": 0, "name": "eng/googlebooks-eng-all-3gram-20120701-li.gz", "content_type": "application/octet-stream"}

or sometimes the byte size is just random and wrong. When I try to read these objects I get a 403. I tried both with swiftclient and rclone and I have the same problem. Of course, if I use the container in my own tenant where I have read and write access, I can successfully read and write all the large objects. This only happens when reading a large object shared across tenants.

Did you maybe try to work with such large objects in your setup? Does it work for you?
Saverio

2016-05-03 14:34 GMT+02:00 Wijngaarden, Pieter van:

Hi Saverio,

Yes, in the end I was able to get it working! The issue was related to my proxy server pipeline config (filter:authtoken). I did not find pointers to updated documentation, though. When I had updated the [filter:authtoken] configuration in /etc/swift/proxy-server.conf, everything worked. In my case the values auth_uri and auth_url were not configured correctly:

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = https://<keystone_host>:5443
auth_url = http://<keystone_host>:35357
auth_plugin = password
project_name = service
project_domain_id = default
user_domain_id = default
username = swift
password = X

I don't know why that meant that regular token validation worked but cross-tenant did not (unfortunately it's a test cluster, so I don't have history on what it was before I changed it :( )

What works for me now (using python-swiftclient) is the following. I hope that the text formatting survives in the email:

1. A user with complete ownership over the account (say account X) executes:
   a. swift post <container> --read-acl '<project>:<user>'
   b. or
   c. swift post <container> --read-acl '<project>:*'

2. A user in the other project can now list the container and get objects in the container by doing:
   a. swift list <container> --os-storage-url <storage_url> --os-auth-token <token>
   b. or
   c. swift download <container> --os-storage-url <storage_url> --os-auth-token <token>

Note that you can review the full storage URL for an account by doing swift stat -v. In this case, the user in step 2 is not able to do anything else in account X besides listing the container and getting its objects, which is what I was aiming for.

What does not work for me is if I set the read-ACL to '<project>' only, even though that should work according to the documentation. If you want to allow all users in another project read access to a container, use '<project>:*' as the read-ACL.

I hope this helps!
With kind regards,
Pieter van Wijngaarden

-----Original Message-----
From: Saverio Proto [mailto:ziopr...@gmail.com]
Sent: Tuesday, 3 May 2016 12:44
To: Wijngaarden, Pieter van
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Swift ACL's together with Keystone (v3) integration

Hello Pieter,

I did run into the same problem today. Did you find pointers to more updated documentation? Were you able to configure the cross-tenant read ACL?

thank you
Saverio

2016-04-20 13:48 GMT+02:00 Wijngaarden, Pieter van:

Hi all,

I'm playing around with a Swift cluster (Liberty) and cannot get the Swift ACLs to work. My objective is to give users from one project (and thus Swift account?) selective access to specific containers in another project. According to http://docs.openstack.org/developer/swift/middleware.html#keystoneauth , the swift/keystoneauth plugin should support cross-tenant (now cross-project) ACLs by
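[Editor's sketch] To make concrete why Alberto's fix needs read ACLs on both containers: Swift stores a large object as a (near-)zero-byte manifest in the original container plus data segments in <container>_segments, and reading the object means fetching every segment. The toy model below captures just that access-control logic in plain Python; it is not the Swift API, and all names, the manifest layout, and the error strings are simplified stand-ins.

```python
# Toy model of Swift's Dynamic Large Object behaviour: the manifest lives
# in "mycontainer", the data lives in "mycontainer_segments", and a reader
# needs read access to BOTH containers to reassemble the object.
read_acls = {
    "mycontainer": {"ext_tenant"},   # swift post mycontainer -r 'ext_tenant:*'
    "mycontainer_segments": set(),   # segments container not shared (yet)!
}

segments = {"mycontainer_segments": {"big/0": b"abc", "big/1": b"def"}}
# A DLO manifest just names the container/prefix where the segments live.
manifests = {"mycontainer": {"big": "mycontainer_segments/big/"}}


def download(tenant, container, obj):
    if tenant not in read_acls[container]:
        raise PermissionError("403 on %s" % container)
    seg_container, _, prefix = manifests[container][obj].partition("/")
    # Reassembling the object reads each segment, so the ACL on the
    # segments container is checked as well -- this is the check that
    # fails when only the manifest's container has been shared.
    if tenant not in read_acls[seg_container]:
        raise PermissionError("403 on %s" % seg_container)
    names = sorted(n for n in segments[seg_container] if n.startswith(prefix))
    return b"".join(segments[seg_container][n] for n in names)


# download("ext_tenant", "mycontainer", "big") raises a 403 until the
# segments container is also shared:
#   read_acls["mycontainer_segments"].add("ext_tenant")
```

This also matches Saverio's symptoms: the manifest itself is readable (listing works, but shows 0 bytes or odd sizes), while fetching the object body 403s on the unshared segments container.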