Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-17 Thread Steven Dake (stdake)


On 5/13/16, 10:46 AM, "Joshua Harlow"  wrote:

>Steven Dake (stdake) wrote:
>>
>> On 5/12/16, 2:04 PM, "Joshua Harlow"  wrote:
>>
>>> Hi there all-ye-operators,
>>>
>>> I am investigating how to help move godaddy from rpms to a
>>> container-like solution (virtualenvs, lxc, or docker...) and a set of
>>> questions that comes up is the following (and I would think that some
>>> folks on this mailing list may have some useful insight into the
>>> answers):
>>>
>>> * Have you done the transition?
>>>
>>> * How did the transition go?
>>>
>>> * Was/is kolla used or looked into? or something custom?
>>>
>>> * How long did it take to do the transition from a package based
>>> solution (with say puppet/chef being used to deploy these packages)?
>>>
>>> * Follow-up being how big was the team to do this?
>>
>> I know I am not an operator, but to respond on this particular point
>> related to the Kolla question above, I think the team size could be very
>> small and still effective.  You would want 24-hour coverage of your data
>> center, and a backup individual, which puts the IC list at 4 people (3
>> 8-hour shifts + 1 backup in case of illness/etc.).  Expect these folks to
>> require other work, as once Kolla is deployed there isn't a whole lot to
>> do.  A 64-node cluster is deployable by one individual in 1-2 hours once
>> the gear has been racked.  Realistically, if you plan to deploy Kolla I'd
>> expect that individual to want to train for 3-6 weeks deploying over and
>> over to get a feel for the Kolla workflow.  Try it, I suspect you will
>> like it :)
>
>Thanks for the info and/or estimates, but before I dive too far in I have
>a question. I see that the following has links to how the different
>services run under kolla:
>
>http://docs.openstack.org/developer/kolla/#kolla-services
>
>But one that seems missing from this list is what I would expect to be
>the more complicated one, that being nova-compute (and libvirt and kvm).
>Are there any secret docs on that (since I would assume it'd be the most
>problematic to get right)?

Nova was by far and away 5-10x harder to containerize than other services.
So when you look at Nova, you're looking at a "worst case" scenario :)

No secret docs (all our stuff is in the open as per Kolla's policies).
The code does a nice job of documenting the steps taken for the various
operations.

There are essentially 3 operations:
Deploy: https://github.com/openstack/kolla/blob/master/ansible/roles/nova/tasks/deploy.yml
Upgrade: https://github.com/openstack/kolla/blob/master/ansible/roles/nova/tasks/upgrade.yml
Reconfigure: https://github.com/openstack/kolla/blob/master/ansible/roles/nova/tasks/reconfigure.yml

Those tasks in the plays operate in order, so simply reading through the
list will give you an idea of the orchestration taking place.
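Since the task files are plain ordered YAML lists, reading one is reading the orchestration. A minimal stand-in (the path and task names below are invented for illustration, not Kolla's actual tasks):

```shell
# Write a toy task file shaped like the linked ones: an ordered YAML list.
cat > /tmp/deploy-demo.yml <<'EOF'
- name: Bootstrap nova database
- name: Start nova-api container
- name: Start nova-compute container
EOF
# Ansible executes list entries top to bottom, so this *is* the run order:
grep '^- name:' /tmp/deploy-demo.yml
```

Reading the real deploy.yml the same way gives the actual orchestration order for Nova.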

We will add more Ansible playbooks as time passes to do things like add
OSDs and remove OSDs, add compute nodes and remove compute nodes, and
hopefully one day add cells and remove cells.

This brings me to my next question: GoDaddy probably is at the scale where it
requires cells - which Kolla doesn't implement.  I am not quite sure what
this involves, and it would require some research on the Kolla community's
part (or GoDaddy's, alternatively).  I don't know if cells are a small job or
a big job.  From my understanding, it basically involves deploying a new set
of controller nodes per cell and connecting them to the main controller
node.  This doesn't seem insurmountable, but it is a gap you should be aware
of.


>
>>
>> If you had less rigorous constraints around availability than I'd expect
>> GoDaddy to have, a Kolla deployment could likely be managed with as
>> little as half a person or less.  Everything including upgrades is
>> automated.
>>
>
>Along this line, do people typically plug the following into a local
>jenkins system?
>
>http://docs.openstack.org/developer/kolla/quickstart.html#building-container-images
>
>Any docs on how people typically incorporate jenkins into the kolla
>workflow (I assume they do?) anywhere?

There is no documentation but we had CI/CD in mind when Kolla was born.
Oracle IIUC has publicly stated their CI/CD workflow here and other places
on the internets:

http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2015.pdf


I'd suggest starting small with an AIO (all-in-one) deployment and seeing if
the deployment model fits your objectives.  Multinode is basically just like
AIO, only spread across hosts.  Still, setting up multinode does take a
little more time, and AIO is a 1-2 hour experiment with the Kolla workflow,
which is designed to be jammed into a CI/CD pipeline. :)

Regards
-steve


>
>> Regards
>> -steve
>>
>>> * What was the roll-out strategy to achieve the final container
>>>solution?
>>>
>>> Any other feedback (and/or questions that I missed)?
>>>
>>> Thanks,
>>>
>>> Josh
>>>

Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Jesse Pretorius
On 13 May 2016 at 19:59, Joshua Harlow  wrote:

>
> So I guess it's like the following (correct me if I am wrong):
>
> openstack-ansible
> -
>
> 1. Sets up LXC containers from common base on deployment hosts (ansible
> here to do this)
> 2. Installs things into those containers (virtualenvs, packages, git
> repos, other ... more ansible)
> 3. Connects all the things together (more more ansible).
> 4. Decommissions existing container (if it exists) and replaces with new
> container (more more more ansible).
> 5. <>
>

Almost.

As OpenStack-Ansible treats the LXC containers like hosts (this is why OSA
supports deploying to LXC machine containers, to VMs, or to normal hosts),
we don't replace containers - we simply deploy the new venv into a new
folder, reconfigure, then restart the service to use the new venv.

To speed things up in large environments we pre-build the venvs on a repo
server; all hosts or containers then grab them from the repo server.
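The venv swap described above can be sketched with plain directories and a symlink. The paths and the symlink convention here are illustrative assumptions, not OSA's actual layout:

```shell
# Hypothetical layout: one directory per versioned venv, plus a symlink
# naming the venv the service currently runs from.
set -e
base=/tmp/venvs-demo
mkdir -p "$base/nova-13.0.0/bin" "$base/nova-13.1.0/bin"  # old and new venvs
ln -sfn "$base/nova-13.1.0" "$base/nova-current"  # "reconfigure" to the new venv
# ...restart the service here so it picks up $base/nova-current...
readlink "$base/nova-current"
# The old venv stays on disk, so rollback is just another symlink swap:
ln -sfn "$base/nova-13.0.0" "$base/nova-current"
```

From the service's point of view only the restart changes anything, which is what makes both the upgrade and the rollback cheap.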

The mechanisms we use allow deployers to customise the packages built into
the venvs (you might need an extra driver in the neutron/cinder venvs, for
instance) and allow the OpenStack services to build directly from any git
source (this means you can maintain your own fork with all the fixes you
need, if you want to).

With OpenStack-Ansible you're also not forced to commit to the integrated
build. Each service role is broken out into its own repository, so you're
able to write your own Ansible playbooks to consume the roles which setup
the services in any way that pleases you.

The advantage in our case of using the LXC containers is that if something
ends up broken somehow in the binary packages (this hasn't happened yet in
my experience) you're able to simply blow away the container and rebuild it.

I hope this helps. Feel free to ping me any more questions.

---
Jesse
IRC: odyssey4me
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Kris G. Lindgren
Curious how you are using Puppet to handle multi-node orchestration, as this
is something Puppet specifically does not do.  Are you using Ansible/Salt to
orchestrate a Puppet run on all the servers?

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 5/12/16, 4:19 PM, "Nick Jones"  wrote:

>Hi.
>
>> I am investigating how to help move godaddy from rpms to a container-like 
>> solution (virtualenvs, lxc, or docker...) and a set of questions that comes 
>> up is the following (and I would think that some folks on this mailing list 
>> may have some useful insight into the answers):
>
>I’ve been mulling this over for a while as well, and although we’re not yet 
>there I figured I might as well chip in with my .2p all the same.
>
>> * Have you done the transition?
>
>Not yet!
>
>> * Was/is kolla used or looked into? or something custom?
>
>We’re looking at deploying Docker containers from images that have been 
>created using Puppet.  We’d also use Puppet to manage the orchestration, i.e. 
>to make sure a given container is running in the right place and using the 
>correct image ID.  Containers would comprise discrete OpenStack service 
>‘composables’, i.e. a container on a control node running the core nova 
>services (nova-api, nova-scheduler, nova-compute, and so on), one running 
>neutron-server, one for keystone, etc.  Nothing unusual there.
>
>The workflow would be something like:
>
>1. Developer generates / updates configuration via Puppet and builds a new 
>image;
>2. Image is uploaded into a private Docker image registry.  Puppet handles 
>deploying a container from this new image ID;
>3. New container is deployed into a staging environment for testing;
>4. Assuming everything checks out, Puppet again handles deploying an updated 
>container into the production environment on the relevant hosts.
>
>I’m simplifying things a little but essentially that’s how I see this hanging 
>together.
>
>> * What was the roll-out strategy to achieve the final container solution?
>
>We’d do this piecemeal, and so containerise some of the ‘safer’ components 
>first of all (such as Horizon) to make sure this all hangs together.  
>Eventually we’d have all of our core OpenStack services on the control nodes 
>isolated and running in containers, and then work on this approach for the 
>rest of the platform.
>
>Would love to hear from other operators as well as to their experience and 
>conclusions.
>
>— 
>
>-Nick
>-- 
>DataCentred Limited registered in England and Wales no. 05611763
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Robert Starmer
That's effectively my understanding.

On Fri, May 13, 2016 at 9:51 AM, Matthew Thode 
wrote:

> On 05/13/2016 01:59 PM, Joshua Harlow wrote:
> > Matthew Thode wrote:
> >> On 05/13/2016 12:48 PM, Joshua Harlow wrote:
> > * Was/is kolla used or looked into? or something custom?
> >
>  Openstack-ansible, which is Openstack big-tent.  It used to be
>  os-ansible-deployment in stackforge, but we've removed the
>  rackspacisms.
> I will say that openstack-ansible is one of the few that have been
>  doing upgrades reliably for a while, since at least Icehouse, maybe
>  further.
> >>> What's the connection between 'openstack-ansible' and 'kolla', is there
> >>> any (or any in progress?)
> >>>
> >>
> >> The main difference is that openstack-ansible uses more heavy weight
> >> containers from a common base (ubuntu 14.04 currently, 16.04/cent
> >> 'soon'), it then builds on top of that, uses python virtualenvs as well.
> >>   Kolla on the other hand creates the container images centrally and
> >> ships them around.
> >
> > So I guess it's like the following (correct me if I am wrong):
> >
> > openstack-ansible
> > -
> >
> > 1. Sets up LXC containers from common base on deployment hosts (ansible
> > here to do this)
> > 2. Installs things into those containers (virtualenvs, packages, git
> > repos, other ... more ansible)
> > 3. Connects all the things together (more more ansible).
> > 4. Decommissions existing container (if it exists) and replaces with new
> > container (more more more ansible).
> > 5. <>
> >
>
> More or less.  We do in-place upgrades, so long-lived containers, but
> could just as easily destroy and replace.
>
> > kolla
> > -
> >
> > 1. Builds up (installing things and such) *docker* containers outside of
> > deployment hosts (say inside jenkins) [not ansible]
> > 2. Ships built up containers to *a* docker hub
> > 3. Ansible then runs commands on deployment hosts to download image from
> > docker hub
> > 4. Connects all the things together (more ansible).
> > 5. Decommissions existing container (if it exists) and replaces with new
> > container (more more ansible).
> > 6. <>
> >
> > Yes the above is highly simplistic, but just trying to get a feel for
> > the different base steps here ;)
> >
>
> I think so? Not sure, as I don't work with kolla
>
> >>
> >> The other thing to note is that Kolla has not done a non-greenfield
> >> upgrade as far as I know; I know it's on their roadmap, though.
> >>
>
>
> --
> -- Matthew Thode (prometheanfire)
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Matthew Thode
On 05/13/2016 01:59 PM, Joshua Harlow wrote:
> Matthew Thode wrote:
>> On 05/13/2016 12:48 PM, Joshua Harlow wrote:
> * Was/is kolla used or looked into? or something custom?
>
 Openstack-ansible, which is Openstack big-tent.  It used to be
 os-ansible-deployment in stackforge, but we've removed the
 rackspacisms.
I will say that openstack-ansible is one of the few that have been
 doing upgrades reliably for a while, since at least Icehouse, maybe
 further.
>>> What's the connection between 'openstack-ansible' and 'kolla', is there
>>> any (or any in progress?)
>>>
>>
>> The main difference is that openstack-ansible uses more heavy weight
>> containers from a common base (ubuntu 14.04 currently, 16.04/cent
>> 'soon'), it then builds on top of that, uses python virtualenvs as well.
>>   Kolla on the other hand creates the container images centrally and
>> ships them around.
> 
> So I guess it's like the following (correct me if I am wrong):
> 
> openstack-ansible
> -
> 
> 1. Sets up LXC containers from common base on deployment hosts (ansible
> here to do this)
> 2. Installs things into those containers (virtualenvs, packages, git
> repos, other ... more ansible)
> 3. Connects all the things together (more more ansible).
> 4. Decommissions existing container (if it exists) and replaces with new
> container (more more more ansible).
> 5. <>
> 

More or less.  We do in-place upgrades, so long-lived containers, but
could just as easily destroy and replace.

> kolla
> -
> 
> 1. Builds up (installing things and such) *docker* containers outside of
> deployment hosts (say inside jenkins) [not ansible]
> 2. Ships built up containers to *a* docker hub
> 3. Ansible then runs commands on deployment hosts to download image from
> docker hub
> 4. Connects all the things together (more ansible).
> 5. Decommissions existing container (if it exists) and replaces with new
> container (more more ansible).
> 6. <>
> 
> Yes the above is highly simplistic, but just trying to get a feel for
> the different base steps here ;)
> 

I think so? Not sure, as I don't work with kolla

>>
>> The other thing to note is that Kolla has not done a non-greenfield
>> upgrade as far as I know; I know it's on their roadmap, though.
>>


-- 
-- Matthew Thode (prometheanfire)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Joshua Harlow

Matthew Thode wrote:

On 05/13/2016 12:48 PM, Joshua Harlow wrote:

* Was/is kolla used or looked into? or something custom?


Openstack-ansible, which is Openstack big-tent.  It used to be
os-ansible-deployment in stackforge, but we've removed the rackspacisms.
   I will say that openstack-ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe
further.

What's the connection between 'openstack-ansible' and 'kolla', is there
any (or any in progress?)



The main difference is that openstack-ansible uses more heavy weight
containers from a common base (ubuntu 14.04 currently, 16.04/cent
'soon'), it then builds on top of that, uses python virtualenvs as well.
  Kolla on the other hand creates the container images centrally and
ships them around.


So I guess it's like the following (correct me if I am wrong):

openstack-ansible
-

1. Sets up LXC containers from common base on deployment hosts (ansible 
here to do this)
2. Installs things into those containers (virtualenvs, packages, git 
repos, other ... more ansible)

3. Connects all the things together (more more ansible).
4. Decommissions existing container (if it exists) and replaces with new 
container (more more more ansible).

5. <>

kolla
-

1. Builds up (installing things and such) *docker* containers outside of 
deployment hosts (say inside jenkins) [not ansible]

2. Ships built up containers to *a* docker hub
3. Ansible then runs commands on deployment hosts to download image from 
docker hub

4. Connects all the things together (more ansible).
5. Decommissions existing container (if it exists) and replaces with new 
container (more more ansible).

6. <>

Yes the above is highly simplistic, but just trying to get a feel for 
the different base steps here ;)
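To make the ordering of those steps concrete, the flow can be modeled with plain files: a directory stands in for the image registry and a file for the running container. The image name and tag are invented, and this is only a sketch of the sequence, not how Kolla actually works:

```shell
set -e
reg=/tmp/kolla-demo/registry; host=/tmp/kolla-demo/host
mkdir -p "$reg" "$host"
tag="nova-api:2.0.1"               # hypothetical image tag from a CI build
echo "$tag" > "$reg/nova-api"      # steps 1-2: build centrally, push to registry
cp "$reg/nova-api" "$host/pulled"  # step 3: deployment host pulls the image
rm -f "$host/running"              # step 5: decommission the old container...
cp "$host/pulled" "$host/running"  # ...and start the replacement from the new image
cat "$host/running"
```

The key property the sketch captures is that the image is built once, centrally, and hosts only ever pull and swap; nothing is compiled or installed on the deployment hosts themselves.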




The other thing to note is that Kolla has not done a non-greenfield
upgrade as far as I know; I know it's on their roadmap, though.



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Joshua Harlow

Matthew Thode wrote:

On 05/12/2016 04:04 PM, Joshua Harlow wrote:

Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

* Have you done the transition?



We've been using openstack-ansible since it existed; it's working well
for us.


* How did the transition go?



It can be painful, but it's worked out in the long run.


* Was/is kolla used or looked into? or something custom?



Openstack-ansible, which is Openstack big-tent.  It used to be
os-ansible-deployment in stackforge, but we've removed the rackspacisms.
  I will say that openstack-ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe further.


What's the connection between 'openstack-ansible' and 'kolla', is there 
any (or any in progress?)





* How long did it take to do the transition from a package based
solution (with say puppet/chef being used to deploy these packages)?

   * Follow-up being how big was the team to do this?


Our team was somewhat bigger than most, as we have many deployments and
we had to do it from scratch.  You CAN do it solo, but I'd recommend
you have coverage / on-call for whatever your requirements are.


* What was the roll-out strategy to achieve the final container solution?



For Openstack-ansible I'd recommend deploying a service at a time,
migrating piecemeal.  You can migrate to the same release as you are on
(I hope), though I'd recommend kilo or greater as upgrades can get
annoying after a while.


Any other feedback (and/or questions that I missed)?

Thanks,

Josh






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-13 Thread Joshua Harlow

Steven Dake (stdake) wrote:


On 5/12/16, 2:04 PM, "Joshua Harlow"  wrote:


Hi there all-ye-operators,

I am investigating how to help move godaddy from rpms to a
container-like solution (virtualenvs, lxc, or docker...) and a set of
questions that comes up is the following (and I would think that some
folks on this mailing list may have some useful insight into the answers):

* Have you done the transition?

* How did the transition go?

* Was/is kolla used or looked into? or something custom?

* How long did it take to do the transition from a package based
solution (with say puppet/chef being used to deploy these packages)?

   * Follow-up being how big was the team to do this?


I know I am not an operator, but to respond on this particular point
related to the Kolla question above, I think the team size could be very
small and still effective.  You would want 24 hour coverage of your data
center, and a backup individual, which puts the IC list at 4 people. (3 8
hour shifts + 1 backup in case of illness/etc).  Expect these folks to
require other work, as once Kolla is deployed there isn't a whole lot to
do.  A 64 node cluster is deployable by one individual in 1-2 hours once
the gear has been racked.  Realistically if you plan to deploy Kolla I'd
expect that individual to want to train for 3-6 weeks deploying over and
over to get a feel for the Kolla workflow.  Try it, I suspect you will
like it :)


Thanks for the info and/or estimates, but before I dive too far in I have 
a question. I see that the following has links to how the different 
services run under kolla:


http://docs.openstack.org/developer/kolla/#kolla-services

But one that seems missing from this list is what I would expect to be 
the more complicated one, that being nova-compute (and libvirt and kvm). 
Are there any secret docs on that (since I would assume it'd be the most 
problematic to get right)?




If you had less rigorous constraints around availability than I'd expect
GoDaddy to have, a Kolla deployment could likely be managed with as little
as half a person or less.  Everything including upgrades is automated.



Along this line, do people typically plug the following into a local 
jenkins system?


http://docs.openstack.org/developer/kolla/quickstart.html#building-container-images

Any docs on how people typically incorporate jenkins into the kolla 
workflow (I assume they do?) anywhere?



Regards
-steve


* What was the roll-out strategy to achieve the final container solution?

Any other feedback (and/or questions that I missed)?

Thanks,

Josh





___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Robert Starmer
I'm working with a customer to define and manage a transition to what is
currently anticipated to be a container-based solution for OpenStack
services.  The focus on containers is to simplify the middleware deployment
of both OpenStack services and other services that are deployed to enable
the overall provider's cloud environment.

On Thu, May 12, 2016 at 11:04 AM, Joshua Harlow 
wrote:

> Hi there all-ye-operators,
>
> I am investigating how to help move godaddy from rpms to a container-like
> solution (virtualenvs, lxc, or docker...) and a set of questions that comes
> up is the following (and I would think that some folks on this mailing list
> may have some useful insight into the answers):
>
> * Have you done the transition?
>

Not yet.  Still investigating and modeling the transition and deployment of
non-openstack services.

>
> * How did the transition go?
>

We hope for it to be very smooth :)

>
> * Was/is kolla used or looked into? or something custom?
>

Our principal target is Kolla, along with the Kolla/OSAD model for other
services being deployed alongside the rest of OpenStack. I expect Kolla to
be our final solution, as it also appears to map well into a CI approach we
want to leverage for all future deployments.


>
> * How long did it take to do the transition from a package based solution
> (with say puppet/chef being used to deploy these packages)?
>
The expectation is that the actual transition will be automated, but that
we'll be rolling this solution out in ~6 months time.


>
>   * Follow-up being how big was the team to do this?
>
Leveraging the Kolla team? -> Huge :).  We are currently planning on a 3
person team working on this, though not all full time, and we're also
looking at the CI services, and mapping other services into containers.


>
> * What was the roll-out strategy to achieve the final container solution?
>
TBD.  But we'll be doing greenfield first, and working back into the
brownfield active-system transition over time.  We do have NFS-backed
instance storage, so migration of VMs is possible if it becomes necessary
to migrate the live system.  The control system is also HA-capable, so it
_should_ be possible to migrate services into containers and keep the
system online.  We'll see how that all maps out in the lab first though.


> Any other feedback (and/or questions that I missed)?
>

I do think that now is the time to do this transition, and am looking
forward to supporting this journey!


> Thanks,
>
> Josh
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Matthew Thode
On 05/12/2016 04:04 PM, Joshua Harlow wrote:
> Hi there all-ye-operators,
> 
> I am investigating how to help move godaddy from rpms to a
> container-like solution (virtualenvs, lxc, or docker...) and a set of
> questions that comes up is the following (and I would think that some
> folks on this mailing list may have some useful insight into the answers):
> 
> * Have you done the transition?
> 

We've been using openstack-ansible since it existed; it's working well
for us.

> * How did the transition go?
> 

It can be painful, but it's worked out in the long run.

> * Was/is kolla used or looked into? or something custom?
> 

Openstack-ansible, which is Openstack big-tent.  It used to be
os-ansible-deployment in stackforge, but we've removed the rackspacisms.
 I will say that openstack-ansible is one of the few that have been
doing upgrades reliably for a while, since at least Icehouse, maybe further.

> * How long did it take to do the transition from a package based
> solution (with say puppet/chef being used to deploy these packages)?
> 
>   * Follow-up being how big was the team to do this?

Our team was somewhat bigger than most, as we have many deployments and
we had to do it from scratch.  You CAN do it solo, but I'd recommend
you have coverage / on-call for whatever your requirements are.

> 
> * What was the roll-out strategy to achieve the final container solution?
> 

For Openstack-ansible I'd recommend deploying a service at a time,
migrating piecemeal.  You can migrate to the same release as you are on
(I hope), though I'd recommend kilo or greater as upgrades can get
annoying after a while.
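The piecemeal approach reduces to an ordered loop over services: migrate one, verify it, then move on. The particular order below (lowest-risk first) is an illustrative assumption, not a recommendation from this thread:

```shell
# A toy migration plan: one service per line, processed in order.
set -e
printf '%s\n' horizon glance keystone neutron nova > /tmp/rollout-order
while read -r svc; do
  echo "migrate ${svc} to openstack-ansible, verify, then continue"
done < /tmp/rollout-order
```

Keeping the plan as an explicit ordered list makes it easy to pause after any service and fall back to the package-based deployment for everything not yet migrated.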

> Any other feedback (and/or questions that I missed)?
> 
> Thanks,
> 
> Josh
> 


-- 
-- Matthew Thode (prometheanfire)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Carter, Kevin
Hi Josh,

Sorry for the double reply; I seem to be having ML bounce issues.

comments in-line,

On Thu, May 12, 2016 at 4:04 PM, Joshua Harlow  wrote:
>
> Hi there all-ye-operators,
>
> I am investigating how to help move godaddy from rpms to a container-like 
> solution (virtualenvs, lxc, or docker...) and a set of questions that comes 
> up is the following (and I would think that some folks on this mailing list 
> may have some useful insight into the answers):
>
> * Have you done the transition?
Back in the Havana timeframe I attempted a package-to-source
conversion and it was fugly. I was attempting to convert nodes
in-place and found that the distro packages were applying "value
added" out-of-tree patches, and those patches caused me no end of pain.

> * How did the transition go?
As mentioned, the transition was painful. While cleaning up packages
leaves all sorts of crufty bits on a host, the biggest problem I ran
into during that time period was related to the DB. Some of the distro
packages pulled in patches that added DB migrations, and those
migrations made moving to the OpenStack source hard. While the
conversion was a great learning experience, in the end I abandoned the
effort.

> * Was/is kolla used or looked into? or something custom?
Full disclosure, I work for Rackspace on the OpenStack-Ansible
project. The OSA project uses both containers (LXC) and python
virtual-environments. We do both because we want user, file system,
and process isolation which containers gives us and we want to isolate
OpenStack from the operating system which has python dependencies. The
idea is to allow the host operating system to do what it does best and
keep OpenStack as far away from it as possible. This has some great
advantages, which we've seen in our ability to scale services
independently of a given host while keeping them fully isolated. In the
end our solution is a hybrid one, as we run on metal for
Cinder-Volume (when using the reference LVM driver), Nova-Compute
(when using Linux+KVM), and Swift-.* (except proxies).

> * How long did it take to do the transition from a package based solution 
> (with say puppet/chef being used to deploy these packages)?
I can't remember specifically, but I think I worked on the effort for ~2
weeks before I decided it wasn't worth continuing. If I were to try it
all again I'd likely have a better time today than I did then, but I
still think it'd be a mess. My basic plan of attack today would be to
add nodes to the environment using the new infrastructure and slowly
decommission the old deployment. You'll likely need to identify the
point in time your distro packages currently are at and take stock of
all of the patches they may have applied. Then, with all of that fully
understood, you will need to deploy a version of OpenStack just ahead
of your release, which "should" pull the environment in line with the
community.

>   * Follow-up being how big was the team to do this?
It was just me, I was working on a 15+ node lab.

> * What was the roll-out strategy to achieve the final container solution?
My team at Rackspace created the initial release of what is now known
as the OpenStack-Ansible project. In our evaluation of container
technologies we found Docker to be a cute tool that was incapable of
doing what we needed while remaining stable/functional, so we went with
LXC. Having run LXC for a few years now, 2 of which have been with
production workloads, I've been very happy with the results. This is
our basic reference architecture: [
http://docs.openstack.org/developer/openstack-ansible/install-guide/overview-hostlayout.html
]. Regardless of your choice of container technology, you're going to
have to deal with some limitations. Before going full "container all
the things" I'd look into what your current deployment needs are and
see if there are known limitations that will cause you major headaches
(like AF_NETLINK not being namespace aware, making it impossible to
mount an iSCSI target within a container
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855 unless you
drop the network namespace).
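For reference, dropping the network namespace in LXC (so the container
shares the host's netlink/iSCSI plumbing) is a one-line container
config change; the snippet below uses the LXC 1.x syntax:

```
# /var/lib/lxc/<name>/config
# Share the host network namespace: no veth, no private netns, so
# netlink-dependent tooling (e.g. iscsiadm) behaves as it does on metal.
lxc.network.type = none
```

The trade-off is obvious: that container no longer has any network
isolation from the host.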

> Any other feedback (and/or questions that I missed)?
  * Dependency problems are still problems in every container
technology. Having a reliable build environment or a package mirror is
still a must.
  * Dockerfiles are not packages, no matter how many Docker people
tell you they are. However, a Dockerfile is a really nice way to
express a container runtime, and if you treat it like a runtime
expression engine and stay within those lines it works rather well.
  * OverlayFS has had some problems with various under-mounts
(example: https://bugzilla.redhat.com/show_bug.cgi?id=1319507). We're
using LVM + EXT4 for the container root devices, which I've not seen
reliability issues with.
  * BTRFS may not be a good solution either (see the gotchas for more
on that https://btrfs.wiki.kernel.org/index.php/Gotchas)
  * ZFS on linux looks really promising and has a 
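Picking up the Dockerfile point above: used as a runtime expression
backed by a controlled mirror rather than upstream, a Dockerfile stays
within those lines. Everything here (mirror hostname, package, user) is
illustrative:

```dockerfile
# Hypothetical sketch: the image expresses *how the service runs*;
# the software itself comes from an in-house mirror so builds are
# reproducible even when upstream changes.
FROM ubuntu:14.04
RUN sed -i 's|archive.ubuntu.com|mirror.example.com|g' /etc/apt/sources.list \
 && apt-get update && apt-get install -y nova-api
USER nova
CMD ["nova-api", "--config-file", "/etc/nova/nova.conf"]
```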

Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Joseph Bajin
On Thu, May 12, 2016 at 5:04 PM, Joshua Harlow 
wrote:

> Hi there all-ye-operators,
>
> I am investigating how to help move godaddy from rpms to a container-like
> solution (virtualenvs, lxc, or docker...) and a set of questions that comes
> up is the following (and I would think that some folks on this mailing list
> may have some useful insight into the answers):
>
> * Have you done the transition?
>
We've done the transition to containers using both RPMs as well as
source code.  We started out with just putting the RPMs into the
container.  We then moved to building the containers from source.
There has been a bit of a change in direction that is requiring us to
go back to RPMs, which was a simple flip to make.

The biggest thing we had to think about was the configuration files.  We
wanted them to be as easy and clean as possible.  We didn't want to keep
creating tons of container images for all the different environments.  At
the end of the day, we realized that we could use ETCD to allow us to use
environment variables to make configuration changes very easy.
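A minimal sketch of that pattern (not our production script; option
names and defaults are illustrative): the container entrypoint renders
the service config from environment variables, with the values
themselves populated from etcd (e.g. by confd or a wrapper) before
start-up, so one image serves every environment.

```shell
#!/bin/sh
# Defaults let the same image come up unconfigured in a dev sandbox.
: "${DEBUG:=False}"
: "${RABBIT_HOST:=127.0.0.1}"

render_config() {
    cat <<EOF
[DEFAULT]
debug = ${DEBUG}
rabbit_host = ${RABBIT_HOST}
EOF
}

# In a real entrypoint this would be redirected into
# /etc/<service>/<service>.conf before exec'ing the service.
render_config
```

Changing an environment then becomes a matter of changing keys in etcd
and restarting the container, not rebuilding the image.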


>
> * How did the transition go?
>
It was very easy for us to move between RPMs on the host and containers.
We started off with one project, worked through that, and proceeded on
to the next.  We were easily able to mix and match between RPMs on the
host and new containers.  Our automation proved to be very useful in
making things easier (obviously).



>
> * Was/is kolla used or looked into? or something custom?
>
We started down this process way before kolla was out there and running, so
it would take a lot for us to move over to kolla as we have a pretty
detailed deployment setup.


>
> * How long did it take to do the transition from a package based solution
> (with say puppet/chef being used to deploy these packages)?
>

It took a week or two honestly.  It is a lot easier than you think.  Just
take your current configuration file, and put it inside the container and
run it and see what happens.  That was the easiest way to get started and
see how they act within your environment.


>
>   * Follow-up being how big was the team to do this?
>

* What was the roll-out strategy to achieve the final container solution?
>

We use Ansible along with docker-compose to do all our deployments.
We use it to talk with HAProxy to take the service out of rotation, wait
for it to drain, take the container down, load the new container, start it
up, run a few test cases to ensure the container is doing what it should be
doing, and then put it back into rotation via HAProxy.
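A minimal sketch of that cycle (task names, backend names, ports, and
the smoke test are all hypothetical, not our actual playbooks):

```yaml
# Rolling container replacement behind HAProxy, one host at a time.
- hosts: api_nodes
  serial: 1
  tasks:
    - name: Take the backend out of rotation
      haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: glance-api
        wait: yes            # wait for the status change to land
    - name: Replace the container
      command: docker-compose -f /etc/compose/glance.yml up -d --force-recreate
    - name: Smoke-test the new container
      command: "curl -fsS http://{{ inventory_hostname }}:9292/healthcheck"
    - name: Put the backend back into rotation
      haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: glance-api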


>
> Any other feedback (and/or questions that I missed)?
>
One thing we realized is that you have to be using host-based networking.
Do not try to run the containers using the Docker networking that is built
in. You will get some weird results. We seemed to solve all the weirdness
when we moved everything over to host-based networking.
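Concretely, that means starting the service containers with the host's
network stack rather than a Docker bridge (image and registry names
here are hypothetical):

```
docker run -d --net=host --name nova-api \
    registry.example.com/openstack/nova-api:latest
```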

We are beginning to work on doing compute nodes and gateway nodes.  Since
those don't change as often as controller functions do, we gained a lot of
efficiency and speed for deployments by moving to containers.

We have started to look at deploying via Kubernetes.  We have it working in
our lab for a while now, but we are still trying to get familiar with it
before we start trying to use it in production.



> Thanks,
>
> Josh
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Steven Dake (stdake)


On 5/12/16, 2:04 PM, "Joshua Harlow"  wrote:

>Hi there all-ye-operators,
>
>I am investigating how to help move godaddy from rpms to a
>container-like solution (virtualenvs, lxc, or docker...) and a set of
>questions that comes up is the following (and I would think that some
>folks on this mailing list may have some useful insight into the answers):
>
>* Have you done the transition?
>
>* How did the transition go?
>
>* Was/is kolla used or looked into? or something custom?
>
>* How long did it take to do the transition from a package based
>solution (with say puppet/chef being used to deploy these packages)?
>
>   * Follow-up being how big was the team to do this?

I know I am not an operator, but to respond on this particular point
related to the Kolla question above, I think the team size could be very
small and still effective.  You would want 24 hour coverage of your data
center, and a backup individual, which puts the IC list at 4 people. (3 8
hour shifts + 1 backup in case of illness/etc).  Expect for these folks to
require other work, as once Kolla is deployed there isn't a whole lot to
do.  A 64 node cluster is deployable by one individual in 1-2 hours once
the gear has been racked.  Realistically if you plan to deploy Kolla I'd
expect that individual to want to train for 3-6 weeks deploying over and
over to get a feel for the Kolla workflow.  Try it, I suspect you will
like it :)

If you had less rigorous constraints around availability than I'd expect
Godaddy to have, a Kolla deployment could likely be managed with as little
as half a person or less.  Everything, including upgrades, is automated.
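For context, the day-to-day Kolla workflow Steve describes boils down
to a handful of commands (the inventory path is illustrative, and
details vary by release; check the Kolla docs for your version):

```
# Sanity-check the hosts, then deploy the whole cluster
kolla-ansible -i /etc/kolla/inventory/multinode prechecks
kolla-ansible -i /etc/kolla/inventory/multinode deploy

# Later: pull new images, then roll the upgrade across the cluster
kolla-ansible -i /etc/kolla/inventory/multinode pull
kolla-ansible -i /etc/kolla/inventory/multinode upgrade
```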

Regards
-steve

>
>* What was the roll-out strategy to achieve the final container solution?
>
>Any other feedback (and/or questions that I missed)?
>
>Thanks,
>
>Josh
>




Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Nick Jones
Hi.

> I am investigating how to help move godaddy from rpms to a container-like 
> solution (virtualenvs, lxc, or docker...) and a set of questions that comes 
> up is the following (and I would think that some folks on this mailing list 
> may have some useful insight into the answers):

I’ve been mulling this over for a while as well, and although we’re not yet 
there I figured I might as well chip in with my .2p all the same.

> * Have you done the transition?

Not yet!

> * Was/is kolla used or looked into? or something custom?

We’re looking at deploying Docker containers from images that have been created 
using Puppet.  We’d also use Puppet to manage the orchestration, i.e. to make 
sure a given container is running in the right place and using the correct 
image ID.  Containers would comprise discrete OpenStack service ‘composables’, 
i.e. a container on a control node running the core nova services (nova-api, 
nova-scheduler, nova-compute, and so on), one running neutron-server, one for 
keystone, etc.  Nothing unusual there.

The workflow would be something like:

1. Developer generates / updates configuration via Puppet and builds a new 
image;
2. Image is uploaded into a private Docker image registry.  Puppet handles 
deploying a container from this new image ID;
3. New container is deployed into a staging environment for testing;
4. Assuming everything checks out, Puppet again handles deploying an updated 
container into the production environment on the relevant hosts.
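Steps 1, 2 and 4 of the workflow above might look roughly like this on
the build host and a control node (the registry name and tag scheme are
illustrative):

```
# 1. Build an image whose contents were laid down by Puppet
docker build -t registry.example.com/openstack/keystone:2016-05-12 .

# 2. Push it to the private registry
docker push registry.example.com/openstack/keystone:2016-05-12

# 4. On the target host, Puppet ensures the container runs that exact image
docker pull registry.example.com/openstack/keystone:2016-05-12
docker run -d --net=host --name keystone \
    registry.example.com/openstack/keystone:2016-05-12
```

Pinning deployments to an explicit image tag (rather than `latest`) is
what lets Puppet assert "the right image ID is running here."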

I’m simplifying things a little but essentially that’s how I see this hanging 
together.

> * What was the roll-out strategy to achieve the final container solution?

We’d do this piecemeal, and so containerise some of the ‘safer’ components 
first of all (such as Horizon) to make sure this all hangs together.  
Eventually we’d have all of our core OpenStack services on the control nodes 
isolated and running in containers, and then work on this approach for the rest 
of the platform.

Would love to hear from other operators as well as to their experience and 
conclusions.

— 

-Nick
-- 
DataCentred Limited registered in England and Wales no. 05611763
