Re: [openstack-dev] OpenStack Operators Midcycle

2017-07-14 Thread Melvin Hillsman
Just a friendly reminder for those who are waking up on the other side of the
globe. Please take a moment to put some ideas for sessions in the etherpad
- https://etherpad.openstack.org/p/MEX-ops-meetup

Even if you are not able to make the midcycle, please take time to propose
sessions.

On Wed, Jul 5, 2017 at 10:32 PM, Melvin Hillsman 
wrote:

> Just a friendly reminder for those who are waking up on the other side of
> the globe. Please take a moment to put some ideas for sessions in the
> etherpad - https://etherpad.openstack.org/p/MEX-ops-meetup
>
> Even if you are not able to make the midcycle, please take time to propose
> sessions.
>
> On Mon, Jul 3, 2017 at 7:40 AM, Melvin Hillsman 
> wrote:
>
>> Hey everyone,
>>
>> Happy 4th of July Eve! We have the upcoming Operator Midcycle in Mexico
>> City, Mexico August 9-10th, 2017. Sign up via Eventbrite now*, space is
>> limited* - https://www.eventbrite.com/e/mexico-city-openstack-operators-meetup-tickets-34989052197
>>
>> We are taking session ideas and need your help generating them. Right now
>> we have Kubernetes, Containers, NFV, and a few more -
>> https://etherpad.openstack.org/p/MEX-ops-meetup
>>
>> --
>> Kind regards
>>
>> OpenStack User Committee
>> (Edgar, Jon, Melvin, Shamail, Shilla)
>>
>
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhills...@gmail.com
> mobile: (832) 264-2646
>
> Learner | Ideation | Belief | Responsibility | Command
>



-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646

Learner | Ideation | Belief | Responsibility | Command
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-14 Thread Samuel Cassiba
On Jul 14, 2017, at 14:10, Ed Leafe  wrote:
> 
> On Jul 14, 2017, at 2:17 PM, Zane Bitter  wrote:
> 
>> * The pool of OpenStack developers is a fixed resource, and if we make it 
>> clear that some projects are unwelcome then their developers will be 
>> reassigned to 'core' projects in a completely zero-sum process. (Nnope.)
> 
> Yeah, I’ve heard this many times, and always shake my head. If I want to work 
> on X, and X is not in OpenStack governance, I’m going to work on that anyway 
> because I need it. Or maybe on a similar project. I’m going to scratch my 
> itch.
> 
>> * While code like e.g. the Nova scheduler might be so complicated today that 
>> even the experts routinely complain about its terrible design,[1] if only we 
>> could add dozens more cooks (see above) it would definitely get much simpler 
>> and easier to maintain. (Bwahahahahahahaha.)
> 
> No, they need to appoint me as the Scheduler Overlord with the power to smite 
> all those who propose complicated code!
> 
>> * Once we make it clear to users that under no circumstances will we ever 
>> e.g. provide them with notifications about when a server has failed, ways to 
>> orchestrate a replacement, and an API to update DNS to point to the new one, 
>> then they will finally stop demanding bloat-inducing VMWare/oVirt-style 
>> features that enable them to treat cloud servers like pets. (I. don't. even.)
> 
> Again, itches will be scratched. What I think is more important is a 
> marketing issue, not a technical one. When I think of what it means to be a 
> “core” project, I think of things that people looking to “get cloudy” would 
> likely want. It isn’t until you start using a cloud that the additional 
> projects you mention become important. So simplifying what is presented to 
> the cloud market is a good thing, as it won’t confuse people as to what 
> OpenStack is. But that doesn’t require any of the other projects be stopped 
> or in any way discouraged.
> 
> -- Ed Leafe
> 
> 
> 
> 
> 


Chiming in from the believed-to-be-dead Chef project, I work on it because it 
scratches my itch. I served as PTL because it did and does scratch my itch. 
Working on it in any capacity that moves things forward continues to scratch 
that itch. We have less of a technical problem, not to downplay our tech debt, 
as we’re still pushing patches and shuffling reviews. However, we have a huge 
perception problem and an equally large marketing problem, which is apparently an 
unwritten side job of being a PTL. We didn’t get that memo until the Big Tent 
was deemed too smothering. The fun part about being a PTL with effectively no 
team is that, when you or your counterpart isn’t actively marketing and 
spending more time making noise than working, people call you dead to your 
face. Even when you spend the time and money to go to marketing events.

--
Best,

Samuel Cassiba





Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-14 Thread feilong

On 15/07/17 03:45, Ed Leafe wrote:

On Jul 13, 2017, at 10:32 PM, Fei Long Wang  wrote:


I agree with Zane for most of the parts. But one thing I don't really
understand is why the OpenStack community is still confused about IaaS,
PaaS and SaaS. Does the classification really matter nowadays? Do we
really need a label/tag for OpenStack to limit it to IaaS, PaaS or
SaaS? I never see AWS say it's an IaaS, PaaS or SaaS. Did Azure or
Google Cloud say that? I think they're just providing the services their
customers want.

Sure, they may not distinguish those things publicly, but in their internal 
development teams it is very likely that they understand these boundaries. And 
just for another quick example, from the Azure site:

https://azure.microsoft.com/en-us/overview/azure-vs-aws/

"We are the only cloud provider recognized in the industry as having leading 
solutions in IaaS, PaaS, and SaaS. And Azure PaaS platform services can help you be 
more productive and increase your ROI according to this Forrester Total Economic 
Impact study."

So I don’t think that this distinction is peculiar to OpenStack.


Yep, there is no conflict between you and me. It's good to understand 
the XaaS boundaries within the OpenStack community; for example, there is no 
argument that Zaqar is a PaaS-layer service. What I'm saying is: is it 
really necessary to limit OpenStack to IaaS only?


I can see the contradiction happening in the OpenStack community. OpenStack 
is successful at the IaaS layer as a de facto open standard, so we'd 
like to grow to achieve a bigger success. But we're running into some trouble 
because we're moving too fast. Then some people say we should stop 
growing the scope of OpenStack and just focus on IaaS. However, that 
shouldn't be an excuse to stop building a better ecosystem around 
OpenStack. If the classification really matters, then we can define 
(we already have?) the core services at the IaaS layer. But I still want to 
see some upper-layer services within the OpenStack community, such as LBaaS, 
DNSaaS, MQaaS, etc.


Just my $0.02



-- Ed Leafe








--
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--




Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-14 Thread Ed Leafe
On Jul 14, 2017, at 2:17 PM, Zane Bitter  wrote:

> * The pool of OpenStack developers is a fixed resource, and if we make it 
> clear that some projects are unwelcome then their developers will be 
> reassigned to 'core' projects in a completely zero-sum process. (Nnope.)

Yeah, I’ve heard this many times, and always shake my head. If I want to work 
on X, and X is not in OpenStack governance, I’m going to work on that anyway 
because I need it. Or maybe on a similar project. I’m going to scratch my itch.

> * While code like e.g. the Nova scheduler might be so complicated today that 
> even the experts routinely complain about its terrible design,[1] if only we 
> could add dozens more cooks (see above) it would definitely get much simpler 
> and easier to maintain. (Bwahahahahahahaha.)

No, they need to appoint me as the Scheduler Overlord with the power to smite 
all those who propose complicated code!

> * Once we make it clear to users that under no circumstances will we ever 
> e.g. provide them with notifications about when a server has failed, ways to 
> orchestrate a replacement, and an API to update DNS to point to the new one, 
> then they will finally stop demanding bloat-inducing VMWare/oVirt-style 
> features that enable them to treat cloud servers like pets. (I. don't. even.)


Again, itches will be scratched. What I think is more important is a marketing 
issue, not a technical one. When I think of what it means to be a “core” 
project, I think of things that people looking to “get cloudy” would likely 
want. It isn’t until you start using a cloud that the additional projects you 
mention become important. So simplifying what is presented to the cloud market 
is a good thing, as it won’t confuse people as to what OpenStack is. But that 
doesn’t require any of the other projects be stopped or in any way discouraged.

-- Ed Leafe







[openstack-dev] [all] Proposal to change integrated neutron grenade gate job to multi-node

2017-07-14 Thread Brian Haley

Hi,

While looking at ways to reduce the number of jobs we run in the Neutron 
gate, I found we ran two very similar jobs for some projects:


gate-grenade-dsvm-neutron-ubuntu-xenial (single-node job)
gate-grenade-dsvm-neutron-multinode-ubuntu-xenial (2-node job)

We talked about this in the Neutron CI meeting this week [1] and felt it 
best to remove the single-node job and just run the multi-node job, 
mostly because it more closely mimics a "real" Neutron deployment where there 
are separate controller and compute nodes.  Looking at the Neutron 
grafana dashboard [2] the two jobs have about the same failure rate in 
the gate (~0), so I don't think there will be any problems with the switch.
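For readers less familiar with how such a job swap is expressed, a hypothetical fragment of a Zuul v2 layout shows the shape of the change (the job names are from this thread; the project name and surrounding structure are illustrative, not the actual contents of the proposed review):

```yaml
# Illustrative sketch only -- not the real project-config change.
projects:
  - name: openstack/nova            # any project in the integrated gate
    gate:
      # - gate-grenade-dsvm-neutron-ubuntu-xenial            # dropped
      - gate-grenade-dsvm-neutron-multinode-ubuntu-xenial    # kept
```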


This has an impact on the integrated gate since it currently runs the 
single-node job, so I wanted to get thoughts on any issues they'd have 
with this change [3].


Thanks,

-Brian

[1] 
http://eavesdrop.openstack.org/meetings/neutron_ci/2017/neutron_ci.2017-07-11-16.00.log.html#l-112

[2] http://grafana.openstack.org/dashboard/db/neutron-failure-rate
[3] https://review.openstack.org/#/c/483600/



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Ben Nemec



On 07/14/2017 11:43 AM, Joshua Harlow wrote:
Out of curiosity, since I keep on hearing/reading all the tripleo 
discussions on how tripleo folks are apparently thinking/doing? 
redesigning the whole thing to use ansible + mistral + heat, or ansible 
+ kubernetes or ansible + mistral + heat + ansible (a second time!) or ...


Seeing all those kinds of questions and suggestions around what should 
be used and why and how (and even this thread) makes me really wonder 
who actually uses tripleo and can afford/understand such kinds of changes?


Does anyone?

If there are, is there going to be an upgrade 
path for their existing cloud(s) to whatever this solution is?


What operator(s) has the ability to do such a massive shift at this 
point in time? Who are these 'mystical' operators?


All this has really piqued my curiosity because I am personally trying 
to do that shift (not exactly the same solution...) and I know it is a 
massive undertaking (that will take quite a while to get right) even for 
a simple operator with limited needs out of OpenStack (i.e. GoDaddy); so I 
don't really understand how the generic solution for all existing 
tripleo operators can even work...


This is a valid point.  Up until now the answer has been that we 
abstracted most of the ugliness of major changes behind either Heat or 
tripleoclient.  If we end up essentially dropping those two in favor of 
some other method of driving deployments it's going to be a lot harder 
to migrate.  And I could be wrong, but I'm pretty sure it _is_ important 
to our users to have an in-place upgrade path (see the first bullet 
point in [1]).


New, shiny technology is great and all, but we do need to remember that 
we have a lot of users out there already depending on the old, 
not-so-shiny bits too.  They're not going to be happy if we leave them 
hanging.


1: http://lists.openstack.org/pipermail/openstack-dev/2017-June/119063.html



Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment,
OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these
tools and
I've come to the conclusion that TripleO would be better off by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm.
While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack
projects, I
believe using any of them would add an extra layer of complexity to
TripleO,
which is something the team has been fighting for years -
especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack communty would
require
TripleO to also write the logic to manage those projects. For example,
in the
case of openstack-helm, the TripleO team would have to write either 
ansible

roles or heat templates to manage - install, remove, upgrade - the
charts (I'm
happy to discuss this point further but I'm keeping it at a 
high-level on

purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO 
plans
around ansible. One take-away from this thread is that TripleO is 
adopting

ansible more and more, which is great and it fits perfectly with the
conclusion
I reached.

Now, what this work means is that we would have to write an ansible role
for
each service that will deploy the service on a Kubernetes cluster.
Ideally these
roles will also generate the configuration files (removing the need of
puppet
entirely) and they would manage the lifecycle. The roles would be
isolated and
this will reduce the need of TripleO Heat templates. Doing this would 
give

TripleO full control on the deployment process too.
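To make that concrete, one such role might boil down to a task list along these lines (a minimal sketch with hypothetical names and paths, not the actual roles being proposed):

```yaml
# Hypothetical tasks/main.yaml for an "openstack-keystone-k8s" role.
- name: Render keystone.conf from role variables (no puppet involved)
  template:
    src: keystone.conf.j2
    dest: "{{ config_dir }}/keystone.conf"

- name: Publish the rendered config to the cluster as a ConfigMap
  command: >
    kubectl create configmap keystone-config
    --from-file={{ config_dir }}/keystone.conf

- name: Apply the keystone Deployment manifest
  command: kubectl apply -f {{ role_path }}/files/keystone-deployment.yaml
```

Because each role would only own its own service's objects and config, roles like this could be consumed piecemeal by operators who don't want the full stack.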

In addition, we could also write Ansible Playbook Bundles to contain
these roles
and run them using the existing docker-cmd implementation that is coming
out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the
discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that
ansible is
a known, powerful tool that has been adopted by many operators
already. It'll
provide the flexibility needed and, if structured correctly, it'll allow
for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Fox, Kevin M
Part of the confusion I think is in the different ways helm can be used.

Helm can be used to orchestrate the deployment of a whole service (ex, nova). 
"launch these 3 k8s objects, template out this config file, run this job to 
init the db, or this job to upgrade the db, etc", all as a single unit.

It can also be used purely for its templating ability.

So, "render this single k8s object using these values".
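In templating mode a chart is little more than Go-templated manifests plus a values file; a minimal hypothetical example of "render this single k8s object using these values":

```yaml
# Hypothetical templates/configmap.yaml in a single-object chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-nova-conf
data:
  nova.conf: |
    [DEFAULT]
    debug = {{ .Values.debug }}
    transport_url = {{ .Values.transport_url }}
```

Rendering such a chart with `helm template` just substitutes the values and prints the object; no service-level orchestration is implied.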

This is one of the main differences between openstack-helm and kolla-kubernetes.

Openstack-helm has charts only for orchestrating the deployment of whole 
openstack services.

Kolla-kubernetes has taken a different track though. While it does use helm for 
its golang templater, it has taken a microservices approach to be shareable 
with other tools. So, each openstack process (nova-api, neutron-server, 
neutron-openvswitch-agent), etc, has its own chart and can be independently 
configured/placed as needed by an external orchestration system. 
Kolla-Kubernetes microservice charts are to Kubernetes what Kolla-Containers 
are to Docker. Reusable building blocks of known tested functionality and 
assemblable any way the orchestration system/user feels is in their best 
interest.
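As an illustration of what one of those per-process building blocks renders to, a keystone fernet-key rotation job might produce roughly this Kubernetes object (a hypothetical sketch, not the actual kolla-kubernetes chart output):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: keystone-fernet-rotate
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: fernet-rotate
          image: kolla/centos-binary-keystone:4.0.0   # image/tag illustrative
          command:
            - keystone-manage
            - fernet_rotate
            - --keystone-user=keystone
            - --keystone-group=keystone
          volumeMounts:
            - name: fernet-keys
              mountPath: /etc/keystone/fernet-keys
      volumes:
        - name: fernet-keys
          persistentVolumeClaim:
            claimName: keystone-fernet-keys
```

An external orchestrator (Ansible, Heat, or anything else) can create and schedule an object like this directly, without buying into Helm's service-level orchestration.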

This is why I think kolla-kubernetes would be a good fit for TripleO, as you 
can replace a single component at a time, however you want, using the config 
files you already have, and upgrade the system a piece at a time from 
non-container to containerized. It doesn't have to happen all at once, even within a 
single service, or within a single TripleO release. The orchestration of it is 
totally up to you, and can be tailored very precisely to deal with the 
particulars of the upgrade strategy needed by TripleO's existing deployments.

Does that help to alleviate some of the confusion?

Thanks,
Kevin

From: James Slagle [james.sla...@gmail.com]
Sent: Friday, July 14, 2017 10:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On Fri, Jul 14, 2017 at 12:16 PM, Fox, Kevin M  wrote:
> https://xkcd.com/927/

That's cute, but we aren't really trying to have competing standards.
It's not really about competition between tools.

> I don't think adopting helm as a dependency adds more complexity than writing 
> more new k8s object deployment tooling?

That depends, and will likely end up containing a fair amount of
subjectivity. What we're trying to do is explore choices around
tooling.

>
> There are efforts to make it easy to deploy kolla-kubernetes microservice 
> charts using ansible for orchestration in kolla-kubernetes. See:
> https://review.openstack.org/#/c/473588/
> What kolla-kubernetes brings to the table is a tested/shared base k8s object 
> layer. Orchestration is done by ansible via TripleO, and the solutions 
> already found/debugged to how to deploy OpenStack in containers on Kubernetes 
> can be reused/shared.

That's good, and we'd like to reuse existing code and patterns. I
admit to not being super familiar with kolla-kubernetes. Are there
reusable components without having to also use Helm?

> See for example:
> https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

Pretty sure that was just a POC/example.

>
> I don't see much by way of dealing with fernet token rotation. That was a 
> tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
> You can get it by: helm install kolla/keystone-fernet-rotate-job.
>
> We designed this layer to be shareable so we all can contribute to the 
> commons rather than having every project reimplement their own and have to 
> chase bugs across all the implementations. The deployment projects will be 
> stronger together if we can share as much as possible.
>
> Please reconsider. I'd be happy to talk with you more if you want.

Just to frame the conversation with a bit more context, I'm sure there
are many individual features/bugs/special handling that TripleO and
Kolla both do today that the other does not.

TripleO had about a 95% solution for deploying OpenStack when
kolla-ansible did not exist and was started from scratch. But, kolla
made a choice based around tooling, which I contend is perfectly valid
given that we are creating deployment tools. Part of the individual
value in each deployment project is the underlying tooling itself.

I think what TripleO is trying to do here is not immediately jump to a
solution that uses Helm and explore what alternatives exist. Even if
the project chooses not to use Helm I still see room for collaboration
on code beneath the Helm/whatever layer.

--
-- James Slagle
--


Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-14 Thread Matthieu Simonin
Hello,

If it helps, we are building on a regular basis a subset of the kolla images. 
They are pushed in dockerhub under beyondtheclouds namespace [1].

stable/ocata images should be up-to-date, master is tagged latest.
Nevertheless some caveats of relying on those tags are mentioned in this 
thread [2].

[1]: https://hub.docker.com/u/beyondtheclouds/
[2]: http://lists.openstack.org/pipermail/openstack-dev/2017-April/115391.html

Best,

Matt

- Original Message -
> From: "Michał Jastrzębski" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Sunday, July 9, 2017 00:48:56
> Subject: Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, 
> Glance etc for oVirt
> 
> Hello,
> 
> Unfortunately we still don't have proper dockerhub uploading
> mechanism, that's in progress. For now you need to build your own
> images, here's doc for that:
> https://docs.openstack.org/kolla/latest/image-building.html
> Also feel free to join us on #openstack-kolla irc if you have further
> questions.
> 
> Cheers,
> Michal
> 
> On 8 July 2017 at 11:03, Leni Kadali Mutungi  wrote:
> > Hello all.
> >
> > I am trying to use the Cinder and Glance Docker images you provide in
> > relation to the setup here:
> > http://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/
> >
> > I tried to run `sudo docker pull
> > kollaglue/centos-rdo-glance-registry:latest` and got an error of not
> > found. I thought that it could be possible to use a Dockerfile to spin up
> > an equivalent of it, so I would like some guidance on how to go about
> > doing that. Best practices and so on. Alternatively, if it is
> > possible, may you point me in the direction of the equivalent images
> > mentioned in the guides if they have been superseded by something else?
> > Thanks.
> >
> > CCing the oVirt users and devel lists to see if anyone has experienced
> > something similar.
> >
> > --
> > - Warm regards
> > Leni Kadali Mutungi
> >
> 
> 



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread James Slagle
On Fri, Jul 14, 2017 at 3:38 PM, Steven Dake  wrote:
>
>
> On Fri, Jul 14, 2017 at 10:26 AM, James Slagle 
> wrote:
>>
> James,
>
>>
>> Just to frame the conversation with a bit more context, I'm sure there
>> are many individual features/bugs/special handling that TripleO and
>> Kolla both do today that the other does not.
>>
>
> I think what you are saying in a nutshell is that TripleO and Kolla compete.

No. That is not what I'm saying. In fact I said:

 It's not really about competition between tools.

I'm not sure how you thought that meant I was saying that the two tools compete.

Some may consider that to be the case (that they compete), but that is
more a personal frame of reference. I don't think that either project
is trying to "win" the deployment battle, or that there even is a battle.
If that were the case, it would be very difficult to work together, as
we do effectively quite a bit today already.

>> TripleO had about a 95% solution for deploying OpenStack when
>> kolla-ansible did not exist and was started from scratch. But, kolla
>> made a choice based around tooling, which I contend is perfectly valid
>> given that we are creating deployment tools. Part of the individual
>> value in each deployment project is the underlying tooling itself.
>>
>
> I think what you are saying here is Kolla chose to compete on tooling.  I
> haven't really given it a lot of thought; I'd say all the technical choices
> made with Kolla had mostly to do with selecting wisely from the technical
> ecosystem.

No. What I'm saying is exactly what I wrote. Please don't read or
project anything else onto it about "competition".

Again, I don't think that is all that relevant or healthy to the
conversation (hence why I dismissed the comic: it's a farce of the
actual situation).

I see it more as differentiation than competition.  Especially
since we are talking about open source projects. There are advantages
and disadvantages to every tool choice, including Heat vs Ansible.
What I said was that "kolla made a choice based around tooling". And
that is a valid thing to do and creates individual value to that
project that differentiates it from TripleO.

>> I think what TripleO is trying to do here is not immediately jump to a
>> solution that uses Helm and explore what alternatives exist. Even if
>> the project chooses not to use Helm I still see room for collaboration
>> on code beneath the Helm/whatever layer.
>>
>
> I believe it wise that you don't jump to any conclusion or solution that
> does or doesn't use Helm.  I'd encourage you to understand how Kubernetes
> works before making such technical choices.

Exactly. Which is why "just use kolla-kubernetes" is not a silver
bullet to this discussion.

> All that said, there is clearly value in working together rather than apart.
> To me, that is more important than the technical choices you are presented
> with.

-- 
-- James Slagle
--



Re: [openstack-dev] [keystone] stable/ocata and stable/newton are broken

2017-07-14 Thread Lance Bragstad
All the patches in the original note have merged for both stable/ocata
and stable/newton. Existing patches to both branches are being rechecked
and rebased.


On 07/13/2017 06:04 PM, Lance Bragstad wrote:
> Oh - the original issues with the stable branches were reported here:
>
> https://bugs.launchpad.net/keystone/+bug/1704148
>
>
> On 07/13/2017 06:00 PM, Lance Bragstad wrote:
>> Colleen found out today while doing a backport that both of our stable
>> branches are broken. After doing some digging, it looks like bug 1687593
>> is the culprit [0]. The fix to that bug merged in master and the author
>> added some nicely written functional tests using the
>> keystone-tempest-plugin. The functional tests are being run against both
>> stable branches but the fix wasn't actually backported. As a result,
>> both stable branches are bricked at the moment because of the functional
>> tests.
>>
>> I've proposed the necessary backports for stable/ocata [1] and
>> stable/newton [2], in addition to a cleaned up release note for master
>> [3]. Any reviews would be greatly appreciated since we'll be doing a
>> release of both stable branches relatively soon.
>>
>> Thanks!
>>
>>
>> [0] https://bugs.launchpad.net/keystone/+bug/1687593
>> [1]
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/ocata+topic:bug/1687593
>> [2]
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:stable/newton+topic:bug/1687593
>> [3] https://review.openstack.org/#/c/483598/
>>
>>
>






Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Steven Dake
On Fri, Jul 14, 2017 at 10:26 AM, James Slagle 
wrote:

> On Fri, Jul 14, 2017 at 12:16 PM, Fox, Kevin M  wrote:
> > https://xkcd.com/927/
>
> That's cute, but we aren't really trying to have competing standards.
> It's not really about competition between tools.
>
> > I don't think adopting helm as a dependency adds more complexity than
> writing more new k8s object deployment tooling?
>
> That depends, and will likely end up containing a fair amount of
> subjectivity. What we're trying to do is explore choices around
> tooling.
>
> >
> > There are efforts to make it easy to deploy kolla-kubernetes
> microservice charts using ansible for orchestration in kolla-kubernetes.
> See:
> > https://review.openstack.org/#/c/473588/
> > What kolla-kubernetes brings to the table is a tested/shared base k8s
> object layer. Orchestration is done by ansible via TripleO, and the
> solutions already found/debugged to how to deploy OpenStack in containers
> on Kubernetes can be reused/shared.
>
> That's good, and we'd like to reuse existing code and patterns. I
> admit to not being super familiar with kolla-kubernetes. Are there
> reusable components without having to also use Helm?
>
> > See for example:
> > https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml
>
> Pretty sure that was just a POC/example.
>
> >
> > I don't see much by way of dealing with fernet token rotation. That was
> a tricky bit of code to get to work, but kolla-kubernetes has a solution to
> it. You can get it by: helm install kolla/keystone-fernet-rotate-job.
> >
> > We designed this layer to be shareable so we all can contribute to the
> commons rather than having every project reimplement their own and have to
> chase bugs across all the implementations. The deployment projects will be
> stronger together if we can share as much as possible.
> >
> > Please reconsider. I'd be happy to talk with you more if you want.
>
> James,


> Just to frame the conversation with a bit more context, I'm sure there
> are many individual features/bugs/special handling that TripleO and
> Kolla both do today that the other does not.
>
>
I think what you are saying in a nutshell is that TripleO and Kolla compete.


> TripleO had about a 95% solution for deploying OpenStack when
> kolla-ansible did not exist and was started from scratch. But, kolla
> made a choice based around tooling, which I contend is perfectly valid
> given that we are creating deployment tools. Part of the individual
> value in each deployment project is the underlying tooling itself.
>
>
I think what you are saying here is Kolla chose to compete on tooling.  I
haven't really given it a lot of thought; I'd say the technical choices
made with Kolla mostly had to do with selecting wisely from the technical
ecosystem.


> I think what TripleO is trying to do here is not immediately jump to a
> solution that uses Helm and explore what alternatives exist. Even if
> the project chooses not to use Helm I still see room for collaboration
> on code beneath the Helm/whatever layer.
>
>
I believe it wise that you don't jump to any conclusion or solution that
does or doesn't use Helm.  I'd encourage you to understand how Kubernetes
works before making such technical choices.

All that said, there is clearly value in working together rather than
apart.  To me, that is more important than the technical choices you are
presented with.

Regards
-steve


> --
> -- James Slagle
> --
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-14 Thread Zane Bitter

On 13/07/17 23:32, Fei Long Wang wrote:

I agree with Zane for the most part. But one thing I don't really
understand is why the OpenStack community is still confused about IaaS,
PaaS and SaaS. Does the classification really matter nowadays? Do we
really need a label/tag for OpenStack to limit it to being an IaaS, PaaS or
SaaS? I have never seen AWS say it's an IaaS, PaaS or SaaS. Did Azure or
Google Cloud say that? I think they're just providing the services their
customers want.


I sort-of agree that it shouldn't matter. But we do need to communicate 
clearly, and those terms commonly come up, and if we don't have a shared 
understanding of what they mean then we're going to be talking past each 
other a lot.


So a common thing for folks to say is that OpenStack should concentrate 
on IaaS, not PaaS. And this is fairly widely agreed-upon. Even I mostly 
agree!


AWS has a PaaS. It's called Elastic Beanstalk. I don't hear about it 
much. Without having seen numbers for any of them, I would guess that 
there are far more users using PaaS services that run on top of AWS's 
IaaS (e.g. Heroku, OpenShift Online) than there are using Elastic 
Beanstalk. Google's PaaS (AppEngine) hasn't been a runaway success 
either. That's a good argument for leaving PaaS to other open source 
projects - it's a complex space with lots of innovation happening and 
there's no reason to think that we need to pick only one and control it.


Unfortunately, when a lot of people say OpenStack should do "IaaS only" 
they don't mean "equivalents of basically anything that AWS does except 
Elastic Beanstalk is fair game", they mean "only the equivalents of EC2, 
VPC, and EBS are fair game". This, they believe, will lead to the holy 
grail of a "small, stable core", presumably predicated on the following 
assumptions:


* The pool of OpenStack developers is a fixed resource, and if we make 
it clear that some projects are unwelcome then their developers will be 
reassigned to 'core' projects in a completely zero-sum process. (Nnope.)


* While code like e.g. the Nova scheduler might be so complicated today 
that even the experts routinely complain about its terrible design,[1] 
if only we could add dozens more cooks (see above) it would definitely 
get much simpler and easier to maintain. (Bwahahahahahahaha.)


* Once we make it clear to users that under no circumstances will we 
ever e.g. provide them with notifications about when a server has 
failed, ways to orchestrate a replacement, and an API to update DNS to 
point to the new one, then they will finally stop demanding 
bloat-inducing VMWare/oVirt-style features that enable them to treat 
cloud servers like pets. (I. don't. even.)


That's a... ahem... let's just say it's a difficult case to make, but 
it's much easier to say "OpenStack should just be IaaS" and let everyone 
substitute in their own different definitions of IaaS and nod in 
agreement. So the terminology is unlikely to go away ;)


cheers,
Zane.

[1] https://twitter.com/jaypipes/status/885278601821769728


On 14/07/17 05:03, Zane Bitter wrote:

On 29/06/17 10:55, Monty Taylor wrote:

(Incidentally, I think it's unworkable to have an IaaS without DNS.
Other people have told me that having an IaaS without LBaaS or a
message queuing service is unworkable, while I neither need nor want
either of those things from my IaaS - they seem like PaaS components
to me)


I resemble that remark, so maybe it's worth clarifying how I see things.

In many ways the NIST definitions of SaaS/PaaS/IaaS from 2011, while
helpful to cut through the vagueness of the 'cloud' buzzword and frame
the broad outlines of cloud service models (at least at the time),
have proven inadequate to describe the subtlety of the various
possible offerings. The only thing that is crystal clear is that LBaaS
and message queuing are not PaaS components ;)

I'd like to suggest that the 'Platform' in PaaS means the same thing
that it has since at least the '90s: the Operating System and possibly
the language runtime, if any. The difference between PaaS and IaaS in
terms of compute is that in the latter case you're given a machine and
you install whatever platform you like on it, while in the former the
platform is provided as a service. Hence the name.

To the extent that hardware load balancers are used, LBaaS is pretty
clearly IaaS. Hardware is infrastructure, if you provide access to
that as a service it's Infrastructure as a Service. QED. It's also
possible to provide software load balancers as a service. Technically
I guess this is SaaS. Theoretically you could make an argument that an
API that can abstract over either hardware or software load balancers
is not "real" IaaS. And I would label that argument as BS sophistry :)

The fact that PaaS implementations use load balancers internally is
really neither here nor there.

You can certainly build a useful cloud without LBaaS. That just means
that anybody who needs load balancing will have to spin up their 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Juan Antonio Osorio
I actually like the idea of moving to kolla-kubernetes. I guess there would
be a bunch of work towards giving folks an upgrade path and reaching
feature parity; but this would happen anyway with the switch to
Kubernetes.  And this would have the added value of merging two
communities, thus more devs and folks testing :D . I like it!

On 14 Jul 2017 18:56, "Michał Jastrzębski"  wrote:

Guys you just described Kolla-Kubernetes pretty much... how about
we join efforts and work towards this goal together?

On 14 July 2017 at 08:43, Flavio Percoco  wrote:
> On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote:
>>
>> On 14.07.2017 11:17, Flavio Percoco wrote:
>>>
>>>
>>> Greetings,
>>>
>>> As some of you know, I've been working on the second phase of TripleO's
>>> containerization effort. This phase is about migrating the docker based
>>> deployment onto Kubernetes.
>>>
>>> This phase requires work on several areas: Kubernetes deployment,
>>> OpenStack
>>> deployment on Kubernetes, configuration management, etc. While I've been
>>> diving
>>> into all of these areas, this email is about the second point, OpenStack
>>> deployment on Kubernetes.
>>>
>>> There are several tools we could use for this task. kolla-kubernetes,
>>> openstack-helm, ansible roles, among others. I've looked into these
>>> tools and
>>> I've come to the conclusion that TripleO would be better off by having
>>> ansible
>>> roles that would allow for deploying OpenStack services on Kubernetes.
>>>
>>> The existing solutions in the OpenStack community require using Helm.
>>> While I
>>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>>> projects, I
>>> believe using any of them would add an extra layer of complexity to
>>> TripleO,
>>
>>
>> It's hard to estimate that complexity w/o having a PoC of such an
>> integration. We should come up with a final choice once we have it done.
>>
>> My vote would go for investing engineering resources into solutions that
>> have problems already solved, even at the price of added complexity (but
>> that sort of depends...). Added complexity may be compensated with
>> removed complexity (like those client -> Mistral -> Heat -> Mistral ->
>> Ansible manipulations discussed in the mail thread mentioned below [0])
>
>
> I agree it's hard to estimate but you gotta draw the line somewhere. I
> actually
> spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote
> the
> pyhelm lib (took some code from the openstack-helm folks) and I wrote the
> ansible helm module myself. I'd say I've spent enough time on this
research.
>
> I don't think getting a full PoC working is worth it as that will require
> way
> more work for not much value since we can anticipate some of the
> complexities
> already.
>
> As far as the complexity comment goes, I disagree with you. I don't think
> you're
> evaluating the amount of complexity that there *IS* already in TripleO and
> how
> adding more complexity (layers, states, services) would make things worse
> for
> not much extra value.
>
> By all means, I might be wrong here so, do let me know if you're seeing
> something I'm not.
> Flavio
> --
> @flaper87
> Flavio Percoco
>



Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-07-14 16:18:04 +:
> On 2017-07-14 16:05:36 + (+), Jesse Pretorius wrote:
> > On 7/14/17, 4:54 PM, "Doug Hellmann"  wrote:
> [...]
> > > I wonder if we could convince the PyPA folks to allow get-pip.py
> > >to take a version argument, so we could specify which version we want 
> > > in
> > >our jobs. We would still need a way to manage that version number, but
> > >modifying get-pip.py would solve the bootstrapping issue.
> > 
> > It has been capable of this for quite some time. You can provide
> > both requirements And constraints.
> > 
> > python /opt/get-pip.py pip==9.0.1 setuptools==33.1.1 wheel==0.29.0
> > 
> > Or, far better, is to use constraints because it’ll ensure that
> > any dependent packages are also the right versions.
> > 
> > python /opt/get-pip.py pip setuptools wheel --constraint
> > http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
> 
> Is there a mechanism to leverage this in tox or when invoking
> virtualenv? We don't run get-pip.py in most jobs because our images
> have pip/setuptools preinstalled to get around bootstrapping issues,
> though I suppose that could with some effort be moved into job
> runtime as a (very early) builder macro. Using constraints to
> control these during image generation doesn't make a whole lot of
> sense though as images are only rebuilt once a day and so tracking
> these in the constraints list won't be self-testing in that regard
> anyway.

I was thinking we would use an early stage builder to do it, too.

Doug
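As a sketch of what deriving the bootstrap pins from a constraints file could look like (a hypothetical helper, not an existing tool; upper-constraints entries use the `pkg===ver` form):

```python
# Hypothetical helper: pull the pip-bootstrap pins out of an
# upper-constraints.txt-style text, where entries look like "pip===9.0.1".

def bootstrap_pins(constraints_text, packages=("pip", "setuptools", "wheel")):
    pins = {}
    for line in constraints_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if "===" not in line:
            continue
        name, version = line.split("===", 1)
        if name.strip().lower() in packages:
            pins[name.strip().lower()] = version.strip()
    # Render the argument string get-pip.py accepts, e.g.
    # "pip==9.0.1 setuptools==33.1.1 wheel==0.29.0"
    return " ".join("%s==%s" % (p, v) for p, v in sorted(pins.items()))
```

A builder macro could then run `python get-pip.py $(bootstrap_pins ...)` early in the job, keeping the pins self-testing against the constraints repo.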



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread James Slagle
On Fri, Jul 14, 2017 at 12:16 PM, Fox, Kevin M  wrote:
> https://xkcd.com/927/

That's cute, but we aren't really trying to have competing standards.
It's not really about competition between tools.

> I don't think adopting helm as a dependency adds more complexity than writing 
> more new k8s object deployment tooling?

That depends, and will likely end up containing a fair amount of
subjectivity. What we're trying to do is explore choices around
tooling.

>
> There are efforts to make it easy to deploy kolla-kubernetes microservice 
> charts using ansible for orchestration in kolla-kubernetes. See:
> https://review.openstack.org/#/c/473588/
> What kolla-kubernetes brings to the table is a tested/shared base k8s object 
> layer. Orchestration is done by ansible via TripleO, and the solutions 
> already found/debugged for how to deploy OpenStack in containers on Kubernetes 
> can be reused/shared.

That's good, and we'd like to reuse existing code and patterns. I
admit to not being super familiar with kolla-kubernetes. Are there
reusable components without having to also use Helm?

> See for example:
> https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

Pretty sure that was just a POC/example.

>
> I don't see much by way of dealing with fernet token rotation. That was a 
> tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
> You can get it by: helm install kolla/keystone-fernet-rotate-job.
>
> We designed this layer to be shareable so we all can contribute to the 
> commons rather than having every project reimplement their own and have to 
> chase bugs across all the implementations. The deployment projects will be 
> stronger together if we can share as much as possible.
>
> Please reconsider. I'd be happy to talk with you more if you want.

Just to frame the conversation with a bit more context, I'm sure there
are many individual features/bugs/special handling that TripleO and
Kolla both do today that the other does not.

TripleO had about a 95% solution for deploying OpenStack when
kolla-ansible did not exist and was started from scratch. But, kolla
made a choice based around tooling, which I contend is perfectly valid
given that we are creating deployment tools. Part of the individual
value in each deployment project is the underlying tooling itself.

I think what TripleO is trying to do here is not immediately jump to a
solution that uses Helm and explore what alternatives exist. Even if
the project chooses not to use Helm I still see room for collaboration
on code beneath the Helm/whatever layer.

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Clint Byrum
Excerpts from Bogdan Dobrelya's message of 2017-07-14 18:14:42 +0200:
> On 14.07.2017 17:55, Michał Jastrzębski wrote:
> > Guys you just described Kolla-Kubernetes pretty much... how about
> > we join efforts and work towards this goal together?
> 
> That's exactly what I'd like us all to do.
> 

Agreed, and ...

> > 
> > On 14 July 2017 at 08:43, Flavio Percoco  wrote:
> >> On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote:
> >>>
> >>> On 14.07.2017 11:17, Flavio Percoco wrote:
> 
> 
>  Greetings,
> 
>  As some of you know, I've been working on the second phase of TripleO's
>  containerization effort. This phase is about migrating the docker based
>  deployment onto Kubernetes.
> 
>  This phase requires work on several areas: Kubernetes deployment,
>  OpenStack
>  deployment on Kubernetes, configuration management, etc. While I've been
>  diving
>  into all of these areas, this email is about the second point, OpenStack
>  deployment on Kubernetes.
> 
>  There are several tools we could use for this task. kolla-kubernetes,
>  openstack-helm, ansible roles, among others. I've looked into these
>  tools and
>  I've come to the conclusion that TripleO would be better off by having
>  ansible
>  roles that would allow for deploying OpenStack services on Kubernetes.
> 
>  The existing solutions in the OpenStack community require using Helm.
>  While I
>  like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>  projects, I
>  believe using any of them would add an extra layer of complexity to
>  TripleO,
> >>>
> >>>
> >>> It's hard to estimate that complexity w/o having a PoC of such an
> >>> integration. We should come up with a final choice once we have it done.
> >>>
> >>> My vote would go for investing engineering resources into solutions that
> >>> have problems already solved, even at the price of added complexity (but
> >>> that sort of depends...). Added complexity may be compensated with
> >>> removed complexity (like those client -> Mistral -> Heat -> Mistral ->
> >>> Ansible manipulations discussed in the mail thread mentioned below [0])
> >>
> >>
> >> I agree it's hard to estimate but you gotta draw the line somewhere. I
> >> actually
> >> spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote
> >> the
> >> pyhelm lib (took some code from the openstack-helm folks) and I wrote the
> >> ansible helm module myself. I'd say I've spent enough time on this 
> >> research.
> >>
> >> I don't think getting a full PoC working is worth it as that will require
> >> way
> >> more work for not much value since we can anticipate some of the
> >> complexities
> >> already.
> >>
> >> As far as the complexity comment goes, I disagree with you. I don't think
> >> you're
> >> evaluating the amount of complexity that there *IS* already in TripleO and
> >> how
> >> adding more complexity (layers, states, services) would make things worse
> >> for
> >> not much extra value.
> >>
> >> By all means, I might be wrong here so, do let me know if you're seeing
> >> something I'm not.
> 
> My point was to "trade" complexity described in the "Forming our plans
> around Ansible" ML thread:
> 
> (3) Mistral calling Heat calling Mistral calling Ansible
> 
> to just
> 
> (3') something calls kolla-kubernetes/openstack-helm, via some wrapper
> composition overlay (which creates complexity), or the like
> 
> While the latter might add complexity like the way you (Flavio) have
> described, the former would remove *another* type of complexity, and the
> result might worth the efforts.
>

The two options seem to be

a. Bootstrap helm and charts and then use openstack-helm/kolla-kubernetes
b. Bootstrap (something) and then use newly minted ansible to manipulate
   kubernetes.

(a) seems like less net complexity, as bootstrapping code is usually
able to be more naive. The new ansible will have to be at least as good
as openstack-helm and kolla-kubernetes, and still needs bootstraps of
its own.




[openstack-dev] [horizon] [keystone] [requirements] [rpm-packaging] [deb-packaging] Merging Django OpenStack Auth with Horizon

2017-07-14 Thread Rob Cresswell
Apologies in advance for so many tags, hoping this is seen by the appropriate 
people.

I've put up a patch to merge Django OpenStack Auth (DOA) into the Horizon tree: 
https://review.openstack.org/#/c/482561/ There is a blueprint to track any 
further changes / issues here: 
https://blueprints.launchpad.net/horizon/+spec/merge-openstack-auth

This has been suggested for quite a while, but we've only recently got round to 
it. Historically, the design was supposed to allow for multiple auth plugins 
(years ago, we also had Django OpenStack Auth Kerberos).

However, it's currently highly coupled with Horizon (its sole consumer, as far 
as I know) and the separation seems increasingly arbitrary. Almost all new 
features to Keystone / auth support require changes to both repos (with an 
intermediary release of DOA) and it causes a good deal of confusion when 
people try and debug issues too. Merging the two would reduce release and 
packaging work, reduce translation overhead, reduce debugging time and cut down 
on review time needed to make changes that affect auth.

I'd like to see if there are any thoughts or concerns from the wider community.

Thanks,
Rob


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Dmitry Tantsur

On 07/14/2017 06:16 PM, Fox, Kevin M wrote:

https://xkcd.com/927/

I don't think adopting helm as a dependency adds more complexity than writing 
more new k8s object deployment tooling?


I don't know much about the containerization work, and I don't have a big say in 
TripleO, but that's the question I have as well. If we are going now to rewrite 
ansible modules for everything (including MariaDB per Emilien's comment), this 
may require too much effort.


Think of TripleO contributors, who are not on tripleo-core (the group which 
probably contains 99% of people understanding TripleO well). Writing heat 
templates is already not fun, but at least people got used to it more or less. 
Now we will need to rewrite a lot of puppet into a lot of ansible, and a lot of 
yaml into... mmm.. more ansible? If we go down this way, let's at least make 
sure we're not reinventing the wheel.




There are efforts to make it easy to deploy kolla-kubernetes microservice 
charts using ansible for orchestration in kolla-kubernetes. See:
https://review.openstack.org/#/c/473588/
What kolla-kubernetes brings to the table is a tested/shared base k8s object 
layer. Orchestration is done by ansible via TripleO, and the solutions already 
found/debugged for how to deploy OpenStack in containers on Kubernetes can be 
reused/shared.

See for example:
https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

I don't see much by way of dealing with fernet token rotation. That was a 
tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
You can get it by: helm install kolla/keystone-fernet-rotate-job.

We designed this layer to be shareable so we all can contribute to the commons 
rather than having every project reimplement their own and have to chase bugs 
across all the implementations. The deployment projects will be stronger 
together if we can share as much as possible.


+++



Please reconsider. I'd be happy to talk with you more if you want.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Friday, July 14, 2017 2:17 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools and
I've come to the conclusion that TripleO would be better off by having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack projects, I
believe using any of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting for years - especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack communty would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts (I'm
happy to discuss this point further but I'm keeping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need of puppet
entirely) and they would manage the lifecycle. The roles would be isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control on the deployment process too.
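As a trivial illustration of the "generate the configuration files" part (a hypothetical helper; a real role would render an oslo.config file from role variables via an Ansible template task):

```python
def render_ini(sections):
    """Render {'section': {'option': 'value'}} as an oslo.config-style INI."""
    lines = []
    for section in sorted(sections):
        lines.append("[%s]" % section)
        for option in sorted(sections[section]):
            lines.append("%s = %s" % (option, sections[section][option]))
        lines.append("")  # blank line between sections
    return "\n".join(lines)
```

The role would feed its variables through something like this and drop the result into a ConfigMap, removing puppet from the config-generation path.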

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion and
gather thoughts and opinions from the rest of the community.


[openstack-dev] [nova] placement/resource providers update 28

2017-07-14 Thread Chris Dent


Placement update 28.

# What Matters Most

Still claims in the scheduler. It's getting closer, the current
hiccup is dealing with things like a resize on the same host. Diligent
work and discussion in progress. The related changes are in the stack
near:

https://review.openstack.org/#/c/483564/

# What's Changed

Lots of refactoring in the scheduler related unit tests.

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
  https://bugs.launchpad.net/nova/+bugs?field.tag=placement

# Main Themes

## Claims in the Scheduler

As linked above, the claims in the scheduler work is in this stack:

https://review.openstack.org/#/c/483564/

## Custom Resource Classes for Ironic

A spec for custom resource classes is being updated to reflect the
need to update the flavor and allocations of a previously allocated
ironic node that now has a custom resource class (such as
CUSTOM_SILVER_IRON):

https://review.openstack.org/#/c/481748/

Work has started on the implementation of that today, but as far as I
can see nothing is up for review yet; it should be soon. This functionality
needs to be in place or we will continue to manage ironic
inventory poorly for another entire cycle.

## Traits

The concept of traits now exists in the placement service, but
filtering resource providers on traits is in flux. With the advent
of /allocation_candidates as the primary scheduling interface, that
needs to support traits. Work for that is in a stack starting at

 https://review.openstack.org/#/c/478464/

It's not yet clear if we'll want to support traits at both
/allocation_candidates and /resource_providers. I think we should,
but the immediate need is on /allocation_candidates.
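For readers following along, the trait filtering under discussion boils down to set containment: a provider is a candidate only if it exposes every required trait. A toy sketch (the real logic is SQL inside the placement service; the trait names here come from os-traits):

```python
def filter_by_traits(providers, required_traits):
    """Return names of resource providers exposing every required trait."""
    required = set(required_traits)
    return sorted(name for name, traits in providers.items()
                  if required <= set(traits))
```

Whether this filter is exposed on /allocation_candidates only, or on /resource_providers as well, is the open API question above.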

There's some proposed code to get the latter started:

 https://review.openstack.org/#/c/474602/

## Shared Resource Providers

Support for shared resource providers is "built in" to the
/allocation_candidates concept and one of the drivers for having it.

There was a thread on the dev list recently about using them with
custom resource classes which may be instructive:

http://lists.openstack.org/pipermail/openstack-dev/2017-July/119648.html

## Nested Resource Providers

Work continues on nested resource providers.

   
https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

The need with these is simply more review, but they are behind
claims in priority.

## Docs

Lots of placement-related api docs have merged or are in progress:

https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref

Setting up the official publishing job for the api ref is on hold
until the content has been migrated to the locations specified by the
docs migration that is currently in progress:


http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html

Some changes have been proposed to document the scheduler's
workflow, including visual aids, starting at:

 https://review.openstack.org/#/c/475810/

# Other Code/Specs

* https://review.openstack.org/#/c/472378/
A proposed fix to using multiple config locations with the
placement wsgi app. There's some active discussion on whether the
solution in mind is the right solution, or even whether the bug is
a bug (it is!).

* https://review.openstack.org/#/c/470578/
Add functional test for local delete allocations

* https://review.openstack.org/#/c/427200/
   Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/469048/
 Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
 Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468797/
 Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
   ensure shared RP maps with correct root RP
   (Some discussion on this one what the goal is and whether the
   approach is the right one.)

* https://review.openstack.org/#/c/483506/
  Call _update fewer times in the resource tracker

* https://review.openstack.org/#/c/483460/
  Retry resource provider registration when session's service
  catalog does not have placement

* https://review.openstack.org/#/c/452006/
  A functional test to confirm that migration between two different
  cells is not allowed. Included here because it uses the
  PlacementFixture and may be experiencing the bug that
  https://review.openstack.org/#/c/483564/ is trying to fix.

# End

Thanks for reading this far. Now please go review some of the things
linked above. Your prize is a reservation on a delicate thrown
porcelain cup or bowl, hand made in Cornwall after I retire.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Flavio Percoco

First and foremost I just realized that I forgot to tag kolla and openstack-helm
in the subject, so I apologize. I'm glad the subject was catchy enough to get
your attention.

Just want to raise here what I just mentioned on IRC:

It's late in the EU so I shouldn't be here right now, but I do want to point out
that, as usual, I asked for feedback and clarifications from everyone in this
thread.

I'm not trying to re-invent the wheel. What's in my original email is my
conclusion based on research I did across the different tools out there. I
can, of course, be wrong and I'd like you all to help us by providing feedback.

I'm not expecting sales pitches but I'd love to have a more technical discussion
on how we can, hopefully, make this work.

On 14/07/17 16:16 +, Fox, Kevin M wrote:

https://xkcd.com/927/

I don't think adopting helm as a dependency adds more complexity than writing
more new k8s object deployment tooling?

There are efforts to make it easy to deploy kolla-kubernetes microservice 
charts using ansible for orchestration in kolla-kubernetes. See:
https://review.openstack.org/#/c/473588/
What kolla-kubernetes brings to the table is a tested/shared base k8s object 
layer. Orchestration is done by ansible via TripleO, and the solutions already 
found/debugged for how to deploy OpenStack in containers on Kubernetes can be
reused/shared.

See for example:
https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

I don't see much by way of dealing with fernet token rotation. That was a 
tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
You can get it by: helm install kolla/keystone-fernet-rotate-job.


It's just a PoC, don't take the implementation as definitive.


We designed this layer to be shareable so we all can contribute to the commons 
rather than having every project reimplement their own and have to chase bugs
across all the implementations. The deployment projects will be stronger 
together if we can share as much as possible.

Please reconsider. I'd be happy to talk with you more if you want.


Let's talk, that's the whole point of this thread.
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jesse Pretorius
On 7/14/17, 5:18 PM, "Jeremy Stanley"  wrote:

>Is there a mechanism to leverage this in tox or when invoking
>virtualenv? We don't run get-pip.py in most jobs because our images
>have pip/setuptools preinstalled to get around bootstrapping issues,
>though I suppose that could with some effort be moved into job
>runtime as a (very early) builder macro. Using constraints to
>control these during image generation doesn't make a whole lot of
>sense though as images are only rebuilt once a day and so tracking
>these in the constraints list won't be self-testing in that regard
>anyway.

Fair enough, although that’s what we have today so I don’t see it as worse. In 
fact, I think it’s an improvement because our constraints can be updated when 
we choose to (through the normal requirements management process) rather than 
whenever the new package releases.

I think it may make sense to do a pre-tox macro which preps 
pip/setuptools/wheel using get-pip.py, then places the relevant wheels into the 
virtualenv_support directory [1] so that tox uses them when building 
virtualenvs. Another option could be to use a configuration file [2] to specify 
a directory to find the right wheels in.

[1] https://virtualenv.pypa.io/en/stable/userguide/#the-extra-search-dir-option
[2] https://virtualenv.pypa.io/en/stable/reference/#configuration-file
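To make the first option concrete, here's a rough sketch of seeding a wheel directory and pointing virtualenv at it through its configuration file. The directory path and pinned versions are illustrative assumptions, not project policy:

```shell
# Sketch only: pre-seed a wheel directory with pinned installer packages
# and make virtualenv search it before reaching out to PyPI.
WHEEL_DIR="${WHEEL_DIR:-/tmp/virtualenv-wheels}"
mkdir -p "$WHEEL_DIR"

# On a node with network access the wheels would be fetched first, e.g.:
#   pip download --dest "$WHEEL_DIR" 'pip==9.0.1' 'setuptools==33.1.1' 'wheel==0.29.0'

# Make virtualenv pick the local wheels up by default via its config file.
mkdir -p "$HOME/.virtualenv"
cat > "$HOME/.virtualenv/virtualenv.ini" <<EOF
[virtualenv]
extra-search-dir = $WHEEL_DIR
EOF

echo "virtualenv will now prefer wheels found in $WHEEL_DIR"
```

With that in place, any virtualenv tox creates should install the pre-seeded versions instead of whatever PyPI is serving that day.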



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Joshua Harlow
Out of curiosity, since I keep on hearing/reading all the tripleo
discussions on how tripleo folks are apparently thinking about (or doing?)
a redesign of the whole thing to use ansible + mistral + heat, or ansible
+ kubernetes, or ansible + mistral + heat + ansible (a second time!) or ...


Seeing all those kinds of questions and suggestions around what should 
be used and why and how (and even this thread) makes me really wonder 
who actually uses tripleo and can afford/understand such kinds of changes?


Does anyone?

If there are, is there going to be an upgrade
path for their existing 'cloud/s' to whatever this solution is?


What operator(s) has the ability to do such a massive shift at this 
point in time? Who are these 'mystical' operators?


All this has really piqued my curiosity because I am personally trying
to do that shift (not exactly the same solution...) and I know it is a 
massive undertaking (that will take quite a while to get right) even for 
a simple operator with limited needs out of openstack (i.e. godaddy); so I
don't really understand how the generic solution for all existing 
tripleo operators can even work...


Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment,
OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these
tools and
I've come to the conclusion that TripleO would be better off by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm.
While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack
projects, I
believe using any of them would add an extra layer of complexity to
TripleO,
which is something the team has been fighting for years -
especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would
require
TripleO to also write the logic to manage those projects. For example,
in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the
charts (I'm
happy to discuss this point further but I'm keeping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the
conclusion
I reached.

Now, what this work means is that we would have to write an ansible role
for
each service that will deploy the service on a Kubernetes cluster.
Ideally these
roles will also generate the configuration files (removing the need of
puppet
entirely) and they would manage the lifecycle. The roles would be
isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control over the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain
these roles
and run them using the existing docker-cmd implementation that is coming
out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the
discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that
ansible is
a known, powerful tool that has been adopted by many operators
already. It'll
provide the flexibility needed and, if structured correctly, it'll allow
for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate
concerns
in the deployment workflow and the idea of making it simple for users of
TripleO
to do the same at runtime. Unfortunately, going down this road means
that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result
would also
be useful for others and that we all will benefit from it... My hopes
might be a
bit naive *shrugs*

Flavio

[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
[1] https://github.com/tripleo-apb/tripleo-apbs

--
@flaper87
Flavio Percoco


Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jeremy Stanley
On 2017-07-14 15:55:40 + (+), Jesse Pretorius wrote:
> I did a little spelunking to try and figure out where it is we put
> pip onto the nodepool nodes. It appears that d-i-b for Ubuntu
> Xenial uses packages to put it there, but that doesn’t explain how
> the latest version got there. Does anyone have any insight into
> where exactly the latest version is installed?

This has changed slightly since we excised the remaining Puppet
setup from our images, so it's no longer done by the puppet-pip
module. It seems the pip-and-virtualenv element is currently
bringing that in.

http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/pip-and-virtualenv/
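A quick way to confirm which installer versions an image actually ended up with (plain pip/setuptools introspection, nothing element-specific — substitute python/python3 as appropriate for the image):

```shell
# Print the preinstalled toolchain versions on a node; handy when
# bisecting which image rebuild pulled in a broken setuptools release.
python3 -m pip --version
python3 -c 'import setuptools; print("setuptools", setuptools.__version__)'
```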

-- 
Jeremy Stanley




Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Bogdan Dobrelya
On 14.07.2017 18:16, Fox, Kevin M wrote:
> https://xkcd.com/927/
> 
> I don't think adopting helm as a dependency adds more complexity than writing
> more new k8s object deployment tooling?
> 
> There are efforts to make it easy to deploy kolla-kubernetes microservice 
> charts using ansible for orchestration in kolla-kubernetes. See:
> https://review.openstack.org/#/c/473588/
> What kolla-kubernetes brings to the table is a tested/shared base k8s object 
> layer. Orchestration is done by ansible via TripleO, and the solutions 
> already found/debugged for how to deploy OpenStack in containers on Kubernetes
> can be reused/shared.
> 
> See for example:
> https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml
> 
> I don't see much by way of dealing with fernet token rotation. That was a 
> tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
> You can get it by: helm install kolla/keystone-fernet-rotate-job.
> 
> We designed this layer to be shareable so we all can contribute to the 
> commons rather than having every project reimplement their own and have to
> chase bugs across all the implementations. The deployment projects will be 
> stronger together if we can share as much as possible.

Thank you Kevin, this ^^ expresses my thoughts better than I could ever say.

> 
> Please reconsider. I'd be happy to talk with you more if you want.
> 
> Thanks,
> Kevin


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jeremy Stanley
On 2017-07-14 16:05:36 + (+), Jesse Pretorius wrote:
> On 7/14/17, 4:54 PM, "Doug Hellmann"  wrote:
[...]
> > I wonder if we could convince the PyPA folks to allow get-pip.py
> >to take a version argument, so we could specify which version we want in
> >our jobs. We would still need a way to manage that version number, but
> >modifying get-pip.py would solve the bootstrapping issue.
> 
> It has been capable of this for quite some time. You can provide
> both requirements And constraints.
> 
> python /opt/get-pip.py pip==9.0.1 setuptools==33.1.1 wheel==0.29.0
> 
> Or, far better, is to use constraints because it’ll ensure that
> any dependent packages are also the right versions.
> 
> python /opt/get-pip.py pip setuptools wheel --constraint
> http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt

Is there a mechanism to leverage this in tox or when invoking
virtualenv? We don't run get-pip.py in most jobs because our images
have pip/setuptools preinstalled to get around bootstrapping issues,
though I suppose that could with some effort be moved into job
runtime as a (very early) builder macro. Using constraints to
control these during image generation doesn't make a whole lot of
sense though as images are only rebuilt once a day and so tracking
these in the constraints list won't be self-testing in that regard
anyway.
-- 
Jeremy Stanley




Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Fox, Kevin M
https://xkcd.com/927/

I don't think adopting helm as a dependency adds more complexity than writing
more new k8s object deployment tooling?

There are efforts to make it easy to deploy kolla-kubernetes microservice 
charts using ansible for orchestration in kolla-kubernetes. See:
https://review.openstack.org/#/c/473588/
What kolla-kubernetes brings to the table is a tested/shared base k8s object 
layer. Orchestration is done by ansible via TripleO, and the solutions already 
found/debugged for how to deploy OpenStack in containers on Kubernetes can be
reused/shared.

See for example:
https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

I don't see much by way of dealing with fernet token rotation. That was a 
tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
You can get it by: helm install kolla/keystone-fernet-rotate-job.

We designed this layer to be shareable so we all can contribute to the commons 
rather than having every project reimplement their own and have to chase bugs
across all the implementations. The deployment projects will be stronger 
together if we can share as much as possible.

Please reconsider. I'd be happy to talk with you more if you want.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Friday, July 14, 2017 2:17 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools and
I've come to the conclusion that TripleO would be better off by having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack projects, I
believe using any of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting for years - especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts (I'm
happy to discuss this point further but I'm keeping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need of puppet
entirely) and they would manage the lifecycle. The roles would be isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control over the deployment process too.
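For illustration, invoking such a per-service role could look roughly like the sketch below. The role name, variables, and layout are hypothetical stand-ins, not actual TripleO or tripleo-apb interfaces:

```shell
# Hypothetical sketch: a thin playbook wrapping one service's role for a
# Kubernetes-based deployment. Role and variable names are assumptions.
cat > deploy-keystone.yml <<'EOF'
- hosts: localhost
  gather_facts: false
  roles:
    - role: k8s-keystone              # hypothetical per-service role
      vars:
        k8s_namespace: openstack
        keystone_replicas: 3
EOF

# With ansible installed, the role would then be applied with:
#   ansible-playbook deploy-keystone.yml
echo "wrote deploy-keystone.yml"
```

The point of the isolation is that an operator could run exactly one such playbook to manage one service, without pulling in the full TripleO stack.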

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that ansible is
a known, powerful tool that has been adopted by many operators already. It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate concerns
in the deployment workflow and the idea of making it simple for users of TripleO
to do the same at runtime. Unfortunately, going down this road means that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result would also
be useful for others and that we all will benefit from it... My 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Bogdan Dobrelya
On 14.07.2017 17:55, Michał Jastrzębski wrote:
> Guys you just described Kolla-Kubernetes pretty much... how about
> we join efforts and work towards this goal together?

That's exactly what I'd like us all to do.

> 
> On 14 July 2017 at 08:43, Flavio Percoco  wrote:
>> On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote:
>>>
>>> On 14.07.2017 11:17, Flavio Percoco wrote:


 Greetings,

 As some of you know, I've been working on the second phase of TripleO's
 containerization effort. This phase is about migrating the docker based
 deployment onto Kubernetes.

 This phase requires work on several areas: Kubernetes deployment,
 OpenStack
 deployment on Kubernetes, configuration management, etc. While I've been
 diving
 into all of these areas, this email is about the second point, OpenStack
 deployment on Kubernetes.

 There are several tools we could use for this task. kolla-kubernetes,
 openstack-helm, ansible roles, among others. I've looked into these
 tools and
 I've come to the conclusion that TripleO would be better of by having
 ansible
 roles that would allow for deploying OpenStack services on Kubernetes.

 The existing solutions in the OpenStack community require using Helm.
 While I
 like Helm and both, kolla-kubernetes and openstack-helm OpenStack
 projects, I
 believe using any of them would add an extra layer of complexity to
 TripleO,
>>>
>>>
>>> It's hard to estimate that complexity w/o having a PoC of such an
>>> integration. We should come up with a final choice once we have it done.
>>>
>>> My vote would go for investing engineering resources into solutions that
>>> have problems already solved, even by the price of added complexity (but
>>> that sort of depends...). Added complexity may be compensated with
>>> removed complexity (like those client -> Mistral -> Heat -> Mistral ->
>>> Ansible manipulations discussed in the mail thread mentioned below [0])
>>
>>
>> I agree it's hard to estimate but you gotta draw the line somewhere. I
>> actually
>> spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote
>> the
>> pyhelm lib (took some code from the openstack-helm folks) and I wrote the
>> ansible helm module myself. I'd say I've spent enough time on this research.
>>
>> I don't think getting a full PoC working is worth it as that will require
>> way
>> more work for not much value since we can anticipate some of the
>> complexities
>> already.
>>
>> As far as the complexity comment goes, I disagree with you. I don't think
>> you're
>> evaluating the amount of complexity that there *IS* already in TripleO and
>> how
>> adding more complexity (layers, states, services) would make things worse
>> for
>> not much extra value.
>>
>> By all means, I might be wrong here so, do let me know if you're seeing
>> something I'm not.

My point was to "trade" complexity described in the "Forming our plans
around Ansible" ML thread:

(3) Mistral calling Heat calling Mistral calling Ansible

to just

(3') something calls kolla-kubernetes/openstack-helm, via some wrapper
composition overlay (which creates complexity), or the like

While the latter might add complexity like the way you (Flavio) have
described, the former would remove *another* type of complexity, and the
result might be worth the effort.

>> Flavio
>> --
>> @flaper87
>> Flavio Percoco
>>
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jesse Pretorius
On 7/14/17, 4:54 PM, "Doug Hellmann"  wrote:

> I wonder if we could convince the PyPA folks to allow get-pip.py
>to take a version argument, so we could specify which version we want in
>our jobs. We would still need a way to manage that version number, but
>modifying get-pip.py would solve the bootstrapping issue.

It has been capable of this for quite some time. You can provide both 
requirements
And constraints.

python /opt/get-pip.py pip==9.0.1 setuptools==33.1.1 wheel==0.29.0

Or, far better, is to use constraints because it’ll ensure that any dependent 
packages are also the right versions.

python /opt/get-pip.py pip setuptools wheel --constraint
http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt








Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jesse Pretorius
On 7/14/17, 4:17 PM, "Jeremy Stanley"  wrote:

>Chicken and egg. Once you get to the point where pip can enforce
>   constraints, you already have a version of setuptools installed. And
>as evidenced by, for example, this current bug you would just end up
>breaking on the downgrade trying to replace your existing broken
>version with whatever version is requested. Also you would need a
>separate phase to upgrade/downgrade setuptools separate from other
>packages using it.

Which is why I think we should do it right at the point where we put pip there. 
This can be done with the get-pip script which only requires python to be there.

I did a little spelunking to try and figure out where it is we put pip onto the 
nodepool nodes. It appears that d-i-b for Ubuntu Xenial uses packages to put it 
there, but that doesn’t explain how the latest version got there. Does anyone 
have any insight into where exactly the latest version is installed?





Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-07-14 15:17:52 +:
> On 2017-07-14 10:50:50 -0400 (-0400), Doug Hellmann wrote:
> > Excerpts from Jesse Pretorius's message of 2017-07-14 08:32:48 +:
> > > FYI if you see the following error in your job logs, you have the new 
> > > setuptools to thank:
> > > 
> > > AttributeError: Distribution instance has no attribute 'install_requires'
> > > 
> > > I’ve registered https://github.com/pypa/setuptools/issues/1086 to track 
> > > the issue, and very quickly got a response and there’s a PR up to resolve 
> > > it.
> > > 
> > > In our case we saw this when downgrading setuptools to our known, good 
> > > working version.
> > > 
> > > I’d like to suggest that we include setuptools, pip, wheel and other core 
> > > packages in the upper constraints management process and that all the 
> > > images built for jobs make use of it. The number of times that a new 
> > > release of pip/setuptools has completely ground development to a halt for 
> > > a day, sometimes more, is a little too frequent for my liking.
> > > 
> > > IIRC we’d need to just change the u-c generation output from ‘pip freeze’ 
> > > to ‘pip freeze –all’ for the output to include the versions for pip, 
> > > setuptools and wheel. Then, with that spec, pip can be installed using 
> > > u-c like so:
> > > 
> > > CURL_CMD="curl --silent --show-error --retry 5"
> > > OUTPUT_FILE="get-pip.py"
> > > ${CURL_CMD} https://bootstrap.pypa.io/get-pip.py > ${OUTPUT_FILE} ||\
> > > ${CURL_CMD} 
> > > https://raw.githubusercontent.com/pypa/get-pip/master/get-pip.py > 
> > > ${OUTPUT_FILE}
> > > 
> > > python ${OUTPUT_FILE} pip setuptools wheel -c upper-constraints.txt
> > > 
> > > That will ensure a consistent, known good version set is installed and 
> > > will also cater for the situation where the primary URL for get-pip.py is 
> > > down (as happens sometimes).
> > > 
> > 
> > I know we made the explicit decision not to pin setuptools, but I don't
> > remember the motivation. Was it a technical decision (we can't) or
> > because it seemed like a bad idea (we want to ensure we have the
> > latest)?
> 
> Chicken and egg. Once you get to the point where pip can enforce
> constraints, you already have a version of setuptools installed. And
> as evidenced by, for example, this current bug you would just end up
> breaking on the downgrade trying to replace your existing broken
> version with whatever version is requested. Also you would need a
> separate phase to upgrade/downgrade setuptools separate from other
> packages using it.

That makes sense.

I wonder if we could convince the PyPA folks to allow get-pip.py
to take a version argument, so we could specify which version we want in
our jobs. We would still need a way to manage that version number, but
modifying get-pip.py would solve the bootstrapping issue.

Doug



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Michał Jastrzębski
Guys you just described Kolla-Kubernetes pretty much... how about
we join efforts and work towards this goal together?

On 14 July 2017 at 08:43, Flavio Percoco  wrote:
> On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote:
>>
>> On 14.07.2017 11:17, Flavio Percoco wrote:
>>>
>>>
>>> Greetings,
>>>
>>> As some of you know, I've been working on the second phase of TripleO's
>>> containerization effort. This phase is about migrating the docker based
>>> deployment onto Kubernetes.
>>>
>>> This phase requires work on several areas: Kubernetes deployment,
>>> OpenStack
>>> deployment on Kubernetes, configuration management, etc. While I've been
>>> diving
>>> into all of these areas, this email is about the second point, OpenStack
>>> deployment on Kubernetes.
>>>
>>> There are several tools we could use for this task. kolla-kubernetes,
>>> openstack-helm, ansible roles, among others. I've looked into these
>>> tools and
>>> I've come to the conclusion that TripleO would be better off by having
>>> ansible
>>> roles that would allow for deploying OpenStack services on Kubernetes.
>>>
>>> The existing solutions in the OpenStack community require using Helm.
>>> While I
>>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>>> projects, I
>>> believe using any of them would add an extra layer of complexity to
>>> TripleO,
>>
>>
>> It's hard to estimate that complexity w/o having a PoC of such an
>> integration. We should come up with a final choice once we have it done.
>>
>> My vote would go for investing engineering resources into solutions that
>> have problems already solved, even by the price of added complexity (but
>> that sort of depends...). Added complexity may be compensated with
>> removed complexity (like those client -> Mistral -> Heat -> Mistral ->
>> Ansible manipulations discussed in the mail thread mentioned below [0])
>
>
> I agree it's hard to estimate but you gotta draw the line somewhere. I
> actually
> spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote
> the
> pyhelm lib (took some code from the openstack-helm folks) and I wrote the
> ansible helm module myself. I'd say I've spent enough time on this research.
>
> I don't think getting a full PoC working is worth it as that will require
> way
> more work for not much value since we can anticipate some of the
> complexities
> already.
>
> As far as the complexity comment goes, I disagree with you. I don't think
> you're
> evaluating the amount of complexity that there *IS* already in TripleO and
> how
> adding more complexity (layers, states, services) would make things worse
> for
> not much extra value.
>
> By all means, I might be wrong here so, do let me know if you're seeing
> something I'm not.
> Flavio
> --
> @flaper87
> Flavio Percoco
>
>



Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-07-14 Thread Fox, Kevin M
Yeah, understood. I was just responding to the question of why you would ever want 
to do X. There are reasons. Being out of scope is an OK answer though.

Thanks,
Kevin

From: Amrith Kumar [amrith.ku...@gmail.com]
Sent: Thursday, July 13, 2017 9:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Kevin,

In interests of 'keeping it simple', I'm going to try and prioritize the 
use-cases and pick implementation strategies which target the higher priority 
ones without needlessly excluding other (lower priority) ones.

Thanks,

-amrith

--
Amrith Kumar
​
P.S. Verizon is hiring ​OpenStack engineers nationwide. If you are interested, 
please contact me or visit https://t.co/gGoUzYvqbE


On Wed, Jul 12, 2017 at 5:46 PM, Fox, Kevin M 
> wrote:
There is a use case where some sites have folks buy whole bricks of compute 
nodes that get added to the overarching cloud, but using AZ's or 
HostAggregates/Flavors to dedicate the hardware to the users.

You might want to land the db vm on the hardware for that project, and one would 
expect the normal quota would be dinged for it rather than a special trove 
quota. Otherwise they may have more quota than the hosts can actually handle.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Wednesday, July 12, 2017 6:57 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Excerpts from Amrith Kumar's message of 2017-07-12 06:14:28 -0500:
> All:
>
> First, let me thank all of you who responded and provided feedback
> on what I wrote. I've summarized what I heard below and am posting
> it as one consolidated response rather than responding to each
> of your messages and making this thread even deeper.
>
> As I say at the end of this email, I will be setting up a session at
> the Denver PTG to specifically continue this conversation and hope
> you will all be able to attend. As soon as time slots for PTG are
> announced, I will try and pick this slot and request that you please
> attend.
>
> 
>
> Thierry: naming issue; call it Hoard if it does not have a migration
> path.
>
> 
>
> Kevin: use a container approach with k8s as the orchestration
> mechanism, addresses multiple issues including performance. Trove to
> provide containers for multiple components which cooperate to provide
> a single instance of a database or cluster. Don't put all components
> (agent, monitoring, database) in a single VM, decoupling makes
> migration and upgrades easier and allows trove to reuse database
> vendor-supplied containers. Performance of databases in VMs is poor
> compared to databases on bare-metal.
>
> 
>
> Doug Hellmann:
>
> > Does "service VM" need to be a first-class thing?  Akanda creates
> > them, using a service user. The VMs are tied to a "router" which is
> > the billable resource that the user understands and interacts with
> > through the API.
>
> Amrith: Doug, yes because we're looking not just for service VM's but all
> resources provisioned by a service. So, to Matt's comment about a
> blackbox DBaaS, the VM's, storage, snapshots, ... they should all be
> owned by the service, charged to a user's quota but not visible to the
> user directly.

I still don't understand. If you have entities that represent the
DBaaS "host" or "database" or "database backup" or whatever, then
you put a quota on those entities and you bill for them. If the
database actually runs in a VM or the backup is a snapshot, those
are implementation details. You don't want to have to rewrite your
quota management or billing integration if those details change.

Doug
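To make Doug's point concrete, here is a minimal, purely illustrative sketch (all class and entity names are hypothetical, not Trove code) of quota tracked against service-level entities, so that swapping the backing implementation never touches quota or billing logic:

```python
# Hypothetical sketch: quota is charged against service-level entities
# ("database_instance"), not against the Nova/Cinder resources that
# happen to implement them.

class QuotaExceeded(Exception):
    pass

class QuotaTracker:
    def __init__(self, limits):
        self.limits = limits                       # e.g. {"database_instance": 5}
        self.usage = {name: 0 for name in limits}  # current consumption

    def consume(self, entity, count=1):
        if self.usage[entity] + count > self.limits[entity]:
            raise QuotaExceeded(entity)
        self.usage[entity] += count

class DatabaseService:
    """The service owns the implementation; billing only sees entities."""
    def __init__(self, quotas):
        self.quotas = quotas

    def create_database(self, backend="vm"):
        # Changing backend ("vm" -> "container") never touches quota logic.
        self.quotas.consume("database_instance")
        return {"entity": "database_instance", "backend": backend}

svc = DatabaseService(QuotaTracker({"database_instance": 2}))
svc.create_database(backend="vm")
svc.create_database(backend="container")  # same quota, different backend
```

The billing integration then only ever counts "database_instance" entities, whatever the implementation detail behind them.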

>
> 
>
> Jay:
>
> > Frankly, I believe all of these types of services should be built
> > as applications that run on OpenStack (or other)
> > infrastructure. In other words, they should not be part of the
> > infrastructure itself.
> >
> > There's really no need for a user of a DBaaS to have access to the
> > host or hosts the DB is running on. If the user really wanted
> > that, they would just spin up a VM/baremetal server and install
> > the thing themselves.
>
> and subsequently in follow-up with Zane:
>
> > Think only in terms of what a user of a DBaaS really wants. At the
> > end of the day, all they want is an address in the cloud where they
> > can point their application to write and read data from.
> > ...
> > At the end of the day, I think Trove is best implemented as a hosted
> > application that exposes an API to its users that is entirely
> > separate from the underlying infrastructure APIs like
> > Cinder/Nova/Neutron.
>
> Amrith: Yes, I agree, +1000
>
> 
>
> Clint (in response to Jay's proposal regarding the service making all
> resources multi-tenant) raised a concern about having multi-tenant
> shared 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Jiří Stránský

On 14.7.2017 11:17, Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools and
I've come to the conclusion that TripleO would be better off by having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack projects, I
believe using any of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting for years - especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts (I'm
happy to discuss this point further but I'm keepping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need of puppet
entirely) and they would manage the lifecycle. The roles would be isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion and
gather thoughts and opinions from the rest of the community.


I agree this is a direction we should explore further. This would give 
us the option to tailor things exactly as we need -- good for keeping 
our balance in having interfaces as stable as possible, while still 
making enough development progress. And we'd keep our ability to make 
important changes (e.g. bugfixes) without delays.


We'll have to write more code ourselves, but it's possible that if we 
picked up an existing tool, we'd have to spend that time (if not more) 
elsewhere. Migrating existing non-kubernetized TripleO deployments to 
kubernetized is going to be pretty difficult even if we do what you 
suggested. I imagine that if we also had to fit into some pre-existing 
external deployment/management interfaces, while trying to keep ours 
stable or make just iterative changes, it might turn out to be a surreal 
effort. We will have to design things with migration from "legacy 
TripleO" in mind, or make later amendments here and there solely for 
this purpose. Such design and patches would probably not be a good fit 
for non-tripleo projects.


What I recall from our old PoC [2], defining the resources and init 
containers etc. will probably not be the most difficult task, and 
furthermore we can largely draw inspiration from our current 
containerized solution too. I think the more challenging things might be 
e.g. config generation with Ansible, and how major upgrades and rolling 
updates will be done (how all this ties into the APB way of 
provisioning/deprovisioning). And of course how to fulfill the 
expectations that TripleO has set around network isolation and HA :)
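As a very rough sketch of what such a per-service role might look like (every file, variable, and task name here is illustrative only, not an existing TripleO interface), config generation and deployment could be plain Ansible tasks, e.g. shelling out to kubectl until a dedicated module is settled on:

```yaml
# roles/keystone/tasks/main.yml -- hypothetical sketch, not real TripleO code
- name: Render keystone.conf from a template (replacing the puppet step)
  template:
    src: keystone.conf.j2
    dest: "{{ config_dir }}/keystone.conf"

- name: Render the Kubernetes manifests (ConfigMap + Deployment)
  template:
    src: "{{ item }}.j2"
    dest: "{{ manifest_dir }}/{{ item }}"
  with_items:
    - keystone-configmap.yaml
    - keystone-deployment.yaml

- name: Apply the manifests to the cluster
  # Assumes kubectl is available where the role runs; a proper Ansible
  # Kubernetes module could replace this command later.
  command: kubectl apply -f "{{ manifest_dir }}/{{ item }}"
  with_items:
    - keystone-configmap.yaml
    - keystone-deployment.yaml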


I'm eager to give the latest code a try myself :) Thanks for working on 
this, it looks like there's been great progress lately!


Jirka



Finally, what I really like about writing pure ansible roles is that ansible is
a known, powerful tool that has been adopted by many operators already. It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate concerns
in the deployment workflow and the idea of making it simple for users of TripleO
to do the same at runtime. Unfortunately, going down this road means 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Flavio Percoco

On 14/07/17 17:26 +0200, Bogdan Dobrelya wrote:

On 14.07.2017 11:17, Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment,
OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these
tools and
I've come to the conclusion that TripleO would be better off by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm.
While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack
projects, I
believe using any of them would add an extra layer of complexity to
TripleO,


It's hard to estimate that complexity w/o having a PoC of such an
integration. We should come up with a final choice once we have it done.

My vote would go for investing engineering resources into solutions that
have problems already solved, even by the price of added complexity (but
that sort of depends...). Added complexity may be compensated with
removed complexity (like those client -> Mistral -> Heat -> Mistral ->
Ansible manipulations discussed in the mail thread mentioned below [0])


I agree it's hard to estimate but you gotta draw the line somewhere. I actually
spent time on this and here's a small PoC of ansible+mariadb+helm. I wrote the
pyhelm lib (took some code from the openstack-helm folks) and I wrote the
ansible helm module myself. I'd say I've spent enough time on this research.

I don't think getting a full PoC working is worth it as that will require way
more work for not much value since we can anticipate some of the complexities
already.

As far as the complexity comment goes, I disagree with you. I don't think you're
evaluating the amount of complexity that there *IS* already in TripleO and how
adding more complexity (layers, states, services) would make things worse for
not much extra value.

By all means, I might be wrong here so, do let me know if you're seeing
something I'm not.
Flavio
--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-07-14 Thread Ed Leafe
On Jul 13, 2017, at 10:32 PM, Fei Long Wang  wrote:

> I agree with Zane on most of the parts. But one thing I don't really
> understand is why the OpenStack community is still confused about IaaS,
> PaaS and SaaS; does the classification really matter nowadays? Do we
> really need a label/tag for OpenStack to limit it as an IaaS, PaaS or
> SaaS? I never see AWS say it's an IaaS, PaaS or SaaS. Did Azure or
> Google Cloud say that? I think they're just providing the services their
> customers want.

Sure, they may not distinguish those things publicly, but in their internal 
development teams it is very likely that they understand these boundaries. And 
just for another quick example, from the Azure site:

https://azure.microsoft.com/en-us/overview/azure-vs-aws/

"We are the only cloud provider recognized in the industry as having leading 
solutions in IaaS, PaaS, and SaaS. And Azure PaaS platform services can help 
you be more productive and increase your ROI according to this Forrester Total 
Economic Impact study."

So I don’t think that this distinction is peculiar to OpenStack.

-- Ed Leafe








Re: [openstack-dev] [tripleo] containers-multinode-upgrades-nv is stable, please respect the results

2017-07-14 Thread Emilien Macchi
Thanks Jirka. +1 with what you said, of course.

Just some context on "why the job isn't voting NOW":
https://review.openstack.org/#/c/479198/
Any feedback from folks working on CI & promotions is welcome.

Thanks,

On Fri, Jul 14, 2017 at 8:28 AM, Jiří Stránský  wrote:
> Hi all,
>
> I'm just sending this plea -- let's pay attention to the
> containers-multinode-upgrades-nv job results in the CI please, and treat it
> as voting if possible. There's been a fair amount of breakage lately but all
> was caused by merging TripleO patches on which the job failed. The job in
> itself has been reliable.
>
> We're on the way to make it voting, but this depends also on adding it to
> promotion jobs, so that RDO RPMs cannot be promoted if this job is failing
> (e.g. due to changes in non-TripleO projects which don't run the job in
> gerrit).
>
>
> Thanks, and have a good day!
>
> Jirka
>



-- 
Emilien Macchi



[openstack-dev] [tripleo] containers-multinode-upgrades-nv is stable, please respect the results

2017-07-14 Thread Jiří Stránský

Hi all,

I'm just sending this plea -- let's pay attention to the 
containers-multinode-upgrades-nv job results in the CI please, and treat 
it as voting if possible. There's been a fair amount of breakage lately 
but all was caused by merging TripleO patches on which the job failed. 
The job in itself has been reliable.


We're on the way to make it voting, but this depends also on adding it 
to promotion jobs, so that RDO RPMs cannot be promoted if this job is 
failing (e.g. due to changes in non-TripleO projects which don't run the 
job in gerrit).



Thanks, and have a good day!

Jirka



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Bogdan Dobrelya
On 14.07.2017 11:17, Flavio Percoco wrote:
> 
> Greetings,
> 
> As some of you know, I've been working on the second phase of TripleO's
> containerization effort. This phase is about migrating the docker-based
> deployment onto Kubernetes.
> 
> This phase requires work on several areas: Kubernetes deployment,
> OpenStack
> deployment on Kubernetes, configuration management, etc. While I've been
> diving
> into all of these areas, this email is about the second point, OpenStack
> deployment on Kubernetes.
> 
> There are several tools we could use for this task. kolla-kubernetes,
> openstack-helm, ansible roles, among others. I've looked into these
> tools and
> I've come to the conclusion that TripleO would be better off by having
> ansible
> roles that would allow for deploying OpenStack services on Kubernetes.
> 
> The existing solutions in the OpenStack community require using Helm.
> While I
> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
> projects, I
> believe using any of them would add an extra layer of complexity to
> TripleO,

It's hard to estimate that complexity w/o having a PoC of such an
integration. We should come up with a final choice once we have it done.

My vote would go for investing engineering resources into solutions that
have problems already solved, even by the price of added complexity (but
that sort of depends...). Added complexity may be compensated with
removed complexity (like those client -> Mistral -> Heat -> Mistral ->
Ansible manipulations discussed in the mail thread mentioned below [0])

> which is something the team has been fighting for years -
> especially now
> that the snowball is being chopped off.
> 
> Adopting any of the existing projects in the OpenStack community would
> require
> TripleO to also write the logic to manage those projects. For example,
> in the
> case of openstack-helm, the TripleO team would have to write either ansible
> roles or heat templates to manage - install, remove, upgrade - the
> charts (I'm
> happy to discuss this point further but I'm keeping it at a high-level on
> purpose for the sake of not writing a 10k-words-long email).
> 
> James Slagle sent an email[0], a couple of days ago, to form TripleO plans
> around ansible. One take-away from this thread is that TripleO is adopting
> ansible more and more, which is great and it fits perfectly with the
> conclusion
> I reached.
> 
> Now, what this work means is that we would have to write an ansible role
> for
> each service that will deploy the service on a Kubernetes cluster.
> Ideally these
> roles will also generate the configuration files (removing the need of
> puppet
> entirely) and they would manage the lifecycle. The roles would be
> isolated and
> this will reduce the need of TripleO Heat templates. Doing this would give
> TripleO full control on the deployment process too.
> 
> In addition, we could also write Ansible Playbook Bundles to contain
> these roles
> and run them using the existing docker-cmd implementation that is coming
> out in
> Pike (you can find a PoC/example of this in this repo[1]).
> 
> Now, I do realize the amount of work this implies and that this is my
> opinion/conclusion. I'm sending this email out to kick-off the
> discussion and
> gather thoughts and opinions from the rest of the community.
> 
> Finally, what I really like about writing pure ansible roles is that
> ansible is
> a known, powerful tool that has been adopted by many operators
> already. It'll
> provide the flexibility needed and, if structured correctly, it'll allow
> for
> operators (and other teams) to just use the parts they need/want without
> depending on the full-stack. I like the idea of being able to separate
> concerns
> in the deployment workflow and the idea of making it simple for users of
> TripleO
> to do the same at runtime. Unfortunately, going down this road means
> that my
> hope of creating a field where we could collaborate even more with other
> deployment tools will be a bit limited but I'm confident the result
> would also
> be useful for others and that we all will benefit from it... My hopes
> might be a
> bit naive *shrugs*
> 
> Flavio
> 
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
> [1] https://github.com/tripleo-apb/tripleo-apbs
> 
> -- 
> @flaper87
> Flavio Percoco
> 
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jeremy Stanley
On 2017-07-14 10:50:50 -0400 (-0400), Doug Hellmann wrote:
> Excerpts from Jesse Pretorius's message of 2017-07-14 08:32:48 +:
> > FYI if you see the following error in your job logs, you have the new 
> > setuptools to thank:
> > 
> > AttributeError: Distribution instance has no attribute 'install_requires'
> > 
> > I’ve registered https://github.com/pypa/setuptools/issues/1086 to track the 
> > issue, and very quickly got a response and there’s a PR up to resolve it.
> > 
> > In our case we saw this when downgrading setuptools to our known, good 
> > working version.
> > 
> > I’d like to suggest that we include setuptools, pip, wheel and other core 
> > packages in the upper constraints management process and that all the 
> > images built for jobs make use of it. The number of times that a new 
> > release of pip/setuptools has completely ground development to a halt for a 
> > day, sometimes more, is a little too frequent for my liking.
> > 
> > IIRC we’d need to just change the u-c generation output from ‘pip freeze’ 
> > to ‘pip freeze –all’ for the output to include the versions for pip, 
> > setuptools and wheel. Then, with that spec, pip can be installed using u-c 
> > like so:
> > 
> > CURL_CMD="curl --silent --show-error --retry 5"
> > OUTPUT_FILE="get-pip.py"
> > ${CURL_CMD} https://bootstrap.pypa.io/get-pip.py > ${OUTPUT_FILE} ||\
> > ${CURL_CMD} https://raw.githubusercontent.com/pypa/get-pip/master/get-pip.py > ${OUTPUT_FILE}
> > 
> > python ${OUTPUT_FILE} pip setuptools wheel -c upper-constraints.txt
> > 
> > That will ensure a consistent, known good version set is installed and will 
> > also cater for the situation where the primary URL for get-pip.py is down 
> > (as happens sometimes).
> > 
> 
> I know we made the explicit decision not to pin setuptools, but I don't
> remember the motivation. Was it a technical decision (we can't) or
> because it seemed like a bad idea (we want to ensure we have the
> latest)?

Chicken and egg. Once you get to the point where pip can enforce
constraints, you already have a version of setuptools installed. And
as evidenced by, for example, this current bug you would just end up
breaking on the downgrade trying to replace your existing broken
version with whatever version is requested. Also you would need a
separate phase to upgrade/downgrade setuptools separate from other
packages using it.
-- 
Jeremy Stanley




Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Emilien Macchi
On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:
>
> Greetings,
>
> As some of you know, I've been working on the second phase of TripleO's
> containerization effort. This phase is about migrating the docker-based
> deployment onto Kubernetes.
>
> This phase requires work on several areas: Kubernetes deployment, OpenStack
> deployment on Kubernetes, configuration management, etc. While I've been
> diving
> into all of these areas, this email is about the second point, OpenStack
> deployment on Kubernetes.
>
> There are several tools we could use for this task. kolla-kubernetes,
> openstack-helm, ansible roles, among others. I've looked into these tools
> and
> I've come to the conclusion that TripleO would be better off by having
> ansible
> roles that would allow for deploying OpenStack services on Kubernetes.
>
> The existing solutions in the OpenStack community require using Helm. While
> I
> like Helm and both, kolla-kubernetes and openstack-helm OpenStack projects,
> I
> believe using any of them would add an extra layer of complexity to TripleO,
> which is something the team has been fighting for years - especially
> now
> that the snowball is being chopped off.
>
> Adopting any of the existing projects in the OpenStack community would
> require
> TripleO to also write the logic to manage those projects. For example, in
> the
> case of openstack-helm, the TripleO team would have to write either ansible
> roles or heat templates to manage - install, remove, upgrade - the charts
> (I'm
> happy to discuss this point further but I'm keeping it at a high-level on
> purpose for the sake of not writing a 10k-words-long email).
>
> James Slagle sent an email[0], a couple of days ago, to form TripleO plans
> around ansible. One take-away from this thread is that TripleO is adopting
> ansible more and more, which is great and it fits perfectly with the
> conclusion
> I reached.
>
> Now, what this work means is that we would have to write an ansible role for
> each service that will deploy the service on a Kubernetes cluster. Ideally
> these
> roles will also generate the configuration files (removing the need of
> puppet
> entirely) and they would manage the lifecycle. The roles would be isolated
> and
> this will reduce the need of TripleO Heat templates. Doing this would give
> TripleO full control on the deployment process too.
>
> In addition, we could also write Ansible Playbook Bundles to contain these
> roles
> and run them using the existing docker-cmd implementation that is coming out
> in
> Pike (you can find a PoC/example of this in this repo[1]).
>
> Now, I do realize the amount of work this implies and that this is my
> opinion/conclusion. I'm sending this email out to kick-off the discussion
> and
> gather thoughts and opinions from the rest of the community.
>
> Finally, what I really like about writing pure ansible roles is that ansible
> is
> a known, powerful tool that has been adopted by many operators already.
> It'll
> provide the flexibility needed and, if structured correctly, it'll allow for
> operators (and other teams) to just use the parts they need/want without
> depending on the full-stack. I like the idea of being able to separate
> concerns
> in the deployment workflow and the idea of making it simple for users of
> TripleO
> to do the same at runtime. Unfortunately, going down this road means that my
> hope of creating a field where we could collaborate even more with other
> deployment tools will be a bit limited but I'm confident the result would
> also
> be useful for others and that we all will benefit from it... My hopes might
> be a
> bit naive *shrugs*

Of course I'm biased since I've been (a little) involved in that work,
but I like the idea of:

- Moving forward with our containerization. docker-cmd will help us
for sure for this transition (I insist on the fact that TripleO is a
product that you can upgrade and we try to make it smooth for our
operators), so we can't just trash everything and switch to a new
tool. I think the approach that we're taking is great and made of baby
steps where we try to solve different problems.
- Using more Ansible - the right way - when it makes sense : with the
TripleO containerization, we only use Puppet for Configuration
Management, managing a few resources but not for orchestration (or not
all the features that Puppet provides) and for Data Binding (Hiera). To
me, it doesn't make sense for us to keep investing much in Puppet
modules if we go k8s & Ansible. That said, see the next point.
- Having a transition path between TripleO with Puppet and TripleO
with APBs and having some sort of binding between previous hieradata
generated by TripleO & a similar data binding within Ansible playbooks
would help. I saw your PoC Flavio, I found it great and I think we
should make 

Re: [openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Doug Hellmann
Excerpts from Jesse Pretorius's message of 2017-07-14 08:32:48 +:
> FYI if you see the following error in your job logs, you have the new 
> setuptools to thank:
> 
> AttributeError: Distribution instance has no attribute 'install_requires'
> 
> I’ve registered https://github.com/pypa/setuptools/issues/1086 to track the 
> issue, and very quickly got a response and there’s a PR up to resolve it.
> 
> In our case we saw this when downgrading setuptools to our known, good 
> working version.
> 
> I’d like to suggest that we include setuptools, pip, wheel and other core 
> packages in the upper constraints management process and that all the images 
> built for jobs make use of it. The number of times that a new release of 
> pip/setuptools has completely ground development to a halt for a day, 
> sometimes more, is a little too frequent for my liking.
> 
> IIRC we’d need to just change the u-c generation output from ‘pip freeze’ to 
> ‘pip freeze --all’ for the output to include the versions for pip, setuptools 
> and wheel. Then, with that spec, pip can be installed using u-c like so:
> 
> CURL_CMD="curl --silent --show-error --retry 5"
> OUTPUT_FILE="get-pip.py"
> ${CURL_CMD} https://bootstrap.pypa.io/get-pip.py > ${OUTPUT_FILE} ||\
> ${CURL_CMD} 
> https://raw.githubusercontent.com/pypa/get-pip/master/get-pip.py > 
> ${OUTPUT_FILE}
> 
> python ${OUTPUT_FILE} pip setuptools wheel -c upper-constraints.txt
> 
> That will ensure a consistent, known good version set is installed and will 
> also cater for the situation where the primary URL for get-pip.py is down (as 
> happens sometimes).
> 

I know we made the explicit decision not to pin setuptools, but I don't
remember the motivation. Was it a technical decision (we can't) or
because it seemed like a bad idea (we want to ensure we have the
latest)?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-14 Thread Walter Boring
Also,
  For what it's worth, Cinder has a docker-compose contrib directory here
that we merged a month or so ago for standing up Cinder.

https://github.com/openstack/cinder/tree/master/contrib/block-box

On Sat, Jul 8, 2017 at 2:03 PM, Leni Kadali Mutungi 
wrote:

> Hello all.
>
> I am trying to use the Cinder and Glance Docker images you provide in
> relation to the setup here:
> http://www.ovirt.org/develop/release-management/features/
> cinderglance-docker-integration/
>
> I tried to run `sudo docker pull
> kollaglue/centos-rdo-glance-registry:latest` and got a "not found" error.
> I thought it could be possible to use a Dockerfile to spin up
> an equivalent of it, so I would like some guidance on how to go about
> doing that. Best practices and so on. Alternatively, if it is
> possible, may you point me in the direction of the equivalent images
> mentioned in the guides if they have been superseded by something else?
> Thanks.
>
> CCing the oVirt users and devel lists to see if anyone has experienced
> something similar.
>
> --
> - Warm regards
> Leni Kadali Mutungi
>


Re: [openstack-dev] Website shows talk submission closed - its not 11:59 PM PST yet

2017-07-14 Thread Jimmy McArthur
Our apologies for the problem! An automated timer went off and closed 
the CFP erroneously. It is now back up and running. Please proceed with your 
submission and complete the process.


Thank you,
Jimmy


Chandan kumar 
July 14, 2017 at 9:39 AM
Hi

I am unable to submit a talk for the Sydney summit because the site says
the Call for Presentations is closed (attached screenshot). However, it is
not 11:59 P.M. PST yet.

Can somebody help me, please?

Thanks

Chandan Kumar


[openstack-dev] [docs][all] trouble enabling warning-is-error due to "already registered" warnings

2017-07-14 Thread Doug Hellmann
tl;dr
-

A few projects that use wsme.sphinxext or some other extensions that
register similar directives multiple times have been unable to turn on
the warning-is-error flag as part of the doc migration. The solution to
that problem is to add some warning suppression information to the local
sphinx build. See [1] for an example.

Why am I seeing the warning "directive 'foo' is already registered, it
will be overridden"?
--

Our doc build jobs invoke sphinx through pbr by running "tox -e venv --
python setup.py build_sphinx". Doing that, instead of sphinx-build, has
allowed us to set several sphinx configuration options consistently
across all projects without having to land a patch in every repository.
One of those options was to suppress warnings like this, which are
informational but don't actually cause us any problems.

When we updated to Sphinx 1.6.2, the integration in pbr changed in a way
that makes it impossible to pass those options through. That means we
are no longer suppressing the warnings globally. When the
warning-is-error flag is turned on, the informational warning turns into
an error and the build fails.

How do I eliminate the warning?
---

To suppress the warning, set the "suppress_warnings" option in
doc/source/conf.py to the list of names of warnings to be suppressed. By
default the list is empty. You do not want to suppress *all* warnings,
because that would make the warning-is-error flag irrelevant. It is
safe, however, to suppress warnings related to things that extensions
do, like registering directives and roles. This is what pbr used to do.

For example, in [1] I have set the option to:

suppress_warnings = [
'app.add_directive', 'app.add_role',
'app.add_generic_role', 'app.add_node',
'image.nonlocal_uri',
]

This ignores warnings related to extension actions, over which you have
no control, and tells Sphinx that it is OK to refer to images that are
not in the local repository.

Again, please be conservative about adding things to this list. Consider
it like disabling a rule in a code linter.

Doug

[1] https://review.openstack.org/#/c/483953/1/doc/source/conf.py
[2] 
http://www.sphinx-doc.org/en/stable/config.html?highlight=suppress_warnings#confval-suppress_warnings



Re: [openstack-dev] [OpenStack-Dev][Nova] - https://launchpad.net/bugs/1667794 Changing hostname not to be treated as a pattern instead exact match will be done.

2017-07-14 Thread Matt Riedemann

On 7/14/2017 6:49 AM, nidhi.h...@wipro.com wrote:

Hello all,

This is regarding bug 1667794 as mentioned in the subject.
Its review is going on here:
https://review.openstack.org/#/c/474949/

Bug is - Nova treats hostname as pattern

Description

Nova commands such as "hypervisor-list --matching <hostname>",
host-evacuate-live, host-evacuate, and a few more treat the
user-specified "host-name" as the input to the HTTP
/os-hypervisors/{hypervisor_hostname_pattern}/search API.

Nova checks "host-name" as a pattern instead of an exact match,
which causes problems with some commands, such as
"nova host-evacuate-live compute-1", where the host-evacuate
action will apply to "compute-1", "compute-10", and so on.
That is not right.
Correcting it by using exact match.

We have fixed it and put it up for review. We need your opinion on this.

Kindly share your opinion in case this does not seem to be an 
acceptable fix to anyone.


Thanks

Nidhi




Thanks for bringing this up. Your fix is in the wrong place; see the 
comments in the patch.


--

Thanks,

Matt



[openstack-dev] CFP Closes today - Get your Presentations in!

2017-07-14 Thread Jimmy McArthur

Hi everyone,

Don't forget - the Call for Presentations for the OpenStack Summit Sydney 
closes *today at 11:59 PM Pacific Time (July 15 at 6:59 UTC)!* Submit 
your proposals before the deadline.



Interested in how the speaking sessions are selected? Review the 
selection process here.


If you have any questions, please email sum...@openstack.org.


Cheers,
Jimmy


Re: [openstack-dev] [oslo][all] next week is deadline for final release for non-client libraries

2017-07-14 Thread ChangBo Guo
Just a reminder: I will add final releases of oslo libraries for Pike next
Monday.

2017-07-10 22:37 GMT+08:00 ChangBo Guo :

> OpenStackers,
>
> According to the Pike schedule: https://releases.openstack.org/pike/schedule.html
>
> Jul 17 - Jul 21 is the deadline for final releases of oslo libraries, so
> please pay more attention to reviews that are needed for Pike. Feel
> free to ping me if you want to quicken the review process.
>
> --
> ChangBo Guo(gcb)
>



-- 
ChangBo Guo(gcb)


[openstack-dev] [OpenStack-Dev][Nova] - https://launchpad.net/bugs/1667794 Changing hostname not to be treated as a pattern instead exact match will be done.

2017-07-14 Thread nidhi.h...@wipro.com
Hello all,

This is regarding bug 1667794 as mentioned in the subject.
Its review is going on here:
https://review.openstack.org/#/c/474949/

Bug is - Nova treats hostname as pattern

Description
Nova commands such as "hypervisor-list --matching <hostname>",
host-evacuate-live, host-evacuate, and a few more treat the
user-specified "host-name" as the input to the HTTP
/os-hypervisors/{hypervisor_hostname_pattern}/search API.


Nova checks "host-name" as a pattern instead of an exact match,
which causes problems with some commands, such as
"nova host-evacuate-live compute-1", where the host-evacuate
action will apply to "compute-1", "compute-10", and so on.
That is not right.
Correcting it by using exact match.
We have fixed it and put it up for review. We need your opinion on this.

Kindly share your opinion in case this does not seem to be an acceptable fix to 
anyone.
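The difference is easy to demonstrate with a toy host list (the host names below are made up for illustration): a pattern search matches every host whose name contains the given string, while an exact match returns only the intended host.

```python
# Toy illustration (hypothetical host names) of pattern vs. exact matching
# for a lookup like /os-hypervisors/{hypervisor_hostname_pattern}/search.
import re

hosts = ['compute-1', 'compute-10', 'compute-11', 'control-1']

def search_pattern(pattern):
    # Current behaviour: the name is treated as a search pattern, so
    # "compute-1" also matches "compute-10" and "compute-11".
    return [h for h in hosts if re.search(pattern, h)]

def search_exact(name):
    # Proposed behaviour: only the exact host name matches.
    return [h for h in hosts if h == name]

print(search_pattern('compute-1'))  # ['compute-1', 'compute-10', 'compute-11']
print(search_exact('compute-1'))    # ['compute-1']
```

With pattern matching, "nova host-evacuate-live compute-1" would act on three hosts; with exact matching it acts only on compute-1.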

Thanks
Nidhi


The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should not disseminate, distribute or copy this e-mail. 
Please notify the sender immediately and destroy all copies of this message and 
any attachments. WARNING: Computer viruses can be transmitted via email. The 
recipient should check this email and any attachments for the presence of 
viruses. The company accepts no liability for any damage caused by any virus 
transmitted by this email. www.wipro.com


Re: [openstack-dev] [masakari]Where is the masakari-monitors spec repository?

2017-07-14 Thread Rikimaru Honjo

Hi Sampath,

Thank you for the suggestion!
I will use the masakari-spec repo!

On 2017/07/14 19:11, Sam P wrote:

Hi Honjo,

  There are no dedicated spec repositories for masakari-monitors and
python-masakariclient.
  Please use the masakari-spec repository[1] for spec discussion for
those 2 projects.

  [1] https://review.openstack.org/#/q/project:openstack/masakari-specs
--- Regards,
Sampath



On Fri, Jul 14, 2017 at 4:53 PM, Rikimaru Honjo
 wrote:

Hi all,

I want to push a new spec document of masakari-monitors.
But there is not a masakari-monitors-spec repository.
Can I push it to masakari-spec repository?

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp







--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
★ Our company name and email address have changed.

NTT TechnoCross Corporation
Cloud & Security Business Division, Second Business Unit (CS2BU)
Rikimaru Honjo
TEL: 045-212-7539
E-mail: honjo.rikim...@po.ntt-tx.co.jp
220-0012
4-4-5 Minatomirai, Nishi-ku, Yokohama
Yokohama i-Mark Place, 13F





Re: [openstack-dev] [masakari]Where is the masakari-monitors spec repository?

2017-07-14 Thread Sam P
Hi Honjo,

 There are no dedicated spec repositories for masakari-monitors and
python-masakariclient.
 Please use the masakari-spec repository[1] for spec discussion for
those 2 projects.

 [1] https://review.openstack.org/#/q/project:openstack/masakari-specs
--- Regards,
Sampath



On Fri, Jul 14, 2017 at 4:53 PM, Rikimaru Honjo
 wrote:
> Hi all,
>
> I want to push a new spec document of masakari-monitors.
> But there is not a masakari-monitors-spec repository.
> Can I push it to masakari-spec repository?
>
> Best regards,
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> Rikimaru Honjo
> E-mail:honjo.rikim...@po.ntt-tx.co.jp
>
>
>


[openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Flavio Percoco


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work in several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task: kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools and
I've come to the conclusion that TripleO would be better off having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both the kolla-kubernetes and openstack-helm projects, I
believe using either of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting against for years - especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts (I'm
happy to discuss this point further but I'm keeping it at a high level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need for puppet
entirely) and they would manage the lifecycle. The roles would be isolated and
this will reduce the need for TripleO Heat templates. Doing this would give
TripleO full control over the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that ansible is
a known, powerful tool that has been adopted by many operators already. It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate concerns
in the deployment workflow and the idea of making it simple for users of TripleO
to do the same at runtime. Unfortunately, going down this road means that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result would also
be useful for others and that we all will benefit from it... My hopes might be a
bit naive *shrugs*

Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
[1] https://github.com/tripleo-apb/tripleo-apbs

--
@flaper87
Flavio Percoco




[openstack-dev] [all] New setuptools release, and the world is broken

2017-07-14 Thread Jesse Pretorius
FYI if you see the following error in your job logs, you have the new 
setuptools to thank:

AttributeError: Distribution instance has no attribute 'install_requires'

I’ve registered https://github.com/pypa/setuptools/issues/1086 to track the 
issue, and very quickly got a response and there’s a PR up to resolve it.

In our case we saw this when downgrading setuptools to our known, good working 
version.

I’d like to suggest that we include setuptools, pip, wheel and other core 
packages in the upper constraints management process and that all the images 
built for jobs make use of it. The number of times that a new release of 
pip/setuptools has completely ground development to a halt for a day, sometimes 
more, is a little too frequent for my liking.

IIRC we’d need to just change the u-c generation output from ‘pip freeze’ to 
‘pip freeze --all’ for the output to include the versions for pip, setuptools 
and wheel. Then, with that spec, pip can be installed using u-c like so:

CURL_CMD="curl --silent --show-error --retry 5"
OUTPUT_FILE="get-pip.py"
${CURL_CMD} https://bootstrap.pypa.io/get-pip.py > ${OUTPUT_FILE} ||\
${CURL_CMD} 
https://raw.githubusercontent.com/pypa/get-pip/master/get-pip.py > 
${OUTPUT_FILE}

python ${OUTPUT_FILE} pip setuptools wheel -c upper-constraints.txt

That will ensure a consistent, known good version set is installed and will 
also cater for the situation where the primary URL for get-pip.py is down (as 
happens sometimes).





Re: [openstack-dev] [rpm-packaging][karbor]

2017-07-14 Thread Jiong Liu
On Fri, 14 Jul 2017 at 12:10, Chandan kumar wrote:

Hello Jiong,

Thank you for packaging karbor.

On Fri, Jul 14, 2017 at 11:49 AM, Jiong Liu  wrote:
> Hello rpm-packaging team and folks,
>
>
>
> I got trouble with packaging OpenStack project(karbor), which depends 
> on two
> packages: icalendar and abclient.
>
> icalendar has pip package and RPM package, but RPM package can not be 
> found by RDO CI.

python-icalendar is available in Fedora:
https://koji.fedoraproject.org/koji/packageinfo?packageID=10783
We can pull it soon in RDO.

>
> While abclient only has pip package but no RPM package.
>

abclient is not available in Fedora or RDO. I am packaging it. It will be soon 
available in RDO.

>
>
> So in this case, what should I do to make sure these two packages can 
> be installed via RPM when packaging karbor?
>
>
>
> My patch is uploaded to rpm-package review list, as you can find here 
> https://review.openstack.org/#/c/480806/
>

Thanks,

Chandan Kumar




Hi Chandan,

Thank you so much for doing this.

So theoretically, how long will it take to have these two packages installable 
via RPM by the rpm-packaging CI? Would it be possible to have them in before 
the OpenStack Pike release?

Thanks!
Jeremy





Re: [openstack-dev] [rpm-packaging][karbor]

2017-07-14 Thread Chandan kumar
On Fri, Jul 14, 2017 at 12:10 PM, Chandan kumar  wrote:
> Hello Jiong,
>
> Thank you for packaging karbor.
>
> On Fri, Jul 14, 2017 at 11:49 AM, Jiong Liu  wrote:
>> Hello rpm-packaging team and folks,
>>
>>
>>
>> I got trouble with packaging OpenStack project(karbor), which depends on two
>> packages: icalendar and abclient.
>>
>> icalendar has pip package and RPM package, but RPM package can not be found
>> by RDO CI.
>
> python-icalendar is available in Fedora:
> https://koji.fedoraproject.org/koji/packageinfo?packageID=10783
> We can pull it soon in RDO.
>
>>
>> While abclient only has pip package but no RPM package.
>>
>
> abclient is not available in Fedora or RDO. I am packaging it. It will
> be soon available in RDO.

I have filed a python-abclient package review for Fedora :
https://bugzilla.redhat.com/show_bug.cgi?id=1470980

Thanks,

Chandan Kumar



[openstack-dev] [masakari]Where is the masakari-monitors spec repository?

2017-07-14 Thread Rikimaru Honjo

Hi all,

I want to push a new spec document of masakari-monitors.
But there is not a masakari-monitors-spec repository.
Can I push it to masakari-spec repository?

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp





Re: [openstack-dev] [rpm-packaging][karbor]

2017-07-14 Thread Chandan kumar
Hello Jiong,

Thank you for packaging karbor.

On Fri, Jul 14, 2017 at 11:49 AM, Jiong Liu  wrote:
> Hello rpm-packaging team and folks,
>
>
>
> I got trouble with packaging OpenStack project(karbor), which depends on two
> packages: icalendar and abclient.
>
> icalendar has pip package and RPM package, but RPM package can not be found
> by RDO CI.

python-icalendar is available in Fedora:
https://koji.fedoraproject.org/koji/packageinfo?packageID=10783
We can pull it soon in RDO.

>
> While abclient only has pip package but no RPM package.
>

abclient is not available in Fedora or RDO. I am packaging it. It will
be soon available in RDO.

>
>
> So in this case, what should I do to make sure these two packages can be
> installed via RPM when packaging karbor?
>
>
>
> My patch is uploaded to rpm-package review list, as you can find here
> https://review.openstack.org/#/c/480806/
>

Thanks,

Chandan Kumar



[openstack-dev] [rpm-packaging][karbor]

2017-07-14 Thread Jiong Liu
Hello rpm-packaging team and folks,

 

I got trouble packaging an OpenStack project (karbor), which depends on two
packages: icalendar and abclient.

icalendar has a pip package and an RPM package, but the RPM package cannot be
found by RDO CI.

abclient only has a pip package but no RPM package.

So in this case, what should I do to make sure these two packages can be
installed via RPM when packaging karbor?

 

My patch is uploaded to rpm-package review list, as you can find here
https://review.openstack.org/#/c/480806/

Your comments and help are much appreciated!

 

Thanks!

Jeremy
