Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-26 Thread Steve Baker



On 24/08/18 04:36, Fox, Kevin M wrote:

Or use kubelet in standalone mode. It can be configured for either CRI-O or 
Docker. You can drive the static manifests from heat/ansible per host as normal 
and it would be a step in the greater direction of getting to Kubernetes 
without needing the whole thing at once, if that is the goal.


I was an advocate for using kubelet standalone for our container 
orchestration needs well before we started containerizing TripleO. After 
talking to a few kubernetes folk I cooled on the idea, because they had 
one of two responses:
- cautious encouragement, but uncertainty about kubelet standalone 
interface support and consideration for those use cases

- googly eyed incomprehension followed by "why would you do that??"

This was a while ago now, so this could be worth revisiting in the 
future. We'll be making gradual changes, the first of which is using 
podman to manage single containers. However, podman has native support 
for the pod format, so I'm hoping we can switch to that once this 
transition is complete. Then evaluating kubelet becomes much easier.
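To make that a bit more concrete, here is a rough sketch of the kind of 
pod-level management podman already exposes (the pod/container names and 
image are placeholders, and the exact flags may differ between podman 
versions):

  # group containers in a pod instead of managing them individually
  podman pod create --name keystone
  podman run -d --pod keystone --name keystone_api <keystone-image>
  podman pod ps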




Question: rather than writing a middle layer to abstract both container 
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there 
is already support for Docker as well.


We're not writing a middle layer, we're leveraging one which is already 
there.


CRI-O is a socket interface and podman is a CLI interface; both sit 
on top of the exact same Go libraries. At this point, switching to 
podman requires much less development effort because we're only replacing 
docker CLI calls.
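The low effort is largely because podman deliberately mirrors the docker 
CLI, so most call sites are close to a mechanical swap, e.g. (illustrative 
only):

  docker ps --format '{{.Names}}'    # what our docker-based tooling runs today
  podman ps --format '{{.Names}}'    # the podman equivalent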

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:

On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:

On 08/15/2018 04:01 PM, Emilien Macchi wrote:

On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:

  More seriously here: there is an ongoing effort to converge the
  tools around containerization within Red Hat, and we, TripleO, are
  interested in continuing the containerization of our services (which
  was initially done with Docker & Docker-Distribution).
  We're looking at how these containers could be managed by k8s one
  day but way before that we plan to swap out Docker and join CRI-O
  efforts, which seem to be using Podman + Buildah (among other things).

I guess my wording wasn't the best but Alex explained way better here:
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52

If I may have a chance to rephrase, I guess our current intention is to
continue our containerization and investigate how we can improve our
tooling to better orchestrate the containers.
We have a nice interface (openstack/paunch) that allows us to run
multiple container backends, and we're currently looking outside of
Docker to see how we could solve our current challenges with the new tools.
We're looking at CRI-O because it happens to be a project with a great
community, focusing on some problems that we, TripleO have been facing
since we containerized our services.

We're doing all of this in the open, so feel free to ask any question.

I appreciate your response, Emilien, thank you. Alex's responses to
Jeremy on the #openstack-tc channel were informative, thank you Alex.

For now, it *seems* to me that all of the chosen tooling is very Red Hat
centric, which makes sense to me, considering TripleO is a Red Hat product.

Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by
Red Hat and is today marked as single vendor. We welcome others to
contribute to the project upstream just like anybody else.

And for those who don't know the history, the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place
could easily be extended to support distro-specific implementation
details. (Which is kind of how I view podman in the scope of this thread.)


I don't know how much of the current reinvention of container runtimes
and various tooling around containers is the result of politics. I don't
know how much is the result of certain companies wanting to "own" the
container stack from top to bottom. Or how much is a result of technical
disagreements that simply cannot (or will not) be resolved among
contributors in the container development ecosystem.

Or is it some combination of the above? I don't know.

What I *do* know is that the "NIH du jour" mentality currently
playing itself out in the container ecosystem -- reminding me very much
of the Javascript ecosystem -- makes it difficult for any 

Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-16 Thread Steve Baker



On 15/08/18 21:32, Cédric Jeanneret wrote:

Dear Community,

As you may know, a move toward Podman as replacement of Docker is starting.

One of the issues with podman is the lack of a daemon, and more precisely the
lack of a socket allowing us to send commands and get "computer formatted
output" (like JSON or YAML).

In order to work around that, Podman has added support for varlink¹, using
the "socket activation" feature in systemd.

On my side, I would like to push forward the integration of varlink in
TripleO deployed containers, especially since it will allow the following:
# proper interface with Paunch (via python link)

I'm not sure this would be desirable. If we're going to do all container 
management via a socket, I think we'd be better served by using CRI-O. 
One of the advantages I see of podman is being able to manage services 
with systemd again.

# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume

# a way to get container statistics (think "metrics")

# a way, if needed, to get an ansible module being able to talk to
podman (JSON is always better than plain text)

# a way to secure the accesses to Podman management (we have to define
how varlink talks to Podman, maybe providing dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)
Some of these cases might prove to be useful, but I do wonder if just 
making podman calls would be just as simple, without the complexity of 
having another host-level service to manage. We can still do podman 
operations inside containers by bind-mounting in the container state.
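As a rough sketch of that idea (the exact set of bind mounts is an 
assumption and would need verifying against the podman version we ship):

  # run podman from inside a monitoring/healthcheck container by sharing host state
  podman run --rm \
    -v /var/lib/containers:/var/lib/containers \
    -v /var/run/libpod:/var/run/libpod \
    <monitoring-image> podman ps --format json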



That said, I have some questions:
° Does any of you have some experience with varlink and podman interface?
° What do you think about that integration wish?
° Does any of you have concern with this possible addition?
I do worry a bit that this is advocating for a solution before we really 
understand the problems. The biggest unknown for me is what we do about 
healthchecks. Maybe varlink is part of the solution here, or maybe it's a 
systemd timer which executes the healthcheck and restarts the service 
when required.
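For example, the timer's companion service could run something as simple as 
this (the container name, healthcheck path and unit name are placeholders):

  podman exec nova_api /openstack/healthcheck || systemctl restart tripleo_nova_api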

Thank you for your feedback and ideas.

Have a great day (or evening, or whatever suits the time you're reading
this ;))!

C.


¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/






Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-02 Thread Steve Baker



On 02/08/18 13:03, Alex Schultz wrote:

On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:

On 7/6/18 7:02 PM, Ben Nemec wrote:



On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. For example, Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures, but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to
support multiple deployment tools in parallel (originally
tripleo-image-elements and puppet).  Since I think it has become clear that
it's impractical to maintain two different technologies to do essentially
the same thing I'm not sure there's a need for it now.  It's also worth
noting that kolla-kubernetes basically died because there wasn't enough
people to maintain both deployment methods, so we're not the only ones who
have found that to be true.  If/when we move to kubernetes I would
anticipate it going like the initial containers work did - development for a
couple of cycles, then a switch to the new thing and deprecation of the old
thing, then removal of support for the old thing.

That being said, because of the fact that the service yamls are
essentially an API for TripleO because they're referenced in user


this ^^


resource registries, I'm not sure it's worth the churn to move everything
either.  I think that's going to be an issue either way though, it's just a
question of the scope.  _Something_ is going to move around no matter how we
reorganize so it's a problem that needs to be addressed anyway.


[tl;dr] I can foresee reorganizing that API becoming a nightmare for
maintainers doing backports for queens (and the LTS downstream release based
on it). Now imagine kubernetes support coming within the next few years,
before we can let the old API go...

I have an example [0] to share of all the pain brought by a simple move of
'API defaults' from environments/services-docker to environments/services
plus environments/services-baremetal. Each time a file changed contents at
its old location, like here [1], I had to run a lot of sanity checks to
rebase it properly, like checking that the updated paths in resource
registries were still valid or had been moved as well, then picking the
source of truth for the diverged old vs. changed locations - all that to lose
nothing important in the process.

So I'd say please let's *not* change services' paths/namespaces in the t-h-t
"API" without a real need to do so, when there are no alternatives left.


Ok, so it's time to dig this thread back up. I'm currently looking at
the chrony support, which will require a new service[0][1]. Rather than
add it under puppet, we'll likely want to leverage ansible. So I guess
the question is: where do we put services going forward?  Additionally,
as we look to truly removing the baremetal deployment options and the
puppet service deployment, it seems like we need to consolidate under
a single structure.  Given that we don't want to force too much churn,
does this mean that we should align to the docker/services/*.yaml
structure, or should we be proposing a new structure that we can try to
align on?

There is outstanding tech-debt around the nested stacks and references
within these services from when we added the container deployments, so it's
something that would be beneficial to start tackling sooner rather
than later.  Personally I think we're always going to have this issue
when we rename files that could have been referenced by custom
templates, but I don't think we can continue to carry the outstanding
tech debt around these static locations.  Should we be investing in
coming up with some sort of mapping that we can use to warn a user
when we move files?


When Stein development starts, the puppet services will have been 
deprecated for an entire cycle. Can I suggest we use this reorganization 
as the time we delete the puppet services files? This would relieve us 
of the burden of maintaining a deployment method that we no 

[openstack-dev] [tripleo] FFE request for container-prepare-workflow

2018-07-25 Thread Steve Baker

I'd like to request an FFE for this blueprint[1].

The remaining changes will be tracked as Depends-On on this oooq change[2].

Initially the aim of this blueprint was to do all container prepare 
operations in a mistral action before the overcloud deploy. However the 
priority for delivery switched to helping blueprint 
containerized-undercloud with its container prepare. Once this was 
complete it was apparent that the overcloud prepare could share the 
undercloud prepare approach.


The undercloud prepare does the following:

1) During undercloud_config, do a dry-run prepare to populate the image 
parameters (but don't do any image transfers)


2) During tripleo-deploy, driven by tripleo-heat-templates, do the 
actual prepare after the undercloud registry is installed but before any 
containers are required


For the overcloud, 1) will be done by a mistral action[3] and 2) will be 
done during overcloud deploy[4].


The vast majority of code for this blueprint has landed and is exercised 
by containerized-undercloud. I don't expect issues with the overcloud 
changes landing, but in the worst case scenario the overcloud prepare 
can be done manually by running the new command "openstack tripleo 
container image prepare" as documented in this change [5].


[1] 
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow


[2] https://review.openstack.org/#/c/573476/

[3] https://review.openstack.org/#/c/558972/ (landed but currently being 
reverted)


[4] https://review.openstack.org/#/c/581919/ (plus the series before it)

[5] https://review.openstack.org/#/c/553104/
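For reference, the manual fallback would look roughly like this (the option 
names are my assumption of how [5] will land and may differ slightly):

  openstack tripleo container image prepare \
    -e containers-prepare-parameter.yaml \
    --output-env-file overcloud-container-images.yaml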




Re: [openstack-dev] [tripleo] Referring to the --templates directory?

2018-06-27 Thread Steve Baker



On 26/06/18 06:06, Lars Kellogg-Stedman wrote:

Is there a way to refer to the `--templates` directory when writing
service templates?  Existing service templates can use relative paths,
as in:

  resources:
    ContainersCommon:
      type: ./containers-common.yaml

But if I'm writing a local service template (which I often do during
testing/development), I would need to use the full path to the
corresponding file:

  ContainersCommon:
    type: /usr/share/openstack-tripleo-heat-templates/docker/services/containers-common.yaml

But that breaks if I use another template directory via the
--templates option to the `openstack overcloud deploy` command.  Is
there a way to refer to "the current templates directory"?



Your only choice would be to either use an absolute path, or to develop 
your local service template inside a checkout of tripleo-heat-templates, 
which is how other new services are developed.
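For example, something along these lines keeps the relative references 
working (paths are illustrative):

  git clone https://git.openstack.org/openstack/tripleo-heat-templates
  cp my-service.yaml tripleo-heat-templates/docker/services/
  openstack overcloud deploy --templates ./tripleo-heat-templates ...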




Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-11 Thread Steve Baker



On 11/04/18 12:50, Emilien Macchi wrote:

Greetings,

Steve Baker and I had a quick chat today about the work that is being 
done around containers workflow in Rocky cycle.


If you're not familiar with the topic, I suggest first reading the 
blueprint to understand the context here:

https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the 
operator won't have to run all the "openstack overcloud container" 
commands to prepare the container registry and upload the containers. 
Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended to 2 use-cases that we're going to 
explore and figure out how to address:
1) I'm a developer and want to deploy a containerized undercloud with 
customized containers (more or less related to the all-in-one 
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and 
need my patch to be tested when the undercloud is containerized (see 
[2] for an excellent example).


I'm fairly sure the only use cases for this will be developer or CI 
based. I think we need to be strongly encouraging image modifications 
for production deployments to go through some kind of image building 
pipeline. See Next Steps below for the implications of this.



Both cases would require additional things:
- The container registry needs to be deployed *before* actually 
installing the undercloud.
- We need a tool to update containers from this registry and *before* 
deploying them. We already have this tool in place in our CI for the 
overcloud (see [3] and [4]). Now we need a similar thing for the 
undercloud.


One problem I see is that we use roles and environment files to filter 
the images to be pulled/modified/uploaded. Now we would need to assemble 
a list of undercloud *and* overcloud environments, and build some kind 
of aggregate role data for both. This would need to happen before the 
undercloud is even deployed, which is quite a different order from what 
quickstart does currently.


Either that or we do no image filtering and just process every image 
regardless of whether it will be used.




Next steps:
- Agree that we need to deploy the container-registry before the 
undercloud.
- If agreed, we'll create a new Ansible role called 
ansible-role-container-registry that for now will deploy exactly what 
we have in TripleO, without extra features.

+1
- Drive the playbook runtime from tripleoclient to bootstrap the 
container registry (which of course could be disabled in undercloud.conf).
tripleoclient could switch to using this role instead of puppet-tripleo 
to install the registry, however since the only use-cases we have are 
dev/CI driven I wonder if quickstart/infrared can just invoke the role 
when required, before tripleoclient is involved.


- Create another Ansible role that would re-use the container-check tool; 
the idea is to provide a role to modify containers when needed, 
and we could also control it from tripleoclient. The role would 
use the ContainerImagePrepare parameter, which Steve is working on 
right now.


Since the use cases are all upstream CI/dev I do wonder if we should 
just have a dedicated container-check 
<https://github.com/imain/container-check> role inside 
tripleo-quickstart-extras which can continue to use the script[3] or 
whatever. Keeping the logic in quickstart will remove the temptation to 
use it instead of a proper image build pipeline for production deployments.


Alternatively it could still be a standalone role which quickstart 
invokes, just to accommodate development workflows which don't use 
quickstart.



Feedback is welcome, thanks.

[1] All-In-One thread: 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized 
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed: 
https://github.com/imain/container-check
[4] Container-check running in TripleO CI: 
https://review.openstack.org/#/c/558885/ and 
https://review.openstack.org/#/c/529399/

--
Emilien Macchi




Re: [openstack-dev] [tripleo] roadmap on containers workflow

2018-04-11 Thread Steve Baker



On 12/04/18 00:58, Wesley Hayutin wrote:



On Tue, 10 Apr 2018 at 20:51 Emilien Macchi <emil...@redhat.com> wrote:


Greetings,

    Steve Baker and I had a quick chat today about the work that is
being done around containers workflow in Rocky cycle.

If you're not familiar with the topic, I suggest to first read the
blueprint to understand the context here:
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the
operator won't have to run all the "openstack overcloud container"
commands to prepare the container registry and upload the
containers. Indeed, it'll be driven by Heat and Mistral mostly.
But today our discussion extended on 2 uses-cases that we're going
to explore and find how we can address them:
1) I'm a developer and want to deploy a containerized undercloud
with customized containers (more or less related to the all-in-one
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow)
and need my patch to be tested when the undercloud is
containerized (see [2] for an excellent example).

Both cases would require additional things:
- The container registry needs to be deployed *before* actually
installing the undercloud.
- We need a tool to update containers from this registry and
*before* deploying them. We already have this tool in place in our
CI for the overcloud (see [3] and [4]). Now we need a similar
thing for the undercloud.

Next steps:
- Agree that we need to deploy the container-registry before the
undercloud.
- If agreed, we'll create a new Ansible role called
ansible-role-container-registry that for now will deploy exactly
what we have in TripleO, without extra feature.
- Drive the playbook runtime from tripleoclient to bootstrap the
container registry (which of course could be disabled in
undercloud.conf).
- Create another Ansible role that would re-use container-check
tool but the idea is to provide a role to modify containers when
needed, and we could also control it from tripleoclient. The role
would be using the ContainerImagePrepare parameter, which Steve is
working on right now.


This all looks really good Emilien, thanks for sending it out.
Regarding the update of containers, we would just want to be 100% sure 
that we can control which yum repositories are in play for the 
update.  Maybe it will be done by the user prior to running the 
command, or maybe with some flags to whatever command Steve is 
working on.


Is it enough to retain the existing container-check 
<https://github.com/imain/container-check> behavior of just mounting in 
the undercloud's /etc/yum.repos.d?


FYI, we've noticed in CI that when the base OS updates (not baseos) 
are included, you tend to fail on at least one package download on one 
of the 50+ containers due to infra/network issues.  In CI we only enable 
baseos, dlrn updates and the dependency change [1].


It would be interesting to see what speed/reliability change there would 
be if the concurrency of container-check 
<https://github.com/imain/container-check> was disabled and the 
undercloud's /var/cache/yum was mounted into each container to avoid 
duplicate package downloads.
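A rough sketch of what I mean, per container (the image name is a 
placeholder, and yum's keepcache would need to be enabled for the shared 
cache to actually help):

  docker run --rm \
    -v /etc/yum.repos.d:/etc/yum.repos.d:ro \
    -v /var/cache/yum:/var/cache/yum \
    <image> yum -y update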



Thanks

[1] 
https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109



Feedback is welcome, thanks.

[1] All-In-One thread:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed:
https://github.com/imain/container-check
[4] Container-check running in TripleO CI:
https://review.openstack.org/#/c/558885/ and
https://review.openstack.org/#/c/529399/
-- 
Emilien Macchi





Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-20 Thread Steve Baker
On Thu, Aug 17, 2017 at 4:13 PM, Steve Baker <sba...@redhat.com> wrote:

>
>
> On Thu, Aug 17, 2017 at 10:47 AM, Emilien Macchi <emil...@redhat.com>
> wrote:
>
>>
>> > Problem #3: from Ocata to Pike: all container images are
>> > uploaded/specified, even for services not deployed
>> > https://bugs.launchpad.net/tripleo/+bug/1710992
>> > The CI jobs are timeouting during the upgrade process because
>> > downloading + uploading _all_ containers in local cache takes more
>> > than 20 minutes.
>> > So this is where we are now, upgrade jobs timeout on that. Steve Baker
>> > is currently looking at it but we'll probably offer some help.
>>
>> Steve is still working on it: https://review.openstack.org/#/c/448328/
>> Steve, if you need any help (reviewing or coding) - please let us
>> know, as we consider this thing important to have and probably good to
>> have in Pike.
>>
>
> I have a couple of changes up now, one to capture the relationship between
> images and services[1], and another to add an argument to the prepare
> command to filter the image list based on which services are containerised
> [2]. Once these land, all the calls to prepare in CI can be modified to
> also specify these heat environment files, and this will reduce uploads to
> only the images required.
>
> [1] https://review.openstack.org/#/c/448328/
> [2] https://review.openstack.org/#/c/494367/
>
>
Just updating progress on this: with infra caching from docker.io I'm
seeing transfer times of 16 minutes (an improvement on the previous 20+
minutes, which hit the timeout).

Only transferring the required images [3] reduces this to 8 minutes.

[3] https://review.openstack.org/#/c/494767/


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Steve Baker
On Thu, Aug 17, 2017 at 10:47 AM, Emilien Macchi <emil...@redhat.com> wrote:

>
> > Problem #3: from Ocata to Pike: all container images are
> > uploaded/specified, even for services not deployed
> > https://bugs.launchpad.net/tripleo/+bug/1710992
> > The CI jobs are timeouting during the upgrade process because
> > downloading + uploading _all_ containers in local cache takes more
> > than 20 minutes.
> > So this is where we are now, upgrade jobs timeout on that. Steve Baker
> > is currently looking at it but we'll probably offer some help.
>
> Steve is still working on it: https://review.openstack.org/#/c/448328/
> Steve, if you need any help (reviewing or coding) - please let us
> know, as we consider this thing important to have and probably good to
> have in Pike.
>

I have a couple of changes up now, one to capture the relationship between
images and services[1], and another to add an argument to the prepare
command to filter the image list based on which services are containerised
[2]. Once these land, all the calls to prepare in CI can be modified to
also specify these heat environment files, and this will reduce uploads to
only the images required.

[1] https://review.openstack.org/#/c/448328/
[2] https://review.openstack.org/#/c/494367/


[openstack-dev] [ffe][release][requirements][tripleo] FFE for python-paunch-1.5.0

2017-08-03 Thread Steve Baker
Adding a subject ^^

On Fri, Aug 4, 2017 at 11:21 AM, Emilien Macchi <emil...@redhat.com> wrote:

> On Thu, Aug 3, 2017 at 4:05 PM, Steve Baker <sba...@redhat.com> wrote:
> > I would like to request a Feature Freeze Exemption (FFE) for the upper
> > constraints of the python-paunch library/tool in Pike.
> >
> > https://review.openstack.org/#/c/490287/
> >
> > TripleO recently switched from using paunch as a library via heat-agents
> to
> > using the paunch CLI tool directly using ansible. This lead to a high
> impact
> > regression bug [1] where failed operations don't raise an error, fixed by
> > [2]. This leads to false positives in TripleO CI, and later deployment
> > failures which are more difficult to debug.
> >
> > Release 1.5.0 contains a fix for this. It also contains the new "paunch
> > debug" command[3] which makes developing and debugging containers in
> TripleO
> > much easier. This feature has a low risk of causing regressions in
> paunch's
> > core function and is also desirable for the Pike release.
>
> +1 from me.
>
> > [1] https://bugs.launchpad.net/paunch/+bug/1707997
> > [2] https://review.openstack.org/#/c/489722/
> > [3] https://review.openstack.org/#/c/476654/
> >
> > 
> >
>
>
>
> --
> Emilien Macchi
>
>


[openstack-dev] [ffe][release][requirements][tripleo]

2017-08-03 Thread Steve Baker
I would like to request a Feature Freeze Exemption (FFE) for the upper
constraints of the python-paunch library/tool in Pike.

https://review.openstack.org/#/c/490287/

TripleO recently switched from using paunch as a library via heat-agents to
using the paunch CLI tool directly using ansible. This led to a high-impact
regression bug [1] where failed operations don't raise an error,
fixed by [2]. This leads to false positives in TripleO CI, and later
deployment failures which are more difficult to debug.

Release 1.5.0 contains a fix for this. It also contains the new "paunch
debug" command[3] which makes developing and debugging containers in
TripleO much easier. This feature has a low risk of causing regressions in
paunch's core function and is also desirable for the Pike release.

[1] https://bugs.launchpad.net/paunch/+bug/1707997
[2] https://review.openstack.org/#/c/489722/
[3] https://review.openstack.org/#/c/476654/


Re: [openstack-dev] [tripleo] Proposing Bogdan Dobrelya core on TripleO / Containers

2017-07-25 Thread Steve Baker
On Sat, Jul 22, 2017 at 2:55 AM, Emilien Macchi  wrote:

> Hi,
>
> Bogdan (bogdando on IRC) has been very active in Containerization of
> TripleO and his quality of review has increased over time.
> I would like to give him core permissions on container work in TripleO.
> Any feedback is welcome as usual, we'll vote as a team.
>
> +1!


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread Steve Baker
On Wed, Jul 12, 2017 at 11:47 AM, James Slagle <james.sla...@gmail.com>
wrote:

> On Tue, Jul 11, 2017 at 6:53 PM, Steve Baker <sba...@redhat.com> wrote:
> >
> >
> > On Tue, Jul 11, 2017 at 6:51 AM, James Slagle <james.sla...@gmail.com>
> > wrote:
> >>
> >> On Mon, Jul 10, 2017 at 11:37 AM, Lars Kellogg-Stedman <l...@redhat.com
> >
> >> wrote:
> >> > On Fri, Jul 7, 2017 at 1:50 PM, James Slagle <james.sla...@gmail.com>
> >> > wrote:
> >> >>
> >> >> There are also some ideas forming around pulling the Ansible
> playbooks
> >> >>
> >> >> and vars out of Heat so that they can be rerun (or run initially)
> >> >> independently from the Heat SoftwareDeployment delivery mechanism:
> >> >
> >> >
> >> > I think the closer we can come to "the operator runs ansible-playbook
> to
> >> > configure the overcloud" the better, but not because I think Ansible
> is
> >> > inherently a great tool: rather, I think the many layers of
> indirection
> >> > in
> >> > our existing model make error reporting and diagnosis much more
> >> > complicated
> >> > that it needs to be.  Combined with Puppet's "fail as late as
> possible"
> >> > model, this means that (a) operators waste time waiting for a
> deployment
> >> > that is ultimately going to fail but hasn't yet, and (b) when it does
> >> > fail,
> >> > they need relatively intimate knowledge of our deployment tools to
> >> > backtrack
> >> > through logs and find the root cause of the failure.
> >> >
> >> > If we can offer a deployment mode that reduces the number of layers
> >> > between
> >> > the operator and the actions being performed on the hosts I think we
> >> > would
> >> > win on both fronts: faster failures and reporting errors as close as
> >> > possible to the actual problem will result in less frustration across
> >> > the
> >> > board.
> >> >
> >> > I do like Steve's suggestion of a split model where Heat is
> responsible
> >> > for
> >> > instantiating OpenStack resources while Ansible is used to perform
> host
> >> > configuration tasks.  Despite all the work done on Ansible's OpenStack
> >> > modules, they feel inflexible and frustrating to work with when
> compared
> >> > to
> >> > Heat's state-aware, dependency ordered deployments.  A solution that
> >> > allows
> >> > Heat to output configuration that can subsequently be consumed by
> >> > Ansible --
> >> > either running manually or perhaps via Mistral for
> >> > API-driven-deployments --
> >> > seems like an excellent goal.  Using Heat as a "front-end" to the
> >> > process
> >> > means that we get to keep the parameter validation and documentation
> >> > that is
> >> > missing in Ansible, while still following the Unix philosophy of
> giving
> >> > you
> >> > enough rope to hang yourself if you really want it.
> >>
> >> This is excellent input, thanks for providing it.
> >>
> >> I think it lends itself towards suggesting that we may like to persue
> >> (again) adding native Ironic resources to Heat. If those were written
> >> in a way that also addressed some of the feedback about TripleO and
> >> the baremetal deployment side, then we could continue to get the
> >> advantages from Heat that you mention.
> >>
> >> My personal opinion to date is that Ansible's os_ironic* modules are
> >> superior in some ways to the Heat->Nova->Ironic model. However, just a
> >> Heat->Ironic model may work in a way that has the advantages of both.
> >
> >
> > I too would dearly like to get nova out of the picture. Our placement
> needs
> > mean the scheduler is something we need to work around, and it discards
> > basically all context for the operator when ironic can't deploy for some
> > reason.
> >
> > Whether we use a mistral workflow[1], a heat resource, or ansible
> os_ironic,
> > there will still need to be some python logic to build the config drive
> ISO
> > that injects the ssh keys and os-collect-config bootstrap.
> >
> > Unfortunately ironic iPXE boot from iSCSI[2] doesn't support config-drive
> > (still?) so the only option to inject ssh keys is t

Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread Steve Baker
On Tue, Jul 11, 2017 at 6:51 AM, James Slagle 
wrote:

> On Mon, Jul 10, 2017 at 11:37 AM, Lars Kellogg-Stedman 
> wrote:
> > On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
> wrote:
> >>
> >> There are also some ideas forming around pulling the Ansible playbooks
> >>
> >> and vars out of Heat so that they can be rerun (or run initially)
> >> independently from the Heat SoftwareDeployment delivery mechanism:
> >
> >
> > I think the closer we can come to "the operator runs ansible-playbook to
> > configure the overcloud" the better, but not because I think Ansible is
> > inherently a great tool: rather, I think the many layers of indirection
> in
> > our existing model make error reporting and diagnosis much more
> complicated
> > that it needs to be.  Combined with Puppet's "fail as late as possible"
> > model, this means that (a) operators waste time waiting for a deployment
> > that is ultimately going to fail but hasn't yet, and (b) when it does
> fail,
> > they need relatively intimate knowledge of our deployment tools to
> backtrack
> > through logs and find the root cause of the failure.
> >
> > If we can offer a deployment mode that reduces the number of layers
> between
> > the operator and the actions being performed on the hosts I think we
> would
> > win on both fronts: faster failures and reporting errors as close as
> > possible to the actual problem will result in less frustration across the
> > board.
> >
> > I do like Steve's suggestion of a split model where Heat is responsible
> for
> > instantiating OpenStack resources while Ansible is used to perform host
> > configuration tasks.  Despite all the work done on Ansible's OpenStack
> > modules, they feel inflexible and frustrating to work with when compared
> to
> > Heat's state-aware, dependency ordered deployments.  A solution that
> allows
> > Heat to output configuration that can subsequently be consumed by
> Ansible --
> > either running manually or perhaps via Mistral for
> API-driven-deployments --
> > seems like an excellent goal.  Using Heat as a "front-end" to the process
> > means that we get to keep the parameter validation and documentation
> that is
> > missing in Ansible, while still following the Unix philosophy of giving
> you
> > enough rope to hang yourself if you really want it.
>
> This is excellent input, thanks for providing it.
>
> I think it lends itself towards suggesting that we may like to persue
> (again) adding native Ironic resources to Heat. If those were written
> in a way that also addressed some of the feedback about TripleO and
> the baremetal deployment side, then we could continue to get the
> advantages from Heat that you mention.
>
> My personal opinion to date is that Ansible's os_ironic* modules are
> superior in some ways to the Heat->Nova->Ironic model. However, just a
> Heat->Ironic model may work in a way that has the advantages of both.
>

I too would dearly like to get nova out of the picture. Our placement needs
mean the scheduler is something we need to work around, and it discards
basically all context for the operator when ironic can't deploy for some
reason.

Whether we use a mistral workflow[1], a heat resource, or ansible
os_ironic, there will still need to be some python logic to build the
config drive ISO that injects the ssh keys and os-collect-config bootstrap.

Unfortunately ironic iPXE boot from iSCSI[2] doesn't support config-drive
(still?) so the only option to inject ssh keys is the nova ec2-metadata
service (or equivalent). I suspect if we can't make every ironic deployment
method support config-drive then we're stuck with nova.

I don't have a strong preference for a heat resource vs mistral vs ansible
os_ironic, but given there is some python logic required anyway, I would
lean towards a heat resource. If the resource is general enough we could
propose it to heat upstream, otherwise we could carry it in tripleo-common.

Alternatively, we can implement a config-drive builder in tripleo-common
and invoke that from mistral or ansible.

[1] https://review.openstack.org/#/c/313048/1
[2] http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html#scenario-1-ipxe-boot-from-iscsi-volume
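For what it's worth, the config-drive build itself is not much more than
laying out the metadata files and generating an ISO with the config-2 volume
label that cloud-init looks for; a minimal sketch (file names are
placeholders):

  mkdir -p cd/openstack/latest
  cp meta_data.json user_data cd/openstack/latest/
  genisoimage -o configdrive.iso -V config-2 -r -J cd/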


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-11 Thread Steve Baker
On Tue, Jul 11, 2017 at 3:37 AM, Lars Kellogg-Stedman 
wrote:

> On Fri, Jul 7, 2017 at 1:50 PM, James Slagle 
> wrote:
>
>> There are also some ideas forming around pulling the Ansible playbooks
>>
> and vars out of Heat so that they can be rerun (or run initially)
>> independently from the Heat SoftwareDeployment delivery mechanism:
>>
>
> I think the closer we can come to "the operator runs ansible-playbook to
> configure the overcloud" the better, but not because I think Ansible is
> inherently a great tool: rather, I think the many layers of indirection in
> our existing model make error reporting and diagnosis much more complicated
> that it needs to be.  Combined with Puppet's "fail as late as possible"
> model, this means that (a) operators waste time waiting for a deployment
> that is ultimately going to fail but hasn't yet, and (b) when it does fail,
> they need relatively intimate knowledge of our deployment tools to
> backtrack through logs and find the root cause of the failure.
>
> If we can offer a deployment mode that reduces the number of layers
> between the operator and the actions being performed on the hosts I think
> we would win on both fronts: faster failures and reporting errors as close
> as possible to the actual problem will result in less frustration across
> the board.
>
> I do like Steve's suggestion of a split model where Heat is responsible
> for instantiating OpenStack resources while Ansible is used to perform host
> configuration tasks.  Despite all the work done on Ansible's OpenStack
> modules, they feel inflexible and frustrating to work with when compared to
> Heat's state-aware, dependency ordered deployments.  A solution that allows
> Heat to output configuration that can subsequently be consumed by Ansible
> -- either running manually or perhaps via Mistral for
> API-driven-deployments -- seems like an excellent goal.  Using Heat as a
> "front-end" to the process means that we get to keep the parameter
> validation and documentation that is missing in Ansible, while still
> following the Unix philosophy of giving you enough rope to hang yourself if
> you really want it.
>

I think this nicely sums up what we should be aiming for, but I'd like to
elaborate on "either running manually or perhaps via Mistral for
API-driven-deployments".

I think it's important that we allow full support for both mistral-driven
and manually running playbooks. If there was no option to run
ansible-playbook directly then operators would miss one of the main
benefits of using ansible in the first place (which is leveraging their
knowledge of inventory, playbooks and roles to deploy things).

I'm thinking specifically about upgrade scenarios where a step fails.
Currently the only option is a manual diagnosis of the problem, manual
modification of state, then re-running the entire stack update to see if it
can get past the failing step.

What would be nice is that when a heat->mistral->ansible upgrade step fails,
the operator is given an ansible-playbook command to run which skips directly
to the failing step. This would dramatically reduce the debug cycle and
also make it possible for the operator to automate any required fixes over
every host in a role. This would likely mean rendering out ansible config
files, playbooks (and roles?) to the operator's working directory. What
happens to these rendered files after deployment is an open question.
Delete them? Encourage the operator to track them in source control?
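Something along these lines is what I have in mind (the inventory, playbook
and task names are purely illustrative; --limit and --start-at-task are
standard ansible-playbook options):

  ansible-playbook -i inventory.yaml upgrade_steps_playbook.yaml \
    --limit Controller \
    --start-at-task "Upgrade step 3"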


Re: [openstack-dev] [TripleO] overcloud_containers.yaml: container versioning, tags and use cases ?

2017-05-29 Thread Steve Baker
On Sat, May 27, 2017 at 7:07 AM, David Moreau Simard  wrote:

> Hi,
>
> Today we discussed a challenge around image tags that mostly boils
> down to limitations in how overcloud_containers.yaml is constructed
> and used.
>
> TL;DR, we need a smart and easy way to work with the
> overcloud_containers.yaml file (especially tags).
>
> Let's highlight a few use cases that we need to work through:
>
> #1. Building containers
>   For building containers, all we really care is the name of the
> images we need to build.
>   Today, we install a trunk repository and then install
> tripleo-common-containers from that trunk repository.
>   We then mostly grep/sed/awk our way from overcloud_containers.yaml
> to a clean list of images to build and then build those.
>   Relatively okay with this but prone to things breaking -- a clean
> way to get just the list of images out of there would be nice.
>
>
The command "openstack overcloud container image build" also has some
string matching logic, but then invokes kolla-build directly.

Can I suggest that we add a --list-images option to this command so that it
just returns a list of images for other image building tools to consume?


> #2. Testing and promoting containers
>   This comes right after use case #1 where we build containers in the
> pipeline.
>   For those familiar with the CI pipeline to do promotions [1], this
> would look a bit like this [2].
>
>   In practice, this works basically the same way as we build, test and
> promote tripleo-quickstart images.
>   We pick a particular trunk repository hash and build containers for
> that hash. These are then pushed with both the tags ":latest" and
> ":<hash>".
>   We're then supposed to test those containers in the pipeline but to
> do that, we need to be pulling from :<hash>, not :latest...
> although they are in theory equivalent at that given time, this might
> not always be true.
>   So the testing job(s) need a way to easily customize/pull from that
> particular hash instead of the hardcoded latest we have right now.
>
>
I would like to see another "openstack overcloud container image ..."
command which is pointed at an image registry and a
canonical overcloud_containers.yaml file, then generates
another overcloud_containers.yaml (and heat environment file) which
contains the proper latest <hash> tags. This tool could work too for
stable version-style tags.

How about "openstack overcloud container image discover"?

This would be easier to implement if the canonical
overcloud_containers.yaml file was a template rather than a file with
hard-coded namespace and tags.


> #3. Upstream gate jobs
>   Ideally, gate jobs should never use ":latest". This is in trunk/dlrn
> terms the equivalent of "/current/" or "/consistent/".
>   They'd use something like ":latest-passed-ci" which would be the
> proper equivalent of "/current-passed-ci/" or "/current-tripleo/".
>
>
There is nothing special about the word latest. Can we give these images
the same tag as the name of the package repo they came from? So
:current-passed-ci and :current-tripleo?
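In other words, promotion could just be a retag of images that passed CI,
e.g. (the namespace and image name are illustrative):

  docker pull tripleoupstream/centos-binary-nova-api:latest
  docker tag tripleoupstream/centos-binary-nova-api:latest \
             tripleoupstream/centos-binary-nova-api:current-tripleo
  docker push tripleoupstream/centos-binary-nova-api:current-tripleo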


>   This brings an interesting challenge around how we currently add new
> images to overcloud_containers.yaml (example [3]).
>   It is expected that, when you add the image, the image is already
> present on the registry because otherwise the container jobs will fail
> since this new image cannot be pulled (example [4]).
>   My understanding currently is that humans may build and push images
> to the registry ahead of time so that this works.
>   We can keep a similar approach if that's what we really want with
> the new private registry, the job that builds container is made
> generic exactly to be able to build just a specific set of image(s) if
> we want.
>   Here's the catch, though: this new container image will have the
> ":latest" tag, it will not have ":latest-passed-ci" because it hasn't
> passed CI yet, it's being added just now.
>   So how do we address this ?
>
>
Here is an idea: the "discover" command mentioned above could filter images
based on their presence in the registry with the required tags, so the
resulting generated overcloud_containers.yaml would have fewer entries if
there is no image with the requested tag.


>   Note:
>   We've already discussed that some containers need to pick up the
> latest and the greatest from the "/current/" repository, either
> because they are "direct" tripleo packages or if "Depends-On" is used.
>   So far, the solution we seem to be going towards is to pick up the
> containers from ":latest-passed-ci" and then more or less add a 'yum
> update' layer to the images needing an update.
>   This is the option that is in the best interest of time, we'd
> otherwise be spending too much time building containers in jobs that
> are already taking way too long to run.
>

That is a shame, I have no suggestions to avoid this though.


> #4. Test days
>   When doing test days, we know to point testers to
> /current-passed-ci/ as well as tested 

Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Steve Baker
On Thu, May 18, 2017 at 12:38 PM, Fox, Kevin M  wrote:

> I've only used btrfs and devicemapper on el7. btrfs has worked well.
> devicemapper ate my data on multiple occasions. Is Red Hat supporting
> overlay in the el7 kernels now?
>

overlay2 is documented as a Technology Preview graph driver in the Atomic
Host 7.3.4 release notes:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/release_notes/




> _
> From: Dan Prince [dpri...@redhat.com]
> Sent: Wednesday, May 17, 2017 5:24 PM
> To: openstack-dev
> Subject: [openstack-dev] [TripleO][Kolla] default docker storage backend
> forTripleO
>
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
>
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
>
>  https://review.openstack.org/#/c/451916/
>
> For TripleO there are a couple of considerations:
>
>  - we intend to support in place upgrades from baremetal to containers
>
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
>
>  - we'd like to use a docker storage backend that is production
> ready.
>
>  - our target OS is latest Centos/RHEL 7
>
> As we approach pike 2 I'm keen to move towards a more production docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the combinations
> above?
>
> Looking around at what is recommended in other projects it seems to be
> a mix as well from devicemapper to btrfs.
>
> [1] https://docs.openshift.com/container-platform/3.3/install_config/install/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedHat.sh#n30
>
>
I'd love to be able to use overlay2. I've CCed Daniel Walsh with the hope
we can get a general overview of the maturity of overlay2 on rhel/centos.

I tried using overlay2 recently to create an undercloud and hit an issue
doing a "cp -a *" on deleted files. This was with kernel-3.10.0-514.16.1
and docker-1.12.6.

I want to get to the bottom of it so I'll reproduce and raise a bug as
appropriate.
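For anyone wanting to try the same thing, one way to switch a CentOS 7 host
to overlay2 looks roughly like this (a sketch only; docker-storage-setup is
another option, and overlay2 over xfs needs a filesystem created with
ftype=1):

  mkdir -p /etc/docker
  echo '{ "storage-driver": "overlay2" }' > /etc/docker/daemon.json
  systemctl restart docker
  docker info | grep 'Storage Driver'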


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-04 Thread Steve Baker
On Thu, May 4, 2017 at 3:56 PM, Matthew Treinish 
wrote:

> On Wed, May 03, 2017 at 11:51:13AM +, Andrea Frittoli wrote:
> > On Tue, May 2, 2017 at 5:33 PM Matthew Treinish 
> > wrote:
> >
> > > On Tue, May 02, 2017 at 09:49:14AM +0530, Rabi Mishra wrote:
> > > > On Fri, Apr 28, 2017 at 2:17 PM, Andrea Frittoli <
> > > andrea.fritt...@gmail.com>
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra 
> > > wrote:
> > > > >
> > > > >> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
> > > > >> andrea.fritt...@gmail.com> wrote:
> > > > >>
> > > > >>> Dear stackers,
> > > > >>>
> > > > >>> starting in the Liberty cycle Tempest has defined a set of
> projects
> > > > >>> which are in scope for direct
> > > > >>> testing in Tempest [0]. The current list includes keystone, nova,
> > > > >>> glance, swift, cinder and neutron.
> > > > >>> All other projects can use the same Tempest testing
> infrastructure
> > > (or
> > > > >>> parts of it) by taking advantage
> > > > >>> the Tempest plugin and stable interfaces.
> > > > >>>
> > > > >>> Tempest currently hosts a set of API tests as well as a service
> > > client
> > > > >>> for the Heat project.
> > > > >>> The Heat service client is used by the tests in Tempest, which
> run in
> > > > >>> Heat gate as part of the grenade
> > > > >>> job, as well as in the Tempest gate (check pipeline) as part of
> the
> > > > >>> layer4 job.
> > > > >>> According to code search [3] the Heat service client is also
> used by
> > > > >>> Murano and Daisycore.
> > > > >>>
> > > > >>
> > > > >> For the heat grenade job, I've proposed two patches.
> > > > >>
> > > > >> 1. To run heat tree gabbi api tests as part of grenade
> 'post-upgrade'
> > > > >> phase
> > > > >>
> > > > >> https://review.openstack.org/#/c/460542/
> > > > >>
> > > > >> 2. To remove tempest tests from the grenade job
> > > > >>
> > > > >> https://review.openstack.org/#/c/460810/
> > > > >>
> > > > >>
> > > > >>
> > > > >>> I proposed a patch to Tempest to start the deprecation counter
> for
> > > Heat
> > > > >>> / orchestration related
> > > > >>> configuration items in Tempest [4], and I would like to make sure
> > > that
> > > > >>> all tests and the service client
> > > > >>> either find a new home outside of Tempest, or are removed, by
> the end
> > > > >>> the Pike cycle at the latest.
> > > > >>>
> > > > >>> Heat has in-tree integration tests and Gabbi based API tests,
> but I
> > > > >>> don't know if those provide
> > > > >>> enough coverage to replace the tests on Tempest side.
> > > > >>>
> > > > >>>
> > > > >> Yes, the heat gabbi api tests do not yet have the same coverage
> as the
> > > > >> tempest tree api tests (lacks tests using nova, neutron and swift
> > > > >> resources),  but I think that should not stop us from *not*
> running
> > > the
> > > > >> tempest tests in the grenade job.
> > > > >>
> > > > >> I also don't know if the tempest tree heat tests are used by any
> other
> > > > >> upstream/downstream jobs. We could surely add more tests to bridge
> > > the gap.
> > > > >>
> > > > >> Also, It's possible to run the heat integration tests (we've
> enough
> > > > >> coverage there) with tempest plugin after doing some initial
> setup,
> > > as we
> > > > >> do in all our dsvm gate jobs.
> > > > >>
> > > > >> It would propose to move tests and client to a Tempest plugin
> owned /
> > > > >>> maintained by
> > > > >>> the Heat team, so that the Heat team can have full flexibility in
> > > > >>> consolidating their integration
> > > > >>> tests. For Murano and Daisycloud - and any other team that may
> want
> > > to
> > > > >>> use the Heat service
> > > > >>> client in their tests, even if the client is removed from
> Tempest, it
> > > > >>> would still be available via
> > > > >>> the Heat Tempest plugin. As long as the plugin implements the
> service
> > > > >>> client interface,
> > > > >>> the Heat service client will register automatically in the
> service
> > > > >>> client manager and be available
> > > > >>> for use as today.
> > > > >>>
> > > > >>>
> > > > >> if I understand correctly, you're proposing moving the existing
> > > tempest
> > > > >> tests and service clients to a separate repo managed by heat team.
> > > Though
> > > > >> that would be collective decision, I'm not sure that's something I
> > > would
> > > > >> like to do. To start with we may look at adding some of the
> missing
> > > pieces
> > > > >> in heat tree itself.
> > > > >>
> > > > >
> > > > > I'm proposing to move tests and the service client outside of
> tempest
> > > to a
> > > > > new home.
> > > > >
> > > > > I also suggested that the new home could be a dedicated repo, since
> that
> > > > > would allow you to maintain the
> > > > > current branchless nature of those tests. A more detailed
> discussion
> > > about
> > > > > the topic can be found
> > > > > in the corresponding proposed queens 

Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-01 Thread Steve Baker
On Tue, May 2, 2017 at 5:27 AM, MONTEIRO, FELIPE C  wrote:

> Murano currently uses the Tempest orchestration client for its scenario
> Tempest tests [0], which are not turned on by default in the Murano Tempest
> gate due to resource constraints.
>
> However, I'm hesitant to switch to Heat's testing client, because it is
> not a Tempest client, but rather the python-heatclient. I would like to
> know whether there are plans to change this to a Tempest-based client?
>

There are no plans to switch the heat integration/functional tests to using
the tempest based client. The heat tests will use heatclient for most
tests, and gabbi for testing the REST API.

Since you're testing Murano rather than the Heat API, I think converting
your tests to heatclient would be reasonable.
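
For reference, a minimal sketch of the kind of heatclient usage I mean (the
session/auth handling is assumed and would come from Murano's existing
credential setup, so treat this as illustrative only):

    # minimal sketch, not Murano's actual code; keystone_session is assumed
    from heatclient import client as heat_client

    heat = heat_client.Client('1', session=keystone_session)
    stack = heat.stacks.get('my-test-stack')
    for event in heat.events.list(stack.id):
        print(event.resource_name, event.resource_status)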


> [0] https://github.com/openstack/murano/blob/master/murano_
> tempest_tests/tests/scenario/application_catalog/base.py#L100
> [1] https://github.com/openstack/heat/blob/master/heat_
> integrationtests/common/clients.py#L120
>
> Felipe
>
> -Original Message-
> From: Ghanshyam Mann [mailto:ghanshyamm...@gmail.com]
> Sent: Sunday, April 30, 2017 1:53 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat
> support from Tempest
>
> On Fri, Apr 28, 2017 at 5:47 PM, Andrea Frittoli
>  wrote:
> >
> >
> > On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra 
> wrote:
> >>
> >> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli
> >>  wrote:
> >>>
> >>> Dear stackers,
> >>>
> >>> starting in the Liberty cycle Tempest has defined a set of projects
> which
> >>> are in scope for direct
> >>> testing in Tempest [0]. The current list includes keystone, nova,
> glance,
> >>> swift, cinder and neutron.
> >>> All other projects can use the same Tempest testing infrastructure (or
> >>> parts of it) by taking advantage of
> >>> the Tempest plugin and stable interfaces.
> >>>
> >>> Tempest currently hosts a set of API tests as well as a service client
> >>> for the Heat project.
> >>> The Heat service client is used by the tests in Tempest, which run in
> >>> Heat gate as part of the grenade
> >>> job, as well as in the Tempest gate (check pipeline) as part of the
> >>> layer4 job.
> >>> According to code search [3] the Heat service client is also used by
> >>> Murano and Daisycore.
> >>
> >>
> >> For the heat grenade job, I've proposed two patches.
> >>
> >> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade'
> >> phase
> >>
> >> https://review.openstack.org/#/c/460542/
> >>
> >> 2. To remove tempest tests from the grenade job
> >>
> >> https://review.openstack.org/#/c/460810/
> >>
> >>
> >>>
> >>> I proposed a patch to Tempest to start the deprecation counter for
> Heat /
> >>> orchestration related
> >>> configuration items in Tempest [4], and I would like to make sure that
> >>> all tests and the service client
> >>> either find a new home outside of Tempest, or are removed, by the end
> of the
> >>> Pike cycle at the latest.
> >>>
> >>> Heat has in-tree integration tests and Gabbi based API tests, but I
> don't
> >>> know if those provide
> >>> enough coverage to replace the tests on Tempest side.
> >>>
> >>
> >> Yes, the heat gabbi api tests do not yet have the same coverage as the
> >> tempest tree api tests (lacks tests using nova, neutron and swift
> >> resources),  but I think that should not stop us from *not* running the
> >> tempest tests in the grenade job.
> >>
> >> I also don't know if the tempest tree heat tests are used by any other
> >> upstream/downstream jobs. We could surely add more tests to bridge the
> gap.
> >>
> >> Also, it's possible to run the heat integration tests (we've enough
> >> coverage there) with tempest plugin after doing some initial setup, as
> we do
> >> in all our dsvm gate jobs.
> >>
> >>> I would propose to move tests and client to a Tempest plugin owned /
> >>> maintained by
> >>> the Heat team, so that the Heat team can have full flexibility in
> >>> consolidating their integration
> >>> tests. For Murano and Daisycloud - and any other team that may want to
> >>> use the Heat service
> >>> client in their tests, even if the client is removed from Tempest, it
> >>> would still be available via
> >>> the Heat Tempest plugin. As long as the plugin implements the service
> >>> client interface,
> >>> the Heat service client will register automatically in the service
> client
> >>> manager and be 

Re: [openstack-dev] [tripleo][deployment][helm][kolla][openstack-ansible] paunch - a library/tool for yaml driven docker configuration

2017-04-27 Thread Steve Baker
On Fri, Apr 28, 2017 at 11:11 AM, Paul Belanger <pabelan...@redhat.com>
wrote:

> On Wed, Apr 12, 2017 at 10:53:17PM -0400, Emilien Macchi wrote:
> > [adding more tags to make sure folks can see this thread]
> >
> > What we would like to hear from Deployment projects (Kolla, OSA, Helm,
> etc) is:
> >
> > 1) Is there any overlap with some ongoing efforts?
> > 2) Would you be interested by contributing to this effort?
> >
> Had a chance to look at this today. The main question I have is: why are we
> building something from scratch? I know today there is both
> ansible-container
> and docker-compose, even linuxkit now.  Is that not something you could
> build
> off or work upstream?
>
> I didn't see anything in the documentation explaining why that was.
>
>
It forklifts logic that TripleO already uses to manage containers. Long
term TripleO will likely want to move to a widely adopted tool for
container orchestration. Now that our logic is in its own library we can
safely evolve it with the aim of switching in the future with minimal
disruption.



> > Any feedback is welcome,
> > Thanks.
> >
> > On Wed, Apr 12, 2017 at 10:25 PM, Steve Baker <sba...@redhat.com> wrote:
> > > This is just a heads-up that in a week or so I intend to propose a new
> git
> > > repo to be hosted by OpenStack and adopted by the TripleO project.
> > >
> > > paunch [1] is a python library and CLI tool which forklifts the logic
> of the
> > > docker-cmd heat-agents hook[2] into its own project.
> > >
> > > The change to switch the docker-cmd hook to paunch[3] deletes a
> satisfying
> > > number of lines of code. Typically a hook is a thin wrapper over
> another
> > > configuration tool, and the docker-cmd hook was an unfortunate
> exception.
> > >
> > > The YAML format used by paunch is currently driven by the needs of
> TripleO
> > > and is derived from the docker compose v1 format. Over time I'd like to
> > > evolve the format to faithfully implement defacto standard formats,
> > > specifically to ease the transition for TripleO to orchestrate
> containers
> > > with kubernetes.
> > >
> > > At this point I wouldn't advocate for the CLI to be a generally used
> tool
> > > for single node container orchestration, but it will gain some commands
> > > aimed at making developing and deploying containerised TripleO easier.
> > >
> > > I'll wait for about a week to get feedback on this proposal, in the
> meantime
> > > I'll continue to develop and document the format within [1].
> > >
> > > cheers
> > >
> > > [1] https://github.com/steveb/paunch
> > > [2]
> > > https://github.com/openstack/heat-agents/tree/master/heat-
> config-docker-cmd
> > > [3] https://review.openstack.org/#/c/455845/
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> >
> >
> > --
> > Emilien Macchi
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Heat dependency visualisation (an old topic revived)

2017-04-23 Thread Steve Baker
Could you please try the "--format dot" formatter for the "openstack stack
resource list" command? It may meet your needs now, and I'm sure patches
would be welcome to the dot formatter to control what diagrams get
generated.
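
For example, something along the lines of "openstack stack resource list
--format dot -n 5 <stack name> | dot -Tsvg -o stack.svg" (piping through
graphviz; the exact options may differ between client versions) should give a
renderable graph of the whole nested stack once it has been deployed.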

On Sat, Apr 22, 2017 at 6:28 AM, Bogdan Dobrelya 
wrote:

> Hello.
>
> [tl;dr] It is hard for humans to read dependencies from the heat templates
> without access to a live Heat deployment of your stacks. So robots to
> the rescue!
>
> Original topic [0]. Also there is related blog post [1].
> The latter [2] expects your changes under test to be deployed live and
> queries Heat API, IIUC.
>
> The former [3] can be used "offline" and helps to visualize the
> dependency graph *very* fast, but didn't work for me as is (tried with
> the Ocata tripleo heat templates).
>
> I reworked it a little bit [4] to fit my needs with t-h-t, which is that I
> want to know which things are followed by which other things and so on,
> especially while those things are being changed all the time in these days
> of containers and unicorns :)
>
> I hope this reworked tool can also help other folks who want to
> know what the tripleo heat templates deployment graph looks like and how to
> move things across that graph w/o trouble and w/o asking too many
> questions of Steven Hardy (like I did initially) haha.
>
> Kudos Alexis and Lars for great tools!
>
> [0] https://goo.gl/ajUMSi
> [1] http://blog.oddbit.com/2014/09/02/visualizing-heat-stacks/
> [2] https://github.com/larsks/dotstack
> [3] https://github.com/lxsli/heat-viz
> [4] https://github.com/bogdando/heat-viz
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Steve Baker
> >> On 19/04/17 11:20 +0100, Paul Bourke wrote:
> >>>
> >>> I'm wondering if moving to using docker labels is a better way of
> solving
> >>> the various issues being raised here.
> >>>
> >>> We can maintain a tag for each of master/ocata/newton/etc, and on each
> >>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link
> to CI
> >>> of build/etc'. I believe this solves all points Kevin mentioned except
> >>> rollback, which afaik, OpenStack doesn't support anyway. It also solves
> >>> people's concerns with what is actually in the images, and is a
> standard
> >>> Docker mechanism.
> >>>
> >>> Also as Michal mentioned, if users are concerned about keeping images,
> >>> they can tag and stash them away themselves. It is overkill to maintain
> >>> hundreds of (imo meaningless) tags in a registry, the majority of which
> >>> people don't care about - they only want the latest of the branch
> they're
> >>> deploying.
> >>>
> >>> Every detail of a running Kolla system can be easily deduced by
> scanning
> >>> across nodes and printing the labels of running containers,
> functionality
> >>> which can be shipped by Kolla. There are also methods for fetching
> labels of
> >>> remote images[0][1] for users wishing to inspect what they are
> upgrading to.
> >>>
> >>> [0] https://github.com/projectatomic/skopeo
> >>> [1] https://github.com/docker/distribution/issues/1252
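
For what it's worth, a minimal sketch of that kind of label scan, assuming the
docker python SDK is available on each node (illustrative only, not existing
Kolla functionality):

    # print the labels of every running container on this host
    import docker

    client = docker.from_env()
    for container in client.containers.list():
        print(container.name, container.image.tags, container.labels)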
> >>
> >>
> >>
> >> You beat me to it, Paul.
> >>
> >> I think using labels to communicate the version of each openstack
> software
> >> installed in the image is the way to go here. We're looking into doing
> this
> >> ourselves as part of the RDO pipeline and it'd be awesome to have it
> being
> >> part
> >> of kolla-build itself. Steve Baker, I believe, was working on this.
> >>
> >> The more explicit we are about the contents of the image, the better.
> People
> >> want to know what's in there, rather than assuming based on the tag.
> >>
> >> Flavio
> >>
> >>
> >>> -Paul
> >>>
> >>> On 18/04/17 22:10, Michał Jastrzębski wrote:
> >>>>
> >>>> On 18 April 2017 at 13:54, Doug Hellmann <d...@doughellmann.com>
> wrote:
> >>>>>
> >>>>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30
> -0700:
> >>>>>>
> >>>>>> On 18 April 2017 at 12:41, Doug Hellmann <d...@doughellmann.com>
> wrote:
> >>>>>>>
> >>>>>>> Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> >>>>>>>>
> >>>>>>>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann
> >>>>>>>> <d...@doughellmann.com>
> >>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34
> >>>>>>>>> -0700:
> >>>>>>>>>>
> >>>>>>>>>> My dear Kollegues,
> >>>>>>>>>>
> >>>>>>>>>> Today we had discussion about how to properly name/tag images
> being
> >>>>>>>>>> pushed to dockerhub. That moved towards general discussion on
> >>>>>>>>>> revision
> >>>>>>>>>> mgmt.
> >>>>>>>>>>
> >>>>>>>>>> Problem we're trying to solve is this:
> >>>>>>>>>> If you build/push images today, your tag is 4.0
> >>>>>>>>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0
> >>>>>>>>>> until
> >>>>>>>>>> we tag new release.
> >>>>>>>>>>
> >>>>>>>>>> But image built today is not equal to image built tomorrow, so
> we
> >>>>>>>>>> would like something like 4.0.0-1, 4.0.0-2.
> >>>>>>>>>> While we can reasonably detect history of revisions in
> dockerhub,
> >>>>>>>>>> local env will be extremely hard to do.
> >>>>>>>>>>
> >>>>>

Re: [openstack-dev] [tripleo][release] release-model of tripleo-common

2017-04-18 Thread Steve Baker
On Wed, Apr 19, 2017 at 1:14 PM, Doug Hellmann 
wrote:

> Excerpts from Steve Baker's message of 2017-04-19 13:05:37 +1200:
> > Other than being consumed as a library, tripleo-common is the home for a
> > number of tripleo related files, image building templates, heat plugins,
> > mistral workbooks.
> >
> > I have a python-tripleoclient[1] change which is failing unit tests
> because
> > it depends on changes in tripleo-common which have landed in the current
> > cycle. Because tripleo-common is release-model cycle-trailing,
> > tripleo-common 7.0.0.0b1 exists but the unit test job pulls in the last
> > full release (6.0.0).
> >
> > I'd like to know the best way of dealing with this, options are:
> > a) make the python import optional, change the unit test to not require
> the
> > newer tripleo-common
> > b) allow the unit test job to pull in pre-release versions like 7.0.0.0b1
> > c) change tripleo-common release-model to cycle-with-intermediary and
> > immediately release a 7.0.0
> >
> > I think going with c) would mean doing a major release at the start of
> each
> > development cycle instead of at the end, then doing releases throughout
> the
> > cycle following our standard semver.
> >
> > [1] https://review.openstack.org/#/c/448300/
>
> As a library, tripleo-common should not use pre-release versioning like
> alphas and betas because of exactly the problem you've discovered: pip
> does not allow them to be installed by default, and so we don't put them
> in our constraint list.
>
> So, you can keep tripleo-common as cycle-trailing, but since it's a
> library use regular versions following semantic versioning rules to
> ensure the new releases go out and can be installed.
>
> You probably do want to start with a 7.0.0 release now, and from
> there on use SemVer to increment (rather than automatically releasing
> a new major version at the start of each cycle).
>
>
>
OK, thanks. We need to determine now whether to release 7.0.0.0b1 as 7.0.0,
or release current master:
http://git.openstack.org/cgit/openstack/tripleo-common/log/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][release] release-model of tripleo-common

2017-04-18 Thread Steve Baker
Other than being consumed as a library, tripleo-common is the home for a
number of tripleo related files, image building templates, heat plugins,
mistral workbooks.

I have a python-tripleoclient[1] change which is failing unit tests because
it depends on changes in tripleo-common which have landed in the current
cycle. Because tripleo-common is release-model cycle-trailing,
tripleo-common 7.0.0.0b1 exists but the unit test job pulls in the last
full release (6.0.0).

I'd like to know the best way of dealing with this, options are:
a) make the python import optional, change the unit test to not require the
newer tripleo-common
b) allow the unit test job to pull in pre-release versions like 7.0.0.0b1
c) change tripleo-common release-model to cycle-with-intermediary and
immediately release a 7.0.0

I think going with c) would mean doing a major release at the start of each
development cycle instead of at the end, then doing releases throughout the
cycle following our standard semver.

[1] https://review.openstack.org/#/c/448300/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-17 Thread Steve Baker
On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
wrote:

> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
> > My dear Kollegues,
> >
> > Today we had discussion about how to properly name/tag images being
> > pushed to dockerhub. That moved towards general discussion on revision
> > mgmt.
> >
> > Problem we're trying to solve is this:
> > If you build/push images today, your tag is 4.0
> > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
> > we tag new release.
> >
> > But image built today is not equal to image built tomorrow, so we
> > would like something like 4.0.0-1, 4.0.0-2.
> > While we can reasonably detect history of revisions in dockerhub,
> > local env will be extremely hard to do.
> >
> > I'd like to ask you for opinions on desired behavior and how we want
> > to deal with revision management in general.
> >
> > Cheers,
> > Michal
> >
>
> What's in the images, kolla? Other OpenStack components?


Yes, each image will typically contain all software required for one
OpenStack service, including dependencies from OpenStack projects or the
base OS. Installed via some combination of git, pip, rpm, deb.


> Where does the
> 4.0.0 come from?
>
>
It's the python version string from the kolla project itself, so ultimately
I think pbr. I'm suggesting that we switch to using the
version.release_string[1] which will tag with the longer version we use for
other dev packages.

[1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
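
To illustrate the difference, a small sketch (assuming an installed kolla
package with pbr metadata; the example values are made up):

    from pbr import version

    info = version.VersionInfo('kolla')
    print(info.version_string())   # short release version, e.g. 4.0.0
    print(info.release_string())   # longer dev-package style string, e.g. 4.0.1.dev42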
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] paunch - a library/tool for yaml driven docker configuration

2017-04-12 Thread Steve Baker
This is just a heads-up that in a week or so I intend to propose a new git
repo to be hosted by OpenStack and adopted by the TripleO project.

paunch [1] is a python library and CLI tool which forklifts the logic of
the docker-cmd heat-agents hook[2] into its own project.

The change to switch the docker-cmd hook to paunch[3] deletes a satisfying
number of lines of code. Typically a hook is a thin wrapper over another
configuration tool, and the docker-cmd hook was an unfortunate exception.

The YAML format used by paunch is currently driven by the needs of TripleO
and is derived from the docker compose v1 format. Over time I'd like to
evolve the format to faithfully implement defacto standard formats,
specifically to ease the transition for TripleO to orchestrate containers
with kubernetes.
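
To give a feel for it, here is a purely illustrative single-container entry in
that docker compose v1 derived style, written as a python dict for brevity (the
key names here are assumptions; the real format is documented in [1]):

    config = {
        'heat_api': {
            'image': 'tripleoupstream/centos-binary-heat-api:latest',
            'net': 'host',
            'privileged': False,
            'restart': 'always',
            'volumes': ['/var/lib/config-data/heat:/etc/heat:ro'],
            'environment': ['KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'],
        }
    }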

At this point I wouldn't advocate for the CLI to be a generally used tool
for single node container orchestration, but it will gain some commands
aimed at making developing and deploying containerised TripleO easier.

I'll wait for about a week to get feedback on this proposal, in the
meantime I'll continue to develop and document the format within [1].

cheers

[1] https://github.com/steveb/paunch
[2]
https://github.com/openstack/heat-agents/tree/master/heat-config-docker-cmd
[3] https://review.openstack.org/#/c/455845/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-12 Thread Steve Baker
On Thu, Apr 13, 2017 at 10:59 AM, Michał Jastrzębski 
wrote:

> My dear Kollegues,
>
> Today we had discussion about how to properly name/tag images being
> pushed to dockerhub. That moved towards general discussion on revision
> mgmt.
>
> Problem we're trying to solve is this:
> If you build/push images today, your tag is 4.0
> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
> we tag new release.
>
> But image built today is not equal to image built tomorrow, so we
> would like something like 4.0.0-1, 4.0.0-2.
> While we can reasonably detect history of revisions in dockerhub,
> local env will be extremely hard to do.
>
> I'd like to ask you for opinions on desired behavior and how we want
> to deal with revision management in general.
>
>
I already have a change which proposes tagging images with a pbr built
version [1]. I think if users want tags which are stable for the duration
of a major release they should switch to using the tag specified by
kolla-build.conf base_tag, which can be set to latest, ocata, pike, etc.
This would leave the version tag to at least track changes to the kolla
repo itself. Since the contents of upstream kolla images come from such
diverse sources, all I could suggest to ensure unique tags are created for
unique images is to append a datestamp to [1] (or have an extra datestamp
based tag). Bonus points for only publishing a new datestamp tag if the
contents of the image really changes.
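
Roughly what I have in mind for the datestamp variant, as a sketch (the
pbr-derived version string here is just an assumed example):

    import datetime

    pbr_version = '4.0.1.dev42'  # assumed: whatever tag [1] derives from pbr
    datestamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M')
    tag = '{}-{}'.format(pbr_version, datestamp)
    # e.g. 4.0.1.dev42-201704130215; ideally only published when the image
    # contents actually change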

In the RDO openstack-kolla package we now tag images with the
{Version}-{Release} of the openstack-kolla package which built it[2]. I
realise this doesn't solve the problem of the tag needing to change when
other image contents need to be updated, but I believe this can be solved
within the RDO image build pipeline by incrementing the {Release} whenever
a new image needs to be published.

[1] https://review.openstack.org/#/c/448380/
[2] https://review.rdoproject.org/r/#/c/5923/1/openstack-kolla.spec
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][heat] Nominate huangtianhua for heat-stable-maint

2017-04-03 Thread Steve Baker
+1 for both changes

On Fri, Mar 31, 2017 at 9:45 AM, Zane Bitter  wrote:

> We are feeling the pinch on stable-branch reviewers in Heat, so now that I
> understand the process a bit better, let's try this again.
>
> I'd like to nominate Huang Tianhua to join the heat-stable-maint team.
> Tianhua is a heat-core member and one of our most prolific stable branch
> reviewers:
>
> https://review.openstack.org/#/q/reviewer:huangtianhua+-owne
> r:huangtianhua+projects:openstack/heat+branch:%22%255Estable/.*%22
>
> IMHO her track record displays an understanding of the stable branch
> policies appropriate to a stable branch core. e.g.
>
> * https://review.openstack.org/#/c/434030/
> * https://review.openstack.org/#/c/371135/
> * https://review.openstack.org/#/c/244948/
>
> Also, I suggest we take this opportunity to remove Angus Salkeld, since he
> is no longer actively working on OpenStack (http://stackalytics.com/?rele
> ase=all_id=asalkeld)
>
> thanks,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-29 Thread Steve Baker
On Thu, Mar 30, 2017 at 9:39 AM, Emilien Macchi  wrote:

> On Mon, Mar 27, 2017 at 8:00 AM, Flavio Percoco  wrote:
> > On 23/03/17 16:24 +0100, Martin André wrote:
> >>
> >> On Wed, Mar 22, 2017 at 2:20 PM, Dan Prince  wrote:
> >>>
> >>> On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:
> 
>  On 22/03/17 13:32 +0100, Flavio Percoco wrote:
>  > On 21/03/17 23:15 -0400, Emilien Macchi wrote:
>  > > Hey,
>  > >
>  > > I've noticed that container jobs look pretty unstable lately; to
>  > > me,
>  > > it sounds like a timeout:
>  > > http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-
>  > > ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-
>  > > 22_00_08_55_358973
>  >
>  > There are different hypothesis on what is going on here. Some
>  > patches have
>  > landed to improve the write performance on containers by using
>  > hostpath mounts
>  > but we think the real slowness is coming from the images download.
>  >
>  > This said, this is still under investigation and the containers
>  > squad will
>  > report back as soon as there are new findings.
> 
>  Also, to be more precise, Martin André is looking into this. He also
>  fixed the
>  gate in the last 2 weeks.
> >>>
> >>>
> >>> I spoke w/ Martin on IRC. He seems to think this is the cause of some
> >>> of the failures:
> >>>
> >>> http://logs.openstack.org/32/446432/1/check-tripleo/gate-
> tripleo-ci-cen
> >>> tos-7-ovb-containers-oooq-nv/543bc80/logs/oooq/overcloud-controller-
> >>> 0/var/log/extra/docker/containers/heat_engine/log/heat/heat-
> >>> engine.log.txt.gz#_2017-03-21_20_26_29_697
> >>>
> >>>
> >>> Looks like Heat isn't able to create Nova instances in the overcloud
> >>> due to "Host 'overcloud-novacompute-0' is not mapped to any cell'. This
> >>> means our cells initialization code for containers may not be quite
> >>> right... or there is a race somewhere.
> >>
> >>
> >> Here are some findings. I've looked at time measures from CI for
> >> https://review.openstack.org/#/c/448533/ which provided the most
> >> recent results:
> >>
> >> * gate-tripleo-ci-centos-7-ovb-ha [1]
> >>undercloud install: 23
> >>overcloud deploy: 72
> >>total time: 125
> >> * gate-tripleo-ci-centos-7-ovb-nonha [2]
> >>undercloud install: 25
> >>overcloud deploy: 48
> >>total time: 122
> >> * gate-tripleo-ci-centos-7-ovb-updates [3]
> >>undercloud install: 24
> >>overcloud deploy: 57
> >>total time: 152
> >> * gate-tripleo-ci-centos-7-ovb-containers-oooq-nv [4]
> >>undercloud install: 28
> >>overcloud deploy: 48
> >>total time: 165 (timeout)
> >>
> >> Looking at the undercloud & overcloud install times, the most
> >> time-consuming tasks, the containers job isn't doing that bad compared to
> >> other OVB jobs. But looking closer I could see that:
> >> - the containers job pulls docker images from dockerhub, this process
> >> takes roughly 18 min.
> >
> >
> > I think we can optimize this a bit by having the script that populates
> the
> > local
> > registry in the overcloud job to run in parallel. The docker daemon can
> do
> > multiple pulls w/o problems.
> >
> >> - the overcloud validate task takes 10 min more than it should because
> >> of the bug Dan mentioned (a fix is in the queue at
> >> https://review.openstack.org/#/c/448575/)
> >
> >
> > +A
> >
> >> - the postci takes a long time with quickstart, 13 min (4 min alone
> >> spent on docker log collection) whereas it takes only 3 min when using
> >> tripleo.sh
> >
> >
> > mmh, does this have anything to do with ansible being in between? Or is
> that
> > time specifically for the part that gets the logs?
> >
> >>
> >> Adding all these numbers, we're at about 40 min of additional time for
> >> oooq containers job which is enough to cross the CI job limit.
> >>
> >> There is certainly a lot of room for optimization here and there and
> >> I'll explore how we can speed up the containers CI job over the next
> >
> >
> > Thanks a lot for the update. The time break down is fantastic,
> > Flavio
>
> TBH the problem is far from being solved:
>
> 1. Click on https://status-tripleoci.rhcloud.com/
> 2. Select gate-tripleo-ci-centos-7-ovb-containers-oooq-nv
>
> Container job has been failing more than 55% of the time.
>
> As a reference,
> gate-tripleo-ci-centos-7-ovb-nonha has 90% of success.
> gate-tripleo-ci-centos-7-ovb-ha has 64% of success.
>
> It clearly means the ovb-containers job was and is not ready to be run
> in the check pipeline, it's not reliable enough.
>
> The current queue time in TripleO OVB is 11 hours. This is not
> acceptable for TripleO developers and we need a short term solution,
> which is disabling this job from the check pipeline:
> https://review.openstack.org/#/c/451546/
>
>
Yes, given resource constraints I don't see an alternative in the short
term.


> 

Re: [openstack-dev] [tripleo] [ci]

2016-12-06 Thread Steve Baker
On Tue, Dec 6, 2016 at 9:34 PM, Ian Main  wrote:

> Wesley Hayutin wrote:
> > Greetings,
> >
> > I wanted to send a status update on the quickstart based containerized
> > compute ci.
> >
> > The work is here:
> > https://review.openstack.org/#/c/393348/
> >
> > I had two passes on the morning of Nov 30 in a row, then later that day
> the
> > deployment started to fail due to the compute node losing its networking
> and
> > becoming unpingable.   After poking around and talking to a few folks it's
> > likely that we're hitting at least one of two possible bugs [1-2]
> >
> > I am on pto next week but will periodically check in and can easily
> retest
> > if these resolve.
>
> I've been seeing this a lot too.  It's happening to both the controller and
> compute for me.  Probably because the controller is ALSO running the
> firstboot
> script in docker/ which is not what we want (or we need it to be smarter
> anyway).
>
> So far it appears that cloud-init is running our firstboot script but is
> not
> configuring networking.  If I run dhclient eth0 it comes up and has
> internet
> access etc.  Going to look into this more tomorrow.
>
>
>
This change might fix the issue https://review.openstack.org/#/c/407289/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-21 Thread Steve Baker
On Tue, Nov 22, 2016 at 9:36 AM, Zane Bitter  wrote:

> On 18/11/16 16:56, Clint Byrum wrote:
>
>> Excerpts from Zane Bitter's message of 2016-11-18 14:24:43 -0500:
>>
>>> So, say that I want to create my servers in Heat so that I can use Heat
>>> software deployments for orchestration. How would I go about e.g. making
>>> sure that the servers are always connected to the networks I expect on a
>>> variety of different clouds? Would Oaktree figure out the networks and
>>> pass them in to the stack when creating it? (I don't believe shade has
>>> Heat support yet, but we should definitely add it & I don't foresee any
>>> great obstacle.) Or would Heat have to add Oaktree resource types?
>>>
>>>
>> If you're wanting to use Heat, you are a) already cutting off a large
>> quantity of interoperable clouds because many do not have Heat,
>>
>
> (Roughly one third, according to the user survey.) We have a mechanism to
> resolve that though (defcore), and I'm confident that that will happen in
> due course. I'm less confident that we have any mechanism for resolving
> these other questions.
>
> Perhaps we could use defcore-required Tempest tests to drive alignment on
> some of those too. But we'd have to decide what we want to align on first.
>
> and b)
>> you already have provider templates to deal with the inconsistencies
>> across clouds.
>>
>
> Indeed, and environment files and conditionals as well.
>
> And Shade has had Heat support in some form or another for a long time:
>>
>> 9922cfbb(Monty Taylor   2015-12-03 18:12:32 -0500 32)import
>> heatclient.client
>>
>
> Oh, great! I knew it had been on the agenda for a while but I didn't know
> if it had actually happened or not, so I had a quick glance at
> http://docs.openstack.org/infra/shade/model.html and there was no mention.
>
>
Shade has decent heat support [1] which is behind the os_stack module
available in Ansible 2.2 [2]

My personal workflow involves a lot of ansible playbooks creating/updating
heat stacks then doing other things, and having the os_stack module has
made this much cleaner.

[1]
http://docs.openstack.org/infra/shade/usage.html#shade.OpenStackCloud.create_stack
etc
[2] https://docs.ansible.com/ansible/os_stack_module.html
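
For anyone who wants to try it from python directly rather than via the
ansible module, a minimal sketch (assuming a cloud named 'mycloud' in
clouds.yaml and a template file on disk):

    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')
    stack = cloud.create_stack(
        'my-stack',
        template_file='stack.yaml',
        wait=True,
    )
    print(stack['id'])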
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] New repository for software-config elements

2016-11-15 Thread Steve Baker
On Tue, Nov 15, 2016 at 10:26 PM, Thomas Herve  wrote:

> Hi all,
>
> Historically elements to create images using software config were
> developed in the heat-templates repository, which turned out to mean
> that this had to be packaged, etc. Today we were asked if tags could
> be added to help maintaining the packages. Before we do that, I wonder
> if we should extract the elements in a different repository. We
> already have tests which are only applicable to this specific subset
> of the repo, so it shouldn't be too hard.
>
> In summary: let's create a new repository
> heat-software-config-elements, and move everything from
> hot/software-config/elements/ in the heat-templates repository to it
> (and the associated tests).
>
> Thoughts?
>
>
Yes, these have definitely outgrown their current home.

RDO already generates the following sub-packages from heat-templates:
python-heat-agent
python-heat-agent-puppet
python-heat-agent-ansible
python-heat-agent-apply-config
python-heat-agent-hiera

Therefore can I suggest we call the new repository "heat-agents"?

I do wonder about the usefulness of the diskimage-builder elements-based
directory layout, since image builders can just install the package. But I
suppose having elements will be useful for heat-agents CI jobs, and a more
appropriate layout doesn't occur to me currently.

Also we should consider if we want to retain the git history of these files
in the new repo - I'm in favour if it's not too much effort and the
resulting history looks clean.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Support for an "undo" operation for migrating back resource_properties_data

2016-08-31 Thread Steve Baker

On 01/09/16 12:57, Crag Wolfe wrote:

I'm working on a migrate utility for
https://review.openstack.org/#/c/363415 . Quick summary: that means
moving resource.properties_data and event.properties_data to a new
table, resource_properties_data. Migrating to the new model is easy. The
questions come up with the inverse operation.

1) Would we even want to support undoing a migrate? I lean towards "no"
but if the answer is "yes," the next question comes up:
No, OpenStack hasn't supported data migration downgrades for a while 
now. Migration failures are ideally fixed by failing forward. As a last 
resort rollbacks can be performed by restoring the database from backup.

2)

(redacted)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] New bug tagging policy

2016-08-25 Thread Steve Baker

On 25/08/16 22:30, Julie Pichon wrote:

Hi folks,

The bug tagging proposal has merged, behold the new policy:

http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html

TL;DR The TripleO Launchpad tracker encompasses a lot of sub-projects,
let's use a consistent list of Launchpad tags where they make sense in
order to help understand which area(s) are affected. The tags get
autocompleted by Launchpad (or will be soon).


There is one remaining action to create the missing tags: I don't have
bug wrangling permissions on the TripleO project so, if someone with
the appropriate permissions could update the list [1] to match the
policy I would appreciate it. Should I be deemed trustworthy enough
I'm just as happy to do it myself and help out with the occasional
bout of triaging as well.

Thanks,

Julie

[1] https://bugs.launchpad.net/tripleo/+manage-official-tags

I'm not seeing any tag appropriate for the configuration agent projects 
os-collect-config, os-apply-config, os-refresh-config. Is it possible to 
add a tag like config-agent?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-24 Thread Steve Baker

On 25/08/16 06:11, James Slagle wrote:

The latest recurring problem that is failing a lot of the nonha ssl
jobs in tripleo-ci is:

https://bugs.launchpad.net/tripleo/+bug/1616144
tripleo-ci: nonha jobs failing with Unable to establish connection to
https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1b1671ced2ff/stacks/overcloud/f9f6f712-8e89-4ea9-a34b-6084dc74b5c1

This error happens while polling for events from the overcloud stack
by tripleoclient.

I can reproduce this error very easily locally by deploying with an
ssl undercloud with 6GB ram and 2 vcpus. If I don't enable swap,
something gets OOM killed. If I do enable swap, swap gets used (< 1GB)
and then I hit this error almost every time.

The stack keeps deploying but the client has died, so the job fails.
My investigation so far has only pointed out that it's the swap
allocation that is delaying things enough to cause the failure.

We do not see this error in the ha job even though it deploys more
nodes. As of now, my only suspect is that it's the overhead of the
initial SSL connections causing the error.

If I test with 6GB ram and 4 vcpus I can't reproduce the error,
although much more swap is used due to the increased number of default
workers for each API service.

However, I suggest we just raise the undercloud specs in our jobs to
8GB ram and 4 vcpus. These seem reasonable to me because those are the
default specs used by infra in all of their devstack single and
multinode jobs spawned on all their other cloud providers. Our own
multinode job for the undercloud/overcloud and undercloud only job are
running on instances of these sizes.

Yes, this is just sidestepping the problem by throwing more resources
at it. The reality is that we do not prioritize working on optimizing
for speed/performance/resources. We prioritize feature work that
indirectly (or maybe it's directly?) makes everything slower,
especially at this point in the development cycle.

We should therefore expect to have to continue to provide more and
more resources to our CI jobs until we prioritize optimizing them to
run with less.

Let me know if there is any disagreement on making these changes. If
there isn't, I'll apply them in the next day or so. If there are any
other ideas on how to address this particular bug for some immediate
short term relief, please let me know.

Heat now has efficient polling of nested events, but it doesn't look 
like tripleoclient is using that.


It's not clear if the current polling is contributing to the above issue 
but I'd definitely recommend switching over.


This is the recommended approach:
http://git.openstack.org/cgit/openstack/python-heatclient/tree/heatclient/osc/v1/stack.py#n180

This is what tripleoclient does currently:

http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/utils.py#n272

The get_events call is low-overhead, but the get_stack call isn't, and 
calling it in a loop won't be helping.


poll_for_events currently doesn't have an argument for specifying the 
nested_depth for what events to log. I can add that to heatclient unless 
you can live with only logging the events for the top level resources.
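
For reference, a minimal sketch of the recommended polling (the session setup
and the exact return value are assumptions here; the authoritative usage is
the osc stack code linked above):

    from heatclient import client as heat_client
    from heatclient.common import event_utils

    heat = heat_client.Client('1', session=keystone_session)  # assumed auth
    # polls events incrementally with a marker rather than re-fetching the
    # whole stack in a loop
    result = event_utils.poll_for_events(heat, 'overcloud', action='CREATE',
                                         poll_period=5)
    print(result)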



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Current status of observe reality feature implementation (Convergence Phase 2)

2016-08-17 Thread Steve Baker

On 17/08/16 01:58, Peter Razumovsky wrote:

Hi all!

I'd like to provide the current status of the observe reality implementation. 
I remember that the community wished to land as many observe reality patches 
as possible before N release time.


The following resource plugins have observe reality feature patches on review 
and should be reviewed *as soon as possible*:


_Nova resources:_
Nova::Server - https://review.openstack.org/#/c/244066/

_Cinder resources:_
Cinder::VolumeType - https://review.openstack.org/#/c/250451/
Cinder::Volume - https://review.openstack.org/261445
Cinder::EncryptedVolumeType - https://review.openstack.org/276730

_Aodh resources:_
Aodh::GnocchiAggregationByResourcesAlarm - 
https://review.openstack.org/#/c/314540/
Aodh::GnocchiAggregationByMetricsAlarm - 
https://review.openstack.org/314517

Aodh::GnocchiResourcesAlarm - https://review.openstack.org/#/c/314488/
Aodh::CombinationAlarm - https://review.openstack.org/#/c/313513/
Aodh::Alarm - https://review.openstack.org/#/c/313499/

I've added these to the priority reviews etherpad

https://etherpad.openstack.org/p/heat-newton-reviews

The following resource plugins have observe reality feature patches on review 
and should be reviewed *when it would be convenient*:


_Neutron resources:_
Neutron::Net - https://review.openstack.org/255287
Neutron::Subnet - https://review.openstack.org/255753
Neutron::Router - https://review.openstack.org/255776
Neutron::FloatingIP - https://review.openstack.org/256264
Neutron::Port - https://review.openstack.org/259074
vpnservice.py resources - https://review.openstack.org/266910
firewall.py resources - https://review.openstack.org/271992
Neutron::ProviderNet - https://review.openstack.org/273055

_Sahara resources:_
sahara/templates resources - https://review.openstack.org/274073
Sahara::ImageRegistry - https://review.openstack.org/274648
Sahara::DataSource - https://review.openstack.org/274654
Sahara::JobBinary - https://review.openstack.org/274658

_Manila resources:_
Manila::SecurityService - https://review.openstack.org/275344
Manila::ShareType - https://review.openstack.org/275363
Manila::ShareNetwork - https://review.openstack.org/275363
Manila::Share - https://review.openstack.org/276151

The following resource plugins will be available after testing and rebasing:

_Keystone resources: _

https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/get-reality-for-resources+message:%22Keystone::%22

---
Best regards,
Peter Razumovsky


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Steve Baker

On 05/08/16 21:48, Ricardo Rocha wrote:

Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number 
of requests should be higher but we had some internal issues. We have 
a submission for barcelona to provide a lot more details.


But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a 
burden, and local storage only should be enough?


2. We observe a significant delay (~10min, which is half the total 
time to deploy the cluster) on heat when it seems to be crunching the 
kube_minions nested stacks. Once it's done, it still adds new stacks 
gradually, so it doesn't look like it precomputed all the info in advance


Anyone tried to scale Heat to stacks this size? We end up with a stack 
with:

* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And already changed most of the timeout/retrial values for rpc to get 
this working.


This delay is already visible in clusters of 512 nodes, but 40% of the 
time in 1000 nodes seems like something we could improve. Any hints on 
Heat configuration optimizations for large stacks very welcome.



Yes, we recommend you set the following in /etc/heat/heat.conf:

[DEFAULT]
max_resources_per_stack = -1

Enforcing this limit for large stacks has a very high overhead; we make this 
change in the TripleO undercloud too.



Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol > wrote:


Thanks Ricardo! This is very exciting progress!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet: bto...@us.ibm.com 
Assistant: Kendra Witherspoon (919) 254-0680


From: Ton Ngo/Watson/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage
questions\)" >
Date: 06/17/2016 12:10 PM
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec,
100s of nodes






Thanks Ricardo for sharing the data, this is really encouraging!
Ton,


From: Ricardo Rocha >
To: "OpenStack Development Mailing List (not for usage questions)"
>
Date: 06/17/2016 08:16 AM
Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s
of nodes




Hi.

Just thought the Magnum team would be happy to hear :)

We had access to some hardware the last couple days, and tried some
tests with Magnum and Kubernetes - following an original blog post
from the kubernetes team.

Got a 200 node kubernetes bay (800 cores) reaching 2 million
requests / sec.

Check here for some details:

https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html



We'll try bigger in a couple weeks, also using the Rally work from
Winnie, Ton and Spyros to see where it breaks. Already identified a
couple issues, will add bugs or push patches for those. If you have
ideas or suggestions for the next tests let us know.

Magnum is looking pretty good!

Cheers,
Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [sahara][heat][infra] breakage of Sahara gate and images from openstack.org

2016-08-01 Thread Steve Baker

On 02/08/16 03:11, Luigi Toscano wrote:

On Monday, 1 August 2016 10:56:21 CEST Zane Bitter wrote:

On 29/07/16 13:12, Luigi Toscano wrote:

Hi all,
the Sahara jobs on the gate run the scenario tests (from sahara-tests)
using the fake plugin, so no real Hadoop/Spark/BigData operations are
performed, but other the other expected operations are executed on the
image. In order to do this we used for long time this image:
http://tarballs.openstack.org/heat-test-image/fedora-heat-test-image.qcow2

which was updated early on this Friday (July 29th) from Fedora 22 to
Fedora 24 breaking our jobs with some cryptic error, maybe something
related to the repositories:
http://logs.openstack.org/46/335946/12/check/gate-sahara-tests-dsvm-scenar
io-nova-heat/5eeff52/logs/screen-sahara-eng.txt.gz?level=WARNING

So AFAICT from the log:

"rpm -q xfsprogs" prints "package xfsprogs is not installed" which is
expected if xfsprogs is not installed.

"yum install -y xfsprogs" redirects to "/usr/bin/dnf install -y
xfsprogs" which is expected on F24.

dnf fails with "Error: Failed to synchronize cache for repo 'fedora'"
which means it couldn't download the Fedora repository data.

"sudo mount -o data=writeback,noatime,nodiratime /dev/vdb
/volumes/disk1" then fails, doubtlessly because xfsprogs in not installed.

The absence of "sudo" in the yum command (when it does appear in the
mount command) is suspicious, but unlikely to be the reason it can't
sync the cache.

This is why I mentioned the repositories, yes.


It's not obvious why this change of image would suddenly result in not
being able to install packages. It seems more likely that you've never
been able to install packages, but the previous image had xfsprogs
preinstalled and the new one doesn't. I don't know the specifics of how
that image is built, but certainly Fedora has been making an ongoing
effort to strip the cloud image back to basics.

But this is not a normal Fedora image. If I read project-config correctly,
this is generated by this job:

http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/
jobs/heat.yaml#n34

 From a brief chat on #heat on Friday it seems that the image is not gated or
checked or even used right now. Is it the case? The image is almost a simple
Fedora with few extra packages:
http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/test-image/build-heat-test-image.sh

We've stopped using this image recently because the download failure 
rate from tarballs.openstack.org was impacting heat's gate job 
reliability. We've switched to a vanilla fedora for now because none of 
our tests actually require a customized image. When we do have such 
tests we'll likely do boot-time install of packages from an AFS infra 
mirror.


We had no idea that Sahara was using this image in their gate, and it 
was certainly never intended for broader consumption.


Sahara would have a few options for an alternative:

- changing the test to work on a vanilla image

- do boot-time installation of the required packages

- work with infra on creating and hosting a custom image

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][tripleo] Tripleo holding on to old, bad data

2016-06-26 Thread Steve Baker
Assuming the stack is deleted and nova is showing no servers, you likely 
have ironic nodes which are not in a state which can be scheduled.


Do an ironic node-list; you want Power State: Off, Provisioning State: 
available, Maintenance: False.
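
If any nodes are stuck in maintenance or in an error/active provision state,
something like "ironic node-set-maintenance <uuid> false" followed by "ironic
node-set-provision-state <uuid> deleted" or "... provide" (depending on which
state they are stuck in) should get them back to available; the exact sequence
varies, so check what node-list shows first.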



On 25/06/16 09:27, Adam Young wrote:
A coworker and I have both had trouble recovering from failed 
overcloud deploys.  I've wiped out whatever data I can, but, even with 
nothing in the Heat Database, doing an


openstack overcloud deploy

seems to be looking for a specific Nova server by UUID:


heat resource-show 93afc25e-1ab2-4773-9949-6906e2f7c115 0

| resource_status_reason | ResourceInError: resources[0].resources.Controller:
Went to status ERROR due to "Message: No valid host was found. There are not
enough hosts available., Code: 500" |

| resource_type  | OS::TripleO::Controller


Inside the Nova log I see:


2016-06-24 21:05:06.973 15551 DEBUG nova.api.openstack.wsgi
[req-c8a5179c-2adf-45a6-b186-7d7b29cd8f39 bcdfefb36f3ca9a8f3cfa445ab40
ec662f250a85453cb40054f3aff49b58 - - -] Returning 404 to user: Instance
8f90c961-4609-4c9b-9d62-360a40f88eed could not be found. __call__
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1070


How can I get the undercloud back to a clean state?


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-30 Thread Steve Baker

On 28/05/16 06:03, James Slagle wrote:

I've been working on various patches to TripleO to make it possible
for the baremetal provisioning part of the workflow to be optional. In
such a scenario, TripleO wouldn't use Nova or Ironic to boot any
baremetal nodes. Instead it would rely on the nodes to be already
installed with an OS and powered on. We then use Heat to drive the
deployment of OpenStack on those nodes...that part of the process is
largely unchanged.

One of the things this would allow TripleO to do is make use of CI
jobs using nodes just from the regular cloud providers in nodepool
instead of having to use our own TripleO cloud
(tripleo-test-cloud-rh1) to run all our jobs.

I'm at a point where I can start working on patches to try and set
this up, but I wanted to provide this context so folks were aware of
the background.

We'd probably start with our simplest configuration of a job with at
least 3 nodes (undercloud/controller/compute), and using CentOS
images. It looks like right now all multinode jobs are 2 nodes only
and use Ubuntu. My hope is that I/we can make some progress in
different multinode configurations and collaborate on any setup
scripts or ansible playbooks in a generally useful way. I know there
was interest in different multinode setups from the various deployment
teams at the cross project session in Austin.

If there are any pitfalls or if there are any concerns about TripleO
going in this direction, I thought we could discuss those here. Thanks
for any feedback.

This raises the possibility of an alternative to OVB for 
trying/developing TripleO on a host cloud.


If a vm version of the overcloud-full image is also generated then the 
host cloud can boot these directly. The approach above can then be used 
to treat these nodes as pre-existing nodes to adopt.


I did this for a while configuring the undercloud nova to use the fake 
virt driver, but it sounds like the approach above doesn't interact with 
nova at all.


So I'm +1 on this approach for *some* development environments too. Can 
you provide a list of the changes?




Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Steve Baker

On 29/05/16 08:16, Hongbin Lu wrote:



-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com]
Sent: May-27-16 6:31 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
Gap analysis: Heat as a k8s orchestrator

I spent a bit of time exploring the idea of using Heat as an external
orchestration layer on top of Kubernetes - specifically in the case of
TripleO controller nodes but I think it could be more generally useful
too - but eventually came to the conclusion it doesn't work yet, and
probably won't for a while. Nevertheless, I think it's helpful to
document a bit to help other people avoid going down the same path, and
also to help us focus on working toward the point where it _is_
possible, since I think there are other contexts where it would be
useful too.

We tend to refer to Kubernetes as a "Container Orchestration Engine"
but it does not actually do any orchestration, unless you count just
starting everything at roughly the same time as 'orchestration'. Which
I wouldn't. You generally handle any orchestration requirements between
services within the containers themselves, possibly using external
services like etcd to co-ordinate. (The Kubernetes project refer to
this as "choreography", and explicitly disclaim any attempt at
orchestration.)

What Kubernetes *does* do is more like an actively-managed version of
Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief recap:
SoftwareDeploymentGroup is a type of ResourceGroup; you give it a map
of resource names to server UUIDs and it creates a SoftwareDeployment
for each server. You have to generate the list of servers somehow to
give it (the easiest way is to obtain it from the output of another
ResourceGroup containing the servers). If e.g. a server goes down you
have to detect that externally, and trigger a Heat update that removes
it from the templates, redeploys a replacement server, and regenerates
the server list before a replacement SoftwareDeployment is created. In
constrast, Kubernetes is running on a cluster of servers, can use rules
to determine where to run containers, and can very quickly redeploy
without external intervention in response to a server or container
falling over. (It also does rolling updates, which Heat can also do
albeit in a somewhat hacky way when it comes to SoftwareDeployments -
which we're planning to fix.)
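
A minimal sketch of what that looks like in a template (the resource names and 
server UUIDs here are purely illustrative; normally the servers map comes from 
the attributes of a ResourceGroup of servers):

  resources:
    config:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/sh
          echo "configuring this server"
    deployments:
      type: OS::Heat::SoftwareDeploymentGroup
      properties:
        config: {get_resource: config}
        # map of arbitrary names to server UUIDs
        servers:
          ctrl0: 11111111-1111-1111-1111-111111111111
          ctrl1: 22222222-2222-2222-2222-222222222222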

So this seems like an opportunity: if the dependencies between services
could be encoded in Heat templates rather than baked into the
containers then we could use Heat as the orchestration layer following
the dependency-based style I outlined in [1]. (TripleO is already
moving in this direction with the way that composable-roles uses
SoftwareDeploymentGroups.) One caveat is that fully using this style
likely rules out for all practical purposes the current Pacemaker-based
HA solution. We'd need to move to a lighter-weight HA solution, but I
know that TripleO is considering that anyway.

What's more though, assuming this could be made to work for a
Kubernetes cluster, a couple of remappings in the Heat environment file
should get you an otherwise-equivalent single-node non-HA deployment
basically for free. That's particularly exciting to me because there
are definitely deployments of TripleO that need HA clustering and
deployments that don't and which wouldn't want to pay the complexity
cost of running Kubernetes when they don't make any real use of it.

So you'd have a Heat resource type for the controller cluster that maps
to either an OS::Nova::Server or (the equivalent of) an OS::Magnum::Bay,
and a bunch of software deployments that map to either a
OS::Heat::SoftwareDeployment that calls (I assume) docker-compose
directly or a Kubernetes Pod resource to be named later.

The first obstacle is that we'd need that Kubernetes Pod resource in
Heat. Currently there is no such resource type, and the OpenStack API
that would be expected to provide that API (Magnum's /container
endpoint) is being deprecated, so that's not a long-term solution.[2]
Some folks from the Magnum community may or may not be working on a
separate project (which may or may not be called Higgins) to do that.
It'd be some time away though.

An alternative, though not a good one, would be to create a Kubernetes
resource type in Heat that has the credentials passed in somehow. I'm
very against that though. Heat is just not good at handling credentials
other than Keystone ones. We haven't ever created a resource type like
this before, except for the Docker one in /contrib that serves as a
prime example of what *not* to do. And if it doesn't make sense to wrap
an OpenStack API around this then IMO it isn't going to make any more
sense to wrap a Heat resource around it.

There are ways to alleviate the credential handling issue. First, Kubernetes 
supports Keystone authentication [1]. Magnum has a BP [2] to turn on this 
feature. In addition, there is a Kubernetes 

Re: [openstack-dev] [heat]informal meetup during summit

2016-04-25 Thread Steve Baker
We are now at Terry Black's BBQ for anyone who wants to join us. Thanks for 
organising Rico! See you when you get there.

- ZB


On 22/04/16 20:01, Rico Lin wrote:
> Let's settle down with:
>
> A meet up on Monday night 7:00pm
> At continentalclub
> 
> address : 1315 S Congress Ave
> Austin, TX 78704 http://continentalclub.com
> 
> And
> Friday morning 10:00 venue:TBD
>
> Is the time and venue fine with everyone?
>
> Everyone is welcome :)
> Feel free to let me know if you're coming, just for easy pre-booking
> purpose:)
>
> On Apr 22, 2016 12:13 AM, "Zane Bitter" wrote:
>
> On 20/04/16 13:00, Rico Lin wrote:
>
> Hi team
> Let plan for more informal meetup(relax) time! Let all heaters
> and any
> other projects can have fun and chance for technical discussions
> together.
>
> After discuss in meeting, we will have a pre-meetup-meetup on Friday
> morning to have a cup of cafe or some food. Would like to ask if
> anyone
> knows any nice place for this meetup?:)
>
>
> According to
> https://www.openstack.org/summit/austin-2016/guide-to-austin/ if we
> line up at Franklin's at 7am then we can be eating barbeque by 11
> and still make it back in time for the afternoon meetup :))
>
> Also open for other chance for all can go out for a nice dinner and
> beer. Right now seems maybe Monday or Friday night could be the best
> candidate for this wonderful task, what all think about this? :)
>
>
> +1. I'll be around on Friday, but I imagine a few people will be
> leaving so Monday is probably better.
>
> cheers,
> Zane.
>




Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread Steve Baker

On 20/04/16 06:17, Monty Taylor wrote:

On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:

On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:

On Tue, Apr 19, 2016 at 9:06 AM, Adam Young  wrote:

I wonder how much of that is Token caching.  In a typical CLI use pattern,
a new token is created each time a client is called, with no passing of a
token between services.  Using a session can greatly decrease the number of
round trips to Keystone.



Not as much as you think (or hope?).  Persistent token caching to disk will
help some, at other expenses though.  Using --timing on OSC will show how
much time the Identity auth round trip cost.

I don't have current numbers, the last time I instrumented OSC there were
significant load times for some modules, so we went a good distance to
lazy-load as much as possible.

What Dan sees WRT a persistent client process, though, is a combination of
those two things: saving the Python loading and the Keystone round trips.


The 1.5sec overhead I eliminated doesn't actually have anything to do
with network round trips at all. Even if you turn off all network
services and just run 'openstack ' and let it fail due
to inability to connect it'll still have that 1.5 sec overhead. It
is all related to python runtime loading and work done during module
importing.

eg run 'unstack.sh' and then compare the main openstack client:

$ time /usr/bin/openstack server list
Discovering versions from the identity service failed when creating 
the password plugin. Attempting to determine version from URL.
Unable to establish connection to 
http://192.168.122.156:5000/v2.0/tokens


real    0m1.555s
user    0m1.407s
sys     0m0.147s

Against my client-as-a-service version:

$ time $HOME/bin/openstack server list
[Errno 111] Connection refused

real    0m0.045s
user    0m0.029s
sys     0m0.016s


I'm sure there is scope for also optimizing network traffic / round
trips, but I didn't investigate that at all.


I have (had!) a version of DevStack that put OSC into a subprocess and
called it via pipes to do essentially what Dan suggests.  It saves some
time, at the expense of complexity that may or may not be worth the 
effort.


devstack doesn't actually really need any significant changes beyond
making sure $PATH pointed to the replacement client programs and that
the server was running - the latter could be automated as a launch on
demand thing which would limit devstack changes.

It actually doesn't technically need any devstack change - these
replacement clients could simply be put in some 3rd party git repo
and let developers who want the speed benefit simply put them in
their $PATH before running devstack.

One thing missing is any sort of transactional control in the I/O with the
subprocess, ie, an EOT marker.  I planned to add a -0 option (think xargs)
to handle that but it's still down a few slots on my priority list.  Error
handling is another problem, and at this point (for DevStack purposes
anyway) I stopped the investigation, concluding that reliability trumped a
few seconds saved here.


For I/O I simply replaced stdout + stderr with a new StringIO handle to
capture the data when running each command, and for error handling I
ensured the exit status was fed back & likewise stderr printed.

It is more than just a few seconds saved - almost 4 minutes, or
nearly 20% of entire time to run stack.sh on my machine


Ultimately, this is one of the two giant nails in the coffin of continuing
to pursue CLIs in Python.  The other is co-installability. (See that
current thread on the ML for pain points).  Both are easily solved with
native-code-generating languages.  Go and Rust are at the top of my
personal list here...


Using entrypoints and plugins in python is slow, so loading them is 
slow, as is loading all of the dependent libraries. Those were choices 
made for good reason back in the day, but I'm not convinced either are 
great anymore.


A pluginless CLI that simply used REST calls rather than the
python-clientlibs should be able to launch and get to the business of
doing work in 0.2 seconds - counting time to load and parse 
clouds.yaml. That time could be reduced - the time spent in occ 
parsing vendor json files is not strictly necessary and certainly 
could go faster. It's not as fast as 0.004 seconds, but with very 
little effort it's 6x faster.


Rather than ditching python for something like go, I'd rather put 
together a CLI with no plugins and that only depended on keystoneauth 
and os-client-config as libraries. No?
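
As a rough sketch of that idea (assuming os-client-config's get_one_cloud() and
get_session() helpers; nothing here is an existing tool, it just shows the
shape of a pluginless client):

  # pluginless_server_list.py - list servers with no entry-point scanning
  # and no per-service client libraries, just os-client-config + the
  # keystoneauth session it returns.
  import os_client_config

  def main():
      # reads clouds.yaml / OS_* environment variables
      cloud = os_client_config.OpenStackConfig().get_one_cloud()
      sess = cloud.get_session()
      # plain REST call against the compute endpoint from the catalog
      resp = sess.get('/servers',
                      endpoint_filter={'service_type': 'compute'})
      for server in resp.json()['servers']:
          print('%s %s' % (server['id'], server['name']))

  if __name__ == '__main__':
      main()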




There is a middle ground between discovering plugins on every run and 
not allowing plugins at all. There should be significant performance 
gains by having a plugin registry which caches the result of discovery. 
The cost is the complexity of managing registry rebuilds when plugins 
change. This should be mitigated for most users by having registry 
rebuild hooks triggered by package/pip installs. The needs of 

Re: [openstack-dev] [tripleo][heat] Summit session clashes

2016-04-19 Thread Steve Baker

On 20/04/16 13:26, Zane Bitter wrote:

On 19/04/16 18:04, Steve Baker wrote:

On 19/04/16 20:29, Steven Hardy wrote:

On Tue, Apr 19, 2016 at 04:24:46PM +1200, Steve Baker wrote:
All of the TripleO design summit sessions are on Thursday afternoon in
slots which clash with Heat sessions. Heat is a core component of TripleO
and as a contributor to both projects I was rather hoping to attend as
many of both sessions as possible - I don't think I'm alone in this
desire.

Is it possible that some horse trading could take place to reduce the
clashes? Maybe TripleO sessions could move to Wednesday morning?

Yes I agree this is unfortunate.  I already queried the clashes wrt the
contributor meetups, and was told we can only adjust if we can find another
project willing to switch - I'm open to negotiation if any other PTLs wish
to change sessions at this late stage.

I see the current Heat schedule has SoftwareDeployment improvements [1] and
Issues with very large stacks [2] at non-conflicting times, which is good
as these are probably amongst the top priorities for TripleO (other than
performance improvements, which relates to very-large-stacks).

One observation I would make is that Heat does have a pretty large number
of sessions (12 in total plus meetup), this is always going to present
challenges from a scheduling point of view - perhaps we can ask for a
volunteer or two (other than myself) from the Heat community who is willing
to cover at least the Upgrades fishbowl[3] and Composable Services
workroom[4] sessions if we can't resolve the conflicts.

Thanks,

Steve

[1]https://www.openstack.org/summit/austin-2016/summit-schedule/events/9115 

[2]https://www.openstack.org/summit/austin-2016/summit-schedule/events/9117 

[3]https://www.openstack.org/summit/austin-2016/summit-schedule/events/9118 

[4]https://www.openstack.org/summit/austin-2016/summit-schedule/events/9292 



I think we could reduce the topic overlap just by shuffling the heat
sessions.

Thomas, what do you think of the following?
Swap work sessions "Release model and versioning" and "Validation
improvements"
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9240
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9247
This would let me attend the tripleo CI work session, and the release
model session which I proposed ;)


I'm driving the validation improvements session, but I don't care 
about the conflicting TripleO session "Reducing the CI pain" so that's 
OK.



Swap work sessions "Performance improvements" and "hot-parser"
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9236
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9248


I think this is a no-go because it would put the hot-parser session on 
at the same time as the Tacker/heat-translator joint session that will 
necessarily have a bunch of the same people in it.



This will make the performance work session before the performance
fishbowl, but I'm sure we could make that work. It's not like we need the
fishbowl to solicit areas of improvement.


Umm, I thought that was the idea.

I need to be at both of those Heat ones anyway, so this doesn't really 
help me. I'd rather have the DLM session in this slot instead. (The 
only sessions I can really skip are the Release Model, Functional 
Tests and DLM.) That would give us:


           Heat                      TripleO

 Wed 3:30  Release Model
 Wed 4:30  HOT Parser
 Wed 5:20  Functional Tests

 Thu 1:30  DLM                       Upgrades
 Thu 2:20  Convergence switchover    Containers
 Thu 3:10  Convergence cleanup       Composable Roles
 Thu 4:10  Performance               API
 Thu 5:00  Validation                CI

I think that way Steve and I could probably both cover upgrades, and 
he could cover the rest.


I'd like to get to the composable roles and containers sessions too, 
but we'd have to rejig basically every Heat session and I think it's 
too late to be doing that.



+1, this proposal works for me




Re: [openstack-dev] [tripleo][heat][all] Summit session clashes

2016-04-19 Thread Steve Baker

On 19/04/16 20:29, Steven Hardy wrote:

On Tue, Apr 19, 2016 at 04:24:46PM +1200, Steve Baker wrote:

All of the TripleO design summit sessions are on Thursday afternoon in
slots which clash with Heat sessions. Heat is a core component of TripleO
and as a contributor to both projects I was rather hoping to attend as
many of both sessions as possible - I don't think I'm alone in this
desire.

Is it possible that some horse trading could take place to reduce the
clashes? Maybe TripleO sessions could move to Wednesday morning?

Yes I agree this is unfortunate.  I already queried the clashes wrt the
contributor meetups, and was told we can only adjust if we can find another
project willing to switch - I'm open to negotiation if any other PTLs wish
to change sessions at this late stage.

I see the current Heat schedule has SoftwareDeployment improvements [1] and
Issues with very large stacks [2] at non-conflicting times, which is good
as these are probably amongst the top priorities for TripleO (other than
performance improvements, which relates to very-large-stacks).

One observation I would make is that Heat does have a pretty large number
of sessions (12 in total plus meetup), this is always going to present
challenges from a scheduling point of view - perhaps we can ask for a
volunteer or two (other than myself) from the Heat community who is willing
to cover at least the Upgrades fishbowl[3] and Composable Services
workroom[4] sessions if we can't resolve the conflicts.

Thanks,

Steve

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9115
[2] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9117
[3] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9118
[4] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9292


I think we could reduce the topic overlap just by shuffling the heat 
sessions.


Thomas, what do you think of the following?
Swap work sessions "Release model and versioning" and "Validation 
improvements"

https://www.openstack.org/summit/austin-2016/summit-schedule/events/9240
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9247
This would let me attend the tripleo CI work session, and the release 
model session which I proposed ;)


Swap work sessions "Performance improvements" and "hot-parser"
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9236
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9248
This will make the performance work session before the performance 
fishbowl, but I'm sure we could make that work. It's not like we need the
fishbowl to solicit areas of improvement.


Sorry for the hassle.



[openstack-dev] [tripleo][heat][all] Summit session clashes

2016-04-18 Thread Steve Baker
All of the TripleO design summit sessions are on Thursday afternoon in 
slots which clash with Heat sessions. Heat is a core component of 
TripleO and as a contributor to both projects I was rather hoping to 
attend as many of both sessions as possible - I don't think I'm alone in 
this desire.


Is it possible that some horse trading could take place to reduce the 
clashes? Maybe TripleO sessions could move to Wednesday morning?


cheers


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-12 Thread Steve Baker

On 13/04/16 11:07, Joshua Harlow wrote:

Fox, Kevin M wrote:

I think my head just exploded. :)

That idea's similar to neutron sfc stuff, where you just say what 
needs to connect to what, and it figures out the plumbing.


Ideally, it would map somehow to heat & docker COE & neutron sfc to 
produce a final set of deployment scripts and then just runs it 
through the meat grinder. :)


It would be awesome to use. It may be very difficult to implement.


+1 it will not be easy.

Although the complexity of it is probably no different than what a SQL 
engine has to do to parse a SQL statement into an actionable plan, just 
in this case it's a application deployment 'statement' and the 
realization of that plan are of course where the 'meat' of the program 
is.


It would be nice to connect what neutron SFC stuff is being worked 
on/exists and have a single project for this kind of stuff, but maybe 
I am dreaming to much right now :-P




This sounds a lot like what the TOSCA spec[1] is aiming to achieve. I 
could imagine heat-translator[2] gaining the ability to translate TOSCA 
templates to either nova or COE specific heat templates which are then 
created as stacks.


[1] 
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html

[2] https://github.com/openstack/heat-translator
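
For example, something along these lines should work (a sketch only; the file
names are illustrative):

  # translate a TOSCA template into a HOT template
  heat-translator --template-file tosca_app.yaml --template-type tosca \
      > tosca_app_hot.yaml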



If you ignore the non container use case, I think it might be fairly 
easily mappable to all three COE's though.


Thanks,
Kevin


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Tuesday, April 12, 2016 2:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] 
One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)


Fox, Kevin M wrote:
I think part of the problem is containers are mostly orthogonal to 
vms/bare metal. Containers are a package for a single service. 
Multiple can run on a single vm/bare metal host. Orchestration like 
Kubernetes comes in to turn a pool of vm's/bare metal into a system 
that can easily run multiple containers.




Is the orthogonal part a problem because we have made it so or is it
just how it really is?

Brainstorming starts here:

Imagine a descriptor language like (which I stole from
https://review.openstack.org/#/c/210549 and modified):

  ---
  components:
  - label: frontend
    count: 5
    image: ubuntu_vanilla
    requirements: high memory, low disk
    stateless: true
  - label: database
    count: 3
    image: ubuntu_vanilla
    requirements: high memory, high disk
    stateless: false
  - label: memcache
    count: 3
    image: debian-squeeze
    requirements: high memory, no disk
    stateless: true
  - label: zookeeper
    count: 3
    image: debian-squeeze
    requirements: high memory, medium disk
    stateless: false
    backend: VM
  networks:
  - label: frontend_net
    flavor: "public network"
    associated_with:
    - frontend
  - label: database_net
    flavor: high bandwidth
    associated_with:
    - database
  - label: backend_net
    flavor: high bandwidth and low latency
    associated_with:
    - zookeeper
    - memcache
  constraints:
  - ref: container_only
    params:
    - frontend
  - ref: no_colocated
    params:
    - database
    - frontend
  - ref: spread
    params:
    - database
  - ref: no_colocated
    params:
    - database
    - frontend
  - ref: spread
    params:
    - memcache
  - ref: spread
    params:
    - zookeeper
  - ref: isolated_network
    params:
    - frontend_net
    - database_net
    - backend_net
  ...


Now nothing in the above is about container, or baremetal or vms,
(although an 'advanced' constraint can be that a component must be on a
container, and it must say be deployed via docker image XYZ...); instead
it's just about the constraints that a user has on their deployment and
the components associated with it. It can be left up to some consuming
project of that format to decide how to turn that desired description
into an actual description (aka a full expanding of that format into an
actual deployment plan), possibly say by optimizing for density (packing
as many things container) or optimizing for security (by using VMs) or
optimizing for performance (by using bare-metal).

So, rather than concern itself with supporting launching through a 
COE and through Nova, which are two totally different code paths, 
OpenStack advanced services like Trove could just use a Magnum COE 
and have a UI 

Re: [openstack-dev] heat resource-signal

2016-04-12 Thread Steve Baker

On 12/04/16 18:16, Monika Parkar wrote:

Hi,

I am new to the openstack.
I was going through the heat usecase "heat resource-signal", I would 
like to know what kind of signal we can send and how.

I have executed the below command:
# heat resource-signal stack-name resource-name
But I am unable to understand the internal workflow.

Can anybody help me out to understand the workflow of this usecase.

Thanks & Regards,
Monika


This is used to move signal/waitcondition/deployment resources out of 
the IN_PROGRESS state, along with some json data for that resource to 
consume.


The format of the data depends on what resource type you're actually 
using. Can you elaborate on what you're trying to do?
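
For example, if the resource is a wait condition handle, something like this
moves it out of IN_PROGRESS (the stack and resource names are illustrative):

  heat resource-signal my-stack wait_handle \
      -D '{"status": "SUCCESS", "reason": "setup finished", "data": "ok"}'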




Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Steve Baker

On 12/04/16 11:48, Jeremy Stanley wrote:

On 2016-04-12 11:43:06 +1200 (+1200), Steve Baker wrote:

Can I suggest a sub-team for
os-collect-config/os-refresh-config/os-apply-config? I ask since
these tools also make up the default heat agent, and there is
nothing in them which is TripleO specific.

Could make sense similarly for diskimage-builder, as there is a lot
of TripleO/Infra cross-over use and contribution happening there.

+1, this tool is general purpose and has diverse contributors and consumers


Re: [openstack-dev] [TripleO] Can we create some subteams?

2016-04-11 Thread Steve Baker

On 11/04/16 22:19, Steven Hardy wrote:

On Mon, Apr 11, 2016 at 05:54:11AM -0400, John Trowbridge wrote:

Hola OOOers,

It came up in the meeting last week that we could benefit from a CI
subteam with its own meeting, since CI is taking up a lot of the main
meeting time.

I like this idea, and think we should do something similar for the other
informal subteams (tripleoclient, UI), and also add a new subteam for
tripleo-quickstart (and maybe one for releases?).

+1, from the meeting and other recent discussions it sounds like defining
some sub-teams would be helpful, let's try to enumerate those discussed:

- tripleo-ci
- API (Mistral based API which is landing in tripleo-common atm)
- Tripleo-UI
- os-net-config
- python-tripleoclient
- tripleo-quickstart
Can I suggest a sub-team for 
os-collect-config/os-refresh-config/os-apply-config? I ask since these 
tools also make up the default heat agent, and there is nothing in them 
which is TripleO specific.






Re: [openstack-dev] [TripleO] tripleo-quickstart import

2016-04-03 Thread Steve Baker

On 30/03/16 13:37, John Trowbridge wrote:


On 03/29/2016 08:30 PM, John Trowbridge wrote:

Hola,

With the approval of the tripleo-quickstart spec[1], it is time to
actually start doing the work. The first work item is moving it to the
openstack git. The spec talks about moving it as is, and this would
still be fine.

However, there are roles in the tripleo-quickstart tree that are not
directly related to the instack-virt-setup replacement aspect that is
approved in the spec (image building, deployment). I think these should
be split into their own ansible-role-* repos, so that they can be
consumed using ansible-galaxy. It would actually even make sense to do
that with the libvirt role responsible for setting up the virtual
environment. The tripleo-quickstart would then be just an integration
layer making consuming these roles for virtual deployments easy.

This way if someone wanted to make a different role for say OVB
deployments, it would be easy to use the other roles on top of a
differently provisioned undercloud.
I'm maintaining my own OVB playbooks and have been pondering how to make 
them more broadly consumable, so I'm +1 on a role structure which allows 
this.

Similarly, if we wanted to adopt ansible to drive tripleo-ci, it would
be very easy to only consume the roles that make sense for the tripleo
cloud.

So the first question is, should we split the roles out of
tripleo-quickstart?

If so, should we do that before importing it to the openstack git?

Also, should the split out roles also be on the openstack git?

Maybe this all deserves its own spec and we tackle it after completing
all of the work for the first spec. I put this on the meeting agenda for
today, but we didn't get to it.

- trown


whoops
[1]
https://github.com/openstack/tripleo-specs/blob/master/specs/mitaka/tripleo-quickstart.rst



Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Steve Baker

+1

On 16/03/16 23:57, Sergey Kraynev wrote:

Hi Heaters,

The Mitaka release is close to finishing, so it's a good time to review the
results of our work.
One of these results is an analysis of contributions over the last release cycle.
According to the data [1] we have one good candidate for nomination to the
core-review team:
Oleksii Chuprykov.
During this release he showed a significant amount of review activity.
His reviews were valuable and useful, and he has a good level of
expertise in the Heat code.
So I think he is worthy of joining the core-reviewers team.

I ask you to vote and decide his destiny.
  +1 - if you agree with his candidature
  -1  - if you disagree with his candidature

[1] http://stackalytics.com/report/contribution/heat-group/120






Re: [openstack-dev] [Heat] Dealing with nonexistent resources during resource-list / stack-delete

2016-03-08 Thread Steve Baker

On 09/03/16 05:41, Johannes Grassler wrote:

Hello,

On 03/08/2016 04:57 PM, Zane Bitter wrote:

On 08/03/16 10:40, Johannes Grassler wrote:

On 03/07/2016 04:48 PM, Zane Bitter wrote:

On 04/03/16 04:35, Johannes Grassler wrote:

[Uncaught client exceptions in resource plugins' add_dependencies()
methods]

In the meantime, we need to find and squash every instance of this
problem
wherever we can like you said.


It might also be a good idea to caution against unchecked API client
invocations in

http://docs.openstack.org/developer/heat/developing_guides/pluginguide.html 


[...]

It's best if they *do* omit it entirely. The only reason we override 
it in
the Neutron resources is that the Neutron API is terrible for 
orchestration

purposes[1]. It adds a bunch of invisible, fragile magic that breaks in
subtle ways when e.g. resources are moved into nested stacks.


I never saw the Neutron API that way before (I guess I just got used 
to the

unintuitive bits), but seeing it spelled out in your rant makes it very
obvious, yes. I didn't know that was the root cause for overriding
add_dependencies() and that very ignorance of mine...

The default implementation provides everything that we *ought* to 
need, so if
we document anything I think it should be that plugin developers 
should not

touch add_dependencies() at all.


...suggests mentioning that much is probably a good idea (lest 
somebody pick
one of the Neutron plugins as a template to base their own resource 
plugin

on).


Definitely not big enough to require a spec IMO.


Yes, I can see that, given how it's not something plugin writers 
should do
anyway. Then I'll just write a little paragraph cautioning against 
overriding

add_dependcies() and add a Related-Bug: line.

Thanks for that. The paragraph could say that it is preferable to not 
override add_dependencies, but if they do then they should *never* make 
REST API calls inside it. This is for the reason you've discovered, and 
also because it will kill the performance of some stack operations.


From what I can see you've discovered the only 2 places where REST API 
calls are made from add_dependencies, I've added comments to the 
review[1] to suggest how they can be removed.


[1] https://review.openstack.org/#/c/289371



Re: [openstack-dev] [TripleO] Show TripleO: A terminal dashboard

2016-02-28 Thread Steve Baker

On 27/02/16 09:06, Ben Nemec wrote:

Interesting!  So this could conceivably be another consumer of the API,
right?  A sort of CLI UI?

On 02/25/2016 01:10 PM, Dougal Matthews wrote:

Hi all,

Over the past couple of weeks in my spare time I put together a basic
Python urwid dashboard for TripleO. You can see the usage and some
screenshots here:

http://python-tripleodash.readthedocs.org

The project is in very early stages (read as: very limited and buggy),
but I've found it useful already. At the moment it is read only but
there is no reason that needs to be the case going forward.

Ultimately I think it could become both a dashboard and a handy getting
started wizard. It does this to a small extent now by listing the
commands needed to register nodes if none are found.

I wanted to share this for now and see if it interested anyone else.

Cheers,
Dougal


Very nice, I wonder if this dashboard would be a better home for my 
proposed commands for monitoring software deployments on the nodes [1]? 
It seems like a good match because it is aimed at monitoring the status 
of the deployment rather than triggering changes.


The tripleo-common changes would still need to land, but the UI would 
move from tripleoclient to tripleodash.


[1] 
https://review.openstack.org/#/q/status:open++branch:master+topic:bp/tripleo-manage-software-deployments




Re: [openstack-dev] [Neutron] Intra-column wrapping in python-neutronclient

2016-02-24 Thread Steve Baker

On 25/02/16 06:23, Carl Baldwin wrote:

Hi,

I've noticed a new behavior from the python-neutronclient which
disturbs me.  For me, this just started happening with my latest build
of devstack which I built yesterday.  It didn't happen with another
recent but little bit older devstack.

The issue is that the client is now wrapping content within columns.
For example:

   
+-+-+--+
   | id  | name| subnets
   |
   
+-+-+--+
   | eb850219-6a42-42ed-ac6a-| public  |
099745e5-4925-4572-a88f- |
   | 927b03a0dc77| | a5376206c892
172.24.4.0/24   |
   | | | 5b6dfb0d-c97e-48ae-
   |
   | | | 98c9-7fe3e1e8e88b
2001:db8::/64  |
   | ec73110f-   | private | 4073b9e7-a58e-4d6e-
   |
   | 86ad-4292-9547-7c2789a7023b | | a2e4-7a45ae899671
10.0.0.0/24|
   | | |
f12aee80-fc13-4adf-a0eb- |
   | | | 706af4319094
fd9d:e27:3eab::/64  |
   
+-+-+--+

Notice how the ids in the first column are wrapped within the column.
I personally don't see this as an aesthetic improvement.  It destroys
my ability to cut and paste the data within this column.  When I
stretch my console out to fix it, I have to rerun the command with the
new window width to fix it.  I used to be able to stretch my console
horizontally and the wrapping would automatically go away.
My intention was that it be a usability improvement rather than merely 
an aesthetic one. Yes, it is unfortunate that it affects this specific 
copy paste scenario but there are others where it is improved. I've 
often been in the situation where I don't know which uuid to copy 
because of the amount of overlap of unrelated columns.

How can I turned this off now?  Also, can I request that this new
"feature" be disabled by default?

Table resizing only occurs when a tty is present. This means that any 
existing script which parses table output will not be affected. It also 
means that you can disable it by piping your command to cat.


If you're unwilling to adapt, or specify formatting options, or pipe to 
cat, then I would recommend that you submit a change to cliff to read a 
user set environment variable to switch off table resizing.
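
To be concrete, either of these avoids the wrapping today (a sketch):

  # piping through cat removes the tty, so no resizing/wrapping happens
  neutron net-list | cat

  # or ask for machine-friendly output directly
  neutron net-list -f value -c id -c name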




Re: [openstack-dev] [heat]How to manage the life cycle of resources in a stack?

2016-02-24 Thread Steve Baker

On 25/02/16 05:27, zhu4236926 wrote:

Hi guys,
    I get some resources by creating a stack; e.g., the resources may 
be 2 volumes, volume A and volume B.  If volume A and volume B are 
useless, we can delete them by deleting the stack, and we can also delete 
them with cinder.  If they are deleted by cinder, although the volumes have 
been deleted, the resources and the stack still exist.  A tenant has 
a maximum quantity of stacks, so I may not be able to create a new stack if 
the number of remaining stacks exceeds the limit.  If I delete by deleting 
the stack, volume A and volume B would both be deleted, but maybe I 
just want to delete volume A and preserve volume B.
    So how should I manage the resources (volume A and volume B) created 
by heat: should they be deleted by cinder or by heat?


The best approach would be to modify the template to remove the volumes 
that you no longer need then do a stack-update. Once the stack ends up 
being empty you can choose to delete it.
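
As a sketch (resource and file names are illustrative), if the original
template contains both volumes:

  resources:
    volume_a:
      type: OS::Cinder::Volume
      properties: {size: 1}
    volume_b:
      type: OS::Cinder::Volume
      properties: {size: 1}

then deleting the volume_a resource from the template and running
heat stack-update my-stack -f template.yaml removes only volume A and leaves
volume B managed by the stack.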


[openstack-dev] [heat] heatclient 1.0.0 and landing osc commands

2016-02-08 Thread Steve Baker
We're on a 3 week countdown before the freeze on client libraries[1] and 
I'd like to aim for a python-heatclient-1.0.0 to be released with the 
new openstackclient commands.


However there are currently ~25 of these changes needing reviews, and 
they all cause merge conflicts with each other when they append to the 
entry point list in setup.cfg.


I'd like to suggest some coordination to minimize the chances of these 
merge conflicts:
- we should use strict sorting of the openstack.orchestration.v1 list as 
per this change [2]
- existing changes should be rebased so that they add their entries in 
sort order
- ideally, related commands from the same author can be combined into a 
single series of changes (2 - 5 changes long)


With this there should be less churn on getting these changes to land, 
which will hopefully encourage some more reviews too.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/085705.html

[2] https://review.openstack.org/#/c/277595/
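
To illustrate the strict sorting suggested above, a setup.cfg excerpt would
look something like this (the command entries shown here are illustrative,
not the final list):

  [entry_points]
  openstack.orchestration.v1 =
      stack_create = heatclient.osc.v1.stack:CreateStack
      stack_delete = heatclient.osc.v1.stack:DeleteStack
      stack_list = heatclient.osc.v1.stack:ListStack
      stack_show = heatclient.osc.v1.stack:ShowStack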


Re: [openstack-dev] [heat] Changing the default Server/SoftwareDeployment transports?

2016-02-02 Thread Steve Baker

On 22/01/16 22:36, Steven Hardy wrote:

Hi all,

Wanted to start some discussion re $subject, context is:

https://bugs.launchpad.net/heat/+bug/1507568

Here, we're hitting issues because by default OS::Nova::Server uses the
POLL_SERVER_CFN transport.  This made sense back when the
SoftwareDeployment stuff was first implemented, but now we have other
options, and there are some negative consequenses of choosing this default:

1. All heat deployments rely on the heat-api-cfn service by default, when
this should really be a CFN compatibility layer.

2. Related to (1) we then require the keystone ec2tokens extension to be
enabled

3. The cfn API action DescribeStackResource is used to retrieve server
metadata.  Because that API has no action to only show the metadata, we get
*all* data for that resource - and recent changes to show all attributes by
default have made this *much* higher overhead than it once was.

4. As mentioned in the bug above, trying to resolve all the attributes
doesn't work, because we're using stack domain user credentials to poll the
CFN API, which don't have access to the related nova API for the server
resource.  This can probably be fixed, but an alternative is just don't use
this API.

So, my question is, now that we have other (better) alternatives, can we
consider switching the Server transport e.g to POLL_SERVER_HEAT by default,
and relatedly the SoftwareDeployment signal_transport to HEAT_SIGNAL?

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server-prop-software_config_transport

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::SoftwareDeployment-prop-signal_transport

The advantage of this is it only requires the native heat-api service, when
all other options require some other service/API to exist.

Long term, we should probably consider deprecating the CFN transport for
these (native) resources, but switching the default would be the first step
- what do people think?



I'm OK with switching to POLL_SERVER_HEAT in theory, however I have a 
couple of practical considerations:
1. POLL_SERVER_HEAT doesn't work for me at the moment, I haven't 
investigated why:
   WARNING os_collect_config.heat [-] Invalid username or password 
(Disable debug mode to suppress these details.) (HTTP 401)
2. We *must* ensure that existing stacks that were launched with default 
POLL_SERVER_CFN continue to work when the default changes to 
POLL_SERVER_HEAT


What I think would be more useful than changing the default in our 
release is making puppet-heat set the default to POLL_TEMP_URL if a 
swift or ceph is configured, falling back to POLL_SERVER_HEAT otherwise.
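
For templates that want to opt in to the native transports explicitly today,
a minimal sketch looks like this (image and flavor values are illustrative):

  resources:
    config:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/sh
          echo hello

    server:
      type: OS::Nova::Server
      properties:
        image: fedora
        flavor: m1.small
        user_data_format: SOFTWARE_CONFIG
        software_config_transport: POLL_SERVER_HEAT

    deployment:
      type: OS::Heat::SoftwareDeployment
      properties:
        config: {get_resource: config}
        server: {get_resource: server}
        signal_transport: HEAT_SIGNAL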






Re: [openstack-dev] Any chances to backport port_security_enabled support into Heat for Kilo 2015.1.3?

2016-01-31 Thread Steve Baker

On 29/01/16 09:45, Martinx - ジェームズ wrote:

Guys,

  This is important and Kilo is missing it:

  https://review.openstack.org/#/c/179989/

  Is it possible to backport it to Kilo 2015.1.3?

  Currently, I am manually patching Kilo / Heat by using the following diff:

  
https://review.openstack.org/gitweb?p=openstack%2Fheat.git;a=commitdiff;h=811c8714aa2442e68980561d3e8dd435378f164c

  But it is a pain to maintain...


Rather than carrying a backport you can always modify your templates to 
set port_security_enabled via the value_specs property:


  value_specs: {port_security_enabled: false}
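
For example, a port resource passing the flag straight through to Neutron
would look something like this (the network name is illustrative):

  my_port:
    type: OS::Neutron::Port
    properties:
      network: private
      value_specs:
        port_security_enabled: false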



Re: [openstack-dev] [heat][heat-client] Question about heat command: deployment-list and deployment-metadata-show

2016-01-25 Thread Steve Baker

On 22/01/16 18:25, 邸小丽 Di XiaoLi wrote:

Hi:

When using the heat commands heat deployment-list and heat
deployment-metadata-show like this:

# heat deployment-list -s non-exist-server-id
+----+-----------+-----------+--------+--------+---------------+---------------+
| id | config_id | server_id | action | status | creation_time | status_reason |
+----+-----------+-----------+--------+--------+---------------+---------------+
+----+-----------+-----------+--------+--------+---------------+---------------+
# heat deployment-metadata-show non-exist-server-id
[]

Here I give an invalid server_id, but the heat client did not tell me
that the server_id does not exist.
I think it may be a bug, as an invalid server_id and a valid server_id
with no deployments both return the same empty output.

So, my questions are:
1) Is this a bug or consistent with the design?
2) If this is a bug, we should do the validation on the server_id and
return a Not Found message as appropriate.
I would like to know whether we should do the validation in the heat
client or in heat?


The server_id is treated as an arbitrary string with minimal validation 
- this is by design to accommodate use cases where there is not 
necessarily a nova server associated with the deployment (for example, a 
server external to the stack or the cloud, or no server at all, such as 
the functional tests for the deployment REST API)


Re: [openstack-dev] [heat] spec-lite for simple feature requests

2016-01-20 Thread Steve Baker

On 21/01/16 04:21, Rabi Mishra wrote:

Hi All,

As discussed in the team meeting, below is the proposed spec-lite process for 
simple feature requests. This is already being used in Glance project. 
Feedback/comments/concerns are welcome, before we update the contributor docs 
with this:).


tl;dr - spec-lite is a simple feature request created as a bug with enough 
details and with a `spec-lite` tag. Once triaged with status 'Triaged' and 
importance changed to 'Wishlist', it's approved. Status 'Won’t fix' signifies 
the request is rejected and 'Invalid' means it would require a full spec.


Heat Spec Lite
--

Lite specs are small feature requests tracked as Launchpad bugs, with status 
'Wishlist' and tagged with 'spec-lite' tag. These allow for submission and 
review of these feature requests before code is submitted.

These can be used for simple features that don’t warrant a detailed spec to be 
proposed, evaluated, and worked on. The team evaluates these requests as it 
evaluates specs. Once a bug has been approved as a Request for Enhancement 
(RFE), it’ll be targeted for a release.


The workflow for the life of a spec-lite in Launchpad is as follows:

1. File a bug with a small summary of what the requested change is and tag it as 
spec-lite.
2. The bug is triaged and importance changed to Wishlist.
3. The bug is evaluated and marked as Triaged to announce approval or to Won’t 
fix to announce rejection or Invalid to request a full spec.
4. The bug is moved to In Progress once the code is up and ready to review.
5. The bug is moved to Fix Committed once the patch lands.

In summary the states are:

New:        This is where spec-lite starts, as filed by the community.
Triaged:    Drivers - Move to this state to mean, “you can start working on it”.
Won’t Fix:  Drivers - Move to this state to reject a lite-spec.
Invalid:    Drivers - Move to this state to request a full spec for this request.

Lite spec Submission Guidelines
---

When a bug is submitted, there are two fields that must be filled: ‘summary’ 
and ‘further information’. The ‘summary’ must be brief enough to fit in one 
line.

The ‘further information’ section must be a description of what you would like 
to see implemented in heat. The description should provide enough details for a 
knowledgeable developer to understand what is the existing problem and what’s 
the proposed solution.

Add spec-lite tag to the bug.


Thanks,
Rabi


+1, this sounds useful for small features.


Re: [openstack-dev] [heat] Rico Lin for heat-core

2015-12-07 Thread Steve Baker

On 08/12/15 01:39, Sergey Kraynev wrote:

Hi all.

I'd like to nominate Rico Lin for heat-core. He did an awesome job
providing useful and valuable reviews, and his contribution is really
high [1].


[1] http://stackalytics.com/report/contribution/heat-group/60

Heat core-team, please vote with:
 +1 - if you agree
  -1 - if you disagree


+1



Re: [openstack-dev] [TripleO] Summary of In-Progress TripleO Workflow and REST API Development

2015-12-06 Thread Steve Baker

On 04/12/15 23:04, Dmitry Tantsur wrote:

On 12/03/2015 08:45 PM, Tzu-Mainn Chen wrote:

Hey all,

Over the past few months, there's been a lot of discussion and work around
creating a new REST API-supported TripleO deployment workflow. However most
of that discussion has been fragmented within spec reviews and weekly IRC
meetings, so I thought it might make sense to provide a high-level overview
of what's been going on.  Hopefully it'll provide some useful perspective
for those that are curious!

Thanks,
Tzu-Mainn Chen

-- 


1. Explanation for Deployment Workflow Change

TripleO uses Heat to deploy clouds.  Heat allows tremendous flexibility at
the cost of enormous complexity.  Fortunately TripleO has the space to allow
developers to create tools to simplify the process tremendously, resulting
in a deployment process that is both simple and flexible to user needs.
a deployment process that is both simple and flexible to user needs.

The current CLI-based TripleO workflow asks the deployer to modify a 
base set
of Heat environment files directly before calling Heat's stack-create 
command.
This requires much knowledge and precision, and is a process prone to 
error.


However this process can be eased by understanding that there is a 
pattern to

these modifications; for example, if a deployer wishes to enable network
isolation, a specific set of modifications must be made.  These 
modification
sets can be encapsulated through pre-created Heat environment files, 
and TripleO

contains a library of these
(https://github.com/openstack/tripleo-heat-templates/tree/master/environments). 



These environments are further categorized through the proposed 
environment
capabilities map (https://review.openstack.org/#/c/242439). This 
mapping file
contains programmatic metadata, adding items such as user-friendly 
text around
environment files and marking certain environments as mutually 
exclusive.



2. Summary of Updated Deployment Workflow

Here's a summary of the updated TripleO deployment workflow.

 1. Create a Plan: Upload a base set of heat templates and environment
    files into a Swift container.  This Swift container will be versioned
    to allow for future work with respect to updates and upgrades.

 2. Environment Selection: Select the appropriate environment files for
    your deployment.

 3. Modify Parameters: Modify additional deployment parameters.  These
    parameters are influenced by the environment selection in step 2.

 4. Deploy: Send the contents of the plan's Swift container to Heat for
    deployment.

Note that the current CLI-based workflow still fits here: a deployer can
modify Heat files directly prior to step 1, follow step 1, and then skip
directly to step 4.  This also allows for trial deployments with test
configurations.



3. TripleO Python Library, REST API, and GUI

Right now, much of the existing TripleO deployment logic lives within 
the TripleO
CLI code, making it inaccessible to non-Python based UIs. Putting 
both old and
new deployment logic into tripleo-common and then creating a REST API 
on top of
that logic will enable modern Javascript-based GUIs to create cloud 
deployments

using TripleO.


4. Future Work - Validations

A possible next step is to add validations to the TripleO toolkit: 
scripts that
can be used to check the validity of your deployment pre-, in-, and  
post-flight.
These validations will be runnable and queryable through a  REST 
API.  Note that
the above deployment workflow should not be a requirement for 
validations to be

run.


5. In-Progress Development

The initial spec for the tripleo-common library has already been 
approved, and

various people have been pushing work forward.  Here's a summary:

* Move shared business logic out of CLI
   * https://review.openstack.org/249134 - simple validations (WIP)


When is this going to be finished? It's going to get me a huge merge 
conflict in https://review.openstack.org/#/c/250405/ (and make it 
impossible to backport to liberty btw).


This plan would be fine if Mitaka development was the only consideration 
but I hope that it can be adapted a little bit to take into account the 
Liberty branches, and the significant backports that will be happening 
there. The rdomanager-plugin->tripleoclient transition made backports 
painful, and having moved on for that it would be ideal if we didn't 
create the same situation again.


What I would propose is the following:
- the tripleo_common repo is renamed to tripleo and consumed by Mitaka
- the tripleo_common repo continues to exist in Liberty
- the change to rename the package tripleo_common to tripleo occurs on 
the tripleo repo in the master branch using oslo-style wildcard 
imports[1], and initially no deprecation message
- this change is backported to the tripleo_common repo on the 
stable/liberty branch


Once this is in place, stable/liberty tripleoclient can gradually move 

Re: [openstack-dev] [heat][tripleo] User Initiated Rollback

2015-12-03 Thread Steve Baker

On 04/12/15 03:41, Steven Hardy wrote:

On Thu, Dec 03, 2015 at 08:11:41AM -0500, Dan Prince wrote:

On Wed, 2015-12-02 at 16:02 +, Steven Hardy wrote:

So, chatting with Giulio today about https://bugs.launchpad.net/heat/
+bug/1521944
has me thinking about $subject.

The root cause of that issue is essentially a corner case of a stack-
update,
combined with some coupling within the Neutron API which prevents the
update traversal from working.

But it raises the broader question of what a "rollback" actually is, and
how a user can potentially use it to get out of the kind of mess described
in that bug (where, otherwise, your only option is to delete the entire
stack).

Currently, we treat rollback as a special type of update, where, if an
in-progress update fails, we then try to update again, to the previous
stack definition[1], but as Giulio has discovered, there are times when
that doesn't work, because what you actually want is to recover the
existing resource from the backup stack, not create a new one with the
same properties.

Is there more information about this case (a bug perhaps)? Presumably
it is an OpenStack resource you are talking about here... like a Nova
Server or Neutron Network Port?

Well the bug is linked above (1521944), but there's no bug specific to
rollback.

As Zane has pointed out, heat is actually working as desired here, because
we aren't able to differentiate an attempt to delete a neutron port which
results in "not allowed, in use" from "500, I am broken".

I was hoping there was some way to make this easier via rollback, but
increasingly it seems the solution is not to tell Heat to do the wrong
thing (which is the root cause of this issue).

There are a few ways we can do that:

1. Stop defining default "noop" resources in
overcloud-resource-registry-puppet.yaml - it makes it too easy to
accidentally switch to a noop (destructive) implementation on update.
Splitting out the noop stubs into their own environment that only gets 
included on overcloud create would certainly lower the risk of 
customizations being overwritten by stubs. We would just need a strategy 
for when new types are added that need to be stubbed by default.

2. Improve heat stack update preview, so it handles nested stacks, then we
can easily have a pre-update validation step, which for example checks (and
warns, loudly) if any resources will be deleted (particularly network and
server resources..)  I'm working on this ref:

https://bugs.launchpad.net/heat/+bug/1521971
We should definitely do this once pre-update works for nested stacks. 
tripleoclient could have a whitelist of resource types which generally 
shouldn't be replaced (subnets, ports, servers) and prompt the user with 
a list of resources which will be replaced and a N/y question to continue 
(a rough sketch follows below).
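
As a rough illustration only (not actual tripleoclient code), such a check
might look something like this; the shape of the preview result (a list of
dicts with resource_name/resource_type) is assumed for the example:

    # Hypothetical sketch: warn before an update that would replace
    # resource types which are usually destructive to replace.
    REPLACE_WARN_TYPES = (
        'OS::Nova::Server',
        'OS::Neutron::Port',
        'OS::Neutron::Subnet',
    )

    def confirm_replacements(replaced):
        # 'replaced' is assumed to be a list of dicts describing resources
        # the update preview says would be replaced.
        risky = [r for r in replaced
                 if r['resource_type'] in REPLACE_WARN_TYPES]
        if not risky:
            return True
        print('This update will REPLACE the following resources:')
        for r in risky:
            print('  %s (%s)' % (r['resource_name'], r['resource_type']))
        return input('Continue? [y/N] ').lower() == 'y'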



3. Implement a template annotation which allows you to say "don't update"
for certain resources, such as servers and network ports etc.  Rabi is
working on this, here's the (old) BP which didn't get implemented but I
think will help us:

https://github.com/openstack/heat-specs/blob/master/specs/kilo/stack-update-restrict.rst
Yes, a way of declaring a resource as not replaceable would also 
increase safety (in-place updates should be fine though)



Then, looking at convergence, we have a different definition of rollback,
it's not yet clear to me how this should behave in a similar scenario, e.g
when the resource we want to roll back to failed to get deleted but still
exists (so, the resource is FAILED, but the underlying resource is fine)?

Finally, the interface to rollback - atm you have to know before something
fails that you'd like to enable rollback for a specific update.  This seems
suboptimal, since invariably by the time you know you need rollback, it's
too late.  Can we enable a user-initiated rollback from a FAILED state, via
one of:

  - Introduce a new heat API that allows an explicit heat stack-rollback?
  - (ab)use PATCH to trigger rollback on heat stack-update -x --rollback=True?

The former approach fits better with the current stack.Stack
implementation, because the ROLLBACK stack state already exists.  The
latter has the advantage that it doesn't need a new API so might be
backportable.

Any thoughts on how we might proceed to make this situation better, and
enable folks to roll back in the least destructive way possible when they
end up in a FAILED state?

 From a TripleO standpoint I would really like to end up in a place
where we aren't thinking of Heat as a rollback tool and more of a "make
it so" tool. I think there might be a small case for the
"infrastructure" side where Heat is creating OpenStack objects for us
(servers and ports). We'd like not to destroy/replace these when we
update the "infrastructure" pieces of our stack and if things go badly
on an update you just want to stay in the (hopefully still working)
previous state.

Yeah, keeping the infrastructure and software configuration more cleanly
separated will help, 

Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-12-01 Thread Steve Baker

On 02/12/15 03:18, Lennart Regebro wrote:

On Tue, Dec 1, 2015 at 3:39 AM, Steve Baker <sba...@redhat.com> wrote:

I mean _here_

https://review.openstack.org/#/c/251587/

OK, that's great! If you want any help implementing it, I can try.



Hey Lennart, help is always appreciated.

I can elaborate on the implementation approach for ``openstack overcloud 
failed list`` and you can take a crack at that if you like while I work 
on the other two commands.


I think before we start on the commands proper I will need to implement 
some yaml printing utility functions, so lets coordinate on that.


cheers



Re: [openstack-dev] [tripleo][ironic][heat] Adding back the tripleo check job

2015-11-30 Thread Steve Baker

On 01/12/15 04:19, Derek Higgins wrote:

Hi All,

A few months ago tripleo switched from its devtest based CI to one that 
was based on instack. Before doing this we anticipated disruption in 
the ci jobs and removed them from non tripleo projects.


We'd like to investigate adding it back to heat and ironic as 
these are the two projects where we find our ci provides the most 
value. But we can only do this if the results from the job are treated 
as voting.


In the past most of the non tripleo projects tended to ignore the 
results from the tripleo job as it wasn't unusual for the job to 
be broken for days at a time. The thing is, ignoring the results of the 
job is the reason (the majority of the time) it was broken in the 
first place.
To decrease the number of breakages we are now no longer running 
master code for everything (for the non tripleo projects we bump the 
versions we use periodically if they are working). I believe with this 
model the CI jobs we run have become a lot more reliable, there are 
still breakages but far less frequently.


What I'm proposing is we add at least one of our tripleo jobs back to 
both heat and ironic (and other projects associated with them e.g. 
clients, ironicinspector etc..), tripleo will switch to running latest 
master of those repositories and the cores approving on those projects 
should wait for a passing CI job before hitting approve. So how do 
people feel about doing this? Can we give it a go? A couple of people 
have already expressed an interest in doing this but I'd like to make 
sure we're all in agreement before switching it on.


+1 for heat from me. It sounds like the job won't be voting, but heat 
cores should be strongly encouraged to treat it as such.




Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-11-30 Thread Steve Baker

On 30/11/15 23:21, Steven Hardy wrote:

On Mon, Nov 30, 2015 at 10:03:29AM +0100, Lennart Regebro wrote:

I'm tasked to implement a command that shows error messages when a
deployment has failed. I have a vague memory of having seen scripts
that do something like this, if that exists, can somebody point me in
the right direction?

I wrote a super simple script and put it in a blog post a while back:

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-3-cluster.html

All it does is find the failed SoftwareDeployment resources, then do heat
deployment-show on the resource, so you can see the stderr associated with
the failure.

Having tripleoclient do that by default would be useful.


Any opinions on what that should do, specifically? Traverse failed
resources to find error messages, I assume. Anything else?

Yeah, but I think for this to be useful, we need to go a bit deeper than
just showing the resource error - there are a number of typical failure
modes, and I end up repeating the same steps to debug every time.

1. SoftwareDeployment failed (mentioned above).  Every time, you need to
see the name of the SoftwareDeployment which failed, figure out if it
failed on one or all of the servers, then look at the stderr for clues.

2. A server failed to build (OS::Nova::Server resource is FAILED), here we
need to check both nova and ironic, looking first to see if ironic has the
node(s) in the wrong state for scheduling (e.g nova gave us a no valid
host error), and then if they are OK in ironic, do nova show on the failed
host to see the reason nova gives us for it failing to go ACTIVE.

3. A stack timeout happened.  IIRC when this happens, we currently fail
with an obscure keystone related backtrace due to the token expiring.  We
should instead catch this error and show the heat stack status_reason,
which should say clearly the stack timed out.

If we could just make these three cases really clear and easy to debug, I
think things would be much better (IME the above are a high proportion of
all failures), but I'm sure folks can come up with other ideas to add to
the list.

I'm actually drafting a spec which includes a command which does this. I 
hope to submit it soon, but here is the current state of that command's 
description:


Diagnosing resources in a FAILED state
--------------------------------------

One command will be implemented:
- openstack overcloud failed list

This will print a yaml tree showing the hierarchy of nested stacks until it
gets to the actual failed resource, then it will show information regarding
the failure. For most resource types this information will be the
status_reason, but for software-deployment resources the deploy_stdout,
deploy_stderr and deploy_status_code will be printed.

In addition to this stand-alone command, this output will also be printed
when an ``openstack overcloud deploy`` or ``openstack overcloud update``
command results in a stack in a FAILED state.
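
As a rough illustration of the traversal the command would do (not the
spec's implementation), something like the following python-heatclient
sketch could walk the nested stacks and print why each FAILED resource
failed; the endpoint, token and the usual deploy_stdout, deploy_stderr and
deploy_status_code output values are assumptions for the example:

    from heatclient.client import Client

    HEAT_URL = 'http://undercloud:8004/v1/PROJECT_ID'  # assumed endpoint
    TOKEN = 'a-valid-keystone-token'                   # assumed token
    heat = Client('1', endpoint=HEAT_URL, token=TOKEN)

    DEPLOY_TYPES = ('OS::Heat::SoftwareDeployment',
                    'OS::Heat::StructuredDeployment')

    def show_failures(stack_name):
        # List resources across nested stacks and report only FAILED ones.
        for res in heat.resources.list(stack_name, nested_depth=5):
            if not res.resource_status.endswith('FAILED'):
                continue
            print('%s (%s): %s' % (res.resource_name, res.resource_type,
                                   res.resource_status_reason))
            if res.resource_type in DEPLOY_TYPES and res.physical_resource_id:
                sd = heat.software_deployments.get(res.physical_resource_id)
                out = sd.output_values or {}
                for key in ('deploy_status_code', 'deploy_stdout',
                            'deploy_stderr'):
                    print('  %s: %s' % (key, out.get(key)))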




Re: [openstack-dev] [TripleO/heat] openstack debug command

2015-11-30 Thread Steve Baker

On 01/12/15 10:28, Steven Hardy wrote:

On Tue, Dec 01, 2015 at 08:47:20AM +1300, Steve Baker wrote:

On 30/11/15 23:21, Steven Hardy wrote:

On Mon, Nov 30, 2015 at 10:03:29AM +0100, Lennart Regebro wrote:

I'm tasked to implement a command that shows error messages when a
deployment has failed. I have a vague memory of having seen scripts
that do something like this, if that exists, can somebody point me in
the right direction?

I wrote a super simple script and put it in a blog post a while back:

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-3-cluster.html

All it does is find the failed SoftwareDeployment resources, then do heat
deployment-show on the resource, so you can see the stderr associated with
the failure.

Having tripleoclient do that by default would be useful.


Any opinions on what that should do, specifically? Traverse failed
resources to find error messages, I assume. Anything else?

Yeah, but I think for this to be useful, we need to go a bit deeper than
just showing the resource error - there are a number of typical failure
modes, and I end up repeating the same steps to debug every time.

1. SoftwareDeployment failed (mentioned above).  Every time, you need to
see the name of the SoftwareDeployment which failed, figure out if it
failed on one or all of the servers, then look at the stderr for clues.

2. A server failed to build (OS::Nova::Server resource is FAILED), here we
need to check both nova and ironic, looking first to see if ironic has the
node(s) in the wrong state for scheduling (e.g nova gave us a no valid
host error), and then if they are OK in ironic, do nova show on the failed
host to see the reason nova gives us for it failing to go ACTIVE.

3. A stack timeout happened.  IIRC when this happens, we currently fail
with an obscure keystone related backtrace due to the token expiring.  We
should instead catch this error and show the heat stack status_reason,
which should say clearly the stack timed out.

If we could just make these three cases really clear and easy to debug, I
think things would be much better (IME the above are a high proportion of
all failures), but I'm sure folks can come up with other ideas to add to
the list.


I'm actually drafting a spec which includes a command which does this. I
hope to submit it soon, but here is the current state of that command's
description:

Diagnosing resources in a FAILED state
--------------------------------------

One command will be implemented:
- openstack overcloud failed list

This will print a yaml tree showing the hierarchy of nested stacks until it
gets to the actual failed resource, then it will show information regarding
the failure. For most resource types this information will be the
status_reason, but for software-deployment resources the deploy_stdout,
deploy_stderr and deploy_status_code will be printed.

In addition to this stand-alone command, this output will also be printed
when
an ``openstack overcloud deploy`` or ``openstack overcloud update`` command
results in a stack in a FAILED state.

This sounds great!

The spec is here.

Another piece of low-hanging-fruit in the meantime is we should actually
print the stack_status_reason on failure:

https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/overcloud_deploy.py#L280

The DeploymentError raised could include the stack_status_reason vs the
unqualified "Heat Stack create failed".

I guess your event listing partially overlaps with this, as you can now
derive the stack_status_reason from the last event, but it'd still be good
to loudly output it so folks can see more quickly when things such as
timeouts happen that are clearly displayed in the top-level stack status.


Yes, this would be a trivially implemented quick win.
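
Roughly, the quick win described above might look like the following in
tripleoclient (illustrative only; the exception class and surrounding
variable names are assumptions based on this thread):

    # Sketch: include the stack's own status_reason in the error raised
    # when a create/update ends up FAILED.
    stack = heatclient.stacks.get(stack_id)
    if stack.stack_status.endswith('FAILED'):
        raise exceptions.DeploymentError(
            'Heat Stack %s failed: %s' % (
                stack_id, stack.stack_status_reason))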



Re: [openstack-dev] [TripleO] workflow

2015-11-23 Thread Steve Baker

On 24/11/15 03:37, Dan Prince wrote:

There are lots of references to "workflow" within TripleO conversations
these days. We are at (or near) the limit of what we can do within Heat
with regards to upgrades. We've got a new TripleO API in the works (a
new version of Tuskar basically) that is specifically meant to
encapsulates business logic workflow around deployment. And, Lots of
interest in using Ansible for this and that.

So... Last week I spent a bit of time tinkering with the Mistral
workflow service that already exists in OpenStack and after a few
patches got it integrated into my undercloud:

https://etherpad.openstack.org/p/tripleo-undercloud-workflow

One could imagine us coming up with a set of useful TripleO workflows
(something like this):

  tripleo.deploy 
  tripleo.update 
  tripleo.run_ad_hoc_whatever_on_specific_roles <>

Since Mistral (the OpenStack workflow service) can already interact w/
keystone and has a good many hooks to interact with core OpenStack
services like Swift, Heat, and Nova we might get some traction very
quickly here. Perhaps we add some new Mistral Ironic actions? Or
imagine smaller more focused Heat configuration stacks that we drive
via Mistral? Or perhaps we tie in Zaqar (which already has some
integration into os-collect-config) to run ad-hoc deployment snippets
on specific roles in an organized fashion?
This would be useful, but we don't need to wait for zaqar integration 
before we can try this. We should be able to do this once the deployment 
transport is switched to swift TempURLs. I'll be working on this soon 
and will try adding support for ad-hoc deployment snippets via 
python-tripleoclient (and later maybe ansible or mistral).

Or wrapping mistral w/
tripleoclient to allow users to more easily call TripleO specific
workflows (enhancing the user feedback like we do with our heatclient
wrapping already)?

Where all this might lead... I'm not sure. But I feel like we might
benefit by adding a few extra options to our OpenStack deployment tool
chain.



Definitely a worthy experiment, let's see how it works out.




Re: [openstack-dev] [Heat] userdata empty when using software deployment/config in Kilo

2015-10-29 Thread Steve Baker

On 29/10/15 06:12, Gabe Black wrote:

Using my own template or the example template:
https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/example-deploy-sequence.yaml

results in the VM's /var/lib/cloud/instance/script/userdata being empty.

The only warnings during the cloud-init boot sequence are:
[   14.470601] cloud-init[775]: 2015-10-28 17:48:15,104 - util.py[WARNING]: 
Failed running /var/lib/cloud/instance/scripts/userdata [-]
[   15.051625] cloud-init[775]: 2015-10-28 17:48:15,685 - 
cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in 
/var/lib/cloud/instance/scripts)
[   15.057189] cloud-init[775]: 2015-10-28 17:48:15,690 - util.py[WARNING]: Running 
module scripts-user () failed

I believe those warnings are simply because the userdata file is empty

I googled and searched and couldn't find why it wasn't working for me.

The nova.api logs show the transfer of the files, no problem there.  It is 
really sending empty userdata and it thinks it should be doing that.

To verify I added some debug prints in 
heat/engine/resources/openstack/nova/server.py:612 in handle_create() method.  
Below is the first part of the method for reference:

    def handle_create(self):
        security_groups = self.properties.get(self.SECURITY_GROUPS)

        user_data_format = self.properties.get(self.USER_DATA_FORMAT)
        ud_content = self.properties.get(self.USER_DATA)  # <---

        if self.user_data_software_config() or self.user_data_raw():  # <---
            if uuidutils.is_uuid_like(ud_content):
                # attempt to load the userdata from software config
                ud_content = self.get_software_config(ud_content)  # <---

I added some debug log prints after the #<--- above to see what it was getting 
for user_data, and it turns out it is empty (e.g. I don't even see the third debug 
print I put in).  Spending more time looking through the code it appears to me 
that the self.properties.get(self.USER_DATA) should be returning the uuid for the 
software config resource associated with the deployment, but I could be wrong.  
Either way, it is empty which I think is not right.

Does anyone have an idea what I might be doing wrong?  I've been struggling for 
the past couple of days on this one!  Or is deployment just not stable in Kilo? 
 Documentation seems to indicate it has been supported even before Kilo.

Thanks in advance!
Gabe


Hi Gabe

It is expected that userdata is empty, because the server resources do 
not specify any script in their user_data properties.


There is other data in the initial cloud-init package which bootstraps 
polling for deployment data. The actual deployment data comes from 
requests to the heat metadata API, not in cloud-init userdata.


An appropriately built custom image will configure 
/etc/os-collect-config.conf on boot so that it can start polling for 
deployment data from heat.


Please take a look at the documentation for this:
http://docs.openstack.org/developer/heat/template_guide/software_deployment.html

cheers



[openstack-dev] [heat] Friday afternoon heat social

2015-10-29 Thread Steve Baker
The heat team is planning to meet this afternoon from 5pm for drinks, 
eating and chat.


The Craftsman beer bistro is 15 minutes walk from here:
https://goo.gl/maps/hviv4HK1Wor
http://craftsman-craftbeerbistro.jp/

It opens at 5pm, the food is a variety of small plates to share. It 
would be great to see you there.


cheers


Re: [openstack-dev] [heat] Shared code between server and client

2015-10-22 Thread Steve Baker

On 23/10/15 08:49, Jay Dobies wrote:
I'm working on moving the functionality for merging environments from 
the client into the server [1]. I've only superficially looked at 
template_utils.py (in heatclient) but I'm guessing there is stuff in 
there I will want to use server-side.


The server has a requirement on heatclient, but I'm not sure what the 
convention is for using code in it. Can I directly call into a module 
in heatclient/common from the server or is the client dependency only 
intended to be used through the client-facing APIs?


[1] https://blueprints.launchpad.net/heat/+spec/multi-environments

heat server already depends on heatclient, which was done so that some 
template parsing could live in heatclient and be shared by both (this 
isn't finished, and anyone who wants to pick it up is welcome to)


So yes, this would be a valid case for heat calling heatclient functions.

As an aside, it would be preferable if heatclient can somehow discover 
that it is interacting with a multi-env aware REST API, and fallback to 
local merging as appropriate.




Re: [openstack-dev] [heat] resource_registry base_url

2015-10-22 Thread Steve Baker

On 23/10/15 09:35, Jay Dobies wrote:
In looking through the environment file loading code, I keep seeing a 
check for base_url in the resource registry. It looks like a way to 
have the registry entries only be the filenames (I suppose relative 
filenames work as well) instead of having to enter the full path every 
time. The base_url would be used as the root URL for those filenames 
when loading them.


Thing is, I can't find any reference to that in the docs. I did a 
search for resource_registry, but the only thing I can find is [1] 
which doesn't talk about base_url.


Is this something that's still supported or was it "turned off" (so to 
speak) by removing the docs about it so users didn't know to use it? 
Is the syntax to just sit it side by side with the resource 
definitions, similar to:


resource_registry:
  "base_url": /home/jdob/my_templates
  "OS::Nova::Server": my_nova.yaml

Or am I just totally missing where in the docs it's talked about 
(which is also terribly possible)?


[1] 
http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=resource_registry


Thanks :)


I'm not sure, since I've never used an explicit base_url. I just put the 
environment file which defines the resource_registry in the same 
directory as my_nova.yaml and the relative paths will resolve. Is that 
an option for you?




Re: [openstack-dev] [Heat] core team nomination

2015-10-20 Thread Steve Baker

+1

On 21/10/15 02:38, Sergey Kraynev wrote:

I'd like to propose new candidates for heat core-team:
Rabi Mishra
Peter Razumovsky

According to the statistics, both candidates have made a big effort in Heat
as reviewers and as contributors [1][2].
They have been involved in Heat community work during the last several
releases and have shown a good understanding of the Heat code.
I think that they are ready to become core reviewers.

Heat-cores, please vote with +/- 1.

[1] http://stackalytics.com/report/contribution/heat-group/180
[2] http://stackalytics.com/?module=heat-group=person-day





Re: [openstack-dev] [Zaqar][cli][openstackclient] conflict in nova flavor and zaqar flavor

2015-10-13 Thread Steve Baker

On 14/10/15 10:15, Dean Troyer wrote:
On Tue, Oct 13, 2015 at 3:58 PM, Shifali Agrawal wrote:


All the above makes sense, just one thing: how about using the word "zaqar"
instead of messaging? That is what all other projects are doing,
for example:


These are the old project-specific CLIs, note that the 'keystone' 
command only supports v2 auth today and will be removed entirely in 
the keystoneclient 2.0 release.


$ keystone user-create
$ heat event-list

This will create a separate namespace for the project and also
will solve the issue of `openstack messaging message post`.


One of the things I have tried very hard to do is make it so users do 
NOT need to know which API handles a given command.  For higher-layer 
projects that is less of a concern I suppose, and that was done long 
before anyone thought that 20+ APIs would be handled in a single 
command set.


Namespacing has come up and is something we need to discuss further, 
either within the 'openstack' command itself or by using additional 
top-level command names.  This is one of the topics for discussion in 
Tokyo, but has already started on the ML for those that will not be 
present.


No matter how we end up handling the namespacing issue, I will still 
strongly insist that project code names not be used.  I know some 
plugins already do this today and we can't stop anyone else from doing 
it, but it leads to the same sort of inconsistency for users that the 
original project CLIs had. It reduces the value of a single (or small 
set of) CLI for the user to deal with.



I would agree with Dean here. "messaging" is a service, not a thing the 
service provides. I'd like to think that commands can be built using a 
list of nouns, with the first noun making it sufficiently obvious of the 
general family of things you're working on. "queue" seems to fit as the 
first noun in this case, so how about:


openstack queue post
openstack queue pool flavor create
openstack queue pool flavor get
openstack queue pool flavor delete
openstack queue pool flavor update
openstack queue pool flavor list
openstack queue pool create


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-12 Thread Steve Baker

On 13/10/15 02:05, Thomas Goirand wrote:


BTW, the same applies for tablib which is in an even more horrible state
that makes it impossible to package with Py3 support. But tablib could
be removed from our (build-)dependency list, if someone cares about
re-writing cliff-tablib, which IMO wouldn't be that much work. Doug, how
many beers shall I offer you for that work? :)

Regarding tablib, cliff has had its own table formatter for some time, 
and now has its own json and yaml formatters. I believe the only tablib 
formatter left is the HTML one, which likely wouldn't be missed if it 
was just dropped (or it could be simply reimplemented inside cliff).


If the cliff deb depends on cliff-tablib I would recommend removing that 
dependency and just stop packaging cliff-tablib.




Re: [openstack-dev] [A Bug of Heat in Stable/Liberty] Stack creation fails when connecting a Nova::Server instance with a created network

2015-10-05 Thread Steve Baker

Thanks, I'm triaging this now.

On 06/10/15 11:41, Mingyu Li wrote:

Hi all,

There seems to be a bug in the stable/liberty branch.

https://bugs.launchpad.net/heat/+bug/1503060

In an environment installed with Devstack and stable/liberty branch, 
stack creation fails when I use some templates that worked well with 
stable/kilo, such as the following:


heat_template_version: 2014-10-16

resources:
  private_net1:
    type: OS::Neutron::Net
    properties:
      name: demo_net_a

  private_subnet1:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: private_net1 }
      cidr: 192.168.1.0/24

  server1:
    type: OS::Nova::Server
    properties:
      name: Server1
      image: cirros-0.3.4-x86_64-uec
      flavor: m1.nano
      networks: [{"network": { get_resource: private_net1 }}]

The response is " ERROR: Unable to find network with name 'None' ".

It seems that Heat does not find the network { get_resource: 
private_net1 }


Did someone see this before?

Best regards & Thanks,
Mingyu Li




Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-10-05 Thread Steve Baker

On 06/10/15 06:23, Ben Nemec wrote:

On 10/05/2015 12:12 PM, Jeremy Stanley wrote:

On 2015-10-05 12:00:31 -0500 (-0500), Ben Nemec wrote:
[...]

Note that it's best to do this once the change is ready to be
approved. If you do it earlier and the committer pushes a new
patch set without fixing the commit message, it will revert the
fix made through the web interface.

Well, one workflow tweak which avoids that is to always pull the
latest state of your change from Gerrit before you start modifying
it rather than assuming what is in your filesystem is still current.
It also helps to check Gerrit before pushing a new patchset (or lurk
in an IRC channel where the openstackgerrit bot reports uploads for
that repo), making sure nobody else has updated that change while
you were editing.

That said, a lot of developers probably don't do this.


Yeah, and I'm assuming we shouldn't need to fix commit messages much for
experienced developers who know to do this, but maybe I'm being
optimistic there. :-)

What I generally do is either edit the commit message through gerrit 
just before approving the change, or leave a review comment requesting 
that the eventual approver edit the commit message.




Re: [openstack-dev] [heat] Traditional question about Heat IRC meeting time.

2015-09-29 Thread Steve Baker

+1

On 29/09/15 22:56, Sergey Kraynev wrote:

Hi Heaters!

Previously we had a constant "tradition" of changing the meeting time to 
involve more people from different time zones.
However the last release cycle showed that two different meetings at 
07:00 and 20:00 UTC are comfortable for most of our contributors. Both 
times are acceptable for me and I plan to attend both meetings. 
So I suggest leaving it without any changes.


What do you think about it ?

Regards,
Sergey.




Re: [openstack-dev] [Heat] Integration Test Questions

2015-09-20 Thread Steve Baker

On 20/09/15 20:24, Qiming Teng wrote:

Speaking of adding tests, we need hands on improving Heat API tests in
Tempest [1]. The current test cases there are a weird combination of API
tests, resource type tests, template tests etc. If we decide to move
functional tests back to individual projects, some test cases may need
to be deleted from tempest.

Another important reason for adding API tests into Tempest is that
the orchestration service is assessed [2] by the DefCore team using
tests in Tempest, not in-tree test cases.

The heat team has done a lot of work (and killed a lot) to make the API as
stable as possible. Most of the time, there would be nothing new for
testing. The API surface tests may become nothing but a waste of time if
we keep running them for every single patch.
Thanks for raising this. Wherever they live we do need a dedicated set 
of tests which ensure the REST API is fully exercised.

So... my suggestions:

- Remove unnecessary tests in Tempest;

agreed

- Stop adding API tests to Heat locally;
- Add API tests to Tempest instead, in an organized way. (refer to [3])
I would prefer an alternative approach which would result in the same 
end state:

- port heat_integrationtests to tempest-lib
- build a suite of REST API tests in heat_integrationtests
- work with defcore to identify which heat_integrationtests tests to 
move to tempest

[1]
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/orchestration/
[2] https://review.openstack.org/#/c/216983/
[3] https://review.openstack.org/#/c/210080/




Re: [openstack-dev] [Tripleo][Heat][Nova][Ironic] Rich-network stuff and ironic hypervisor

2015-09-13 Thread Steve Baker

On 13/09/15 23:56, Sergey Kraynev wrote:

Hi folks,

Currently during implementation of the rich-network bp [1] (spec - [2]) we 
met with an issue on TripleO [3]. As a temporary solution, patch [4] was 
reverted.

According to the traceback mentioned in the bug description, the current 
issue is related to the mac addresses which should be used for a specific 
hypervisor [5] [6].
Previously in TripleO, when we created a vm without 'port-id' in the 
networks parameters, it was handled by Nova [7], so the new port got a 
mac address from the list of allowed addresses.


According to the rich-network BP, we want to use a pre-created port (which 
we create in Heat code directly) during booting the VM. Unfortunately in 
this case the validation mentioned above fails due to different 
mac_addresses (for the port and for the hypervisor).

We discussed it with Derek, and it looks like for TripleO it's overhead 
work to get such mac addresses and pass them in the Heat template. Also I 
personally think that it's not a user side issue, i.e. we should solve it 
inside the Heat code ourselves. So we probably need to ask the Nova Ironic 
driver (because we can not ask ironic directly from Heat) for this 
information - the list of allowed mac-addresses - and then use it when 
creating the port.

I have investigated the novaclient code, but did not find any ability to 
do it, except calling to_dict() on the Hypervisor object, and I am not 
sure that it will be present in that output.


So I'd ask Nova guys about some suggestions.
Also any thoughts are welcome.


[1] https://blueprints.launchpad.net/heat/+spec/rich-network-prop
[2] https://review.openstack.org/#/c/130093
[3] https://bugs.launchpad.net/tripleo/+bug/1494747
[4] https://review.openstack.org/#/c/217753/
[5] 
https://github.com/openstack/nova/blob/309301381039b162588e5f2d348b5b666c96bd3a/nova/network/neutronv2/api.py#L477-L488
[6] 
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L662-L678
[7] 
https://github.com/openstack/nova/blob/309301381039b162588e5f2d348b5b666c96bd3a/nova/network/neutronv2/api.py#L278


We may need to reconsider always pre-creating the port, given the above 
scenario plus comments like this[8].


One option would be to only pre-create if the template specifies 
explicit subnet or port_extra_properties, and otherwise let nova create 
the port on the specified network.


This would have implications for handling replace and rollback[9]. 
Either the server resource also needs to build resource data 
corresponding to the external_ports as well as the internal_ports, or 
the prepare_ports_for_replace needs to discover external ports too with 
a nova server get.


[8] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070648.html
[9] https://review.openstack.org/#/c/221032



Re: [openstack-dev] [Heat] Scattered thoughts on the PTL election

2015-09-13 Thread Steve Baker

On 12/09/15 04:03, Zane Bitter wrote:
The Heat project pioneered the concept of rotating the PTL for every 
development cycle, and so far all of the early (before 2013) 
developers who are still involved have served as PTL. I think this has 
been a *tremendous* success for the project, and a testament to the 
sheer depth of leadership talent that we are fortunate to have (as 
well as, it must be said, to Thierry and the release management team 
and their ability to quickly bring new people up to speed every 
cycle). We're already seeing a lot of other projects moving toward the 
PTL role having a shorter time horizon, and I suspect the main reason 
they're not moving more rapidly in that direction is that it takes 
time to build up the expectation of rotating succession and make sure 
that the leaders within each project are preparing to take their turn. 
So I like to think that we've been a good influence on the whole 
OpenStack community in this respect :)


(I'd also note that this expectation is very helpful in terms of 
spreading the workload around and reducing the amount of work that 
falls on a single person. To the extent that it is possible to be the 
PTL of the Heat project and still get some real work done, not just 
clicking on things in Launchpad - though, be warned, there is still 
quite a bit of that involved.)


However, there is one area in which we have not yet been as 
successful: so far all of the PTLs have been folks that were early 
developers of the project. IMHO it's time for that to change: we have 
built an amazing team of folks since then who are great leaders in the 
community and who now have the experience to step up. I can think of 
at least 4 excellent potential candidates just off the top of my head.




Zane is absolutely correct, I only became PTL again because we needed to 
prime the pump for successors. I too can think of many potentials who 
would be more than capable of taking this on for Mitaka and beyond.


One thing about being PTL is that the mindset and habits never leave 
you. Ongoing tasks such as bug triage, stable backports, and keeping the 
gate healthy continue into PTL retirement. In this sense I see a health 
metric of the culture of a project as being how many ex-PTLs continue to 
be engaged with it (leaving aside the many legitimate reasons people may 
have for moving on to other projects).


Obviously there is a time commitment involved - in fact Flavio's 
entire blog post[1] is great and you should definitely read that first 
- but if you are already devoting a majority of your time to the 
upstream Heat project and you think this is likely to be sustainable 
for the next 6 months, then please run for PTL!


(You may safely infer from this that I won't be running this time.)


(And neither will I :)



Re: [openstack-dev] [heat] Backup resources and properties in the delete-path

2015-09-10 Thread Steve Baker

On 11/09/15 05:02, Zane Bitter wrote:

On 10/09/15 12:53, Steven Hardy wrote:

Hi all,

So, I've been battling with $subject for the last few days ref [1][2].

The problem I have is that our TestResource references several properties
in the delete (check_delete_complete) path[4], which it turns out doesn't
work very well if those properties refer to parameters via get_param, and
the parameter in the template is added/removed between updates which
fail[3].

Essentially, the confusing dance we do on update with backup stacks and
backup resources bites us, because the backed-up resource ends up 
referring

to a parameter which doesn't exist (either in
stack.Stack._delete_backup_stack on stack-delete, or in
update.StackUpdate._remove_backup_resource on stack-update.)

As far as I can tell, referencing properties in the delete path is the main
problem here, and it's something we don't do at all AFAICS in any other
resources - the normal pattern is only to refer to the resource_id in the
delete path, and possibly the resource_data (which will work OK after [5]
lands)

So the question is, is it *ever* valid to reference self.properties
in the delete path?


I think it's fine to say 'no'.


I know of a case where the answer is not 'no'.

For a SoftwareDeployment which has work to do during DELETE the 
properties need to be accessed to get the config containing the DELETE 
work. There were convergence functional tests failing because the 
properties were not populated during delete:

https://bugs.launchpad.net/heat/+bug/1483946


If the answer is no, can we fix TestResource by e.g
storing the properties in resource_data instead?


They're already stored as self._stored_properties_data; you could just 
reference that instead. (The 'right' way would probably be to use 
"self.frozen_definition().properties(self.properties_schema, 
self.context)", but this is a test resource we're talking about.)
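
For a resource plugin that does need property values in the delete path, a
minimal sketch of the suggestion above (re-resolving them from the frozen
definition rather than relying on self.properties) might look like this;
the surrounding resource and its delete logic are hypothetical:

    def check_delete_complete(self, cookie):
        # Re-resolve properties from the frozen definition so they do not
        # depend on parameters that may have been removed since create.
        props = self.frozen_definition().properties(self.properties_schema,
                                                    self.context)
        # ... use props instead of self.properties for the delete logic ...
        return True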


If we do expect to allow/support referring to properties in the delete
path, the question becomes how do we make it work with the backup resource
update mangling we do atm?  I've posted a hacky workaround for the delete
path in [2], but I don't yet have a solution for the failure on update in
_remove_backup_resource, is it worth expending the effort to work that out
or is TestResource basically doing the wrong thing?

Any ideas much appreciated, as I'd like to clarify the best path forward
before burning a bunch more time on this :)

Thanks!

Steve

[1] https://review.openstack.org/#/c/205754/
[2] https://review.openstack.org/#/c/222176/
[3] https://bugs.launchpad.net/heat/+bug/1494260
[4] 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/heat/test_resource.py#L209

[5] https://review.openstack.org/#/c/220986/



Re: [openstack-dev] [heat-translator] Nominating new core reviewers

2015-08-17 Thread Steve Baker

+1

On 18/08/15 07:37, Sahdev P Zala wrote:

Hello,

I am glad to nominate Vahid Hashemian [1] and Srinivas Tadepalli [2] 
for the Heat-Translator core reviewers team.


Both of them have been providing significant contributions, development 
and review, since the beginning of this year, and both know the code base well.


Existing cores, please reply to this email by the end of this week with your 
vote +1/-1 for their addition to the team.


Review stats: 
http://stackalytics.com/report/contribution/heat-translator/90


[1] 
https://review.openstack.org/#/q/reviewer:%22Vahid+Hashemian+%253Cvahidhashemian%2540us.ibm.com%253E%22,n,z


[2] 
https://review.openstack.org/#/q/reviewer:%22srinivas_tadepalli+%253Csrinivas.tadepalli%2540tcs.com%253E%22,n,z


Regards,
Sahdev Zala
PTL, Heat-Translator






Re: [openstack-dev] [TripleO] [Puppet] Deploying OpenStack with Puppet modules on Docker with Heat

2015-08-09 Thread Steve Baker

On 06/08/15 06:29, Dan Prince wrote:

Hi,

There is a lot of interest in getting support for container based
deployment within TripleO and many different ideas and opinions on how
to go about doing that.

One idea on the table is to use Heat to help orchestrate the deployment
of docker containers. This would work similar to our tripleo-heat
-templates implementation except that when using docker you would swap
in a nested stack template that would configure containers on
baremetal. We've even got a nice example that shows what a
containerized TripleO overcloud might look like here [1]. The approach
outlines how you might use kolla docker containers alongside of the
tripleo-heat-templates to do this sort of deployment.

This is all cool stuff but one area of concern is how we do the actual
configuration of the containers. The above implementation relies on
passing environment variables into kolla built docker containers which
then self configure all the required config files and start the
service. This sounds like a start... but creating (and maintaining)
another from scratch OpenStack configuration tool isn't high on my list
of things to spend time on. Sure there is already a kolla community
helping to build and maintain this configuration tooling (mostly
thinking config files here) but this sounds a bit like what tripleo
-image-elements initially tried to do and it turns out there are much
more capable configuration tools out there.

Since we are already using a good bit of Puppet in tripleo-heat
-templates the idea came up that we would try to configure Docker
containers using Puppet. Again, here there are several ideas in the
Puppet community with regards to how docker might best be configured
with Puppet. Keeping those in mind we've been throwing some ideas out
on an etherpad here [2] that describes using Heat for orchestration,
Puppet for configuration, and Kolla docker images for containers.

A quick outline of the approach is:

-Extend the heat-container-agent [3] that runs os-collect-config and
all the required hooks we require for deployment. This includes
docker-compose, bash scripts, and Puppet. NOTE: As described in the etherpad
I've taken to using DIB to build this container. I found this to be
faster from a TripleO development baseline.

-To create config files the heat-container-agent would run a puppet
manifest for a given role and generate a directory tree of config files
(/var/lib/etc-data for example).

-We then run a docker-compose software deployment that mounts those
configuration file(s) into a read only volume and uses them to start
the containerized service.

The approach could look something like this [4]. This nice thing about
this is that it requires no modification to OpenStack Puppet modules.
We can use those today, as-is. Additionally, although Puppet runs in
the agent container we've created a mechanism to set all the resources
to noop mode except for those that generate config files. And lastly,
we can use exactly the same role manifest for docker that we do for
baremetal. Lots of re-use here... and although we are disabling a lot
of Puppet functionality in setting all the non-config resources to noop
the Kolla containers already do some of that stuff for us (starting
services, etc.).
This sounds like a viable approach; my only suggestion would be for 
there to be an option to build a puppet-container-agent which contains 
only puppet (not the heat hook too). This could allow the 
openstack-puppet and kolla communities to collaborate quickly without 
pulling in the whole tripleo stack. Then some simple docker-compose (or 
whatever) templates could be written to bring up puppet-container-agent 
with a given manifest and hieradata, then bring up a single node kolla 
container based cloud. This would be useful for CI and local development 
of the puppet modules supporting containers.


Then heat-container-agent can be puppet-container-agent plus the heat 
hook tooling.





All that said (and trying to keep this short) we've still got a bit of
work to do around wiring up externally created config files to kolla
build docker containers. A couple of issues are:

-The external config file mechanism for Kolla containers only seems to
support a single config file. Some services (Neutron) can have multiple
files. Could we extend the external config support to use multiple
files?

-If a service has multiple files kolla may need to adjust its service
startup script to use multiple files. Perhaps a conf.d approach would
work here?

-We are missing published version of some key kolla containers. Namely
openvswitch and the neutron-openvswitch-agent for starters but I'd also
like to have a Ceilometer agent and SNMP agent container as well so we
have feature parity with the non-docker compute role.

Once we have solutions for the above I think we'll be very close to a
fully dockerized compute role with TripleO heat templates. From there
we can expand the idea to cover other roles 

Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-09 Thread Steve Baker

On 07/08/15 06:56, Fox, Kevin M wrote:

Heat templates so far seems to be a place to dump examples for showing off how 
to use specific heat resources/features.

Are there any intentions to maintain production ready heat templates in it? 
Last I asked the answer seemed to be no.

If I misunderstood, heat-templates would be a logical place to put them then.

Historically heat-templates has avoided hosting production-ready 
templates, but this has purely been due to not having the resources 
available to maintain them.


If a community emerged who were motivated to author, maintain and 
support the infrastructure which tests these templates then I think they 
would benefit from being hosted in the heat-templates repository. It 
sounds like such a community is coalescing around the app-catalog project.


Production-ready templates could end up somewhere like 
heat-templates/hot/app-catalog. If this takes off then heat-templates 
can be assigned its own core team so that more than just heat-core could 
approve these templates.




From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, August 06, 2015 11:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

On 06/08/15 13:53, Christopher Aedo wrote:

Today during the app-catalog IRC meeting we talked about hosting Heat
templates for contributors.  Right now someone who wants to create
their own templates can easily self-host them on github, but until
they get people pointed at it, nobody will know about their work on
that template, and getting guidance and feedback from all the people
who know Heat well takes a fair amount of effort.

What do you think about us creating a new repo (app-catalog-heat
perhaps), and collectively we could encourage those interested in
contributing Heat templates to host them there?  Ideally members of
the Heat community would become reviewers of the content, and give
guidance and feedback.  It would also allow us to hook into OpenStack
CI so these templates could be tested, and contributors would have a
better sense of the utility/portability of their templates.  Over time
it could lead to much more exposure for all the useful Heat templates
people are creating.

Thoughts?

Already exists:

https://git.openstack.org/cgit/openstack/heat-templates/

- ZB



Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-09 Thread Steve Baker

On 07/08/15 00:12, Dan Prince wrote:

On Thu, 2015-07-23 at 07:40 +0100, Derek Higgins wrote:

See below

On 21/07/15 20:29, Derek Higgins wrote:

Hi All,
 Something we discussed at the summit was to switch the focus of
tripleo's deployment method to deploy using instack using images built
with tripleo-puppet-elements. Up to now all the instack work has been
done downstream of tripleo as part of rdo. Having parts of our
deployment story outside of upstream gives us problems mainly because it
becomes very difficult to CI what we expect deployers to use while we're
developing the upstream parts.

Essentially what I'm talking about here is pulling instack-undercloud
upstream along with a few of its dependency projects (instack,
tripleo-common, tuskar-ui-extras etc..) into tripleo and using them in
our CI in place of devtest.

Getting our CI working with instack is close to working but has taken
longer than I expected because of various complications and distractions,
but I hope to have something over the next few days that we can use to
replace devtest in CI. In a lot of ways this will start out by taking a
step backwards but we should finish up in a better place where we will
be developing (and running CI on) what we expect deployers to use.

Once I have something that works I think it makes sense to drop the jobs
undercloud-precise-nonha and overcloud-precise-nonha, while switching
overcloud-f21-nonha to use instack, this has a few effects that need to
be called out

1. We will no longer be running CI on (and as a result not supporting)
most of the bash based elements
2. We will no longer be running CI on (and as a result not supporting)
ubuntu

One more side effect is that I think it also means we no longer have
the capability to test arbitrary Zuul refspecs for projects like Heat,
Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on the
source-repositories element to do this for us in the undercloud and
since most of the instack stuff uses packages I think we would lose
this capability.

I'm all for testing with packages mind you... would just like to see us
build packages for any projects that have Zuul refspecs inline, create
a per job repo, and then use that to build out the resulting instack
undercloud.

This to me is the biggest loss in our initial switch to instack
undercloud for CI. Perhaps there is a middle ground here where instack
(which used to support tripleo-image-elements itself) could still
support use of the source-repositories element in one CI job until we
get our package building processes up to speed?

/me really wants 'check experimental' to give us TripleO coverage for
select undercloud projects
If Derek is receptive, I would find it useful if Delorean became a 
stackforge/openstack hosted project with better support for building 
packages from local git trees rather than remote checkouts.


With a bit of hackery I was doing this for a while, developing features 
locally on heat and other repos, then deploying an undercloud from a 
locally hosted delorean repo.


This would help with getting CI working with Zuul refspecs, but it may be what Dan meant anyway when he said we should get our package building processes up to speed.

Should anybody come along in the future interested in either of these things (and prepared to put the time in) we can pick them back up again. In fact the move to puppet element based images should mean we can more easily add in extra distros in the future.

3. While we find our feet we should remove all tripleo-ci jobs from non-tripleo projects; once we're confident with it we can explore adding our jobs back into other projects again.

Nothing has changed yet. In order to check we're all on the same page, these are the high-level details of how I see things proceeding, so shout now if I got anything wrong or you disagree.

Ok, I have a POC that has worked end to end in our CI environment[1]. There are a *LOT* of workarounds in there, so before we can merge it I need to clean up and remove some of those workarounds, and to do that a few things need to move around. Below is a list of what has to happen (as best I can tell):

1) Pull in tripleo-heat-templates spec changes to master delorean
We had two patches in the tripleo-heat-templates midstream packaging that haven't made it into the master packaging, these are:
https://review.gerrithub.io/241056 Package firstboot and extraconfig templates
https://review.gerrithub.io/241057 Package environments and network directories

2) Fixes for instack-undercloud (I didn't push these directly in case it affected people on old versions of puppet modules)
https://github.com/rdo-management/instack-undercloud/pull/5

3) Add packaging for various repositories into openstack-packaging
I've pulled the packaging for 5 repositories into
https://github.com/openstack-packages
https://github.com/openstack-packages/python-ironic-inspector-client
https://github.com/openstack-packages/python-rdomanager-oscplugin

Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-09 Thread Steve Baker

On 08/08/15 01:51, jason witkowski wrote:
Thanks for the replies guys. The issue is that it is not working. If you take a look at the pastes I linked from the first email, I am using the get_resource function in the security group resource. I am not sure whether it is not resolving to an appropriate value, or whether it is resolving to an appropriate value but then not assigning it to the port. I am happy to provide more details or examples, but I'm not sure what else I can do beyond the configuration examples I have already provided, which are not working. It's very possible my configurations are wrong, but I have scoured the internet for examples and it looks like what I have should be working, yet it is not.



Can you provide details of what the actual error is, plus the output of 
neutron port-show for that port?
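
For reference, the pattern being described looks roughly like this (a minimal sketch; the network name, rules and resource names are illustrative and not taken from the linked pastes). Here get_resource resolves to the security group's ID, which OS::Neutron::Port accepts directly:

heat_template_version: 2014-10-16

resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22

  web_port:
    type: OS::Neutron::Port
    properties:
      # assumes an existing network named "private"
      network: private
      security_groups:
        - {get_resource: web_secgroup}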


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-30 Thread Steve Baker
I believe the heat project would benefit from Kanagaraj Manickam and 
Ethan Lynn having the ability to approve heat changes.


Their reviews are valuable[1][2] and numerous[3], and both have been 
submitting useful commits in a variety of areas in the heat tree.


Heat cores, please express your approval with a +1 / -1.

[1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
[2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
[3] http://stackalytics.com/report/contribution/heat-group/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?

2015-07-02 Thread Steve Baker

On 03/07/15 06:03, Randall Burt wrote:

Maybe use 'all' for all attributes in the schema and use 'show' for the raw output from the service (as is done today for server and neutron stuff).

Instead of 'all', how about allowing a special form of {get_attr: [resource_name]} with no extra arguments to return a dict of all attributes? This would be consistent with how extra arguments traverse attribute data.
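
For example, assuming a server resource named my_server (names purely illustrative), the proposed no-argument form and the existing traversal form would sit side by side in an outputs section like this:

outputs:
  all_attributes:
    # proposed special form: no attribute name returns a dict of every attribute
    value: {get_attr: [my_server]}
  first_private_ip:
    # existing behaviour: extra arguments traverse into the attribute data
    value: {get_attr: [my_server, networks, private, 0]}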

On Jul 2, 2015, at 12:46 PM, Steven Hardy sha...@redhat.com wrote:


On Thu, Jul 02, 2015 at 04:40:49PM +0300, Sergey Kraynev wrote:

Hi Heaters.
I don't think my question is very big for openstack-dev, but it affects a lot of Heat resources, and I need to collect more opinions before applying one of the following approaches.

I recently uploaded an initial approach for implementing a common 'show' attribute [1]. On that review one interesting suggestion was raised: the 'show' attribute should return a map of all of the resource's attributes, i.e.

    for attr in self.attributes_schema:
        outputs[attr] = _resolve_attribute(attr)
    return outputs

I agree that this is easier than a separate show_resource method for each resource, and it's the same as what the Neutron API returns on a show request. However, we already have an opposite example: the OS::Nova::Server resource has a bunch of attributes which are not similar to its current 'show' attribute output:

https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L918

I suppose the same situation will apply to other resources, so I want to ask which way we would like to follow:

[1] show as a collection of attributes
[2] show as the same output as the client's show command for the resource

I think [2] is the most useful, and most consistent with both the nova and
all neutron resources:

https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/neutron.py#L129

Another advantage of this transparent passthrough of the data returned by the client is that folks have a workaround in the event that the heat attributes schema lacks some new value the client returns. Obviously once it's added to the attributes schema, it'll be better to use that instead.

Steve
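
From a template author's point of view, option [2] would look something like the following outputs fragment (resource name illustrative); the raw map returned by the client comes through unchanged, and individual keys can still be picked out with extra get_attr arguments:

outputs:
  port_details:
    # option [2]: whatever the neutron client returns for a port show
    value: {get_attr: [my_port, show]}
  port_status:
    # individual keys can still be picked out of that map
    value: {get_attr: [my_port, show, status]}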

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [heat][tripleo]Recursive validation for easier composability

2015-06-22 Thread Steve Baker

On 23/06/15 06:49, Thomas Spatzier wrote:

From: Jay Dobies jason.dob...@redhat.com
To: openstack-dev@lists.openstack.org
Date: 22/06/2015 19:22
Subject: Re: [openstack-dev] [heat][tripleo]Recursive validation for
easier composability



On 06/22/2015 12:19 PM, Steven Hardy wrote:

Hi all,

Lately I've been giving some thought to how we might enable easier
composability, and in particular how we can make it easier for folks to
plug in deeply nested optional extra logic, then pass data in via
parameter_defaults to that nested template.

Here's an example of the use-case I'm describing:

https://review.openstack.org/#/c/193143/5/environments/cinder-netapp-config.yaml

Here, we want to allow someone to easily turn on an optional configuration or feature, in this case a netapp backend for cinder.
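
To make the pattern concrete, an environment file of this kind looks roughly as follows; the resource_registry key, template path and parameter names here are illustrative placeholders rather than the actual contents of the review above:

resource_registry:
  # map an optional hook to the nested template that implements it
  OS::TripleO::ControllerExtraConfigPre: ../puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml

parameter_defaults:
  # values consumed directly by the nested template, without the parent
  # template having to declare matching parameters
  CinderEnableNetappBackend: true
  CinderNetappHostname: netapp.example.com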

I think the actual desired goal is bigger than just optional
configuration. I think it revolves more around choosing a nested stack
implementation for a resource type and how to manage custom parameters
for that implementation. We're getting into the territory here of having
a parent stack defining an API that nested stacks can plug into. I'd
like to have some sort of way of deriving that information instead of
having it be completely relegated to outside documentation (but I'm
getting off topic; at the end I mention how I want to do a better write
up of the issues Tuskar has faced and I'll elaborate more there).

FWIW, adding a thought from my TOSCA background where we've been looking at something similar, namely selecting a nested template that declares itself to match an interface consumed in a parent template (that's how I understood Jay's words above). In TOSCA, there is a more type-safe kind of template nesting, where nested templates do not just bring new resource types into existence depending on what parameters they expose; there is a strict contract on the interface a nested template must fulfil - see [1], and especially look for substitution_mapping.

Admittedly, this is not exactly the same as Steven's original problem, but
kind of related. And IIRC, some time back there was some discussion around
introduction of some kind of interface for HOT templates. So wanted to
bring this in and give it some thought whether something like this would
make sense for Heat.

[1]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csd03/TOSCA-Simple-Profile-YAML-v1.0-csd03.html#_Toc419746122


This sounds like the dormant blueprint interface-types [2]. I still 
think it would be appropriate to support this at the heat layer even 
though Murano ended up keeping their interface implementation at the 
Murano layer.


Interfaces are slightly different from how to set parameters on deeply nested stacks, though.


[2] https://blueprints.launchpad.net/heat/+spec/interface-types

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Heat Template Cookbook

2015-05-28 Thread Steve Baker

On 29/05/15 04:10, D'ANDREA, JOE (JOE) wrote:

[This was originally posted to openst...@lists.openstack.org by accident. 
Reposting it to openstack-dev.]

Calling all Heat neophytes, aficionados, and everyone in between ...

I've often thought it would be helpful to add a Heat Template Cookbook to the 
developer documentation.

Like any helpful, friendly cookbook, it would contain a variety of annotated recipes/templates. Ultimately, it would seek to cover use case scenarios for every possible ingredient/resource out there.

I've volunteered to curate and author this cookbook, alongside the HOT Guide.


This sounds great!

Cookbook-style content was how I originally conceived the HOT section of the OpenStack User Guide[2]. Whether the HOT Guide stays in the User Guide or moves back to the heat developer docs, it would be good to use this content as the starting point.


[2] http://docs.openstack.org/user-guide/hot-guide/hot.html


To help get things started, I'm seeking template contributions that:

* Are simple and illustrate basic concepts.
* Are more intermediate and build on basic concepts.
* Are more advanced and demonstrate more complex orchestration.
* Solve a common problem or answer a frequently asked template question.

Repository links welcomed!
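
As an illustration of the first category, a recipe can be as small as a single template like this (a sketch for illustration only, assuming an image parameter is supplied at create time):

heat_template_version: 2014-10-16
description: Boot a single server from an image

parameters:
  image:
    type: string
  flavor:
    type: string
    default: m1.small

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}

outputs:
  server_ip:
    description: First IP address of the server
    value: {get_attr: [server, first_address]}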

I also seek:

* Wish lists. I wish there was a template that ...
* Questions. How do I ... ? Can I ... ?

For those wondering about the heat-templates github repo[1], yes, I'll be 
plundering the vaults. ;)

Questions/contributions most welcomed. Simply reply to this message. (Once the 
doc repository is ready, we'll also gladly accept contributions through 
proposed patches to the docs.)

IMPORTANT: Please post *links* (to gists, repos, pastes, etc.) instead of 
attachments or embedded items.

I think the main areas of focus should be networking patterns, 
clustering/groups and software config. For the latter there is a decent 
start in the HOT Guide[3]


[3] 
http://docs.openstack.org/user-guide/hot-guide/hot_software_deployment.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Puppet] Package updates strategy

2015-05-27 Thread Steve Baker

On 28/05/15 10:54, Richard Raseley wrote:

Zane Bitter wrote:

Steve is working on a patch to allow package-based updates of overcloud
nodes[1] using the distro's package manager (yum in the case of RDO, but
conceivable apt in others). Note we're talking exclusively about minor
updates, not version-to-version upgrades here.

Dan mentioned at the summit that this approach fails to take into
account the complex ballet of service restarts required to update
OpenStack services. (/me shakes fist at OpenStack services.) And
furthermore, that the Puppet manifests already encode the necessary
relationships to do this properly. (Thanks Puppeteers!) Indeed we'd be
doing the Wrong Thing by Puppet if we changed this stuff from under it.

The problem of course is that neither Puppet nor yum/apt has a view of
the entire system. Yum doesn't know about the relationships between
services and Puppet doesn't know about all of the _other_ packages that
they depend on.

One solution proposed was to do a yum update first but specifically exclude any packages that Puppet knows about (the --exclude flag appears sufficient for this); then follow that up with another Puppet run using ensure => latest.

My only concern with this approach is how do we collect and maintain the 
excludes list. Other than that it sounds reasonable.

A problem with that approach is that it still fails to restart services
which have had libraries updated but have not themselves been updated.
That's no worse than the pure yum approach though. We might need an
additional way to just manually trigger a restart of services.


Maybe this could be handled at the packaging stage by revving the package version when there is a known fix in a low-level library, thus triggering a service restart in the puppet phase.
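
For concreteness, the yum-then-puppet sequence discussed above could be delivered as a Heat software config along these lines; the exclude patterns and manifest path are placeholders, and this is a sketch of the idea rather than the actual patch:

resources:
  package_update_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -eux
        # update everything except the packages puppet manages
        # (exclude patterns are illustrative)
        yum -y update --exclude 'openstack-*' --exclude 'puppet-*'
        # a follow-up puppet run with ensure => latest updates the excluded
        # packages and restarts dependent services in the right order
        puppet apply /etc/puppet/manifests/overcloud.pp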


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] New alternate meeting time

2015-05-22 Thread Steve Baker
Heat's current meeting times are Wednesdays 2000 UTC, and 1200 UTC on 
alternate weeks.


Since I can't attend the 1200 UTC time I would like to suggest the new 
alternate time of 0700 UTC.


http://www.timeanddate.com/worldclock/meetingdetails.html?year=2015&month=5&day=27&hour=7&min=0&sec=0&p1=22&p2=1038&p3=33&p4=166&p5=676&p6=136

This time looks bad for the Americas, but should be reasonable for 
everybody else in the world, so I'm hoping for a decent turnout.


The next heat meeting is the alternate time, so I'll see you on 
Wednesday 27th at 0700 UTC.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] heat-cfntools 1.3.0

2015-05-11 Thread Steve Baker

We are chuffed to announce the release of:

heat-cfntools 1.3.0: Tools required to be installed on Heat
provisioned cloud instances

For more details, please see the git log history below and:

http://launchpad.net/heat-cfntools/+milestone/1.3.0

Please report issues through launchpad:

http://bugs.launchpad.net/heat-cfntools

Changes in heat-cfntools 1.2.8..1.3.0
-

02acffb README changes to make release_notes.py happy
57f8ae8 Ported tests from mox3 to mock to support Python >= 3.3
f879612 Python 3 compatibility
a7ffb71 Support dnf when specified or yum is missing
9862bd7 Fix RST syntax errors/warnings in README.rst
d96f73c Fixes cfn-hup hooks functionality
16a9a83 Workflow documentation is now in infra-manual

Diffstat (except docs and test files)
-

CONTRIBUTING.rst   |   7 +-
README.rst |  14 +-
heat_cfntools/cfntools/cfn_helper.py   | 151 +-
requirements.txt   |   1 +
test-requirements.txt  |   2 +-
tox.ini|   2 +-
8 files changed, 610 insertions(+), 528 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3e6b445..531eb32 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,0 +5 @@ psutil>=1.1.1,<2.0.0
+six>=1.9.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 3890c0a..5d3b372 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4 +4 @@ hacking>=0.8.0,<0.9
-mox3>=0.7.0
+mock>=1.0


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] python-heatclient 0.6.0

2015-05-11 Thread Steve Baker

We are chuffed to announce the release of:

python-heatclient 0.6.0: OpenStack Orchestration API Client Library

For more details, please see the git log history below and:

http://launchpad.net/python-heatclient/+milestone/0.6.0

Please report issues through launchpad:

http://bugs.launchpad.net/python-heatclient

Changes in python-heatclient 0.5.0..0.6.0
-

c2422d6 Updated from global requirements
cf982a3 Make hook type optional for hook-clear
ffa0f0f Refactor common event code into event_utils module
53c46f0 Add hook-poll function to check if a stack has pending hooks
7896760 Work around pypy testing issue
5f085c0 Implement --limit option for event-list with --nested-depth
e6b3fbb event-list marker, only slice when needed
ac00237 Add option to show hidden stacks in stack-list
fa88aa9 Add an option to generate template based on resource type
dba5cc4 Updated from global requirements
fb19cf5 Implement --marker for event-list --nested-depth
0622581 Fix sort column for event-list with nested-depth
6d7610a Unit tests for stack actions
0eb7f78 Add --nested-depth option to event-list

Diffstat (except docs and test files)
-

heatclient/common/event_utils.py | 128 ++
heatclient/common/utils.py | 9 +
heatclient/v1/resource_types.py | 5 +-
heatclient/v1/shell.py | 163 +--
requirements.txt | 6 +-
tox.ini | 7 +-
12 files changed, 1086 insertions(+), 241 deletions(-)

Requirements updates


diff --git a/requirements.txt b/requirements.txt
index a290a65..b316798 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ Babel>=1.3
-pbr>=0.6,!=0.7,<1.0
+pbr>=0.11,<2.0
@@ -13 +13 @@ oslo.utils>=1.4.0 # Apache-2.0
-python-keystoneclient>=1.1.0
+python-keystoneclient>=1.3.0
@@ -16 +16 @@ PyYAML>=3.1.0
-requests>=2.2.0,!=2.4.0
+requests>=2.5.2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Liberty Summit Topics etherpad

2015-05-11 Thread Steve Baker

On 29/04/15 07:43, James Slagle wrote:

On Mon, Apr 6, 2015 at 3:11 PM, James Slagle james.sla...@gmail.com wrote:

I've created an etherpad for for TripleO to track the topics we'd like
to discuss at the Liberty Summit:
https://etherpad.openstack.org/p/tripleo-liberty-proposed-sessions

It's also linked from the main Design Summit Planning wiki page:
https://wiki.openstack.org/wiki/Design_Summit/Planning

If you have something you'd like to propose to discuss, please add it
to the etherpad.

TripleO has 2 fishbowl sessions and 2 working sessions at the Summit,
as well as an all day contributor's meetup on Friday.  I'd like to
finalize the topics for the sessions over the next day or 2. We can
continue to refine as needed, but I'd like to get the summaries out
there so that folks can start planning what sessions they want to
attend.

My thinking right now is that we devote one fishbowl session to a
discussion around tripleo-heat-templates. Particularly around refining
the interfaces and what we can further do to enable Docker integration
for a containerized Overcloud. We could also discuss making the
template implementations more composable at the individual service
level, and plans around deprecating the elements based templates.

For the second fishbowl session, we could make it testing/CI focused.
We could devote some time to talking about diskimage-builder testing,
and TripleO CI as it relates to quintupleo, the puppet modules, and
possibly using the infra clouds. Depending on time and interest, we
could also discuss if and how we might move forward with a devtest
alternative that was more production oriented.

For the working sessions, I don't think we need as much of a defined
summary. But I suspect we could pick a few things to focus on at each
session: tripleo-heat-templates, HA, network architecture,
diskimage-builder testing.

Let me know any feedback/opinions, and I'll get the schedule updated
on sched.org this week. Thanks.


There is also this Heat session which is very much TripleO themed:
http://libertydesignsummit.sched.org/event/02f197c0beadad5eb354ab7518dd9363#.VVE4sHU4Z4s

I'd like this to show up as a TripleO session too, and I attempted to add TripleO to the "Also appears in..." field, but it didn't stick.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Architecture Diagrams in ascii art?

2015-05-11 Thread Steve Baker

On 12/05/15 09:57, Joe Gordon wrote:
When learning about how a project works one of the first things I look 
for is a brief architecture description along with a diagram. For most 
OpenStack projects, all I can find is a bunch of random third party 
slides and diagrams.


Most Individual OpenStack projects have either no architecture diagram 
or ascii art. Searching for 'OpenStack X architecture' where X is any 
of the OpenStack projects turns up pretty sad results. For example 
heat [0] and Keystone [1] have no diagram. Nova on the other hand does 
have a diagram, but it's ascii art [2]. I don't think ascii art makes 
for great user facing documentation (for any kind of user).


So how can we do better than ascii art architecture diagrams?


How about ascii source diagrams?

[0] http://docs.openstack.org/developer/heat/architecture.html
[1] http://docs.openstack.org/developer/keystone/architecture.html
[2] http://docs.openstack.org/developer/nova/devref/architecture.html



These are all sphinx generated documents, so we could use something like 
blockdiag to generate all manner of diagrams

https://pypi.python.org/pypi/sphinxcontrib-blockdiag
http://blockdiag.com/en/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] This week's meeting canceled

2015-05-11 Thread Steve Baker
We're very close to the summit, and the alt meeting time is scheduled 
for when I'd like to be sleeping, so I propose that we don't have a Heat 
meeting this week.


Please mail the list if you have any summit preparation topics which 
need discussion.


I'll set up a poll for a new alt meeting time for the next one on the 27th.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Design summit session etherpads

2015-05-11 Thread Steve Baker
The Heat design summit schedule has taken shape [1] and there are now 
etherpads populated with our planning content [2].


If you are down as a Driver for one of the sessions (or if you have an 
interest in the topic) then feel free to modify the etherpad in 
preparation for the session.


[1] http://libertydesignsummit.sched.org/type/design+summit/Heat
[2] https://wiki.openstack.org/wiki/Summit/Liberty/Etherpads#Heat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

